Brent Simmons – inessential – You Choose

Brent Simmons wrote a great post on his blog that I enjoyed reading: https://inessential.com/2019/10/29/you_choose

You choose the web you want. But you have to do the work.
A lot of people are doing the work. You could keep telling them, discouragingly, that what they’re doing is dead. Or you could join in the fun.
Again: you choose.

https://inessential.com/2019/10/29/you_choose

I think this is so true! The web doesn’t have to be crappy, bloated, slow and covered with annoying ads. It also doesn’t have to be a corporate-owned platform like Twitter.

Tools to publish web content very cheaply are MORE available today than they ever have been! And RSS still exists, with plenty of ways to consume it.

So grab an RSS reader, like feedly.com or any you prefer, and start enjoying the clean web that still exists.

Using a Raspberry Pi 2 as a Router + Configuring Raspbian for IPv6 with Aussie Broadband

I recently decided to move away from using my Wifi access point as a router, and instead use an old Raspberry Pi 2 as my router. I had a few reasons for doing this:

  • I wanted a more up-to-date device as my internet facing box. My Wifi AP hasn’t received any firmware updates in several years, so I don’t place much trust in its security.
  • I wanted to learn more about networking, particularly how to properly configure IPTables on Linux and how to see what traffic is flowing on my network.
  • I wanted to turn on the Aussie Broadband IPv6 beta, and get an internet-facing IPv6 network.

The Hardware

  • Raspberry Pi 2
  • Network Interface 1 – enxb827eb579e58 – Built-in Pi ethernet – internet facing
  • Network Interface 2 – enx3c18a0054c1e – Lenovo USB3 Ethernet adaptor (running at USB2 speed) – internal facing

This is sufficient for my network, which is limited to 50Mb/s. However, if you have 100Mb/s or more you will hit the limit of what the built-in ethernet port on the Pi 2 can support. You may be able to go further with a Pi 4, which has gigabit ethernet and USB3.

The Software

  • Raspbian 10 Buster
  • NetworkManager – provides IPv4 networking
  • dhcpcd5 – provides IPv6 addressing via DHCPv6 (including prefix delegation)
  • fail2ban – brute force protection
  • iptables – firewalling and networking rules
  • iptables-persistent – load up my iptables rules on boot

Part 1 – Installation

To start, I just did a Raspbian Lite install, and then set up the Pi to provide SSH. I was then able to configure everything remotely.
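
If you’re doing a headless setup, a common way to enable SSH on Raspbian (not specific to this post) is to create an empty file called ssh on the boot partition before first boot, or to enable the service directly if you have a keyboard attached:

# With the SD card mounted on another machine (mount point is just an example)
touch /media/$USER/boot/ssh
# Or, on the Pi itself:
sudo systemctl enable --now ssh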

Next, I wanted to install NetworkManager. I followed some instructions from here, but I didn’t remove dhcpcd5, as it can do DHCPv6 with Prefix Delegation, something that doesn’t seem to be possible with NetworkManager: https://raspberrypi.stackexchange.com/questions/29783/how-to-setup-network-manager-on-raspbian

sudo apt install network-manager
sudo apt purge openresolv 
sudo nano /etc/dhcpcd.conf
# Add to top of /etc/dhcpcd.conf
ipv6only

Network Manager configuration:

Make sure to edit the config files using the nmcli con edit command, e.g. nmcli con edit internet.
Note the method=ignore line in both configs, as IPv6 is configured by dhcpcd5.

root@routepi ###/e/N/system-connections> pwd
/etc/NetworkManager/system-connections
root@routepi ###/e/N/system-connections> cat internet.nmconnection
[connection]
id=internet
uuid=231de2fd-f890-49ba-baed-295fe30a5ee1
type=ethernet
interface-name=enxb827eb579e58
permissions=
timestamp=1565155463

[ethernet]
mac-address-blacklist=

[ipv4]
dns-search='lan';
method=auto

[ipv6]
addr-gen-mode=stable-privacy
dns-search=
method=ignore
root@routepi ###/e/N/system-connections> cat house.nmconnection
[connection]
id=house
uuid=4a683432-11e6-3c31-be8e-0603ca5fb6ce
type=ethernet
autoconnect-priority=-999
permissions=
timestamp=1563862423

[ethernet]
mac-address=3C:18:A0:05:4C:1E
mac-address-blacklist=

[ipv4]
address1=10.1.1.9/24
dns-search=
ignore-auto-dns=true
method=manual

[ipv6]
addr-gen-mode=stable-privacy
dns-search=
method=ignore
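
If you’d rather create the connections from scratch than edit the files, a rough nmcli equivalent (not taken from the post, which edits them with nmcli con edit) would be:

# Internet-facing connection: IPv4 via DHCP, IPv6 left to dhcpcd5
sudo nmcli con add type ethernet ifname enxb827eb579e58 con-name internet
sudo nmcli con modify internet ipv4.method auto ipv6.method ignore
# Internal connection: static IPv4, IPv6 left to dhcpcd5
sudo nmcli con add type ethernet ifname enx3c18a0054c1e con-name house
sudo nmcli con modify house ipv4.addresses 10.1.1.9/24 ipv4.method manual ipv6.method ignore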

Dhcpcd5 configuration:

The main parts of interest are the lines requesting an IA_NA and an IA_PD on the internet-facing interface, and assigning the delegated prefix to the internal interface. Aussie Broadband requires that you request both an IA_NA and an IA_PD, so this is the config to make it work:

allowinterfaces enxb827eb579e58
interface enxb827eb579e58
# Address from the /64
ia_na 1
# Request /56 and assign it to other interface
ia_pd 2 enx3c18a0054c1e/1
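
Once dhcpcd has a lease, a quick way to check the result (generic iproute2 commands, not from the post) is to confirm a /64 address on the WAN interface and an address from the delegated /56 on the LAN interface:

ip -6 addr show dev enxb827eb579e58
ip -6 addr show dev enx3c18a0054c1e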

Whole file is here for reference:

# A sample configuration for dhcpcd.
# See dhcpcd.conf(5) for details.

# Allow users of this group to interact with dhcpcd via the control socket.
#controlgroup wheel
ipv6only
# Inform the DHCP server of our hostname for DDNS.
hostname

# Use the hardware address of the interface for the Client ID.
#clientid
# or
# Use the same DUID + IAID as set in DHCPv6 for DHCPv4 ClientID as per RFC4361.
# Some non-RFC compliant DHCP servers do not reply with this set.
# In this case, comment out duid and enable clientid above.
duid

# Persist interface configuration when dhcpcd exits.
persistent

# Rapid commit support.
# Safe to enable by default because it requires the equivalent option set
# on the server to actually work.
option rapid_commit

# A list of options to request from the DHCP server.
option domain_name_servers, domain_name, domain_search, host_name
#option classless_static_routes
# Respect the network MTU. This is applied to DHCP routes.
#option interface_mtu

# Most distributions have NTP support.
#option ntp_servers

# A ServerID is required by RFC2131.
require dhcp_server_identifier

# disable running any hooks; not typically required for simple DHCPv6-PD setup
script /bin/true

# Disable dhcpcd's own router solicitation support; allow slaacd
# to do this instead by setting "inet6 autoconf" in hostname.em0
noipv6rs

# Generate SLAAC address using the Hardware Address of the interface
#slaac hwaddr
# OR generate Stable Private IPv6 Addresses based from the DUID
#slaac private


allowinterfaces enxb827eb579e58
interface enxb827eb579e58
# Address from the /64
ia_na 1
# Request /56 and assign it to other interface
ia_pd 2 enx3c18a0054c1e/1


# Example static IP configuration:
#interface eth0
#static ip_address=192.168.0.10/24
#static ip6_address=fd51:42f8:caae:d92e::ff/64
#static routers=192.168.0.1
#static domain_name_servers=192.168.0.1 8.8.8.8 fd51:42f8:caae:d92e::1

# It is possible to fall back to a static IP if DHCP fails:
# define static profile
#profile static_eth0
#static ip_address=192.168.1.23/24
#static routers=192.168.1.1
#static domain_name_servers=192.168.1.1

# fallback to static profile on eth0
#interface eth0
#fallback static_eth0

SysCTL config to allow routing

Now we need to turn on some kernel options in the sysctl config file:

root@routepi ###~> cat /etc/sysctl.conf 
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1
net.ipv6.conf.enxb827eb579e58.accept_ra=2

The net.ipv4.ip_forward line sets the Pi to forward IPv4 packets.
The net.ipv6.conf.enxb827eb579e58.accept_ra=2 line is important because without it, the routing table won’t be automatically updated with the routes provided by the Aussie Broadband Router Advertisements (RAs). This setting has three possible values:
0 = don't accept RAs
1 = accept RAs if not acting as a router
2 = accept RAs even if acting as a router
I found this info here: http://strugglers.net/~andy/blog/2011/09/04/linux-ipv6-router-advertisements-and-forwarding/
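
These settings can be applied without a reboot by reloading the file:

sudo sysctl -p /etc/sysctl.conf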

Fail2Ban Configuration

It’s important for any internet-facing machine to have some sort of brute-force lockout system. I’m using fail2ban to help secure SSH. However, because I run SSH on port 2, I need a little extra fail2ban config:

root@routepi ###~> cat /etc/fail2ban/jail.d/routepi.local 
[sshd]
port    = 2
ignoreip = 138.80.14.0/24, 10.1.0.0/16
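
After restarting fail2ban, you can check that the sshd jail picked up the custom port and ignore list with fail2ban-client:

sudo systemctl restart fail2ban
sudo fail2ban-client status sshd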

IPTables configuration

Finally, to properly secure incoming traffic, I use IPTables. My iptables config is fairly lax, but covers the basics: it blocks any incoming traffic that isn’t part of a session established from an internal device. I am using the iptables-persistent package to auto-load my iptables rules on boot. It can be installed with:

sudo apt install iptables-persistent
root@routepi ###/> cat /etc/iptables/rules.v4
# Jays IPTables on Pi
# enxb827eb579e58 - WAN interface
# enx3c18a0054c1e - LAN interface
# 10.1.1.13 - TVPi

*filter
:INPUT DROP
:FORWARD DROP
:OUTPUT ACCEPT
## INPUT rules
# ACCEPT related or established connections
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# icmp
-A INPUT -p icmp -j ACCEPT
# DHCP from ISP
-A INPUT -p udp --dport 68 -j ACCEPT
# DHCP for internal
-A INPUT -i enx3c18a0054c1e -p udp --dport 67 -j ACCEPT
# SSH
-A INPUT -p tcp --dport 2 -j ACCEPT
-A INPUT -p udp -m multiport --dports 60001:60099 -j ACCEPT
# DNS - only on internal
-A INPUT -i enx3c18a0054c1e -p udp --dport 53 -j ACCEPT
-A INPUT -i lo              -p udp --dport 53 -j ACCEPT
# We dont participate in mDNS, so drop it without logs
-A INPUT -p udp --dport 5353 -j DROP
# We dont participate in syncthing, so drop it without logs
-A INPUT -p tcp --dport 22000 -j DROP
-A INPUT -p udp --dport 22000 -j DROP
-A INPUT -p udp --dport 21027 -j DROP
# We dont participate in tvheadend, so drop it without logs
-A INPUT -p udp --dport 65001 -j DROP
# We dont participate in uuuggghhh NetBios, so drop it without logs
-A INPUT -p udp -m multiport --dports 137,138,139 -j DROP
-A INPUT -p tcp -m multiport --dports 137,138,139 -j DROP
# We dont participate in igmp, so drop it without logs
-A INPUT -p 2 -j DROP
# Log everything else before it's dropped - limit to 1/s
-A INPUT -i enxb827eb579e58 -m limit --limit 1/second --limit-burst 100 -j LOG --log-prefix :nf4_INPUT_ext_dropped:
-A INPUT -i enx3c18a0054c1e -m limit --limit 1/second --limit-burst 100 -j LOG --log-prefix :nf4_INPUT_int_dropped:
## End INPUT rules

# Forward all packets that are being DNAT'd
-A FORWARD -m conntrack --ctstate DNAT -j ACCEPT
# Forward all packets on the LAN side
-A FORWARD -i enx3c18a0054c1e -j ACCEPT
# Forward active connections on the WAN side
-A FORWARD -i enxb827eb579e58 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT


*nat
:PREROUTING ACCEPT
:INPUT ACCEPT
:POSTROUTING ACCEPT
:OUTPUT ACCEPT

# HTTP(S) Forwarding
-A PREROUTING -m addrtype --dst-type LOCAL -p tcp --dport 80 -j DNAT --to-destination 10.1.1.13:80
-A PREROUTING -m addrtype --dst-type LOCAL -p tcp --dport 443 -j DNAT --to-destination 10.1.1.13:443
# UDP too for QUIC/HTTP3
-A PREROUTING -m addrtype --dst-type LOCAL -p udp --dport 443 -j DNAT --to-destination 10.1.1.13:443

# port 4 goes to TVPi SSH
-A PREROUTING -m addrtype --dst-type LOCAL -p tcp --dport 4 -j DNAT --to-destination 10.1.1.13:4
# MOSH ports
-A PREROUTING -m addrtype --dst-type LOCAL -p udp --dport 60200:60299 -j DNAT --to-destination 10.1.1.13:60200-60299
# port 3 goes to TVPi SSH
-A PREROUTING -m addrtype --dst-type LOCAL -p tcp --dport 3 -j DNAT --to-destination 10.1.1.11:22
# MOSH ports
-A PREROUTING -m addrtype --dst-type LOCAL -p udp --dport 60100:60199 -j DNAT --to-destination 10.1.1.11:60100-60199

# MASQ for packets that are being DNAT'd, so that they go back to the router
-A POSTROUTING -m conntrack --ctstate DNAT -j MASQUERADE

# MASQ (NAT) all packets that are accepted by the forwarding
-A POSTROUTING -o enxb827eb579e58 -j MASQUERADE
-A POSTROUTING -o enx3c18a0054c1e -m conntrack --ctstate RELATED,ESTABLISHED -j MASQUERADE
COMMIT
root@routepi ###/> cat /etc/iptables/rules.v6
# Jays ipv6 config
# enxb827eb579e58 - WAN interface
# enx3c18a0054c1e - LAN interface

*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]

## INPUT rules
# Allow related or established traffic
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# Allow NDP on all interfaces (it's link-local, so pretty safe)
-A INPUT -p icmpv6 --icmpv6-type router-solicitation      -j ACCEPT
-A INPUT -p icmpv6 --icmpv6-type router-advertisement     -j ACCEPT
-A INPUT -p icmpv6 --icmpv6-type neighbour-solicitation   -j ACCEPT
-A INPUT -p icmpv6 --icmpv6-type neighbour-advertisement  -j ACCEPT
-A INPUT -p icmpv6 --icmpv6-type redirect                 -j ACCEPT
-A INPUT -p icmpv6 --icmpv6-type 141                      -j ACCEPT -m comment --comment "inverse NDP" 
-A INPUT -p icmpv6 --icmpv6-type 142                      -j ACCEPT -m comment --comment "inverse NDP"
# Allow internal icmp
-A INPUT -p icmpv6 -i enx3c18a0054c1e -j ACCEPT
# Allow external/internal echo req/resp
-A INPUT -p icmpv6 --icmpv6-type echo-request -j ACCEPT
-A INPUT -p icmpv6 --icmpv6-type echo-reply   -j ACCEPT
# Multicast Receiver Notification messages
-A INPUT -p icmpv6 --icmpv6-type 130 -j ACCEPT -m comment --comment "Listener Query" 
-A INPUT -p icmpv6 --icmpv6-type 131 -j ACCEPT -m comment --comment "Listener Report" 
-A INPUT -p icmpv6 --icmpv6-type 132 -j ACCEPT -m comment --comment "Listener Done" 
-A INPUT -p icmpv6 --icmpv6-type 143 -j ACCEPT -m comment --comment "Listener Report v2" 
# SEND Certificate Path Notification messages
-A INPUT -p icmpv6 --icmpv6-type 148 -j ACCEPT -m comment --comment "Certificate Path Solicitation" 
-A INPUT -p icmpv6 --icmpv6-type 149 -j ACCEPT -m comment --comment "Certificate Path Advertisement"
# Multicast Router Discovery messages
-A INPUT -p icmpv6 --icmpv6-type 151 -j ACCEPT -m comment --comment "Multicast Router Advertisement" 
-A INPUT -p icmpv6 --icmpv6-type 152 -j ACCEPT -m comment --comment "Multicast Router Solicitation" 
-A INPUT -p icmpv6 --icmpv6-type 153 -j ACCEPT -m comment --comment "Multicast Router Termination" 
# Drop fake loopback traffic 
-A INPUT -s ::1/128 ! -i lo -j DROP
# Allow incoming DHCPv6 from ISP
-A INPUT -p udp --dport 546 -j ACCEPT
# SSH
-A INPUT -p tcp --dport 2 -j ACCEPT
-A INPUT -p udp -m multiport --dports 60001:60099 -j ACCEPT
# DNS - only on internal
-A INPUT -i enx3c18a0054c1e -p udp --dport 53 -j ACCEPT
-A INPUT -i lo              -p udp --dport 53 -j ACCEPT
# We dont participate in mDNS, so drop it without logs
-A INPUT -p udp --dport 5353 -j DROP
# We dont participate in syncthing, so drop it without logs
-A INPUT -p tcp --dport 22000 -j DROP
-A INPUT -p udp --dport 22000 -j DROP
-A INPUT -p udp --dport 21027 -j DROP
# We dont participate in tvheadend, so drop it without logs
-A INPUT -p udp --dport 65001 -j DROP
# We dont participate in uuuggghhh NetBios, so drop it without logs
-A INPUT -p udp -m multiport --dports 137,138,139 -j DROP
-A INPUT -p tcp -m multiport --dports 137,138,139 -j DROP
# Log everything else before it's dropped - limit to 1/s
-A INPUT -i enxb827eb579e58 -m limit --limit 1/second --limit-burst 100 -j LOG --log-prefix :nf6_INPUT_ext_dropped:
-A INPUT -i enx3c18a0054c1e -m limit --limit 1/second --limit-burst 100 -j LOG --log-prefix :nf6_INPUT_int_dropped:
## End INPUT rules

# Allow internal traffic out and external traffic in if rel/est
# This should also cover all icmpv6 error messages
-A FORWARD -i enx3c18a0054c1e -j ACCEPT
-A FORWARD -i enxb827eb579e58 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# Allow imcpv6 echo
-A FORWARD -p icmpv6 --icmpv6-type echo-request -j ACCEPT
-A FORWARD -p icmpv6 --icmpv6-type echo-reply   -j ACCEPT

# TVpi - 2403:0000:0000:0000::7
# SSH
-A FORWARD -d 2403:0000:0000:0000::7 -p tcp --dport 4 -j ACCEPT
-A FORWARD -d 2403:0000:0000:0000::7 -p udp --dport 60000:60099 -j ACCEPT
# HTTP/s
-A FORWARD -d 2403:0000:0000:0000::7 -p tcp --dport 80 -j ACCEPT
-A FORWARD -d 2403:0000:0000:0000::7 -p tcp --dport 443 -j ACCEPT
-A FORWARD -d 2403:0000:0000:0000::7 -p udp --dport 443 -j ACCEPT


COMMIT
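
After editing either rules file, the saved rules can be applied without a reboot, for example:

# Re-load everything via the iptables-persistent service
sudo systemctl restart netfilter-persistent
# Or apply a single file directly
sudo iptables-restore < /etc/iptables/rules.v4
sudo ip6tables-restore < /etc/iptables/rules.v6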

Finally – Backing Up

There’s always a risk of SD card failure on a Raspberry Pi, so I make sure to back up all my configuration into a git repository in my home directory. Here is the script I use to do so:

root@routepi ###~/l/router_conf> cat backup_router_conf.fish
#!/usr/bin/env fish

rsync -a /etc/dnsmasq.d ./
rsync -a /etc/iptables ./
rsync -a /etc/NetworkManager ./
rsync -a /etc/sysctl.conf ./
rsync -a /etc/dhcpcd.conf ./
rsync -a /etc/fail2ban/jail.d ./fail2ban/

Any time I make a change, I run the script to back up any changed config files, and then commit + push them to my git server.
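
The commit and push step is just plain git, run from the repository directory after the backup script, something like:

./backup_router_conf.fish
git add -A
git commit -m "Backup router config"
git push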

That’s it! Feel free to leave a comment if you have a similar setup, or suggestions on how to do things better.

How to install Kodi 18.3 on Raspberry Pi 3 with Raspbian 10 Buster

Since the release of Raspbian 10 Buster, I have really enjoyed using it on the Raspberry Pi 2 that I use as my network router. The main features I like are:

  • Newer services and libraries, including:
    • Nginx
    • MariaDB
  • Python 3.7
  • Fish Shell version 3.0.x

However, the Pi 3 I have hooked up to my TV couldn’t be upgraded, because the version of the Kodi media player included with Buster is currently 17.6. That build also doesn’t have proper acceleration for the Raspberry Pi hardware, and doesn’t launch properly without an X11 window manager running. However, I discovered that the Pipplware team has been packaging an improved Kodi version. See more about Pipplware here: http://pipplware.pplware.pt/

I found this guide (https://linuxsuperuser.com/install-latest-version-kodi-raspbian-jessie/) on how to use the pipplware repo on Raspbian Jessie, but it isn’t fully compatible with Buster, so here are the instructions for using it on Buster:

Make a backup of your SD card, so that you can roll back if needed.

https://thepihut.com/blogs/raspberry-pi-tutorials/17789160-backing-up-and-restoring-your-raspberry-pis-sd-card

Add the pipplware list to your APT sources list, noting the buster part of the URL.

ADDED 2019-09-14 – As mentioned by Neil in the comments, before you add the pipplware repository you should uninstall any existing kodi packages, to prevent conflicts. Thanks Neil! You can do this by running:

sudo apt purge kodi kodi-data kodi-bin kodi-repository-kodi 

Now you can add the pipplware repository to your sources list:

sudo bash -c "echo 'deb http://pipplware.pplware.pt/pipplware/dists/buster/main/binary /' >/etc/apt/sources.list.d/pipplware.list"

Add the pipplware key to APT, so that software from their repository is trusted:

wget -O - http://pipplware.pplware.pt/pipplware/key.asc | sudo apt-key add -

Update the APT sources:

sudo apt-get update && sudo apt-get dist-upgrade

You should now be able to install the 18.3 version of Kodi:

sudo apt-get install kodi
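
To confirm which version you ended up with, a standard apt query works:

apt-cache policy kodi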

How to enable BFQ IO Scheduler on Ubuntu 19.04 – with Ansible playbook to set it up

I was trying to enable the BFQ scheduler on my laptop running Ubuntu 19.04, because moving around and creating some large files made my computer basically unusable. After some investigation I found the page on the BFQ scheduler and how it is meant to address these sorts of issues.

There are some instructions on StackExchange on how to do this, so what I’ve done is turn them into an Ansible playbook to make them easier to apply. Here’s the original link: https://unix.stackexchange.com/questions/375600/how-to-enable-and-use-the-bfq-scheduler/376136

Note, this playbook doesn’t edit the /etc/default/grub file, because Ubuntu 19.04 already uses blk-mq by default. The playbook has two tasks: one to load the bfq kernel module at boot, and one to make bfq the default scheduler.
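
A minimal sketch of such a playbook, following the linked StackExchange answer (the exact file paths and the udev device match are assumptions, not the original file), looks like this:

# enable_bfq_playbook.yml (sketch)
- hosts: localhost
  become: yes
  tasks:
    - name: Enable kernel module loading
      copy:
        dest: /etc/modules-load.d/bfq.conf
        content: "bfq\n"

    - name: Enable bfq by default
      copy:
        dest: /etc/udev/rules.d/60-scheduler.rules
        content: |
          ACTION=="add|change", KERNEL=="sd[a-z]*|mmcblk[0-9]*|nvme[0-9]*n[0-9]*", ATTR{queue/scheduler}="bfq"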

To use the playbook, just save it somewhere and run with ansible-playbook, then reboot. For example:

~/l/a/bfq> ls
enable_bfq_playbook.yml
~/l/a/bfq> ansible-playbook enable_bfq_playbook.yml --ask-become-pass 
SUDO password: 
 [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'


PLAY [localhost] *************************************************************************************************************************************

TASK [Gathering Facts] *******************************************************************************************************************************
ok: [localhost]

TASK [Enable kernel module loading] ******************************************************************************************************************
ok: [localhost]

TASK [Enable bfq by default] *************************************************************************************************************************
ok: [localhost]

PLAY RECAP *******************************************************************************************************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=0   

~/l/a/bfq> 
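
After the reboot, you can confirm which scheduler is active for a disk (device name is just an example); the one in use is shown in square brackets:

cat /sys/block/nvme0n1/queue/scheduler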

Running Nextcloud on the Raspberry Pi 3 – Nginx Reverse Proxy, fixes for upload timeouts and more

Under my TV I have a Raspberry Pi 3 running Kodi, which works great for watching/listening to all my media. However, with it idling and always-on, I thought it would be great to be able to use it as a Nextcloud server as well. Here are some details on my setup, as well as some fixes for errors I encountered. Diving right in, here are details of my Pi’s OS install:

root@tvpi ###~> uname -a
Linux tvpi 4.19.57-v7+ #1244 SMP Thu Jul 4 18:45:25 BST 2019 armv7l GNU/Linux
root@tvpi ###~> lsb_release -a
No LSB modules are available.
Distributor ID:	Raspbian
Description:	Raspbian GNU/Linux 9.9 (stretch)
Release:	9.9
Codename:	stretch
root@tvpi ###/etc> docker --version
Docker version 19.03.1, build 74b1e89

The setup I’m going for is:
HTTPS -> nginx proxy (in raspbian) -> nextcloud instance (in docker)

Setting up Nginx as a reverse proxy

The first part of the setup is to get nginx operating as a reverse proxy. The nginx server will do several things:

  • Redirect http traffic to https
  • Terminate https TLS traffic, and then proxy the traffic via http to the nextcloud server running in docker
  • Split traffic up based on the hostname used – to allow me to run other sites from the same Pi

To get a secure TLS configuration I am using Mozilla’s fantastic SSL Configuration Generator https://ssl-config.mozilla.org/. You then need some reverse proxy configuration, including setting X-Forwarded headers to allow the Nextcloud instance to block IPs that are trying to brute force, and to detect the correct hostname:

    location / {
        proxy_pass http://localhost:8088;
    }
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Proto 'https';
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

With my original setup I was seeing errors like:
"jtuckey/files/Documents/testfile.bin.upload.part" is locked
which would show as a 423 error in the Firefox dev-tools. According to Wikipedia this error is 423 Locked (WebDAV; RFC 4918).

Below are the fixes I used for two upload issues: the maximum size of file uploads, and large uploads timing out. Nextcloud uploads large files in 10M chunks, and then re-combines them at the end into a single file. However, because of the Pi’s slow USB2 disk connection, the re-combining can cause a timeout in the Nginx reverse proxy, which then retries the MOVE request, to which Nextcloud responds that the file is locked (it is already re-combining the file).

Also, when using the “file drop (upload only)” feature, the uploads are not chunked, so to use that feature with large files you will need to set nginx’s max upload size to a very large value:

    # Allow uploads for NC - within the file app they upload in 10M chunks, however using file-drop
    # you need a value that is large enough for your largest file size
    client_max_body_size 5000M;

    # Allow long running connections, so NC can re-combine large files on a slow pi
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;

The final nginx configuration I have combines the Mozilla-generated TLS settings with the proxy settings above.
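
A condensed sketch of that config (certificate paths and the exact TLS parameters are placeholders; the rest comes from the snippets above):

server {
    listen 80;
    listen [::]:80;
    server_name nc.jaytuckey.name nextcloud.jaytuckey.duckdns.org;
    # Redirect http traffic to https
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name nc.jaytuckey.name nextcloud.jaytuckey.duckdns.org;

    # TLS settings generated with https://ssl-config.mozilla.org/ (paths are placeholders)
    ssl_certificate     /etc/letsencrypt/live/nc.jaytuckey.name/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/nc.jaytuckey.name/privkey.pem;

    # Allow uploads for NC - large enough for the biggest file-drop upload
    client_max_body_size 5000M;

    # Allow long running connections, so NC can re-combine large files on a slow pi
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;

    # Headers so Nextcloud sees the real client IP, hostname and scheme
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Proto 'https';
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    location / {
        proxy_pass http://localhost:8088;
    }
}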

Running docker from an external USB SSD

To get extra storage and performance, I use a USB-attached SSD connected to my Pi. This external disk is mounted at /mnt/tosh. To have docker put its storage on the external disk, I created a symbolic link from /var/lib/docker to the external disk:

root@tvpi ###/v/lib> pwd
/var/lib
root@tvpi ###/v/lib> ls -lah docker
lrwxrwxrwx 1 root root 18 Jun  9 09:11 docker -> /mnt/tosh/dockerd/

Make sure you haven’t got any data in docker that you want to keep, like existing volumes. Once you are happy, rename/remove the old /var/lib/docker directory and link it to the path you want to use. To create the link you can run:

root@tvpi ###/v/lib> ln -s /mnt/tosh/dockerd/ docker

Running the Nextcloud Docker image via docker-compose

The next part of the setup is to run the nextcloud docker image. The best way I found to do this was to use docker-compose. However, docker-compose doesn’t have a ready-made install for Raspbian because it is arm32, so instead I installed it using pip, with the help of pyenv to install the latest version of python and pip. Pyenv is available here: https://github.com/pyenv/pyenv. To get the build requirements set up I had to run this line from the https://github.com/pyenv/pyenv/wiki page:

sudo apt-get update; sudo apt-get install --no-install-recommends make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev
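
With pyenv set up, installing docker-compose is just pip inside a recent python (the version number here is only an example):

pyenv install 3.7.4
pyenv shell 3.7.4
pip install docker-compose
docker-compose --version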

Once docker-compose is installed, I was able to use my docker-compose.yml file.
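
A trimmed-down sketch of such a compose file (the service name, volume names, paths and subnet are assumptions based on the notes below, not the original file):

# docker-compose.yml (sketch)
version: '2'

services:
  app:
    image: nextcloud
    restart: always
    ports:
      - 8088:80            # nginx proxies https traffic to localhost:8088
    volumes:
      - config:/var/www/html/config
      - data:/var/www/html/data
      # extra apache config to extend the timeout for re-combining large uploads
      - ./apache_timeout.conf:/etc/apache2/conf-enabled/apache_timeout.conf:ro
      # extra external storage, synced between devices with Syncthing
      - /mnt/tosh/syncthing:/external
    networks:
      - ncnet

networks:
  ncnet:
    ipam:
      config:
        - subnet: 172.16.0.0/24

volumes:
  config:
  data: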

Note the volume mapping for the extra config file that extends the apache2 timeout, to help with re-combining large uploads. These are the contents of the apache_timeout.conf file:

# Timeout: The number of seconds before receives and sends time out.
Timeout 3600

Also take note of the networks section, where I set the IP Address Management (IPAM) subnet. This allows setting the correct trusted_proxies value in the nextcloud config.php file.

Finally, also note the volume where I map in some extra external storage. I use Syncthing to do device synchronisation, because I find its syncing to be more reliable and efficient.

Running the Nextcloud cron job inside the docker container with a systemd timer

One limitation of the Nextcloud docker image is that it doesn’t run cron within the container, so the nextcloud cron job isn’t regularly executed. To fix this, you need to run the job on a schedule on the host, executing the cron.php file within the container. You can use cron for this, but I chose to use a systemd timer instead, which gives better logging and status checking. To create the config I used the excellent Arch documentation here: https://wiki.archlinux.org/index.php/Systemd/Timers. The nextcloud_cron.service runs cron.php inside the container, and the nextcloud_cron.timer triggers it on a schedule.
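
A minimal sketch of the two unit files (the container name nextcloud_app_1 and the 5-minute interval are assumptions):

# /etc/systemd/system/nextcloud_cron.service
[Unit]
Description=Run Nextcloud cron.php inside the docker container

[Service]
Type=oneshot
ExecStart=/usr/bin/docker exec -u www-data nextcloud_app_1 php /var/www/html/cron.php

# /etc/systemd/system/nextcloud_cron.timer
[Unit]
Description=Run nextcloud_cron.service every 5 minutes

[Timer]
OnCalendar=*:0/5
Persistent=true

[Install]
WantedBy=timers.target

The timer is then enabled with systemctl enable --now nextcloud_cron.timer, and you can check it with systemctl list-timers.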

Setting the trusted_domains and trusted_proxies values in config.php

The final part of the configuration is to add a couple of extra values to the nextcloud config.php file. These should be added once you have started up the nextcloud container and run through the setup process. Once that’s complete, navigate to the nextcloud_config volume (usually at /var/lib/docker/volumes/nextcloud_config/_data) and add the extra values. Here’s what I added:

  'trusted_domains' =>  
  array (
    0 => 'localhost:8088',
    1 => 'nextcloud.jaytuckey.duckdns.org',
    2 => 'nc.jaytuckey.name',
  ),  
  'trusted_proxies' =>  
  array (
    0 => '172.16.0.1',
  ),  

Note that the trusted_proxies value (the Docker network’s gateway address, which is where the proxied traffic appears to come from) sits inside the subnet we set in the docker-compose.yml file.

Bonus Points – An Ansible Playbook to configure most of these steps

To manage some of my configuration, I have created an ansible playbook to help set things up. Here’s my playbook:

How to get VMWare Remote Console to install on Kubuntu 19.04 – Probably works on other Ubuntu Versions also

When trying to set up my new Kubuntu install I went to install the VMware Remote Console, and to my annoyance the installer would fail without any feedback on what was going wrong. Here’s what I needed to do to fix it. To start with, some details of my machine and setup:

Date: 2019-05-24
Machine details from inxi:
/m/u/h/j/S/notes> inxi
 CPU: Dual Core Intel Core i5-6200U (-MT MCP-) speed/min/max: 1468/400/2800 MHz Kernel: 5.0.0-15-generic x86_64 Up: 1h 49m 
 Mem: 4538.0/7719.6 MiB (58.8%) Storage: 238.47 GiB (54.3% used) Procs: 271 Shell: fish 3.0.2 inxi: 3.0.33

VMWare Remote Console Version: 10.0.4
VMware-Remote-Console-10.0.4-11818843.x86_64.bundle

To start with, running the installer from the command line I was getting this error:

~/d> sudo ./VMware-Remote-Console-10.0.4-11818843.x86_64.bundle --console
Extracting VMware Installer...done.
User interface initialization failed.  Exiting.  Check the log for details.

It wasn’t even clear which log file it meant, but after doing some searching on DuckDuckGo I discovered the log file for the console installer is /var/log/vmware-installer. Therefore, to get an idea of the issues, I tailed that log like this:

root@jtuckey-x1-ubu:/var/log# tail -f /var/log/vmware-installer

Tailing that log showed me that there were some issues with missing libraries. To get the installer to run through I needed to install the following packages (the apt command is shown after the list):

  • libncursesw5 – for the console interface
  • desktop-file-utils – fixes an error caused when the installer tries to run the update-desktop-database command from this package
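
On Ubuntu both can be installed in one go:

sudo apt install libncursesw5 desktop-file-utils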

After this I was able to get the installer to run through completely. Hope this helps someone.

How to Play Age of Empires 3 on Ubuntu Linux using Steam Play – Solving the “Invalid CD Key – Error loading the PID Generator DLL”

This guide is based on one I found here: https://verybadfrags.com/2019/04/14/play-age-of-empires-iii-on-linux/

It was done with the following system:

Date: 2019-04-20
OS: Ubuntu 19.04
Steam Play Version: 4.2-3
Graphics: Nvidia GTX 860m with proprietary driver

This is to fix the error that you get when trying to run Age of Empires III: “Invalid CD Key! – Error loading the PID Generator DLL. The DLL could not be found! Please make sure the file is available in the installation directory and try again.”

  1. Enable steam play for all titles. Go to “Settings” -> “Steam Play” -> check “Enable Steam Play for all other titles”
  2. Launch Steam and install ‘Age of Empires III: Complete Collection’
  3. Run AoE3 for the first time, and let it perform the first-time setup. Once you get to the “Product Key” box, click “Cancel”
  4. You now need to install “winetricks.” See https://github.com/Winetricks/winetricks. The easiest way to install on Ubuntu is to run sudo apt install winetricks
  5. You also need to install “protontricks”, which is a wrapper for winetricks that runs it against Steam Play installations. The version in the original article is out of date; the newer version is here: https://github.com/Matoking/protontricks. To install it, run:
    1. sudo apt install python3-pip python3-setuptools python3-venv
    2. python3 -m pip install --user pipx
    3. ~/.local/bin/pipx ensurepath
    4. pipx install protontricks
  6. Now install the extra dependencies with protontricks: protontricks 105450 mfc42 winxp l3codecx corefonts
  7. Now you can relaunch the game and enter the CD Key.


How to Clean up VMware Horizon View pools without vCenter online and remove missing desktop pool from Global Entitlement using ADSI Edit

Recently I had to do a cleanup of a VMware Horizon 7 connection server, which involved removing all the existing desktop pools and recreating them. The trouble was, the old vCenter server had been removed, so when I tried to delete the pools using the Horizon Administrator Console, I got the error:

Server Error
Unable to connect to the vCenter Server

To fix this, I did the following:

Get the list of VMs you want to remove

Using PowerCLI I was able to get a list of machines in the pool I wanted to remove. Install PowerCLI from the documentation here: https://docs.vmware.com/en/VMware-Horizon-7/7.6/horizon-integration/GUID-7C7C5239-6990-47E0-B9FB-29EC0EB0F5AC.html

Make sure to also install the VMware.Hv.Helper module from here https://github.com/vmware/PowerCLI-Example-Scripts by copying it into the C:\Program Files\WindowsPowerShell\Modules folder.
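
Connecting to the connection server looks something like this (the server name is a placeholder, and the module names assume a standard PowerCLI install):

Import-Module VMware.VimAutomation.HorizonView
Import-Module VMware.Hv.Helper
Connect-HVServer -Server view-cs.example.com -Credential (Get-Credential)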

Then, after connecting to the Horizon View server, get a list of VMs:

# Get all HV Machines
$ms = Get-HVMachine

# Show the HV Machine Names
$ms[0..2] | select -ExpandProperty Base | select Name

# Select just the machines you want
$to_remove = $ms | ?{$_.Base.Name -match '17a6-clst-p...'}

# To view a list of the machine names:
$to_remove | Select -ExpandProperty Base | Select name

# And export the list to a csv
$to_remove | Select -ExpandProperty Base | Select name | Export-Csv -NoTypeInformation ~\Downloads\to_remove.csv

Use SviConfig to delete machine records from the Horizon View Composer

This is from the documentation here: https://docs.vmware.com/en/VMware-Horizon-7/7.6/horizon-virtual-desktops/GUID-F0D595CB-4E7B-4DAE-B80B-DCDCE85E8DF2.html

Once you have copied the CSV to the composer server, you have to use the list to delete all the composer records for the machines.

# Run in an admin Powershell from path C:\Program Files (x86)\VMware\VMware View Composer
# Import the CSV
$to_delete = Import-Csv ~\Downloads\to_remove.csv

# Delete each item in the list
$to_delete | %{.\SviConfig.exe -operation=removesviclone -vmname="$($_.Name)" -AdminUser=<put admin account here> -AdminPassword="<put admin password here>"}

Remove the pool from any Global Entitlements

Important — This has to be done before removing the pool using ADSI Edit, in the next step. However, if you mess this up, as I did, I have a workaround as the last step.

To do this, just go into each global entitlement and remove the pool you are cleaning up. If you don’t do this, the global entitlement will still report (for example) 2 pools in it, but when you open it to remove the pool, the local pool won’t be listed.

Delete the pool using ADSI edit

To delete the actual pool and machine entries, follow the guide here: https://kb.vmware.com/s/article/2015112

The short version is: open ADSI Edit and connect to server localhost, with the Distinguished Name/Naming Context dc=vdi,dc=vmware,dc=int.

You then create a query with the root of the search being OU=Servers,DC=vdi,DC=vmware,DC=int and the query string being (&(objectClass=pae-VM)(pae-displayname=17a6-clst-p*))

You can then check through the Applications and Server Groups OUs to find the pool and delete it. However, make sure you have removed the pool from any global entitlements first.

Workaround – Delete Global Entitlement Local Pool member using ADSI edit

To do this, open ADSI edit on the connection server, and choose “Connect to”. Use localhost:22389 as the server, and DC=vdiglobal,DC=vmware,DC=int as the Distinguished Name/Naming Context.

Create a new query, as you did with deleting the pool using ADSI edit. This query should be in the new connection, and have the settings:

Name: Find global entitlement
Root of search: OU=Entitlements,DC=vdiglobal,DC=vmware,DC=int
Query String: (&(objectClass=pae-GlobalAssignment)(pae-LocalEntitlement=*17a6-clst-p*))

Note that the name of the local pool you haven’t removed from the global entitlement is in the query between the * characters.

This should give you one item, of type pae-GlobalAssignment. Open it up, and make sure the pae-LocalEntitlement attribute matches what you want to delete. If so, delete it.

Your global entitlement should now show the correct number of local pools.

Using xonsh shell with pyenv on Ubuntu 18.04 – and a few errors I had getting source-bash to work 😉

When I heard about the xonsh project – https://xon.sh I thought it sounded great, as I really enjoy the python language and am much more comfortable using python to write scripts than I am using bash. I also find the bash syntax very confusing, and am never sure of the right way to write a script or use a variable.

So, I decided to install it and give it a try! When using python on Linux I always use the pyenv tool to install and manage my python versions, as well as my virtualenvs. This tool allows me to easily install the latest version of python, as well as create a virtualenv for a specific project and keep all the packages that project uses separate from each other and my system python. It’s a great tool, and I can highly recommend using it.

pyenv installs all python versions into your user profile, and allows you to select between them on the command line using:

pyenv shell 3.7.0

To get all the pyenv commands, along with the pyenv command completion (which is very good), you add an init command to the end of your .bashrc file. This is how it looks in mine:

# Load pyenv automatically by adding
# the following to ~/.bash_profile:

export PATH="/home/jtuckey/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

pyenv shell 3.7.0

Excellent! When running in bash this sets up pyenv, then sets my shell to use python 3.7.0. So let’s install xonsh:

jtuckey@jay-x1-carbon:~$ pip install --user xonsh
Collecting xonsh
  Using cached https://files.pythonhosted.org/packages/58/16/fce8ecc880a44dfcb06b22f638df0b6982ad588eb0f2080fbd5218a42340/xonsh-0.8.0.tar.gz
Installing collected packages: xonsh
  Running setup.py install for xonsh ... done
Successfully installed xonsh-0.8.0

Looking good so far! I can now use the xonsh command to hop into a xonsh shell, and then use all the goodness of a python-based shell:

jtuckey@jay-x1-carbon:~$ xonsh
jtuckey@jay-x1-carbon ~ $ ${'XONSH_VERSION'}
'0.8.0'
jtuckey@jay-x1-carbon ~ $ history info
backend: json
sessionid: 30e7e6d4-4945-4f99-9b65-d38a2ab61da2
filename: /home/jtuckey/.local/share/xonsh/xonsh-30e7e6d4-4945-4f99-9b65-d38a2ab61da2.json
length: 1
buffersize: 100
bufferlength: 1
gc options: (8128, 'commands')

Now that I have it installed and am getting all the good features, I naturally wanted it as my default shell. My first attempt at this was to simply put it at the end of my .bashrc file:

export PATH="/home/jtuckey/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

pyenv shell 3.7.0

xonsh

This works! However, to quit the shell, I need to exit twice:

jtuckey@jay-x1-carbon ~ <xonsh>$ exit
jtuckey@jay-x1-carbon:~$ exit

Ok, maybe let’s put it in an auto-exiting block. Back to the .bashrc:

# Load pyenv automatically by adding
# the following to ~/.bash_profile:

export PATH="/home/jtuckey/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

pyenv shell 3.7.0

# Start xonsh shell
if xonsh; then
  exit
fi

This works even better. Now it will automatically exit when I’m done using xonsh, but if it can’t find xonsh it will stay in bash. However, with this setup, when I try to do some work that involves activating one of my pyenv virtualenvs, I get an error:

jtuckey@jay-x1-carbon ~ <xonsh>$ pyenv activate godev

Failed to activate virtualenv.

Perhaps pyenv-virtualenv has not been loaded into your shell properly.
Please restart current shell and try again.

Ok, so clearly the pyenv setup chunk from .bashrc isn’t carrying across nicely into xonsh. The path looks alright:

jtuckey@jay-x1-carbon ~ <xonsh>$ $PATH
\EnvPath(
['/home/jtuckey/.pyenv/plugins/pyenv-virtualenv/shims',
 '/home/jtuckey/.pyenv/shims',
 '/home/jtuckey/.pyenv/bin',
 '/home/jtuckey/.pyenv/plugins/pyenv-virtualenv/shims',
 '/home/jtuckey/.pyenv/shims',
 '/home/jtuckey/.pyenv/bin',
 '/home/jtuckey/.local/bin',
 '/usr/local/sbin',
 '/usr/local/bin',
 '/usr/sbin',
 '/usr/bin',
 '/sbin',
 '/bin',
 '/usr/games',
 '/usr/local/games',
 '/snap/bin']
)

Let’s have a look at what those two pyenv init commands do. They generate a bit of code using pyenv init - and pyenv virtualenv-init -, and then evaluate it in the local scope. We can easily see the code that’s being generated:

jtuckey@jay-x1-carbon ~ <xonsh>$ pyenv init -
export PATH="/home/jtuckey/.pyenv/shims:${PATH}"
export PYENV_SHELL=python3.7
command pyenv rehash 2>/dev/null
pyenv() {
  local command
  command="${1:-}"
  if [ "$#" -gt 0 ]; then
    shift
  fi

  case "$command" in
  activate|deactivate|rehash|shell)
    eval "$(pyenv "sh-$command" "$@")";;
  *)
    command pyenv "$command" "$@";;
  esac
}
jtuckey@jay-x1-carbon ~ <xonsh>$ pyenv virtualenv-init -
export PATH="/home/jtuckey/.pyenv/plugins/pyenv-virtualenv/shims:${PATH}";
export PYENV_VIRTUALENV_INIT=1;
_pyenv_virtualenv_hook() {
  local ret=$?
  if [ -n "$VIRTUAL_ENV" ]; then
    eval "$(pyenv sh-activate --quiet || pyenv sh-deactivate --quiet || true)" || true
  else
    eval "$(pyenv sh-activate --quiet || true)" || true
  fi
  return $ret
};
if ! [[ "$PROMPT_COMMAND" =~ _pyenv_virtualenv_hook ]]; then
  PROMPT_COMMAND="_pyenv_virtualenv_hook;$PROMPT_COMMAND";
fi

Ok, so if we can load these as bash, we can get all the commands correctly into xonsh. Fortunately there is a command in xonsh just for this:

jtuckey@jay-x1-carbon ~ <xonsh>$ source-bash --help
usage: source-foreign [-h] [-i INTERACTIVE] [-l LOGIN] [--envcmd ENVCMD]
                      [--aliascmd ALIASCMD] [--extra-args EXTRA_ARGS]
                      [-s SAFE] [-p PREVCMD] [--postcmd POSTCMD]
                      [--funcscmd FUNCSCMD] [--sourcer SOURCER]
                      [--use-tmpfile USE_TMPFILE]
                      [--seterrprevcmd SETERRPREVCMD]
                      [--seterrpostcmd SETERRPOSTCMD] [--overwrite-aliases]
                      [--suppress-skip-message] [--show] [-d]
                      shell files_or_code [files_or_code ...]

Sources a file written in a foreign shell language.

positional arguments:
  shell                 Name or path to the foreign shell
  files_or_code         file paths to source or code in the target language.
...

jtuckey@jay-x1-carbon ~ <xonsh>$ aliases
xonsh.aliases.Aliases(
{...
 'source-bash': ['source-foreign', 'bash', '--sourcer=source']
...}

So we should be able to just source the output of the pyenv commands. Let’s try it:

jtuckey@jay-x1-carbon ~ <xonsh>$ source-bash $(pyenv init - bash)
# Shell just hangs at this point....

Sooo…. that’s not working. My shell just locks up. I can’t even Ctrl-C to kill it.

What’s going on here? After a bit of investigation, what seems to be happening is this: when you use source-bash, the bash shell launched to run the source runs the .bashrc file, which puts it back into xonsh, where it gets stuck. This makes sense, although it’s hard to work out. To test this, let’s get rid of the xonsh lines from .bashrc and see if it works correctly then.

# Load pyenv automatically by adding
# the following to ~/.bash_profile:

export PATH="/home/jtuckey/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

pyenv shell 3.7.0

# Start xonsh shell
#if xonsh; then
#  exit
#fi

And try running it manually:

jtuckey@jay-x1-carbon:~$ xonsh
jtuckey@jay-x1-carbon ~ <xonsh>$ source-bash $(pyenv init - bash)
Skipping application of 'll' alias from 'bash' since it shares a name with an existing xonsh alias. Use "--overwrite-alias" option to apply it anyway.You may prevent this message with "--suppress-skip-message" or "$FOREIGN_ALIASES_SUPPRESS_SKIP_MESSAGE = True".
Skipping application of 'ls' alias from 'bash' since it shares a name with an existing xonsh alias. Use "--overwrite-alias" option to apply it anyway.You may prevent this message with "--suppress-skip-message" or "$FOREIGN_ALIASES_SUPPRESS_SKIP_MESSAGE = True".
jtuckey@jay-x1-carbon ~ <xonsh>$ 

Looking better. Now I’m back where I started, though, so how can I run xonsh as my shell? From the xonsh tutorial there is this line:

Alternatively, you can setup your terminal emulator (xterm, gnome-terminal, etc) to run xonsh automatically when it starts up. This is recommended.

However, in my gnome-terminal if I try setting xonsh as the command it can’t find xonsh. This is because the python 3.7 environment where xonsh lives is initialised during bash startup, not gui startup, so my gnome-terminal can’t find the xonsh command.

Ok, so how do I put something into my whole user environment? That’s what the .profile file in your user profile is for. Let’s put the pyenv init stuff into the end of my .profile file:

# Load pyenv automatically by adding
# the following to ~/.bash_profile:

export PATH="/home/jtuckey/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

pyenv shell 3.7.0

Now with a logoff and back on, let’s see what happens. Now when I set gnome-terminal to run xonsh, it works. However:

jtuckey@jay-x1-carbon ~ <xonsh>$ pyenv activate godev

Failed to activate virtualenv.

Perhaps pyenv-virtualenv has not been loaded into your shell properly.
Please restart current shell and try again.


jtuckey@jay-x1-carbon ~ <xonsh>$ source-bash $(pyenv virtualenv-init - bash)
Skipping application of 'll' alias from 'bash' since it shares a name with an existing xonsh alias. Use "--overwrite-alias" option to apply it anyway.You may prevent this message with "--suppress-skip-message" or "$FOREIGN_ALIASES_SUPPRESS_SKIP_MESSAGE = True".
Skipping application of 'ls' alias from 'bash' since it shares a name with an existing xonsh alias. Use "--overwrite-alias" option to apply it anyway.You may prevent this message with "--suppress-skip-message" or "$FOREIGN_ALIASES_SUPPRESS_SKIP_MESSAGE = True".
jtuckey@jay-x1-carbon ~ <xonsh>$ pyenv activate godev

Failed to activate virtualenv.

Perhaps pyenv-virtualenv has not been loaded into your shell properly.
Please restart current shell and try again.

Hmmm, still not quite there, but closer. After some further research, I found this on a page about virtual environments and xonsh.

The usual tools for creating Python virtual environments—venv, virtualenv, pew—don’t play well with xonsh. … Luckily, xonsh ships with its own virtual environments manager called Vox.

https://xon.sh/python_virtual_environments.html

Sounds like vox is the way to go. To use vox you just need to load the xontrib. However, by default it doesn’t look for virtualenvs in the pyenv path; this is easily fixed by setting the $VIRTUALENV_HOME variable:

jay@jay-alienware-ubuntu:~:x$ xontrib load vox                                                                               
jay@jay-alienware-ubuntu:~:x$ vox list                                                                                       
No environments available. Create one with "vox new".

jay@jay-alienware-ubuntu:~:x$ $VIRTUALENV_HOME = $HOME +'/.pyenv/versions'                                                   
jay@jay-alienware-ubuntu:~:x$ vox list                                                                                       
Available environments:
3.6.2/envs/test-install
3.6.2/envs/tests
3.6.4/envs/mdb
3.7.0/envs/xonshdev
jay@jay-alienware-ubuntu:~:x$ vox activate tests                                                                             
Activated "tests".

(tests) jay@jay-alienware-ubuntu:~:x$ vox deactivate                                                                         
Deactivated "tests".

jay@jay-alienware-ubuntu:~:x$  

Be aware that creating a virtualenv through vox will just use your currently set python version from pyenv. However, you can still use pyenv to create a new virtualenv of any version, and then use vox to activate it:

jay@jay-alienware-ubuntu:~:x$ pyenv virtualenv 3.6.4 newenv                                                                  
Requirement already satisfied: setuptools in /home/jay/.pyenv/versions/3.6.4/envs/newenv/lib/python3.6/site-packages
Requirement already satisfied: pip in /home/jay/.pyenv/versions/3.6.4/envs/newenv/lib/python3.6/site-packages
jay@jay-alienware-ubuntu:~:x$ vox activate newenv                                                                            
Activated "newenv".

(newenv) jay@jay-alienware-ubuntu:~:x$  

So I now have a fully working pyenv+xonsh setup! The only thing that’s missing is proper tab-completion of the pyenv commands, which is something I may look into in the future, but for now I’m able to do my work easily without leaving the xonsh shell, which is great!
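
To avoid re-typing those two lines in every session, they can go into ~/.xonshrc, xonsh’s run control file:

# ~/.xonshrc
xontrib load vox
$VIRTUALENV_HOME = $HOME + '/.pyenv/versions'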