
My Personal Nextcloud Setup

I run nextcloud on my home server. Its primary purpose is file sync: I use it to back up the photos from my Android phone, and I also keep the Notes app synchronised with my phone. Here’s a bottom-up description of the server setup.

The Server Hardware

The server is an old desktop machine, with multiple disks attached. The server runs Ubuntu 20.04 with docker.io and docker-compose installed from the built-in repositories:

root@bigtuckey #~[12:56]> apt list docker.io
Listing... Done
docker.io/focal-updates,focal-security,now 20.10.7-0ubuntu1~20.04.1 amd64 [installed]
N: There is 1 additional version. Please use the '-a' switch to see it
root@bigtuckey #~[12:57]> apt list docker-compose
Listing... Done
docker-compose/focal,focal,now 1.25.0-1 all [installed]

I then use a ZFS array to store all my docker data. This array is made of two spinning disks in a mirror, which allows for a disk failure without downtime:

root@bigtuckey #~[2][12:58]> zpool status dockerdata
  pool: dockerdata
 state: ONLINE
  scan: scrub repaired 0B in 0 days 01:41:19 with 0 errors on Sun Aug  8 02:05:21 2021
config:

        NAME                                     STATE     READ WRITE CKSUM
        dockerdata                               ONLINE       0     0     0
          mirror-0                               ONLINE       0     0     0
            scsi-SATA_HITACHI_HUA72202_BFGL9ZXF  ONLINE       0     0     0
            scsi-SATA_HITACHI_HUA72302_YGHZA19A  ONLINE       0     0     0

errors: No known data errors
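For reference, a mirror pool like this one can be created with a single command. This is a reconstruction from the status output above (with ashift=12 added as an assumption for 4K-sector disks) and it is destructive, so only run something like it on empty disks:

```shell
# Create a two-disk mirror pool named dockerdata.
# Disk IDs taken from the zpool status output above; ashift=12 is an assumption.
zpool create -o ashift=12 dockerdata mirror \
    /dev/disk/by-id/scsi-SATA_HITACHI_HUA72202_BFGL9ZXF \
    /dev/disk/by-id/scsi-SATA_HITACHI_HUA72302_YGHZA19A
```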

To optimise the ZFS settings for the array I have followed Jim Salter’s tuning guide here: https://jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/
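The tuned values are set once on the pool root so that child datasets inherit them – the zfs get output later in this post shows compression, atime and xattr all “inherited from dockerdata”. A sketch of how those would have been set:

```shell
# Pool-wide tuning; these exact values appear as "inherited from dockerdata"
# in the zfs get output below.
zfs set compression=lz4 dockerdata
zfs set atime=off dockerdata
zfs set xattr=sa dockerdata
```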

I then use a ZFS filesystem mounted at /var/lib/docker/ to actually store the data:

root@bigtuckey #~[13:01]> zfs get mountpoint dockerdata/var/lib/docker
NAME                       PROPERTY    VALUE            SOURCE
dockerdata/var/lib/docker  mountpoint  /var/lib/docker  local

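A nested dataset like this can be created with a couple of commands (a sketch; the -p flag creates the intermediate dockerdata/var and dockerdata/var/lib datasets, and the explicit mountpoint matches the “local” SOURCE shown above):

```shell
# Create the dataset, including any missing parent datasets
zfs create -p dockerdata/var/lib/docker
# Set the mountpoint explicitly (shows up as SOURCE "local" in zfs get)
zfs set mountpoint=/var/lib/docker dockerdata/var/lib/docker
```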
And a second ZFS filesystem to store the docker volumes, which is where all the important data actually lives. Here are some interesting values – note that I use both dedup and compression to save space, which has always worked nicely for me:

root@bigtuckey #~[13:00]> zfs get all dockerdata/var/lib/docker/volumes | grep -v default
NAME                               PROPERTY              VALUE                    SOURCE
dockerdata/var/lib/docker/volumes  type                  filesystem               -
dockerdata/var/lib/docker/volumes  creation              Sat Jul  4  7:23 2020    -
dockerdata/var/lib/docker/volumes  used                  371G                     -
dockerdata/var/lib/docker/volumes  available             1.49T                    -
dockerdata/var/lib/docker/volumes  referenced            362G                     -
dockerdata/var/lib/docker/volumes  compressratio         1.05x                    -
dockerdata/var/lib/docker/volumes  mounted               yes                      -
dockerdata/var/lib/docker/volumes  mountpoint            /var/lib/docker/volumes  inherited from dockerdata/var/lib/docker
dockerdata/var/lib/docker/volumes  compression           lz4                      inherited from dockerdata
dockerdata/var/lib/docker/volumes  atime                 off                      inherited from dockerdata
dockerdata/var/lib/docker/volumes  createtxg             2251187                  -
dockerdata/var/lib/docker/volumes  xattr                 sa                       inherited from dockerdata
dockerdata/var/lib/docker/volumes  dedup                 on                       inherited from dockerdata
dockerdata/var/lib/docker/volumes  refcompressratio      1.03x                    -
dockerdata/var/lib/docker/volumes  written               13.4M                    -
dockerdata/var/lib/docker/volumes  logicalused           391G                     -
dockerdata/var/lib/docker/volumes  logicalreferenced     376G                     -
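With dedup on, the pool-wide dedup ratio is worth keeping an eye on, since the dedup table costs RAM. It shows up in the DEDUP column of zpool list:

```shell
# The default zpool list columns include a pool-wide DEDUP ratio
zpool list dockerdata
```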

Data Recovery / Backups

My data safety relies on two pieces – I keep regular snapshots using Jim Salter’s sanoid tool, and I make offsite backups using restic combined with rclone to talk to onedrive.

Sanoid config:

root@bigtuckey #/e/sanoid[13:05]> cat /etc/sanoid/sanoid.conf

[dockerdata/var/lib/docker/volumes]
    hourly = 0
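The config above is only shown in part. For reference, a typical sanoid policy uses a template section to hold the retention and automation settings – the values below are illustrative, not my actual config:

```ini
[dockerdata/var/lib/docker/volumes]
    use_template = production
    hourly = 0

[template_production]
    daily = 30
    monthly = 3
    autosnap = yes
    autoprune = yes
```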

And then my restic backup script looks like this:

~/l/bin_scripts (master)[13:07]> cat backup_bigtuckey.fish 
#!/usr/bin/env fish

argparse --name=backup_bigtuckey 'h/help' 'f/force' 's/skip-online' -- $argv

if set --query _flag_help
  echo "run a backup of bigtuckey
pass -f / --force to do a force run
pass -s / --skip-online to skip running the online backups"
  exit 0
end

if set --query _flag_force
  echo "force is on"
  set FORCEFLAG --force
end

if set --query _flag_skip_online
  echo "skip-online flag is on"
  set SKIPONLINE true
end


## HEAD START
set -p PATH /root/.local/gitbin /root/.local/bin

set --export RESTIC_PASSWORD_FILE '/root/.ssh/restic_pass'
set --export RESTIC_REPOSITORY 'rclone:onedrive:personal/backups/restic/bigtuckey'
set --export RCLONE_RETRIES '10'
set --export RCLONE_RETRIES_SLEEP '1m'
set --export RCLONE_BWLIMIT 1500

# Status outputting for monitor to pick up
set statusfile /var/log/lastbackupcode
set overallstatus 0

function max
  echo Saving status code $argv[1]
  if test $argv[1] -gt $overallstatus
    set overallstatus $argv[1]
  end
  echo current overallstatus: $overallstatus
end

# define func to allow setting restic flags
function rbak
  if not set --query SKIPONLINE
    echo running: restic backup -v $FORCEFLAG $argv
    restic backup -v $FORCEFLAG $argv
    set resticstatus $status
    max $resticstatus
    return $resticstatus
  else
    echo --skip-online set - not running: restic backup -v $FORCEFLAG $argv
  end
end
## HEAD END

# Nextcloud data
rbak /var/lib/docker/volumes/nextcloud_data \
    --exclude '/var/lib/docker/volumes/nextcloud_data/_data/appdata_ocr0wfr3egim' \
    --exclude '/var/lib/docker/volumes/nextcloud_data/_data/*/files_trashbin' \
    --exclude '/var/lib/docker/volumes/nextcloud_data/_data/*/files_versions' \
    --exclude '/var/lib/docker/volumes/nextcloud_data/_data/*/cache'


# Other nextcloud data
rbak /var/lib/docker/volumes/nextcloud_apps
rbak /var/lib/docker/volumes/nextcloud_config
rbak /var/lib/docker/volumes/nextcloud_nextcloud

# Unifi controller
rbak /var/lib/docker/volumes/unificontroller_config

# Run a check of all recent backup files
if not set --query SKIPONLINE
  echo "running a check on recent backup files"
  set tmpdir (mktemp --directory)
  echo mounting restic backup in $tmpdir and checking all files newer than 2 days
  rclone mount onedrive:personal/backups/restic/bigtuckey $tmpdir &
  set PID (jobs --last --pid)  # Capture the pid of rclone
  sleep 30  # let the mount start up fully
  # Check all files modified in the last 2 days
  find $tmpdir -mindepth 1 -mtime -2 -not -name config | xargs -n 1 sha256_checker.fish -v
  max $status
  sleep 30
  kill -INT $PID
  sleep 30
  fusermount -u $tmpdir
else
  echo "--skip-online set - not running a check on recent backup files"
end


echo "Maximum exit status is $overallstatus - writing to $statusfile"
echo -n $overallstatus >$statusfile
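The $overallstatus written to /var/log/lastbackupcode at the end is what my monitoring picks up. A minimal sketch of such a check, written in plain sh rather than fish, and using a temp file instead of the real path purely for demonstration:

```shell
# Minimal sketch of a monitor check: read the saved exit code and report.
# A temp file stands in for /var/log/lastbackupcode here.
statusfile=$(mktemp)
printf '0' > "$statusfile"        # simulate a successful backup run
code=$(cat "$statusfile")
if [ "$code" -eq 0 ]; then
  result="backup OK"
else
  result="backup FAILED with status $code"
fi
echo "$result"
rm -f "$statusfile"
```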

This means I have quick “rewind” recovery using ZFS snapshots, but also have off-site backups using my onedrive. I don’t recommend onedrive as a backup target, as it’s pretty slow and buggy, but I get it free with my work.
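Recovery then has two corresponding paths. A sketch of each (the snapshot name and restore target below are placeholders, not real values):

```shell
# 1) Local "rewind": list snapshots, then roll the dataset back.
#    Destructive: discards everything written after the chosen snapshot.
zfs list -t snapshot dockerdata/var/lib/docker/volumes
zfs rollback dockerdata/var/lib/docker/volumes@SNAPNAME

# 2) Off-site restore with restic (same RESTIC_* env vars as the backup script):
restic snapshots
restic restore latest --target /tmp/restore
```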

Running Nextcloud

I run nextcloud using docker-compose – here’s my compose file:

root@bigtuckey #~/l/c/nextcloud (master)[13:11]> cat docker-compose.yml 
version: '3'
services:
  nextcloud:
    image: nextcloud:21
    restart: always
    container_name: 'nextcloud_nextcloud_1'
    ports:
      - "8088:80"
    volumes:
      - nextcloud:/var/www/html
      - apps:/var/www/html/custom_apps
      - config:/var/www/html/config
      - data:/var/www/html/data
      - ./apache_timeout.conf:/etc/apache2/conf-enabled/apache_timeout.conf
    environment:
      NEXTCLOUD_ADMIN_USER: admin
      NEXTCLOUD_ADMIN_PASSWORD: <pwd here>
      NEXTCLOUD_TRUSTED_DOMAINS: my-top-secret-domain.jaytuckey.name
      PHP_MEMORY_LIMIT: -1
volumes:
  nextcloud:
  apps:
  config:
  data:

I’m using the built-in sqlite database, and would highly recommend that for personal use. It’s unlikely you will ever need more performance than sqlite can provide in a personal setup.

You will note that I expose nextcloud on port 8088 – I then use an nginx reverse proxy to bring the traffic back to nextcloud. I run the reverse proxy on a different machine, but you could certainly run it on the same machine, and then just use 127.0.0.1:8088 as the proxy_pass destination:

# Jay's Config
server {
    listen *:80 default_server;
    listen [::]:80 default_server;

    # Redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
    return 301 https://$host$request_uri;
}

server {
    # Server to ignore unknown hostnames
    server_name begone.jaytuckey.name;
    include /etc/nginx/snippets/tls_file_paths.conf;
    include /etc/nginx/snippets/mozilla_tls_config.conf;
    return 200 'shoo!';
}

server {
    server_name my-top-secret-domain.jaytuckey.name;
    
    include /etc/nginx/snippets/tls_file_paths.conf;
    include /etc/nginx/snippets/mozilla_tls_config.conf;

    # Allow uploads for NC - within the file app uploads happen in 10M chunks; with
    # file-drop, however, you need a value large enough for your largest file
    client_max_body_size 10000M;

    location / {
        proxy_pass http://10.1.1.11:8088;
    }
    location = /.well-known/carddav {
        return 301 $scheme://$host:$server_port/remote.php/dav;
    }
    location = /.well-known/caldav {
        return 301 $scheme://$host:$server_port/remote.php/dav;
    }

    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Proto 'https';
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # Allow long running connections, so NC can re-combine large files on a slow pi
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;
}

And here are the included files – I’ve made them using Mozilla’s excellent TLS config generator (https://ssl-config.mozilla.org/):

~/l/a/n/files (master)[13:16]> cat mozilla_tls_config.conf 
##### Mozilla SSL Config
    listen *:443 ssl http2;
    listen [::]:443 ssl http2;

    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
    # The actual cert paths should be included separately
    #ssl_certificate /etc/ssl/private/cert.cer;
    #ssl_certificate_key /etc/ssl/private/key.pem;
    # verify chain of trust of OCSP response using Root CA and Intermediate certs
    #ssl_trusted_certificate /etc/ssl/private/cert.cer;
    
    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m;  # about 40000 sessions
    ssl_session_tickets off;

    # curl https://ssl-config.mozilla.org/ffdhe2048.txt > /path/to/dhparam.pem
    ssl_dhparam /etc/ssl/dhparam.pem;

    # intermediate configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # HSTS (ngx_http_headers_module is required) (63072000 seconds)
    add_header Strict-Transport-Security "max-age=63072000" always;

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;
##### End Mozilla SSL Config
~/l/a/n/files (master)[13:17]> cat tls_file_paths.conf 
# Cert files
ssl_certificate /etc/ssl/private/cert.cer;
ssl_certificate_key /etc/ssl/private/key.pem;
# verify chain of trust of OCSP response using Root CA and Intermediate certs
ssl_trusted_certificate /etc/ssl/private/cert.cer;

Upgrading the Version

Between minor versions (like 21.1.0 -> 21.1.1) I try to always stay up to date. Between major versions (like 20.0.4 -> 21.0.0) I will generally wait until the first point release of the new version is out. My rough process to upgrade from one version to the next is:

  • Use docker-compose to stop nextcloud. By having nextcloud stopped I know that no changes will occur while I’m working on the upgrade.
  • Run a manual ZFS snapshot, to allow me to retry the upgrade if needed.
  • Pull the latest docker image and then start the nextcloud instance again. I use docker-compose logs -f to watch the instance start up.
  • Once the upgrade is up and running, check the /settings/admin/overview URL (log in as the admin user, go to Settings -> Administration -> Overview).
  • Check for any warnings or post-upgrade config changes that are needed.
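The steps above boil down to a few commands, run from the compose project directory (the snapshot name here is just an example):

```shell
docker-compose stop nextcloud
zfs snapshot dockerdata/var/lib/docker/volumes@pre-upgrade   # manual safety snapshot
docker-compose pull nextcloud
docker-compose up -d nextcloud
docker-compose logs -f nextcloud   # watch the upgrade run
```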

There are two overview warnings that I can’t dismiss:

SQLite is currently being used as the backend database. For larger installations we recommend that you switch to a different database backend. This is particularly recommended when using the desktop client for file synchronisation. To migrate to another database use the command line tool: 'occ db:convert-type', or see the documentation

As mentioned earlier, for my use case I’m not concerned, so I ignore this warning.

Module php-imagick in this instance has no SVG support. For better compatibility it is recommended to install it.

This PHP module doesn’t seem to be included in the upstream nextcloud docker image. I’m not concerned about it, as it will just mean there are no thumbnails generated for SVG files.

Conclusion

I’ve been using this setup for a while, and it has worked nicely for me.

Got any questions, or is anything not clear? Contact me: https://jaytuckey.name/about/
