Snapjib

Synchronize a photo album at multiple sites — This Bash script combines rsync, inotifywait, ffmpeg and mogrify to keep your collection of photos, videos and music synchronized at several sites, and ensures compatibility of your media with TV and tablet DLNA players.


Description

Snapjib is a Bash script that you can run on Linux, on (say) a headless media centre, to maintain a photo/video/music album at multiple sites. Snapjib features include:

  • Simple (but slightly risky) conflict resolution — The last deletion or creation of a file wins.

  • Auto-orientation — Fix files for devices that don't know how to rotate stills and movies from metadata.

  • Video conversion — Movies are converted to the codecs most broadly compatible with TVs and tablets.

  • Video compression — Transcoding of movies exploits deeper compression techniques.

  • Duplicate detection — Identical files under different names are hard-linked together.

  • Automatic media import — Take snaps on your phone, and plug it into your media centre. You'll hear a message saying the device is recognized, and shortly thereafter another one saying you can safely remove it. You'll then find that your photos have been removed from your phone and are being ingested by the media centre.

This software has three incarnations, legacy, stop-gap and webapp, all installed from the same source tree.

  • Legacy is a single Bash script /usr/local/bin/snapjib that tries to do everything with simple components. Your media library is kept in an ordinary file system, so you use your customary tools to re-organize things. A tool like minidlna can expose the library to home devices. Every file is lazily hard-linked to another file in a collection arranged by SHA256 hash of the file's contents. Incoming files are processed with various tools like ffmpeg, ready for the user to manually place them in the file system. rsync over SSH keeps remote sites in sync, and inotifywait watches for user activity in the file system (additions, moves, renames, deletions).

    It's okay for the most part, but inefficient for re-organizing your files, as it doesn't short-circuit fetching of remote files that have merely moved.

  • Webapp keeps files only in a file-system-based hash table, and a set of relatable, user-defined tags in a proper database. File re-organization amounts to changes to the tag database made through a web interface, so it's intended to be easily managed with, say, a tablet. When a change is made, the database is turned into a file-system hierarchy based on hard links into the hash table, and this snapshot of the library can be exposed through minidlna as before. Site synchronization is done with custom Bash scripts over SSH. A more advanced content ingest/import system is present that detects MTP devices and SD cards being plugged in, offloads their media, transcodes them, and incorporates them into the database.

    This should be a lot more robust and user-friendly, but it is simply not complete yet, and is on hold.

  • Stop-gap takes the legacy approach to storage (a simple file system), but also uses the enhanced content ingest/import from webapp, and uses an application-aware script for efficient synchronization.

Stop-gap is the one you should configure for at this time. Instructions for legacy will remain here as an addendum, but they won't be kept up-to-date.
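The hash-table idea common to the incarnations above can be sketched in a few lines of Bash. This is illustrative only; the pool path and the two-character fan-out are assumptions, not Snapjib's actual layout:

```shell
#!/bin/bash
# Hard-link a file into a pool keyed by the SHA256 of its contents, so that
# files with identical contents end up sharing a single inode.
link_into_pool() {
    local file="$1" pool="$2"
    local hash dest
    hash=$(sha256sum "$file" | awk '{print $1}')
    dest="$pool/${hash:0:2}/$hash"      # fan out on the first two hex digits
    mkdir -p "${dest%/*}"
    if [ -e "$dest" ]; then
        # Duplicate content: replace the file with a link to the pool copy.
        ln -f "$dest" "$file"
    else
        ln "$file" "$dest"
    fi
}
```

Run over the whole library, duplicate detection falls out for free: two files with the same contents, whatever their names, become hard links to the same pool entry.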

Hardware

Snapjib is intended for running on a Raspberry Pi (or other similar single-board computer) installed at each site, with its own bulk storage, e.g., an SSD or NAS. It should normally run headless, i.e., without keyboard, monitor or mouse (but a speaker can come in useful). Just leave it somewhere handy so you can plug in devices to off-load media onto it. Make sure it has a ventilated case too.

I find that an RPi 3 has a tolerable amount of grunt, but with only USB2, large transfers to the system (like when you first set it up and migrate an existing collection) are really slow. An RPi 4 seems to be rather more comfortable, and has two USB3 ports that contend less with Ethernet, so it serves content more smoothly, and transcodes a bit faster too. However, you might find that its USB system has insufficient electrical power to support some devices; a Samsung phone and tablet I tried caused the USB system to repeatedly reset, disconnecting the SSD along the way. In that case, it might be wise to get a powered USB3 hub with some card slots, power the RPi from the hub to save you a power socket, and expand the available types of ports. It's not something I really wanted to try, as that would mean leaving something rather unaesthetic in someone else's home.

I have experimented with an ODROID XU4Q, which has 8 cores, USB3, Gigabit Ethernet, eMMC, and the Ethernet and USB don't contend as much as in the RPi 3. I got it booting fairly stably with a USB SSD plugged in, and FFmpeg has worked at least once on a reasonably sized file, but the machine tends to lose its Ethernet connection, bus-error, seg-fault, or even reboot when you run FFmpeg. I didn't get much stability out of Ubuntu Mate images either. It's been a while, so maybe the instability has been ironed out by now. Worth another look.

I've also had Gigabyte BRIX and an Intel NUC, both fanless, with x86 Ubuntus on them, serving other purposes as well as acting as a media server. The BRIX is more aesthetic than the NUC for a home appliance, IMHO. The NUC might be more robust in the long run (although I recently discovered that the full-size SD slot that persuaded me to get it in the first place seems to have lost its Linux drivers; look up rts5229 Linux drivers, but beware of borking your installation).

In almost all cases, I've kept the media on external SSDs to keep the host and storage decoupled. Getting a whopping great internal SSD, however, keeps unsightly cabling down. Devices like the BRIX and NUC have space within them for embedded drives, which might be more appreciated.

In all cases, I've used a wired network connection, so that WiFi capacity can concentrate on serving the home's mobile devices.

I've also got small USB speakers attached to each of my media centres. These are not used to play media, but to notify a user when to unplug a camera or memory card after automatic import. As such, the speakers don't have to be of particularly good quality. However, I found that one of the most compact and aesthetic USB speakers I tried caused over-current errors which shut down USB on the BRIX, and subsequently prevented reboot. The NUC has some coloured LEDs on its front panel which can be software-controlled, so these might be an alternative way of signalling to the user.

Installation

I'm assuming that Debian-based distributions are being used on each server, e.g., Raspbian on an RPi, or Ubuntu minimal on an ODROID.

Raspberry Pi

Flash a lite version of Raspbian or similar onto a micro-SD card. The default user for Raspbian is pi, which can sudo the necessary privileged commands to install software, etc.

Ubuntu minimal/ODROID

Ubuntu minimal seemed to be the recommended lightweight OS for ODROID when I first got it. Flash the image onto a micro-SD card or eMMC chip, and that should give you a default root account. Being minimal, it's convenient to install a few more basic packages, starting with a language pack to silence a few warnings:

apt install language-pack-en

Also set up timezone and keyboard. These are interactive commands, so run them one at a time:

dpkg-reconfigure tzdata
dpkg-reconfigure keyboard-configuration

Make sure you have the latest of everything. You might have to repeat the following commands a few times to get to the latest versions:

apt update
apt upgrade
reboot
apt install linux-image-xu3
reboot

Make sure that uname -r reports 4.14.3-85+ or later to cope with a USB SSD being plugged in during boot.

Get rid of anything superfluous if you like:

apt autoremove

Create an ordinary account that can sudo, set its password, and disable password access on root. Some of these commands are also interactive:

useradd -s /bin/bash -G sudo -m -d /home/pi pi
passwd pi
passwd -l root

So that the instructions below remain uniform, the commands above call the new account pi, but you could name it how you like.

BRIX, NUC, etc

These are x86-based mini PCs, so install a regular 64-bit Ubuntu server or similar on them in the usual way. You usually get a choice of privileged account on these systems, but the instructions below will still assume you've chosen pi for this, which is what you get by default on Raspbian.

Hostname

On each server, set the hostname the same, e.g., to media-centre. This should work on Raspbian and Ubuntu:

sudo hostnamectl set-hostname media-centre

On Raspbian, you could alternatively use an option in raspi-config:

sudo raspi-config

These instructions assume that all sites use the same local DNS domain, e.g., home. This is useful in that each server fulfils the same role at each site, so your device will find it under the same name (media-centre.home in this case) as you move from site to site.

If you want the host to have a different name (maybe you're re-tasking an existing machine), you could still create a DNS alias media-centre for it. Ideally, you'd configure your DNS server to do this, which is often integrated with DHCP, and both often integrated into your home gateway. If there seems to be no way to do this, but the DHCP server nevertheless honours hostname suggestions made by DHCP clients, you can trick it into giving the host's interface two addresses. See Two logical interfaces on one physical, on Ubuntu 18.04 without Netplan for details.

Essential software

Get the basic stuff:

sudo apt-get install bash-completion ntp ntfs-3g exiv2 mediainfo inotify-tools pv bc minidlna fuse-convmvfs git subversion rsync imagemagick par gawk jmtpfs udisks2

jmtpfs uses libmtp to mount MTP devices. You might need to build libmtp from source, as the package-maintained version might be old and have a bug relating to certain characters in filenames. The version with Ubuntu 20.04 seems to be new enough, but I think it still has some difficulty, perhaps with brackets. Here's how to get 1.1.18 installed:

sudo apt-get -y install libusb-1.0-0-dev
curl -o /tmp/libmtp-1.1.18.tar.gz 'https://deac-ams.dl.sourceforge.net/project/libmtp/libmtp/1.1.18/libmtp-1.1.18.tar.gz'
cd /tmp
tar xzf libmtp-1.1.18.tar.gz
cd libmtp-1.1.18
./configure
make
sudo make install
sudo ldconfig

A speech synthesizer is useful for automatic media import:

sudo apt-get install espeak-ng mbrola-en1 mbrola-us{1,2,3}

Don't worry if you can't get some of the mbrola packages on Raspbian. It will still work, but I find the extra voices a little easier to follow.

Time synchronization

If you have a machine which seems to be out-of-sync a lot when it boots up, enable synchronization with NTP at boot:

sudo timedatectl set-ntp true

To check whether this setting is enabled, look for NTP enabled in the status:

$ timedatectl status
      Local time: Mon 2018-01-01 10:27:48 GMT
  Universal time: Mon 2018-01-01 10:27:48 UTC
        RTC time: n/a
       Time zone: Europe/London (GMT, +0000)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: no
 Last DST change: DST ended at
                  Sun 2017-10-29 01:59:59 BST
                  Sun 2017-10-29 01:00:00 GMT
 Next DST change: DST begins (the clock jumps one hour forward) at
                  Sun 2018-03-25 00:59:59 GMT
                  Sun 2018-03-25 02:00:00 BST

It can also appear as:

systemd-timesyncd.service active: yes

Image/movie ingest

If you find you have to build FFmpeg:

sudo apt-get install build-essential pkg-config nasm yasm cmake libfreetype6-dev libtool autoconf zlib1g-dev libvpx-dev libxvidcore-dev libswscale-dev libvorbis-dev libtheora-dev libmp3lame-dev libomxil-bellagio-dev libopus-dev libass-dev libx265-dev

x264 library

On ODROID with Ubuntu minimal, use the standard x264 library:

sudo apt-get install libx264-dev

This also worked on ODROID:

mkdir -p ~pi/works
cd ~pi/works
git clone --depth 1 https://code.videolan.org/videolan/x264.git
cd x264
./configure --enable-static --enable-pic
make -j 4
sudo make install

(I found that building it on ODROID resulted in bus errors in FFmpeg, but using the apt-installed version was fine, at least once.)

For Raspbian, build it yourself:

mkdir -p ~pi/works
cd ~pi/works
git clone --depth 1 git://git.videolan.org/x264
cd x264
./configure --host=arm-unknown-linux-gnueabi --enable-static --disable-opencl
make -j 4
sudo make install

fdk-aac

Build and install fdk-aac if you don't want to use the built-in AAC codec with FFmpeg:

mkdir -p ~pi/works
cd ~pi/works
git clone --depth 1 git://github.com/mstorsjo/fdk-aac.git
cd fdk-aac
autoreconf -fiv
./configure --disable-shared
make
sudo make install

FFmpeg

Build and install FFmpeg:

mkdir -p ~pi/works
cd ~pi/works
git clone --depth=1 git://source.ffmpeg.org/ffmpeg.git
cd ffmpeg
./configure --enable-gpl --enable-libx264 --enable-libx265 --enable-nonfree --enable-libfdk-aac --enable-libmp3lame --enable-omx --enable-libvorbis --enable-libxvid --enable-libtheora --enable-libass --enable-libfreetype --enable-libopus --enable-libvpx --arch=armhf --target-os=linux --enable-omx-rpi --extra-libs="-lpthread -lm" --pkg-config-flags="--static"
make -j 4
sudo make install

On Ubuntu minimal for ODROID, drop --enable-omx-rpi. I also found I didn't need --target-os=linux either.

Does anyone know how to get a robust FFmpeg on ODROID? The apt-installed package didn't crash, but the video track it produced was completely black.

Consulting FFmpeg: CompilationGuide/Ubuntu did help me get a working FFmpeg on ODROID/Ubuntu 18.04. However, due to a number of bus errors that knocked out the machine or its networking, I wasn't able to check the command history to see what I had done, so some steps might not have been recorded correctly.

FFmpeg on Buster/RPi4

The apt-get-installed version of FFmpeg on Raspbian Buster, which I'm trying on an RPi4, seems to be decent. However, if you want to add libfdk_aac to it, you need to build from source. Using the packaged version as a template, I came up with this:

sudo apt install build-essential pkg-config libtool autoconf libchromaprint-dev frei0r-plugins-dev gnutls-dev ladspa-sdk libaom-dev liblilv-dev libavc1394-dev libiec61883-dev libass-dev libbluray-dev libbs2b-dev libcaca-dev libcodec2-dev libdc1394-22-dev libdrm-dev flite1-dev libgme-dev libgsm1-dev libmp3lame-dev libopenjp2-7-dev libopenmpt-dev libopus-dev libpulse-dev librsvg2-dev librubberband-dev libshine-dev libsnappy-dev libsoxr-dev libssh-dev libspeex-dev libtheora-dev libtwolame-dev libvidstab-dev libvpx-dev libwavpack-dev libwebp-dev libx264-dev libx265-dev libxvidcore-dev libzmq3-dev libzvbi-dev libopenal-dev libjack-dev libcdio-paranoia-dev libsdl2-dev
mkdir -p ~pi/works
cd ~pi/works
git clone --depth=1 git://source.ffmpeg.org/ffmpeg.git
cd ffmpeg
./configure --extra-version='1+rpt1~deb10u1' --toolchain=hardened --libdir=/usr/lib/arm-linux-gnueabihf --incdir=/usr/include/arm-linux-gnueabihf --arch=arm --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-sdl2 --enable-omx-rpi --enable-mmal --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared --enable-nonfree --enable-libfdk-aac
make -j 4
sudo make install

Note that I dropped the --enable-libmysofa switch, which prevented configuration even after installing libmysofa-dev. --enable-opengl was also dropped, as I couldn't find a library providing ES2/gl.h, which was the last thing sought when the configuration script failed. A few other things are probably redundant too, as far as Snapjib is concerned.

DNS and port forwarding

Ensure your sites have either fixed IPs, DNS names, or dynamic DNS names if your IPs are not stable. You might have to register with a dynamic DNS provider; most support a small number of free hostnames. When you have the account credentials, add them to the Dynamic DNS settings of your routers to ensure they are updated whenever your sites' IP addresses change. Check to see which providers your router supports before you register. Some sites also provide a client you can run on Linux, so support doesn't depend on your router's capabilities, though I prefer to get my router to be the client, as it reduces the number of points of failure.

Also configure your routers to port-forward SSH onto your RPis, etc. You don't have to use the default SSH port as the external, and I will use port 9191 in the configurations below. The internal port will always be 22, the default for SSH. Only TCP is used, not UDP.

Unprivileged account

Create a less privileged user library to own and manipulate bulk data:

sudo useradd -N -s /bin/bash -g minidlna -G audio -m -d /home/library library

Set up passwordless SSH access from this account to its counterparts on other hosts (example):

sudo -i -u library
ssh-keygen -t ecdsa

When prompted, use an empty passphrase.

Set up aliases to the corresponding accounts at the other sites:

# In ~library/.ssh/config of (say) site3
ServerAliveInterval 300

Host site1
User library
Hostname site1.mydyndns.domain
Port 9191

Host site2
User library
Hostname site2.mydyndns.domain
Port 9191

(The ServerAliveInterval setting might help prevent long-lived SSH connections through firewalls from terminating before synchronization has completed.)

Copy ~library/.ssh/id_ecdsa.pub from each RPi to the others, appending its contents to the file ~library/.ssh/authorized_keys. Use ssh-copy-id to do this:

ssh-copy-id site1
ssh-copy-id site2

Test that access is available between sites:

# e.g., from site3
ssh site1 echo yes

On the first instance of calling ssh or ssh-copy-id, you might be asked to accept the remote site's public host key. Once you've done this, you shouldn't be asked again. Try again, and if you get a yes without prompting, you're good to go. If not, there's still something wrong with your SSH set-up.

Authorizing the unprivileged account

Give the unprivileged account the right to invoke udisksctl mount and udisksctl umount, so that it can import files from SD cards and USB memory sticks:

sudo tee /var/lib/polkit-1/localauthority/50-local.d/uk.ac.lancs.snapjib.library.pkla > /dev/null <<EOF
[Allow legacy Snapjib to access removable media for auto-upload]
Identity=unix-user:library
Action=org.freedesktop.udisks2.filesystem-mount-other-seat;org.freedesktop.udisks2.eject-media-other-seat
ResultAny=yes
ResultActive=yes
ResultInactive=yes
EOF

Also give the unprivileged account the right to get details of USB devices, so it can scan and mount MTP devices:

sudo tee /etc/sudoers.d/99-snapjib-library-legacy > /dev/null <<EOF
library ALL = (root) NOPASSWD: /usr/bin/lsusb
library ALL = (root) NOPASSWD: /usr/bin/jmtpfs
library ALL = (root) NOPASSWD: /usr/bin/fusermount
EOF

Bulk storage

Create a location for bulk storage:

sudo mkdir -p /var/media{,-monitored} 

If you intend to keep the bulk data on a separate storage device, set it to be mounted as appropriate. For example:

# In /etc/fstab:
UUID=xxxxxxxxxxxxxxxx /var/media ntfs defaults,auto,nouser,idmap=none,uid=library,gid=minidlna,dmask=022,fmask=133 0 0

(I reformatted my SSDs as NTFS so they would support hard links.)

You can get the UUID with sudo blkid when the device is connected. You could alternatively give the device a volume label, and then reference it as /dev/disk/by-label/your-label.

Mount immediately (or reboot):

sudo mount /var/media

Set /var/media-monitored as a mirror of /var/media:

# In /etc/fstab:
convmvfs /var/media-monitored fuse auto,nouser,srcdir=/var/media,icharset=UTF-8,ocharset=UTF-8,allow_other 0 0

Mount immediately (or reboot):

sudo mount /var/media-monitored

I've had some problems with convmvfs when upgrading to Ubuntu 20.04, in which intermittent Permission denied errors would arise. I've revised Snapjib to minimize the amount of work it does through the mirror, especially when unattended, to alleviate this problem. However, since the user manages files manually through the mirror, you might still encounter them.

Filesystem events

Support more directory watches with inotifywait:

# In /etc/sysctl.d/local.conf (RPi)
# or in /etc/sysctl.d/60-snapjib.conf (ODROID)
fs.inotify.max_user_watches=524288

To apply at once without waiting to reboot:

sudo sysctl fs.inotify.max_user_watches=524288

…or perhaps a more general:

sudo service procps start

MiniDLNA/ReadyMedia configuration

Merge these with the defaults:

# In /etc/minidlna.conf
media_dir=PV,/var/media/Exposed/Gallery
media_dir=A,/var/media/Exposed/Music
root_container=B
friendly_name=My Media Centre
inotify=yes

Of course, you can give it a different friendly name, but it's helpful if all servers use the same name, so they will appear as the same device whichever site you're at.

Apply changes immediately, or reboot:

sudo service minidlna restart

Snapjib set-up

Get Binodeps installed:

mkdir -p ~pi/works
cd ~pi/works
svn co http://scc-forge.lancaster.ac.uk/svn-repos/misc/binodeps/branches/stable/ binodeps
cd binodeps
make
sudo make install

Get the source:

mkdir -p ~pi/works
cd ~pi/works
svn co http://scc-forge.lancaster.ac.uk/svn-repos/ss/snapjib/branches/stable/ snapjib
cd snapjib
cat > snapjib-env.mk <<EOF
CFLAGS += -O2 -g
CFLAGS += -std=gnu11
CPPFLAGS += -D_XOPEN_SOURCE=600
CPPFLAGS += -D_GNU_SOURCE=1
CPPFLAGS += -pedantic -Wall -W -Wno-unused-parameter
CPPFLAGS += -Wno-missing-field-initializers
CXXFLAGS += -O2 -g
CXXFLAGS += -std=gnu++11
CPPFLAGS += -D_FILE_OFFSET_BITS=64
EOF
make
sudo make install

You only need the _FILE_OFFSET_BITS setting for OSes that don't enable large-file support (LFS) by default. I had to use it for Raspbian Buster.

Configure Snapjib:

# In ~library/.config/snapjib/config.sh
snapjib_home=/var/media-monitored
snapjib_raw=/var/media
prepdir=/var/media-monitored/Exposed/Gallery/unsorted

logdir=$HOME/.cache/snapjib

sites+=(site2)
sites+=(site3)

speaker=(espeak-ng -p 99 -s 120 -v mb/mb-us1 -d plughw:1,0)

Drop the speaker setting if you're not doing automatic import, or you don't have a speaker. Replace it with a different command if you have a different means of getting messages to the user, like a small screen. The message will be appended as a single argument.
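To see how the array form is used, here's a minimal stand-in. echo replaces the real espeak-ng command (which needs audio hardware); the point is that the array is expanded and the message appended as one extra argument:

```shell
# Hypothetical stand-in for the speaker setting; swap echo for your real
# espeak-ng command line once you have a speaker attached.
speaker=(echo 'SPEAK:')
msg="You can now safely remove the device."
# The array is expanded, and the message appended as a single argument:
"${speaker[@]}" "$msg"
# → SPEAK: You can now safely remove the device.
```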

You can define more sophisticated remote configuration:


sites+=(parents)
sites+=(brother)

site_host[parents]=millwell.hopto.org
site_host[brother]=foobar.hopto.org

site_host sets the SSH host name (or configuration label). site_port and site_user set other SSH parameters. site_conf sets the remote configuration file, and defaults to .config/snapjib/config.sh. User-defined keys like parents and brother now only identify entries in these arrays, or index into a cache of timestamps for sites that Snapjib has synchronized with.
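Putting those keys together, a fuller entry might look like this (the host, port and user values here are placeholders):

```shell
# In ~library/.config/snapjib/config.sh (placeholder values)
sites+=(parents)
site_host[parents]=millwell.hopto.org          # SSH host name or config label
site_port[parents]=9191                        # external SSH port at that site
site_user[parents]=library                     # remote account
site_conf[parents]=.config/snapjib/config.sh   # remote config (the default)
```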

Automatic import

Specify devices to automatically import media files from. This isn't very intuitive yet, as you have to write a couple of Bash functions, and understand a bit about regular expressions. First, plug your device into any machine you have the Snapjib scripts installed on, then run snapjib-scan to get its identifier:

$ snapjib-scan
mtp:04e8-XXXXXXXXXXXXXXXXXX          usb:001:009 Samsung Electronics Co., Ltd Galaxy (MTP)

You'll either get mtp:... or part:.... You should also mount your device on any machine to inspect its file structure. You then have the information to write your Bash functions in ~library/.config/snapjib/devices.sh:

#!/bin/bash

function recognize_device () {
    local devid="$1" ; shift
    declare -n intentvar="$1" ; shift
    declare -n namevar="$1" ; shift
    declare -n langvar="$1" ; shift

    case "$devid" in
        (mtp:04e8-XXXXXXXXXXXXXXXXXX)
            intentvar=scan
            namevar="John's phone"
            langvar=en
            return
            ;;
    esac

    return 1
}

function device_sources () {
    local devid="$1" ; shift
    declare -n dirsvar="$1" ; shift
    declare -n patsvar="$1" ; shift
    declare -n tagsvar="$1" ; shift

    case "$devid" in
        (mtp:04e8-XXXXXXXXXXXXXXXXXX)
            dirsvar+=(Card/DCIM/Camera)
            patsvar+=('[0-9]{8}_[0-9]{6}\.(mp4|jpg)$')
            tagsvar+=('')
            return
            ;;
    esac

    return 1
}

The first function recognizes the device as a whole, gives it a name (to be used in notifications), and indicates that it should be automatically mounted when plugged in, scanned for media files, have them moved off the device, and dropped into the ingest process. To support multiple devices, duplicate the case branch and adapt it.
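For example, a second case branch handles a second device. This is a compact, self-contained variant of the function above; the part:1234-ABCD identifier and "Jane's camera card" name are made up for illustration:

```shell
#!/bin/bash
# Hypothetical two-device recognize_device, with a second (made-up)
# partition-style identifier added alongside the MTP one.
function recognize_device () {
    local devid="$1" ; shift
    declare -n intentvar="$1" ; shift
    declare -n namevar="$1" ; shift
    declare -n langvar="$1" ; shift

    case "$devid" in
        (mtp:04e8-XXXXXXXXXXXXXXXXXX)
            intentvar=scan ; namevar="John's phone" ; langvar=en ; return ;;
        (part:1234-ABCD)
            intentvar=scan ; namevar="Jane's camera card" ; langvar=en ; return ;;
    esac

    return 1
}
```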

The second function identifies which directories to scan, and which files to match. In this case, Card/DCIM/Camera is identified as one directory to scan. Subpaths matching the likes of 20180621_182340.mp4 are matched (8 decimal digits for the date, an underscore, 6 digits for the time). To scan multiple paths on the same device, replicate and edit these three lines:

            dirsvar+=(Card/DCIM/Camera)
            patsvar+=('[0-9]{8}_[0-9]{6}\.(mp4|jpg)$')
            tagsvar+=('')

The patsvar value must be an egrep regular expression. Regular expressions are patterns that compactly match particular strings. They're a great way to introduce a modicum of flexibility into a setting without making it totally programmable, but they can be arcane and error-prone, and it doesn't help that there are several similar dialects. It's not that hard to write a regular expression from scratch, but understanding one written by someone else (or an earlier version of yourself) can be tricky.
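You can sanity-check a pattern before committing it to devices.sh by feeding candidate filenames through grep -E, which uses the same dialect as patsvar:

```shell
# Only names matching the pattern should survive the filter.
pat='[0-9]{8}_[0-9]{6}\.(mp4|jpg)$'
printf '%s\n' 20180621_182340.mp4 IMG_0001.jpg | grep -E "$pat"
# → 20180621_182340.mp4
```

Only 20180621_182340.mp4 is printed; IMG_0001.jpg lacks the eight-digit date and six-digit time.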

Always add entries in triples. If you add another dirsvar, you must also add a patsvar and a tagsvar, even though tagsvar isn't actually used!

A phone might have two directories for media, one on the phone's internal memory (under the directory Phone, say), and the other on a removable SD card (under Card). I prefer to move media only from the card, and keep persistent images (like mugshots) on the internal memory.

Yes, I will try and simplify this eventually, and webapp will automate it all.

Booting

Specify when to run Snapjib:

crontab -e
PATH=/usr/local/bin:/usr/bin:/bin

## Detect file movement in the presentation directory,
*/1 * * * * snapjib-monitor -q

## Normalize files deposited in the ingest directory.
*/5 * * * * snapjib-ingester -w 30 -q

## Move files off plugged-in devices into the ingest directory.
*/10 * * * * snapjib-importer -w 60 -q

## At 1:30am, ensure all files have hard links in the hash table.
30 1 * * * snapjib-unify -q

## At 2am, fetch files from other sites.
0 2 * * * snapjib-sync -q

## At 5am, clean out files that have moved or been removed.
0 5 * * * snapjib-purge -q

You might want to make each site synchronize with the others at a different time, i.e., stagger the snapjib-sync commands. The unification and purging steps are entirely local, so they can occur in parallel on each site.
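For instance, with three sites, the sync entries might be staggered like this (the times are arbitrary):

```shell
# crontab at site2; site1 keeps 0 2 * * *, and site3 might take 0 4 * * *
0 3 * * * snapjib-sync -q
```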

Export by SSH (optional)

Append the client user's public key (in ~/.ssh/id_ecdsa.pub, for example) to ~library/.ssh/authorized_keys on each RPi. Use SSHFS to access the directory /var/media-monitored/.

Export read-only by NFS

Setting up NFS securely is probably not simple, but if you don't mind anyone on your home network being able to see all your media, a read-only NFS export is convenient. Beware of using NFS from laptops, as some applications lock up when trying to save a file if an automounted NFS resource is unavailable due to not being at home, even if you're not trying to save to it.

Install the NFS server:

sudo apt-get install nfs-kernel-server
# In /etc/exports
/var/media/Exposed *(ro,sync,no_subtree_check,no_root_squash,insecure)

Apply this change (or reboot):

sudo exportfs -a

Export read/write by SMB/CIFS

Windows machines should most easily have access through a Samba share:

sudo apt-get install samba
# In /etc/samba/smb.conf
[media]
   comment = My Media Centre
   path=/var/media-monitored
   browseable=Yes
   writeable=Yes
   public=no
   only guest=no
   create mask=0644
   directory mask=0755

Set a CIFS password for the library account:

sudo smbpasswd -a library

Reload the configuration (or reboot):

sudo service samba reload

Or maybe:

sudo service smbd reload

The media are now accessible under \\media-centre.home\media.

Reboot

sudo reboot

Client configuration

Smart TVs will likely have DLNA clients in them, and DLNA client apps exist for tablets and smartphones. DLNA allows them to automatically find whatever media server is local (under the name “My Media Centre” if you use the exact configuration suggested here), and view or play files on it, without any special configuration. Access is restricted only to your home network, and read-only, but that's all you'll need for TVs, and for tablets too most of the time.

Here are some options for accessing the resources in other ways, and on other devices…

On Windows/Mac/Linux via CIFS (R/W)

Use whatever software you have for accessing remote Windows shares to open \\media-centre.home\media. Use library as the username, and the password as specified by the smbpasswd command mentioned earlier.

On Linux via SSHFS (R/W)

sudo apt-get install sshfs
sudo mkdir -p /var/media

Get the host keys for each site (in /etc/ssh/ssh_host_ecdsa_key.pub, for example), and make them known to your client machine, perhaps by adding them to /etc/ssh/ssh_known_hosts, so it looks something like this:

# In /etc/ssh/ssh_known_hosts on a client machine

# Site 1 key
media-centre.home ecdsa-sha2-nistp256 AAAA....

# Site 2 key
media-centre.home ecdsa-sha2-nistp256 AAAA....

# Site 3 key
media-centre.home ecdsa-sha2-nistp256 AAAA....

So far, this seems to be the most robust way of getting the client to recognize the same hostname against multiple keys. Let me know if you have a better suggestion.

Lazily mount whichever device is local:

# In /etc/fstab
library@media-centre.home:/var/media-monitored /var/media fuse.sshfs defaults,allow_other,idmap=none,uid=USERNAME,gid=GROUPNAME,noauto,x-systemd.automount,_netdev,reconnect,CheckHostIp=no,IdentityFile=/home/USERNAME/.ssh/id_ecdsa 0 0

The lazy mount should prevent your client device from spending a long time booting up when no media centre is available, but let you access the media centre without doing anything more than opening up a folder (say).

As configured, you should find your media in /var/media/Exposed. Drop new photos and movies into /var/media/Incoming to start the ingest process, leaving the converted files in /var/media/Preparation.

On Linux via NFS (read-only)

On the client machine:

mkdir -p /var/media

Lazily mount:

# In /etc/fstab
media-centre.home:/var/media/Exposed /var/media nfs ro,noauto,nouser,x-systemd.automount,x-systemd.device-timeout=10,timeo=14,_netdev,soft 0 0

On Linux via DLNA (read-only)

On Linux, djmount acts as a DLNA client, and presents the media as a read-only filesystem.

sudo mkdir -p /var/upnp
sudo apt-get install djmount
# In /etc/fstab
djmount /var/upnp fuse ro,auto,allow_other,exec,sloppy,_netdev 0 0

At any one of your home sites, you should be able to read your media in /var/upnp/My Media Centre.

How it works

Snapjib has two independent tasks to perform. One is to ingest movies and still photos, giving them maximum compatibility with various devices, and choosing a name for them. The other is to synchronize replicated copies of a media library, supporting basic editing.

Exploited media tools

Snapjib works in conjunction with or makes use of several tools available on Raspbian Jessie Lite and Ubuntu (and surely other distributions), plus some you'll have to install yourself. Here are some of the media-related tools:

  • MiniDLNA/ReadyMedia — This makes media arranged in a file system available to devices on the LAN, e.g., TVs and tablets, over UPnP/DLNA.
  • ImageMagick — This is used to manipulate images, including auto-orientation of still images.
  • FFmpeg — This converts videos from a camera's native format to something that should be universally accepted.
  • exiv2 and mediainfo — These extract metadata from stills and videos, mainly dates, in order to choose a sensible name for ingested files.

How media ingest works

Snapjib uses inotifywait to watch an ingest directory $incdir, and detect when the user has added files and directories to it. It looks for the events MOVED_TO and CLOSE_WRITE, and uses the format %e X%w%fX, so that each output line lists the comma-separated event names first, followed by the full path of the affected file, delimited so that spaces may appear in and around filenames. The output of inotifywait is passed through pv -q -B 1M as a 1-mebibyte buffer, which should prevent events from being dropped while Snapjib is busy handling another, although I still have some doubts about it.
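The pipeline above can be sketched as follows. This is a hypothetical illustration, not Snapjib's actual code: parse_event is an invented name, and the real watcher invocation is left as a comment.

```shell
# Parse one line in the "%e X%w%fX" format: event names, a space, then the
# path wrapped in X...X so that surrounding spaces survive.
parse_event() {
  local line="$1"
  events="${line%% X*}"                   # comma-separated event names
  path="${line#* X}"; path="${path%X}"    # full path, spaces preserved
}

# The real watcher would look something like:
# inotifywait -m -r -e moved_to -e close_write --format '%e X%w%fX' "$incdir" \
#   | pv -q -B 1M \
#   | while IFS= read -r line; do parse_event "$line"; ...; done

parse_event 'CLOSE_WRITE,CLOSE X/var/media/Incoming/My Holiday/snap 1.jpgX'
echo "$events"   # CLOSE_WRITE,CLOSE
echo "$path"     # /var/media/Incoming/My Holiday/snap 1.jpg
```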

When a new file is detected in $incdir, it undergoes a normalization process according to its suffix:

  • Images that have an orientation other than 1 are passed through mogrify to give them the correct orientation, and saved in a hidden directory $convdir; other images are simply moved there.

  • Videos are first passed through ffmpeg, which improves compression by making use of B frames, auto-orients, and converts to the most portable codecs I could determine (H.264 and LC-AAC). -movflags +faststart is used to place the MP4 navigational data at the start of the file. A bespoke C program is used to set the alternative groups and a few other odd bits in the MP4 file. The final output is deposited in $convdir. (The intermediate files are also placed in $convdir under hidden names.)

  • Music files are not specially processed, just moved straight to $convdir.
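The three cases above amount to a dispatch on file suffix. A minimal sketch, in which the suffix lists and the normalize name are my assumptions, and the real tool invocations are left as comments since mogrify and ffmpeg may not be installed:

```shell
normalize() {
  local f="$1"
  case "${f##*.}" in
    jpg|jpeg|png|JPG|JPEG|PNG)
      # exiv2 reports Exif.Image.Orientation; if it is not 1:
      #   mogrify -auto-orient "$f"
      echo image ;;
    mp4|mov|avi|MP4|MOV|AVI)
      # ffmpeg -i "$f" -c:v libx264 -c:a aac -movflags +faststart "$out"
      echo video ;;
    mp3|flac|ogg|m4a)
      echo music ;;   # moved straight to $convdir, no processing
    *)
      echo unknown ;;
  esac
}

normalize IMG_0001.JPG   # image
normalize clip.mov       # video
```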

inotifywait also watches $convdir for CLOSE_WRITE and MOVED_TO. To determine a creation time, stills are analysed with exiv2, and videos with mediainfo. Stills are also converted to RGB for hashing, and videos are hashed as they are. A new filename of the form YYYY-MM-DD_hh-mm-ss_HHHH is chosen by combining the date with the first few nibbles of the hash. The file is then moved from $convdir into $prepdir under the new name.
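The renaming step can be sketched like this. Here the creation time is passed in as a plain argument, whereas Snapjib would obtain it from exiv2 or mediainfo; choose_name and the four-nibble length are illustrative assumptions.

```shell
choose_name() {
  local file="$1" created="$2"          # created: "YYYY-MM-DD hh:mm:ss"
  local stamp hash
  stamp=$(date -d "$created" '+%Y-%m-%d_%H-%M-%S')
  hash=$(sha256sum "$file" | cut -c1-4) # first few nibbles of the hash
  echo "${stamp}_${hash}"
}

printf 'hello' > /tmp/demo-snap.jpg
choose_name /tmp/demo-snap.jpg '2021-06-01 12:30:05'
# -> 2021-06-01_12-30-05_2cf2
```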

How synchronization works

convmvfs is used to create /var/media-monitored, a duplicate view of the media directory /var/media. System operations such as synchronization are applied to directories in /var/media, while the user operates on /var/media-monitored. For example, $syncdir points to /var/media/Exposed, while $pubdir points to /var/media-monitored/Exposed, so they should have the same contents. However, inotifywait only monitors directories in /var/media-monitored, so it doesn't pick up events caused by system operations that operate on /var/media directly.

(I tried using mount --bind to create the duplicate, but it's no good because events on either the original or the duplicate can be picked up by watching either. Let me know if you have a better suggestion.)
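For reference, a convmvfs duplicate view can be set up from fstab in the same style as the djmount entry above. This line is hypothetical; with identical input and output charsets, convmvfs presents /var/media unchanged under the second mount point:

```
# In /etc/fstab (hypothetical; check option names against convmvfs's documentation)
convmvfs /var/media-monitored fuse srcdir=/var/media,icharset=utf8,ocharset=utf8,allow_other 0 0
```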

Using the same inotifywait as for ingest, Snapjib also monitors $pubdir for changes made by the user. The events DELETE and MOVED_FROM on the file $pubdir/$subdir$leaf trigger the time-stamping of a file called $syncdir/$subdir.action-$leaf-delete, while MOVED_TO and CLOSE_WRITE trigger the time-stamping of a file called $syncdir/$subdir.action-$leaf-retain.
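The marker scheme above can be sketched as follows, using the path pattern just quoted; the mark helper and the demo directory are illustrative assumptions.

```shell
syncdir=/tmp/snapjib-markers/Exposed
mark() {  # mark <delete|retain> <subdir-with-trailing-slash> <leaf>
  local kind="$1" subdir="$2" leaf="$3"
  mkdir -p "$syncdir/$subdir"
  touch "$syncdir/$subdir.action-$leaf-$kind"   # hidden marker file
}

# DELETE or MOVED_FROM on $pubdir/holiday/img.jpg would trigger:
mark delete holiday/ img.jpg
# MOVED_TO or CLOSE_WRITE would trigger:
mark retain holiday/ img.jpg
```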

During the unification phase, which should complete before synchronization, non-hidden files in /var/media/Exposed are sought that are not hard links into /var/media/.hashes. Each is hashed, and a hard link to it is placed under that hash. Very old unlinked hash files are deleted.
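A sketch of the unification pass, run here against a throw-away directory; the real paths are /var/media/Exposed and (I assume) /var/media/.hashes. Regular, non-hidden files with a single hard link are hashed and hard-linked into the hash collection:

```shell
media=/tmp/snapjib-unify
rm -rf "$media"
mkdir -p "$media/Exposed" "$media/.hashes"
printf 'pic-data' > "$media/Exposed/pic.jpg"

find "$media/Exposed" -type f -links 1 ! -name '.*' |
while IFS= read -r f; do
  h=$(sha256sum "$f" | awk '{print $1}')   # hash of the file's contents
  ln -f "$f" "$media/.hashes/$h"           # second hard link, named by hash
done
```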

During synchronization, the local site first asks the remote for a list of all its hash files and their mtimes. Hash files with only one hard link are excluded. Any that the local site doesn't have are fetched and installed. Local hash files with older timestamps than their remote counterparts are touched to make the timestamps match.

Information about remote retention and deletion markers is then fetched, and the corresponding local files are made to match. Hash codes of files in /var/media/Exposed at the remote site but not at the local are then requested, and if the local site has those hash files, they are hard-linked from the local /var/media/Exposed.
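One step of the exchange above can be illustrated like this: given the remote's list of hash files (here a plain file of "<hash> <mtime>" lines, whereas Snapjib fetches the information over SSH), print the hashes the local site is missing. want_fetch and the demo paths are assumptions.

```shell
want_fetch() {  # want_fetch <local-hash-dir> <remote-list-file>
  local hashdir="$1" list="$2" hash mtime
  while read -r hash mtime; do
    [ -e "$hashdir/$hash" ] || echo "$hash"   # absent locally: fetch it
  done < "$list"
}

mkdir -p /tmp/snapjib-sync/.hashes
touch /tmp/snapjib-sync/.hashes/aaaa1111
printf 'aaaa1111 1000\nbbbb2222 2000\n' > /tmp/snapjib-sync/remote.list
want_fetch /tmp/snapjib-sync/.hashes /tmp/snapjib-sync/remote.list
# -> bbbb2222
```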

During purging, existing files whose deletion markers are younger than their retention markers are deleted, but only if they are hard-linked under some other name, usually a hash file. Markers for long-deleted files are also deleted.
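The purge rule above can be sketched as follows: remove a file only when its deletion marker is newer than its retention marker and another hard link keeps the data alive. purge and the demo directory are illustrative assumptions.

```shell
purge() {  # purge <dir> <leaf>
  local dir="$1" leaf="$2"
  local del="$dir/.action-$leaf-delete" ret="$dir/.action-$leaf-retain"
  if [ "$del" -nt "$ret" ] && [ "$(stat -c %h "$dir/$leaf")" -gt 1 ]; then
    rm -- "$dir/$leaf"
  fi
}

d=/tmp/snapjib-purge
rm -rf "$d"; mkdir -p "$d"
printf 'x' > "$d/img.jpg"
ln "$d/img.jpg" "$d/hashlink"                        # the data survives here
touch -d '2020-01-01' "$d/.action-img.jpg-retain"
touch -d '2020-01-02' "$d/.action-img.jpg-delete"    # deletion is newer: wins
purge "$d" img.jpg
```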

Note that unification, synchronization and purging are all done on /var/media, not /var/media-monitored, so they neither create nor update retention/deletion markers.


Sorry, this software is currently not publicly available, for… reasons.