Rob's Filespooler Guide

---

Filespooler is a Unix utility for managing queues in a decentralized and strictly ordered way. Filespooler takes commands from a source and packages them into jobs, which can be piped into a stream or written to a filesystem. A destination, not necessarily on the same machine, then processes the jobs and executes the commands in the exact order they were created, regardless of the order in which the jobs were sent or received.

Many tools already implement FIFO (first in, first out) queues, but Filespooler works differently from most of them.

Filespooler has a variety of uses, from applying patches to incremental backups to simply ensuring that a series of programs are run in a particular order. Filespooler pairs well with NNCP, wrapping remote execution tasks into a series of files and using a single call to 'nncp-file' to run them.

As with NNCP, I've written a series of notes for myself on how Filespooler works and how to run it. This is mostly a mirror of those notes.

Concepts

Filespooler works by managing two sequence files: one at the source, which controls the sequence number to be assigned to the next job, and one at the destination, which tracks the next job number to process.

To use a queue, Filespooler makes a queue directory. It contains the following:

jobs/
nextseq
nextseq.lock

If 'jobs/' is synchronized across different locations, e.g. via a symbolic link or by copying with rsync, then 'nextseq' and 'nextseq.lock' only need to be present at the destination. This is actually the recommended configuration, because it prevents the source from accidentally processing the queue.

The source also needs some supplementary files to write to the queue.

Workflow

At its core, Filespooler operates in a very straightforward way.

First, we prepare a job packet at the source. Filespooler does this in four steps:

If we want to process jobs using a stream or a pipe instead of a file system, we can go directly to processing the job at the destination. This has some caveats that the official documentation covers thoroughly.

Otherwise, we send the job packet to the job queue. Two events happen here:

Finally, we process queued jobs at the destination. We do this in three steps:

The above process continues until one of three stop conditions is met.

Installation

Depending on your platform, Filespooler might be available as a package. On Ubuntu, Filespooler can be installed from the Universe repository:
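On Ubuntu that should be a one-line install; the package name 'filespooler' is my assumption here, so check 'apt search' if it fails:

```shell
# Install Filespooler from the Universe repository
# (package name "filespooler" is an assumption; verify with `apt search filespooler`).
sudo apt install filespooler
```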

My main distro is Fedora, however, and there I needed to build Filespooler from source.

If you build Filespooler from source, be sure to add '~/.cargo/bin' to your $PATH.
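A source build goes through cargo; the crate name 'filespooler' is my assumption, so check crates.io if the install fails:

```shell
# Build and install Filespooler from source via cargo
# (crate name "filespooler" is an assumption).
cargo install filespooler

# cargo installs binaries into ~/.cargo/bin, so that directory
# must be on $PATH for `fspl` to be found:
export PATH="$HOME/.cargo/bin:$PATH"
```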

Example Queue Process

This section walks through a complete example of setting up a queue, adding a job to it, and processing the job.

First, create sample source and destination directories.
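The directory names below are arbitrary placeholders I chose for this walkthrough:

```shell
# Create example source and destination directories
# (names are placeholders, not anything Filespooler requires).
mkdir -p ~/fspl-source ~/fspl-dest
```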

We then need to initialize the job sequence in the source. This can be done by hand, but it's more convenient to use Filespooler to do it.
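The sequence file is just a text file holding the next number, so a by-hand sketch looks like this; the on-disk format and the lock-file detail are assumptions on my part, and the fspl man page has the proper subcommand for doing it with Filespooler itself:

```shell
# Ensure the example source directory exists (placeholder path).
mkdir -p ~/fspl-source

# Initialize the source sequence by hand: the sequence file is
# plain text holding the next number to assign (format assumed).
echo 1 > ~/fspl-source/nextseq

# An empty lock file alongside it; fspl normally manages this
# itself (assumption).
touch ~/fspl-source/nextseq.lock
```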

Listing the source directory contents shows us the sequence file and lock:

Now we need to create a queue directory.
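A sketch, assuming the placeholder destination directory from above and that '-q' is the queue-directory flag (my reading of the docs):

```shell
# Create and initialize the queue directory on the destination side.
fspl queue-init -q ~/fspl-dest/queue
```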

When we look inside the queue directory, we see queue sequence files and a 'jobs/' directory, but there are no jobs yet.

The queue sequence is initialized at 1. We can verify it:
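Since the queue's sequence file is plain text inside the queue directory, a simple cat suffices (placeholder path):

```shell
# On a fresh queue, the next job to process is job 1.
cat ~/fspl-dest/queue/nextseq
```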

With our source sequence and queue directory initialized, we're ready to create our first job. As an example, we'll simply write a job containing some text to the queue. This can be done in several ways, but I will outline two here.

The first method is by writing everything to intermediate files. Start by storing some text in a file.

Next, create a job packet file out of the source file.

Finally, move the job file to the queue and rename it to Filespooler's internal name format.
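Put together, the three steps might look like the sketch below. The 'prepare' flags ('-s' for the sequence file, '-i' for the input) and the 'gen-filename' subcommand are my best reading of the fspl man page, so treat this as an illustration rather than gospel:

```shell
# 1. Store some text in a source file (placeholder paths throughout).
echo "Hello, queue!" > ~/fspl-source/message.txt

# 2. Wrap it into a job packet, consuming one source sequence number.
fspl prepare -s ~/fspl-source/nextseq -i ~/fspl-source/message.txt \
  > ~/fspl-source/job.fspl

# 3. Move the packet into the queue under Filespooler's internal
#    name format (gen-filename prints a fresh name in that format).
mv ~/fspl-source/job.fspl ~/fspl-dest/queue/jobs/"$(fspl gen-filename)"
```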

The second method is to pipe all the data from the moment it's created to the moment it's written to the queue. This is often how Filespooler is invoked, since it's much faster and more compact. Our example can be written as a one-liner like so:
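As a sketch, assuming the same placeholder paths as before ('-i -' reads the payload from stdin, per my reading of the docs):

```shell
# Create, wrap, and enqueue the job in one pipeline: prepare reads
# the payload from stdin and queue-write drops the packet into the
# queue under its internal name.
echo "Hello, queue!" \
  | fspl prepare -s ~/fspl-source/nextseq -i - \
  | fspl queue-write -q ~/fspl-dest/queue
```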

After creating the job file, the source sequence number has incremented. The increment actually happens when we invoke 'fspl prepare', before 'fspl queue-write' ever runs.

The job is also visible in the 'jobs/' directory.

However, the job has not been processed yet, so the queue sequence number remains the same.
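All three observations are easy to check by hand (placeholder paths):

```shell
# The source sequence has advanced to the next number...
cat ~/fspl-source/nextseq

# ...the job packet sits in the queue's jobs/ directory...
ls ~/fspl-dest/queue/jobs/

# ...and the queue sequence still points at the first unprocessed job.
cat ~/fspl-dest/queue/nextseq
```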

In case we want more information about the jobs in the queue than a simple 'ls' can tell us, Filespooler has its own mechanism to list jobs.
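A sketch, again assuming '-q' names the queue directory:

```shell
# List queued jobs with their metadata (sequence number,
# creation time, and so on).
fspl queue-ls -q ~/fspl-dest/queue
```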

We can also drill down into a specific job and print specific information.
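Something like the following; the way a job is selected here ('-j' plus its sequence number) is an assumption on my part, so consult the man page for the exact flags:

```shell
# Print the headers of job 1 (job-selection flag is assumed).
fspl queue-info -q ~/fspl-dest/queue -j 1

# Dump job 1's payload to stdout.
fspl queue-payload -q ~/fspl-dest/queue -j 1
```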

At last, it's time to process the jobs in the queue. In this example, we'll use the default behaviors: process the entire queue, and process it in order of sequence number (as opposed to processing by creation date).

The example command we run on the data is 'tee', which outputs the job payload to STDOUT and also writes it to a file.
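A sketch of the processing step, assuming queue-process feeds each job's payload to the command on stdin and that the command simply follows the queue options (both my reading of the docs; the output path is a placeholder):

```shell
# Process every queued job in sequence order. For each job, the
# payload goes to tee's stdin; tee echoes it to STDOUT and also
# writes it to a file at the destination.
fspl queue-process -q ~/fspl-dest/queue tee ~/fspl-dest/received.txt
```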

The act of processing the job increments our queue sequence number.

The queue is now empty.

We can confirm this by checking the 'jobs/' directory in the queue. The job file is gone.

Finally, we see the file has reached its destination:

Congratulations, you've used your first Filespooler queue!

More Information

Filespooler has extensive documentation on each of its commands. You can find it online at salsa.debian.org.

(HTTPS) fspl.1.md

[This post was originally written on 2025-06-25.]

---


[Last updated: 2026-01-28]