<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<id>gemini://missbanal.net/</id>
<title>Willow&#39;s feed</title>
<updated>2025-08-25T09:37:35Z</updated>
<link href="gemini://missbanal.net/" rel="alternate"/>
<link href="gemini://missbanal.net/atom.xml" rel="self"/>
<entry>
	<id>gemini://missbanal.net/status-update-2025-08/</id>
	<title>Status update August 2025</title>
	<updated>2025-08-16T00:00:00Z</updated>
	<link href="gemini://missbanal.net/status-update-2025-08/" rel="alternate"/>
	<content type="html">Hey there! It&#39;s been a while as usual. Let&#39;s get into this.

As leaked in the last status update, I&#39;ve quit my day job as a Web developer. I have no idea what I&#39;m going to do next. Even if I would love to work on LOSS full time, I&#39;m more realistic today. If you want to hire me, feel free to send an email ~


## IRC-gateway

I started working on a proof of concept that slowly became my most important Hare code-base. IRC-Gateway is designed to act as an intermediary between IRC clients and external chat services such as Slack, Mattermost, Discord, etc. Its goal is to map those services&#39; features to the IRCv3 protocols. I&#39;ve pushed it to the point where it can connect to Slack on behalf of the user, fetch and store an access token, and retrieve the events from their channels.

Ultimately it should replace Weechat backends such as wee-slack or wee-matter, while being compatible with all IRC clients, not just Weechat.

It will take months to finish this, so I&#39;ve paused here. I&#39;d like this project to be funded before continuing.

=&gt; https://git.sr.ht/~stacyharper/irc-gateway IRC-gateway
=&gt; https://github.com/wee-slack/wee-slack wee-slack
=&gt; https://sr.ht/~stacyharper/wee-matter/ wee-matter


## Hare-ev overhaul

Most of my recent projects use the external library hare-ev. A large overhaul was done recently, and I spent a non-negligible amount of time adapting my programs to the new APIs. That is for the best, as most of them turned out to be completely compatible with the new approach.

Everything is now designed around ev::req, which by definition are just cancellable objects. Defining ev::req subtypes allows wrapping multiple requests together.

I&#39;ve also rewritten most of the hare-http related code to work with this. The websocket parts are the most impacted. The API really pleases me now, and should suit most use cases.

=&gt; https://harelang.org/blog/2025-07-30-coming-changes-to-hare-ev/ hare-ev overhaul
=&gt; https://git.sr.ht/~sircmpwn/hare-http hare-http


## Built With Hare

Also, a new dedicated page now exists to showcase Harelang programs. So I&#39;ve made some simple websites to document mine:

=&gt; https://builtwithhare.org Built With Hare
=&gt; https://splitter.builtwithhare.org Splitter - Built With Hare
=&gt; https://mcron.builtwithhare.org Mcron - Built With Hare
=&gt; https://bonsai.builtwithhare.org Bonsai - Built With Hare
=&gt; https://sxmobar.builtwithhare.org Sxmobar - Built With Hare


## Sxmo stuff

My work on Sxmo is sparser these days, but I&#39;m still around to merge patches here and there. Most notably, the River support patchset has been applied upstream. With the edge i3 support, this means that with the next release Sxmo will support Dwm, Sway, i3, and River. Also, I&#39;ve reworked the Wvkbd rendering logic a bit to support double-buffering.

=&gt; https://git.sr.ht/~proycon/wvkbd Wvkbd
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/my-perfect-music-synchronization-solution/</id>
	<title>My perfect music synchronization solution</title>
	<updated>2025-05-23T00:00:00Z</updated>
	<link href="gemini://missbanal.net/my-perfect-music-synchronization-solution/" rel="alternate"/>
	<content type="html">I&#39;ve been buying DRM-free music for a long time, and I used to synchronize those music files across my devices using Syncthing. But my library kept growing, and now weighs 60G… Some of my devices can&#39;t afford that! I had to find an alternative.

Years ago I switched to NFS and FS-Cache. NFS is a pretty common way to mount a network folder: you just have to write a line in &#34;/etc/fstab&#34; to mount a remote folder locally. FS-Cache is an additional layer that keeps a cache on the client machines. It automatically stores and invalidates cache entries, and prunes the least used blocks before the filesystem starts to suffocate. This considerably reduces the bandwidth usage for clients when you listen to the same albums, for example. But at the same time, you still need an active internet connection to access the files.

The whole problem is how to authenticate and secure the NFS connections. For a long while I hosted my own OpenVPN server, so that all my machines could communicate securely. It was a bit of a pain to configure every new machine, because I had to sign and move certificates…

Today NFSv4 can easily be hooked up to more secure mechanisms, the most legitimate one being Kerberos. So I spent some time configuring everything. Now that I have a great setup, I&#39;d like to share it all here, because I think it is very cheap to self-host, and not that hard to get everything working. Just a bit overwhelming at first…

In my case the NFS server, the Kerberos KDC, and the kadmin service are hosted on the same machine; your mileage may vary ~

I will not go into too many details, so here are my main sources:

=&gt; https://web.mit.edu/Kerberos/krb5-latest/doc/index.html
=&gt; https://wiki.gentoo.org/wiki/Nfs-utils#Encryption
=&gt; https://wiki.alpinelinux.org/wiki/Setting_up_an_NFS_server#Kerberos_authentication
=&gt; https://wiki.archlinux.org/title/Kerberos

## Server config

Let&#39;s start with the server parts. Of course replace the &#34;willowbarraco.fr&#34; domain and realm by your own.

```
$ apk add nfs-utils krb5-server
```

### DNS config

```
$ dig music.willowbarraco.fr
music.willowbarraco.fr.	2703	IN	A	82.66.55.247
$ dig kerberos.willowbarraco.fr
kerberos.willowbarraco.fr. 2691	IN	CNAME	willowbarraco.fr.
willowbarraco.fr.	2691	IN	A	82.66.55.247
```

&#34;music.willowbarraco.fr&#34; must be an &#34;A&#34; record here, not a &#34;CNAME&#34;. That&#39;s because Kerberos resolves domains to build service principal names, and a CNAME would behave differently.

### Firewall config

Very few ports are required. You have to configure your NAT to forward them if needed.

```
# /etc/nftables.d/access.nft
#!/usr/sbin/nft -f

table inet filter {
	chain input {
		udp dport 88 accept comment &#34;accept Kerberos v5&#34;
		tcp dport 88 accept comment &#34;accept Kerberos v5&#34;
		udp dport 749 accept comment &#34;accept kadmin&#34;
		tcp dport 749 accept comment &#34;accept kadmin&#34;
		tcp dport 2049 accept comment &#34;accept NFSv4&#34;
	}
}
```

### Kerberos config

Add the &#34;default_realm&#34; value and the dedicated &#34;realms&#34; section.

```
# /etc/krb5.conf
[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 default_realm = WILLOWBARRACO.FR

[realms]
 WILLOWBARRACO.FR = {
  kdc = kerberos.willowbarraco.fr
  admin_server = kerberos.willowbarraco.fr
 }
```

Then you can start the Kerberos services.

```
$ rc-service krb5kdc start
$ rc-service krb5kadmind start
$ rc-update add krb5kdc
$ rc-update add krb5kadmind
```

### Create Kerberos admin principal for remote admin

This admin principal is used to remotely administer your Kerberos services. *Use a strong password here*.

```
$ kadmin.local
addprinc admin/admin
```

Write this file.

```
# /var/lib/krb5kdc/kadm5.acl
admin/admin	x
```

### NFS config

Add those exports.

```
# /etc/exports
/home/willow/music  192.168.1.0/24(rw,sec=krb5p,no_subtree_check,sync)
/home/willow/music  *(ro,sec=krb5p,no_subtree_check,sync)
```

Here I export read-write for internal IPs, and read-only for outside IPs, meaning I can only add albums to my library from inside my local network. &#34;krb5p&#34; authenticates and encrypts the communications.

Edit this.

```
# /etc/conf.d/nfs
NFS_NEEDED_SERVICES=&#34;rpc.svcgssd rpc.idmapd&#34;
OPTS_RPC_NFSD=&#34;8 -V 4 --lease-time 30&#34;
```

&#34;-V 4&#34; restricts the server to NFSv4 only. And &#34;--lease-time 30&#34; reduces the client re-connection time when they switch network interfaces. Using a lower value is considered unsafe.

### idmap config

Idmap is used to map the UIDs and GIDs of your client machine users to the NFS server ones, so that you get the required permissions to read and write files.

Edit those values.

```
# /etc/idmapd.conf
[General]
Domain = blue-balloon
Local-Realms = WILLOWBARRACO.FR,BLUE-BALLOON
[Translation]
Method = static,nsswitch
[Static]
stacy@WILLOWBARRACO.FR = willow
```

&#34;blue-balloon&#34; is my system hostname here. I use &#34;static,nsswitch&#34;, plus this static rule, because on some machines I still have a &#34;stacy&#34; username. My NFS folder is owned by &#34;willow&#34;, so I have to map this manually to allow the &#34;stacy&#34; users to read/write. Every client &#34;willow&#34; user gets mapped to the machine&#39;s &#34;willow&#34; user through nsswitch and the &#34;Local-Realms&#34; value.

### Create principal for the NFS server

The &#34;nfs/...&#34; format is important. &#34;ktadd&#34; stores the keys in the &#34;/etc/krb5.keytab&#34; file.

```
$ kadmin.local
add_principal -randkey nfs/music.willowbarraco.fr
ktadd nfs/music.willowbarraco.fr
```

Create the end users. These are the usernames you use on your client machines. Ideally you use the same usernames as on your NFS server, but you can map them manually, as seen previously when configuring idmap.

```
$ kadmin.local
add_principal willow
add_principal stacy
```

(some of my machines still use &#34;stacy&#34; as username, as seen previously).

If you plan to automatically generate user tickets, using PAM for example, you have to use the same passwords as your machine users.

If you reached this point, everything should be done for the server part.

```
$ rc-service nfs start
$ rc-update add nfs
```

## Client config

These are the only steps you have to run on your client machines.

```
$ apk add nfs-utils krb5
```

Add this.

```
# /etc/krb5.conf
[libdefaults]
 default_realm = WILLOWBARRACO.FR
[realms]
 WILLOWBARRACO.FR = {
  kdc = kerberos.willowbarraco.fr
  admin_server = kerberos.willowbarraco.fr
 }
```

Create and store this host machine principal. Replace &#34;$HOSTNAME&#34; with your machine&#39;s hostname. The &#34;host/...&#34; format is important.

```
$ doas kadmin -p admin/admin
add_principal -randkey host/$HOSTNAME
ktadd host/$HOSTNAME
```

Add this.

```
# /etc/fstab
music.willowbarraco.fr:/home/willow/music	/home/willow/music	nfs4	rw,fsc	0	0
```

You can optionally configure FS-Cache by editing &#34;/etc/cachefilesd.conf&#34; if needed.
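
As a sketch, a minimal cachefilesd configuration looks like this; the cache directory and the culling thresholds below are the usual shipped defaults, so adjust them to your disk budget:

```
# /etc/cachefilesd.conf
dir /var/cache/fscache
tag mycache
# culling is off while more than 10% of the disk is free,
# starts below 7%, and caching stops entirely below 3%
brun 10%
bcull 7%
bstop 3%
```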

Make the &#34;nfsmount&#34; service depend on &#34;rpc.gssd&#34; by adding this.

```
# /etc/conf.d/nfsmount
rc_need=&#34;rpc.gssd&#34;
```

Then everything should be fine for the client services.

```
$ doas rc-service cachefilesd start
$ doas rc-service nfsmount start
$ doas rc-update add cachefilesd
$ doas rc-update add nfsmount
```

Your system user will still not be able to access the mounted directory. This is because you also have to generate tickets for your user.

```
$ kinit
Password for stacy@WILLOWBARRACO.FR
```

Or you can use PAM for this. For example, you can generate tickets while unlocking your session with swaylock. Of course, your session password must match your realm one for that.

```
$ apk add pam-krb5
```

Add this.

```
# /etc/pam.d/swaylock
# Unlock krb5 session
-auth            sufficient      pam_krb5.so minimum_uid=1000
```

In the end, you should be able to list these tickets.

```
$ ls /tmp/ | grep krb
krb5cc_0
krb5cc_1000
krb5ccmachine_WILLOWBARRACO.FR
```

The &#34;machine&#34; one is generated by &#34;rpc.gssd&#34; when &#34;nfs&#34; mounts the folders. The &#34;0&#34; one by your &#34;doas kadmin -p admin/admin&#34;. And the &#34;1000&#34; one by your PAM trigger, or the &#34;kinit&#34; command. Your user ticket expires after 24 hours, so you will have to regenerate one regularly.
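
Since &#34;renew_lifetime&#34; is set to 7d in &#34;/etc/krb5.conf&#34; above, an existing ticket can also be renewed without a password while it is still renewable. As a sketch, a hypothetical crontab entry could refresh it for you:

```
# user crontab entry (hypothetical): renew the Kerberos ticket twice a day
0 */12 * * * kinit -R
```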


It is possible that I missed something. If you struggle while testing, or if you find a problem, please contact me!
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/status-update-2025-05/</id>
	<title>Status update May 2025</title>
	<updated>2025-05-13T00:00:00Z</updated>
	<link href="gemini://missbanal.net/status-update-2025-05/" rel="alternate"/>
	<content type="html">Hello there, how are you? It&#39;s been a while, right? Let&#39;s write a lot of &#34;I&#34; sentences \o/

If you&#39;ve been reading my previous post, you know that I haven&#39;t beaten my PB, but that I recovered just fine, without too many bad moments. I feel fine, and want to take on the world! I am frustrated at my current professional situation, and still want to do more open-source, rather than working for capitalists…

Even if I used this time to recover, I still did things for the LOSS ecosystem, as usual. Let&#39;s try to remember the major milestones.


The most significant step is probably my work on the wlvncc project. This is a Wayland VNC client, first developed to test wayvnc, its server counterpart. I&#39;ve been using it for some time to take control of my tablet/laptop from my main workstation. But I had multiple issues with it… Let&#39;s write code!

I started by rewriting the whole image compositing code. The client buffer was scaled and positioned at the center manually, without taking the fractional scale value into account. The surface also responded poorly to resize events, causing massive breakage when the server image was updated too soon after a resize. I used the Wayland subcompositor, and viewporter, to delegate everything to the compositor. We dropped all internal scaling, meaning that the VNC client surface now stays sized the same way as the server one. The benefits are, first, that the whole surface is now scaled just once, and in a scaling-agnostic manner. And second, that resize events now only reconfigure the positioning values, without needing to re-allocate the buffers. Less code, fewer bugs, better performance!

Then we were suffering from multiple issues with the focus and unfocus events. Multiple times I broke the keyboard state, with no way to recover other than restarting the whole program. Or some keys were left latched after a pointer unfocus. To handle those events correctly, I had to store some state data, such as the pressed keys, to be able to release them on unfocus.

And the next big deal was the multiple image buffering. I discovered that I lacked a lot of knowledge around this problem, as I understood the buffer rotation logic very poorly at first. After some reading, I still reported some internal problems caused by the current implementation. For example, in situations where the server image was refreshed at a different rate than the local compositor, some frames could be skipped when they arrived after a previous one, but before the next Wayland frame callback. After a long discussion, we figured out that a simplification could improve the situation: the very last received frame should always be the one used to render the next image while compositing, even if that means some frames get skipped in between. This is a VNC client, so we must minimize the delay between the user&#39;s actions and the images rendered locally.


Another big moment was my coming back to the hare-http third-party library. I have cool projects in mind, and want to use Hare for them. But first we have to upstream better client and server libraries. I sent new patches implementing request/response writing and reading methods. With Drew, we designed some cool stream APIs to help the user write a response to a received request. The exact same API is used on the event-loop side, meaning a handler function can technically be used without even knowing whether the socket is blocking or not.


That seems enough for now! If I had more time, I would have written less.
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/status-update-2024-12/</id>
	<title>Status update December 2024</title>
	<updated>2024-12-30T00:00:00Z</updated>
	<link href="gemini://missbanal.net/status-update-2024-12/" rel="alternate"/>
	<content type="html">Hey you, it&#39;s been a while. How are you? Fine, thanks! Here are some quick updates, and personal news.

Last year I was unemployed, but now I have much less time to work on free software. For the most part, I&#39;ve continued being around the projects I love: maintaining Sxmo, being a PostmarketOS laboratory rat with my Pinephone Pro, and playing with Hare.

As a side note, I finally found the bug I had with Bonsai. In very rare cases, it stopped responding to events, leaving the Sxmo hardware buttons unresponsive. The issue has been fixed in hare-ev, and hopefully the event loop is now bullet-proof.

Also, the recent Sway and Wlroots releases, with the changes related to the new scene-graph API, broke some software such as Wvkbd and Bemenu. The events are dispatched in a different order, and so this software was behaving incorrectly.

On the personal side, I will undergo a big surgery in a week. Knowing myself, I could either code a lot in the following months, or lose myself completely in video games. Hopefully beating my Celeste PB in the process.

Maybe I&#39;ll stop doing these status updates if they keep being this irregular. It is an easy format to share, but I should write them way more often for them to stay relevant.

Oh, and by the way… Happy new year!
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/status-update-2024-08/</id>
	<title>Status update August 2024</title>
	<updated>2024-08-04T00:00:00Z</updated>
	<link href="gemini://missbanal.net/status-update-2024-08/" rel="alternate"/>
	<content type="html">Hey, it&#39;s been a while! But as I started a day job, I don&#39;t have that much to summarize:

## Sxmo

Last month we released Sxmo 1.16. With some patch tags afterwards, it went great. It is a tiny release, mostly containing fixes, integrating new apps, and supporting new devices.

Among my most notable patches, we can count (1) some refactoring around the wake scripts. They are now more useful to the user outside of the Sxmo internals. It is now very easy for the user to write basic periodic jobs that wake the device from suspension, and keep it awake while they run. (2) A red LED when the battery level is low. (3) Support for a distro wallpaper to be selected instead of the Sxmo one when available. Mainly useful for the new wallpapers from PostmarketOS.

Another notable thing is that Peter left the maintainer team. He did not announce this himself, and asked us to do so. For some reason that I&#39;m not sure I understand, it was something hard for me to do. I think it took me three full weeks to officially announce it.

## Hare release, and so are my Hare children

Hare 0.24.2 has been tagged recently, so I took some time to clean up my projects using Hare, and tagged new minor releases.

Bonsai and Sxmobar are already packaged in Alpine.

=&gt; https://sr.ht/~stacyharper/bonsai/ Bonsai - A Finite State Machine structured as a tree that triggers commands
=&gt; https://git.sr.ht/~stacyharper/sxmobar/ Sxmobar - A status bar component manager


Splitter is on the way, waiting mainly because hari depends on the new hare-harfbuzz.

=&gt; https://git.sr.ht/~stacyharper/splitter/ Splitter - A Speedrun GUI tool
=&gt; https://git.sr.ht/~sircmpwn/hari hari - UI toolkit for Hare
=&gt; https://git.sr.ht/~sircmpwn/hare-harfbuzz hare-harfbuzz - Harfbuzz wrapper for Hare


And a new one, Mcron, is already on its way to being packaged. I wrote this post some months ago to present it.

=&gt; https://sr.ht/~stacyharper/mcron Mcron - a sleeping Cron Job Scheduler
=&gt; /another-cron/ Did I write another cron job scheduler?
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/perfect-docker-bind-mount-permissions/</id>
	<title>How to keep your Docker bind-mount file permissions clean?</title>
	<updated>2024-05-31T00:00:00Z</updated>
	<link href="gemini://missbanal.net/perfect-docker-bind-mount-permissions/" rel="alternate"/>
	<content type="html">If you have ever used Docker to develop on a project, you might have already discovered some mess within your filesystem:

```
$ ls -lh
drwxr-sr-x 313 root root  12K May 29 19:18 node_modules
```

Very similar situations are produced with a PHP project, in the `vendor/` or `var/` folders for example.

We have all tried to do some kind of manual user ID mapping. If the user &#34;www-data&#34; inside our containers uses the same user ID as our host user, the process should produce inodes that our regular user has access to. Mhh?

Also, maybe it is possible, somehow, to configure our projects, or Docker itself, to keep our host filesystem clean. Can we configure Docker with this &#34;user-namespace&#34;? Or better, could we run Docker rootless? What about Podman?

There should be some way to do so, right? Quick answer: have you considered wanting something else?


To elaborate on my thoughts, I&#39;ll give some context, demonstrate with examples, and enumerate facts. We&#39;ll see what is possible, and what is not.

Some context first: my username is &#34;stacy&#34;, and my uid and gid are 1000. I&#39;ll test (1) the rootful Docker vanilla experience, (2) rootful Docker with user-namespace mapping, (3) rootless Docker, and (4) rootless Podman. I&#39;m starting with an empty folder each time. We&#39;ll run the same commands, and check the consequences on the filesystem.


Docker rootful vanilla:

```
$ doas docker run --rm -it -v ./data/:/var/src -w /var/src alpine sh
/var/src # whoami
root
/var/src # id
uid=0(root) gid=0(root) groups=0(root),...
/var/src # mkdir foo
/var/src # ls -lhn
drwxr-sr-x    2 0        0           4.0K May 29 19:27 foo
/var/src # exit
$ ls -lhn
drwxr-sr-x 3 0 0 4.0K May 29 21:27 data
$ ls -lhn data/
drwxr-sr-x 2 0 0 4.0K May 29 21:37 foo
$ rmdir data/foo/
rmdir: failed to remove &#39;data/foo/&#39;: Permission denied
```

The `data/` folder has been created by the Docker daemon, as root, while creating the bind-mount. The `data/foo/` directory is also owned by root: it was created by the root process inside the container, which is also root on the host system. My regular user can&#39;t remove those directories. That is the default experience.

In this situation there is no user mapping at all: 0 in the container is also 0 on the host. Then: what if we give a user ID explicitly?

To do so we have to create `data/` ourselves, because 1000 would not have permissions over `/var/src` otherwise.

```
$ mkdir data
$ doas docker run --rm -it -u 1000:1000 -v ./data/:/var/src -w /var/src alpine sh
/var/src $ mkdir foo
/var/src $ ls -lhn
drwxr-sr-x    2 1000     1000        4.0K May 29 19:54 foo
/var/src $ exit
$ ls -lhn
drwxr-sr-x 3 1000 1000 4.0K May 29 21:54 data
$ ls -lhn data
drwxr-sr-x 2 1000 1000 4.0K May 29 21:54 foo
$ rmdir data/foo/
```

This is generally what I encounter on the projects I work on. But it brings a lot of constraints, as we will demonstrate, because the daemon configuration may vary, or because some Docker images decide otherwise.


Docker rootful with user-namespace over &#34;stacy&#34;:

```
$ cat /etc/sub[u,g]id
stacy:100000:65536
stacy:100000:65536
```

```
$ doas docker run --rm -it -v ./data/:/var/src -w /var/src alpine sh
/var/src # whoami
root
/var/src # id
uid=0(root) gid=0(root) groups=0(root),...
/var/src # mkdir foo
/var/src # ls -lhn
drwxr-sr-x    2 0        0           4.0K May 29 19:33 foo
/var/src # exit
$ ls -lhn
drwxr-sr-x 3 100000 100000 4.0K May 29 21:33 data
$ ls -lhn data/
drwxr-sr-x 2 100000 100000 4.0K May 29 21:33 foo
$ rmdir data/foo
rmdir: failed to remove &#39;data/foo&#39;: Permission denied
```

Here, we are still considered &#34;root&#34; from the container&#39;s point of view. But we are actually creating directories with owner ID 100000. This means that our regular user still can&#39;t remove those directories.

But it is worse than that: what happens if we try to give our IDs explicitly now?

```
$ doas docker run --rm -it -u 1000:1000 -v ./data/:/var/src -w /var/src alpine sh
/var/src $ mkdir foo
mkdir: can&#39;t create directory &#39;foo&#39;: Permission denied
/var/src $ exit
$ doas rm -rf data/
$ mkdir data
$ doas docker run --rm -it -u 1000:1000 -v ./data/:/var/src -w /var/src alpine sh
/var/src $ mkdir foo
mkdir: can&#39;t create directory &#39;foo&#39;: Permission denied
```

Hehe, surprised? Now that we are using the user-namespace, the ID 1000 from the container&#39;s point of view does not match our host user ID at all.
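
The arithmetic explains it. With the &#34;stacy:100000:65536&#34; subordinate range above, the remapped daemon shifts every container ID by the range start: container UID 0 becomes host UID 100000, so the &#34;-u 1000:1000&#34; we passed writes as a host UID our user has no rights over. A quick sketch:

```shell
# with userns-remap, host uid = subordinate range start + container uid
base=100000        # first id of the subordinate range in /etc/subuid
container_uid=1000 # the uid we passed with -u
echo $((base + container_uid))
```

This prints 101000: an ID that host user 1000 has no permissions as, which is why the `mkdir` fails.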

In that situation, any preparation of the filesystem is expected to fail. I recommend avoiding bind-mounts completely when Docker is used this way.


Docker rootless:

```
$ docker run --rm -it -v ./data/:/var/src -w /var/src alpine sh
/var/src # whoami
root
/var/src # id
uid=0(root) gid=0(root) groups=0(root),...
/var/src # mkdir foo
/var/src # ls -lhn
drwxr-sr-x    2 0        0           4.0K May 29 19:40 foo
/var/src # exit
$ ls -lhn
drwxr-sr-x 3 1000 1000 4.0K May 29 21:40 data
$ ls -lhn data/
drwxr-sr-x 2 1000 1000 4.0K May 29 21:40 foo
$ rmdir data/foo
```

Okay, you might think we won, right? We are still &#34;root&#34; from the container&#39;s point of view, but we are actually creating directories as 1000.

Rootless also uses the user-namespace, for every user except &#34;root&#34;. So problems come when we use another user:

```
$ mkdir data
$ docker run --rm -it -u 1000:1000 -v ./data/:/var/src -w /var/src alpine sh
/var/src $ whoami
whoami: unknown uid 1000
/var/src $ id
uid=1000 gid=1000 groups=1000
/var/src $ mkdir foo
mkdir: can&#39;t create directory &#39;foo&#39;: Permission denied
```

Rootless brings the same constraints as rootful with user-namespace. It is impossible to make the IDs match your host ones, except for the &#34;root&#34; user.

Unfortunately, some Docker images just refuse to work as &#34;root&#34;. I&#39;ve encountered this situation with the OpenSearch images. But we can also argue that `php-fpm` should delegate its process pool to some &#34;www-data&#34;.
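
For reference, php-fpm drops privileges by itself in its pool configuration, so running the container as &#34;root&#34; does not mean the PHP workers run as root. The file path varies by image and distro; this is just a sketch:

```
; php-fpm pool configuration, often named www.conf (location varies)
[www]
user = www-data
group = www-data
```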

Fortunately, Podman gives us some tools to deal with rootless:


Podman rootless:

```
$ mkdir data
$ whoami
stacy
$ podman unshare
$ whoami
root
$ id
uid=0(root) gid=0(root) groups=0(root),...
$ ls -lh
drwxr-sr-x 2 root root 4.0K May 29 22:08 data
$ ls -lhn
drwxr-sr-x 3 0 0 4.0K May 29 22:09 data
$ chown 1000:1000 data
$ ls -lh
drwxr-sr-x 2 stacy stacy 4.0K May 29 22:08 data
$ ls -lhn
drwxr-sr-x 3 1000 1000 4.0K May 29 22:09 data
$ exit
exit
$ ls -lh
drwxr-sr-x 2 100999 100999 4.0K May 29 22:08 data
$ docker run --rm -it -u 1000:1000 -v ./data/:/var/src -w /var/src alpine sh
/var/src # mkdir foo
/var/src # exit
$ ls -lhn data
drwxr-sr-x 2 100999 100999 4.0K May 29 22:09 foo
$ rmdir data/foo
rmdir: failed to remove &#39;data/foo&#39;: Permission denied
```

`podman unshare` is a wrapper over `unshare`. It runs the user&#39;s command within the user-namespace that Podman uses. While we are &#34;unshared&#34;, we `chown 1000:1000 data`. From the host filesystem&#39;s perspective, `data/` now belongs to 100999:100999. This looks like a mess, but it is actually what the container needs in order to write to this folder.
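
The 100999 is no accident. In a rootless user-namespace, container UID 0 maps to our own UID (1000), and container UIDs from 1 onward map to the subordinate range, so a container UID lands at the range start plus the UID minus one:

```shell
# rootless mapping: container 0 maps to our own uid,
# container 1 maps to 100000 (the range start), and so on
sub_base=100000    # first id of the subordinate range in /etc/subuid
container_uid=1000
echo $((sub_base + container_uid - 1))
```

This prints 100999, exactly the owner we see from the host after the `chown`.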

To put it bluntly, `podman unshare` gives the user root privileges over their own user-namespace. The user is then free to manipulate inode permissions, to prepare the filesystem for the containers.


What did we learn?

It is impossible to reliably map the user ID to your host one when doing bind-mounts, because if the daemon is configured to use the user-namespace, or if it runs rootless, the IDs will mismatch.

It is always easier when the containers run as &#34;root&#34;, because that makes more setups viable. But because some images may decide otherwise, the user-namespace brings too many constraints over the filesystem.

Podman is the most appropriate solution for the rootless way. First because it has implemented it for longer, so we can expect better support. But also because it provides `podman unshare`, so that the user never needs root privileges to manipulate the filesystem either.


And to conclude the introduction&#39;s passive-aggressive hot-takes with more details: Linux is a multi-user operating system. Docker is in no way a black box that can abstract away, or iron out, the Linux permissions. We have to deal with them, so let&#39;s do it in a correct and secure way.
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/git-super/</id>
	<title>Git Super - a tiny script for the stacked diff workflow</title>
	<updated>2024-05-30T00:00:00Z</updated>
	<link href="gemini://missbanal.net/git-super/" rel="alternate"/>
	<content type="html">Huya, I discovered today that the git workflow I&#39;ve been using for years has a name. Some folks on the internet call it: the stacked diff workflow.

The idea is to commit on top of the default branch, and to forget that branches have ever existed.

```
$ git rebase -i origin/main
```

This opens your text editor for you to re-order, drop, or reword the commits that are not upstream.

```
$ git pull --rebase origin main
```

This pulls the applied commits, and will eventually drop your own commits when they get upstreamed. (btw: you should probably make `--rebase` the default).
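
Making `--rebase` the default is a standard git setting:

```
# ~/.gitconfig
[pull]
	rebase = true
```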

```
$ git push origin &lt;commit&gt;:refs/heads/&lt;branch&gt; [-f]
```

Create (or replace) the remote `&lt;branch&gt;` with the pushed `&lt;commit&gt;`.

This last one is the verbose version of `git push`. It is really not difficult to work with once you get used to it.

The main problem is that you have to remember which branch name you used, in order to update it…

For this, I wrote a simple POSIX shell script named `git-super`. It stores the branch name in dedicated `git-notes` references.

You can pass it a `&lt;commit&gt;` reference, or it will use `HEAD` by default.
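
I won&#39;t paste the script here, but the underlying mechanism can be sketched with plain `git notes`; the &#34;super&#34; notes ref name and the branch name below are hypothetical, not necessarily what `git-super` uses:

```shell
# throwaway repository just to demonstrate the mechanism
cd "$(mktemp -d)"
git init -q .
git config user.email "you@example.com"
git config user.name "you"
git commit -q --allow-empty -m "wip"
# remember which remote branch this commit belongs to
git notes --ref=refs/notes/super add -m "my-feature" HEAD
# later, recover the branch name without having to remember it
git notes --ref=refs/notes/super show HEAD
```

The last command prints back &#34;my-feature&#34;, so the script can reuse it as the push destination.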

It can also be used in the middle of an interactive rebase:

```
$ git rebase -i origin/main
... editor fires
pick a679c8d makefile: improve logging
exec git super
pick 6b75321 docker-compose: fix service declaration
pick fc9467e src: fix memory leaks
```

Here we re-ordered the commit `a679c8d` right after `origin/main`, and then git will execute `git super`. The script will prompt for the branch name, and push for us. Next time we use super with this commit, the push will update the same remote branch.

(The first time, `git super` will prompt for the remote it should use while pushing. `origin` is the default value, but you might have to use your writable fork on the forge.)

Now we can just use `git super a679c8d` to update the same remote reference. We don&#39;t need to remember the branch name anymore.

The script can be found here:

=&gt; https://git.sr.ht/~stacyharper/dotfiles/tree/master/item/bin/git-super git-super
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/another-cron/</id>
	<title>Did I write another cron job scheduler?</title>
	<updated>2024-05-01T00:00:00Z</updated>
	<link href="gemini://missbanal.net/another-cron/" rel="alternate"/>
	<content type="html">Sxmo is a Desktop Environment that primarily targets embedded devices with not-so-tiny screens, which we will call here &#34;smartphones&#34;. It is based on a common Linux software stack and, in a more structuring way, on tiling window managers (at the moment Sway and Dwm).

=&gt; https://sxmo.org Sxmo - Simple X Mobile

Today, to improve battery longevity, we rely a lot on system suspension. This means the OS shuts down most of the hardware, and only some subsystems, such as the modem and the RAM, are kept running. The system is unusable until it wakes up again. This happens, of course, when the user presses the power button. But the system can also be woken up by the modem, on incoming calls or SMS.

But then comes a common problem: how can the user schedule a task to run periodically?

This is a solved issue in the Linux world. Most of the time the system is always alive, so we can just wait for the correct moment to come. A simple script could sleep for some time, run the user command, and repeat. But if the user needs a more complex scheduler, they generally use a cron job scheduler. If you need to play a song every weekday at 07 AM to wake YOU up: write a crontab entry.

In Sxmo we tried to keep some control over system suspension so that we could use cron job schedulers. We wrote mnc (My Next Cron), a simple program that parses crontab files and outputs the time in seconds until the next crontab entry. We then use this duration to make the suspension sleep only until the next job. The system sleeps, then wakes up just in time for the scheduler to trigger the task.

=&gt; https://git.sr.ht/~anjan/mnc mnc - find seconds to next cronjob

Yes! Yes? It never worked… Here is the ticket I wrote and updated over time to track this:

=&gt; https://todo.sr.ht/~mil/sxmo-tickets/384 #384: Still unreliable rtc wake with cronjobs

It looks like the schedulers lose track of elapsed time while the device is sleeping. With cronie, the job just never triggers. With fcron, a cron scheduler that is theoretically safer around suspension, it can take 45 minutes, or more, for the task to actually start.

Those cron job schedulers were just not implemented with suspension in mind, and working around them does not work. But there was a simpler way: I started to implement mcron, a mobile cron job scheduler.


=&gt; https://sr.ht/~stacyharper/mcron/ mcron: a sleeping Cron Job Scheduler

Mcron does job scheduling in a sleeping way. Instead of tracking time and periodically checking whether a job moment has arrived, it computes the duration until the next job moments. It creates timer file descriptors in kernel space, then simply waits for them to expire with a read system call. And to make the system wake up when a timer fires, it uses the CLOCK_REALTIME_ALARM system clock.

```
CLOCK_REALTIME_ALARM (since Linux 3.11)
      This clock is like CLOCK_REALTIME, but will wake the system if
      it is suspended.
```

Designed this way, Sxmo does not have to do anything. It will work on any Linux operating system.


I have been programming in the Hare programming language for some time now, and I have recently used the hare-ev event loop a lot. I knew mcron would be really trivial to implement with it.

=&gt; https://harelang.org The Hare programming language
=&gt; https://git.sr.ht/~sircmpwn/hare-ev hare-ev - Event loop for Hare

Mcron is not packaged yet, because it still depends on some bleeding-edge Hare patches. But everything has been upstreamed now, and I will package it with the next Hare release.

I had to send a tiny patch to the upstream stdlib to help with reading system groups. And I worked with Byron Torres, who maintains the Hare date/time support, to fill a hole when creating date/time objects while switching to a different timezone offset.

But it just works.

I probably spent a whole weekend implementing it, writing man page documentation, and improving the duration computation algorithm. It makes sure users belong to the mcron group, and starts tasks with the correct user permissions. I expect it is safe to use.

=&gt; https://sr.ht/~stacyharper/mcron/ mcron - a sleeping Cron Job Scheduler
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/status-update-2024-03/</id>
	<title>Status update March 2024</title>
	<updated>2024-03-19T00:00:00Z</updated>
	<link href="gemini://missbanal.net/status-update-2024-03/" rel="alternate"/>
	<content type="html">Hello, it&#39;s been a long time. Let&#39;s try to recap some of the things I have done the last few months.

## Sxmo

We continue to improve, refactor, and fix bugs as they come. Since October, we have released 1.15.0, which includes the hook unification. This means we now have one &#34;version&#34; of the script hooks for all devices, including the desktop mode and the e-reader mode with the new Kobo Clara support. A lot of changes were required for that to land.

Also, since the latest Sway 1.9 release, I have dropped our sxmo-sway fork. A lot of discussion was needed on the Alpine packaging side to make this as smooth, transparent, and secure as possible. I tried to avoid the issues we had last time, when we moved to the fork.

## Wvkbd and Bemenu Wayland optimizations

For some time now I have been trying to minimize the cost, and improve the visual aspect, of the first frame rendering. To make this short: it is currently difficult to render the first frame correctly, with the correct scale value. Before this work, bemenu and Wvkbd were blurry on the first frame. This was especially visible on a low-spec device such as the Pinephone.

Also, Wvkbd adapts its layout and its height if it detects it is in landscape mode. This meant it was at first claiming its portrait-sized space on the first frame. It was not elegant at all on a screen as tiny as a smartphone&#39;s, because both Wvkbd and the other programs had to update their dimensions twice when opening the keyboard in landscape.

The problem is that to receive this data from the compositor, both of these programs have to create an initial Wayland surface without knowing anything about the current context. This also depends a lot on the compositor. Some of them send output data first; some of them wait for a surface to be created and assigned a role. The wlroots scene API will also continue to change how Sway behaves here in the next release.

With my recent patches, and on Sway 1.9, both of these programs render correctly, pixel perfect, and no longer make the other programs dance too much, landscape or not. But this will continue to change over time and across releases, so I&#39;ll keep an eye on it.

## Hare speak HTTP?

I really want to be able to use Hare to build things for the web. One of my plans is to write an IRC-v3 to &lt;less-good-chat-platform-as-Mattermost-or-Slack&gt; single-user bridge.
For this to happen, we first have to add HTTP support to a Hare library. I sent some patches and worked with Drew to push this topic forward. An initial hare-http server API should be merged soon.
I am now working on hare-ev HTTP support, because we can&#39;t really use a blocking event loop in a real-world web service. What I need first is an easy way to scan the received UTF-8 buffer while receiving it. At the moment there is no way to distinguish between a matched token (an HTTP line) and the end of the current buffer.

## Himitsu remembering consent

Today I am still using my own pass implementation (based on age, not GPG) as my password manager. I&#39;d like to move to Himitsu at some point, but some important features are needed for this to eventually happen:

- Synchronization between multiple devices
- Remembering consent for some time

The first point could be mitigated with a Syncthing shared folder between devices, I guess.

But there is currently no workaround for the second point. I have a lot of periodic scripts or programs that are configured to read passwords from my password manager. I don&#39;t want to be prompted every fifteen minutes while synchronizing my emails, calendars, whatever. Ideally I want Himitsu to prompt me once for every script, and I would check a &#34;don&#39;t ask me again for those entries for a week&#34; box.
That is what I started to implement and sent to the Himitsu mailing list. At the moment the consent is not persisted on disk, so it does not survive system reboots or daemon restarts yet. But I&#39;ll work on this next!


I hope I did not forget something important… Well, enough for today!
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/unification-of-my-website/</id>
	<title>Unification of my website</title>
	<updated>2023-11-02T00:00:00Z</updated>
	<link href="gemini://missbanal.net/unification-of-my-website/" rel="alternate"/>
	<content type="html">Hello! This is a very short message to note that I squashed both my `www.` and `blog.` domains into one single website, based on `www.`.

I changed the feed entry ids to match the urls, so you probably got double entries for previous posts. Sorry about that!

See you soon.
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/status-update-2023-10/</id>
	<title>Status update October 2023</title>
	<updated>2023-10-02T00:00:00Z</updated>
	<link href="gemini://missbanal.net/status-update-2023-10/" rel="alternate"/>
	<content type="html">Hey there, it&#39;s been a while! I&#39;ve had much fun those months. Let&#39;s dig in.

## Wvkbd

Together with Maarten, we worked a lot to improve Wvkbd, the virtual keyboard we use in Sxmo.

We had a look at the existing layers, and we cleaned up a lot of them. Our goal was to reduce the frequency of mistyped inputs. We limited the number of keys per layer, and placed them in consistent positions.

To reduce mistypes, we had to give more feedback to the user. When you press a key, you don&#39;t really know whether your finger touched the correct square until you look at the typed key in the focused surface. To help with this, I added a new popup surface that displays the typed key above your finger. This wasn&#39;t as easy as it looks, because I had to figure out how to use xdg_popups with layer_shells. It feels very pleasant to use, and we hope it will improve typing overall.

I also re-worked the Wayland surface initialization flows, and added fractional-scale-v1 support. Now I expect Wvkbd to appear perfectly on the first frame. We are still waiting for a Sway release for this problem to be fully closed.

## Hare

I have spent most of the last month&#39;s coding time on Hare. First, to bump my personal projects after recent breaking changes to Hare APIs. But also to clean up third-party libraries that I use, or that I want to use in new projects.

## Splitter

I am working on a Wayland GUI speedrunning program in Hare. I mean a GUI program to time runs and splits, display gold splits, display time deltas, or compute the best possible time. This is the kind of program that speedrunners use when they live-stream or record their attempts.

The program is very simple and efficient, and needs a single INI-formatted input data file. Most aspects of the GUI are configurable, like the font, colors, and transparency. It is also possible to map global key-binds using the command-line client. Both the client and the GUI communicate through a Unix socket. The code is almost done and just needs a bit of cleanup at this point.

This is one of the first serious Hare GUI programs with that broad a scope. It is the result of the work of many developers, from Hare itself to Hare libraries and sub-projects:

- hare-wayland gives the interfaces to talk to the Wayland compositor. It depends on hare-xml.
- hare-cairo is the Cairo library bindings, which allow us to draw colors and text on the buffers.
- hare-xkb is a library binding I wrote myself to correctly interpret typed keyboard key symbols.
- hare-ev is the event loop, because we need to redraw the time values periodically. We also need to connect the socket communications to it.

It takes a village!

I have contributed to those projects as I noticed missing features or issues. I am very proud of this, because it proves I understand Wayland as well as I thought, and that I can write a full GUI program from scratch by myself.

See you next time, in a month or two, or more :)
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/status-update-2023-07/</id>
	<title>Status update July 2023</title>
	<updated>2023-08-02T00:00:00Z</updated>
	<link href="gemini://missbanal.net/status-update-2023-07/" rel="alternate"/>
	<content type="html">Hello you, I&#39;ve been busy, so let&#39;s start this status update!

## Fossbill

Over the last months I developed the features that were necessary from a freelancer&#39;s perspective. The latest developments concern bill style customization, brand logo URLs, and minor workflow simplifications.

The installation and update workflows are spotless, and I wrote a documentation companion website to help newcomers. The first stable versions have been released, the test coverage pleases me and will help with long-term maintenance, and the database migration process is simple.

Now I feel like I have pushed this project forward enough. I don&#39;t expect much from it, but I would like it to find its users. Also, it was very interesting to develop a Flask application from scratch, with every aspect of a fully working SaaS product.

=&gt; https://fossbill.org fossbill.org

This is the kind of project I like to work on. I would be very happy if I could live from such projects in the long run.

## Sxmo

On top of the typical maintainer work, I also did a bit for Sxmo by myself.

We refactored the Alpine packaging a lot. Before that point, Sxmo relied on sxmo-utils from aports, and on some meta packages from pmaports. We moved every dependency to new meta packages in aports itself. So now, installing sxmo-utils-common is enough to pull in everything needed to run a Sxmo &#43; Sway &#43; Pipewire desktop environment. Updating packages is now cleaner and safer, and it helps to install Sxmo on any Alpine machine.

=&gt; https://lists.sr.ht/~mil/sxmo-devel/%3CCTYCDNHQP7GV.GKM858USWC4K%40yellow-orcess%3E More details here

Recently I upstreamed some of the configuration I had on my non-mobile machines. I use Sxmo on every personal computer I have, and I feel like some of the glue I use is common enough for general use cases. In the long run I&#39;d like Sxmo to work decently on every machine out of the box. If you ask why you should use Sxmo and not Sway directly, I would reply that Sxmo ships the configuration you&#39;ll need anyway.

## Bonsai and Sxmobar bump

Recent Hare development caused Bonsai and Sxmobar to fail to build. This was caused by stricter checks on some structure initializations, and by minor changes in stdlib APIs. I had to dig into those issues, but fortunately I don&#39;t have to detail them further. Hare is strongly typed by design, so as soon as the build passes, the software behaves as correctly as before. Boring!

## Personal life

It is summer! I went to the sea, ate mussels, drank some Ricard, and walked overnight on the beaches. This is the first time I have worn a swimsuit in about 3 years, and I felt fine.

My life is pleasant, but I&#39;ll soon have to review my professional situation again. If I can&#39;t build a decent income, I may have to look at regular job offers. At least to look at jobs on mainstream platforms, if I can&#39;t advertise myself enough.
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/status-update-2023-06/</id>
	<title>Status update June 2023</title>
	<updated>2023-06-14T00:00:00Z</updated>
	<link href="gemini://missbanal.net/status-update-2023-06/" rel="alternate"/>
	<content type="html">Hey there! It&#39;s been a while.

Two months ago I quit my job, and from that moment, I did stuff other than coding. I had neglected some aspects of my life for way too long. I developed friendships, took care of my health, fell in love.

But while I invested less time in coding than I thought I would, I still did some things.

## Sxmo - regular maintenance

Nothing very particular here, but I still tagged a release of sxmo-utils. The main goal was to bump the nerd-fonts icons to the new 3.0.0 release. The Alpine Linux aarch64 pipeline was down for a bit, but the root issue is solved, and it should land soon enough (I hope).

## Fossbill - feature complete

I continued to work on Fossbill with an irregular velocity. Still, I think the core feature set is now complete. My to-do list for this project now is:

- UI style overhaul
- Dark theme
- Data import/export
- SaaS feature scoping (disableable features)

I would also like to make the installation process easier. It still needs too many manual steps for my taste.

## Hare/Helios

Two months ago I had much fun coding for Hare and Mercury drivers. I actually made a working RTC driver, my very first driver! The next step was to implement CPU frequency determination in the Helios kernel, but I think that was too much for me at this point. Anyway, I learned a lot while doing so, and I still plan to continue down this path.

A good surprise was when Drew proposed that I join the Hare maintainer team. I still feel illegitimate, but that should get better once I actually make some failing aarch64 tests pass.

## Professional situation

I took time to set up my freelance situation. Now I have the microstructure to actually work as a freelance developer. I still need customers, but I haven&#39;t advertised myself anywhere yet. My budget still allows me to keep living for months. At some point, if the marketing side still does not work out, I will eventually fall back to looking for a regular job.

Now I feel I&#39;m back at my computer. There is a lot to do, and time passes fast. Take care of yourself.
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/status-update-2023-03/</id>
	<title>Status update mar 2023</title>
	<updated>2023-04-01T00:00:00Z</updated>
	<link href="gemini://missbanal.net/status-update-2023-03/" rel="alternate"/>
	<content type="html">Hello you. Time to recap what happened to me this month.

This should be quick. I don&#39;t know if the clock change broke something, but this month passed fast!

## Professional situation

In short: 20 days to freedom!

I&#39;ll have plenty of time to finalize Fossbill. This will be my very first focus, because I want a reliable, working project that I can be proud of. I really don&#39;t expect too much of it. If it becomes just a proof of my values and a tool I can rely on, that would be enough for me.

Next, I would like to help hire.sr.ht as I can. This is a work-in-progress SourceHut subspace for seeking out hackers. I would prefer not to advertise myself on rival headhunter platforms. I wish to work on FOSS projects as much as possible.

And depending on the situation, I also plan to study kernel driver development. I&#39;ll probably finally take time to dig into Helios. I expect the whole code base to be lighter than reading Linux code directly, and I hope the learning curve will be gentler, as I am more fluent with Hare than with the Linux-specific C ecosystem.

## Sxmo released

Some days ago we took some time to release Sxmo and all its subprojects. It went as expected, and we didn&#39;t encounter many unplanned edge cases.

I just fixed and tagged some minor patches into a 1.14.1 sxmo-utils release, to cover a really specific case where sxmo wasn&#39;t checking wakelocks as expected if an idle-inhibitor program was running. For example, if mpv was playing something while the device went into screen-off through a manual user action.

## Fun with NFS &#43; FS-Cache &#43; Cachefiles &#43; OpenVPN

Before today, I used a simple Syncthing shared folder to sync my music between all my devices. The problem is that this music folder now weighs ~50 GB, which is a problem for my lightest devices (rockpro64, pinephones). For years and years, I looked at solutions or alternatives to avoid this problem. But today I think I found a way.

To quote kernel.org: FS-Cache is a module that provides a caching facility to a network filesystem such that the cache is transparent to the user.

The most obvious network filesystem with FS-Cache support is NFS. Mhh, I have to set up a bunch of things before I can listen to my music again...

On my Alpine client machines, running the cachefilesd daemon is all I have to do for the FS-Cache/Cachefiles part. I can optionally edit the config file /etc/cachefilesd.conf to change the fscache folder path or the culling limits.

Next I have to prepare my server machine to offer the music folder over NFS. Install nfs-utils, and edit /etc/exports so that the client machines can mount the music path. Then start the NFS daemon and it is done.
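For reference, a minimal export rule can look like this (the /srv/music path and the client subnet are examples; adjust them to your setup):

```
# /etc/exports
/srv/music 192.168.1.0/24(ro,no_subtree_check)
```

Running `exportfs -ra` afterwards reloads the export table without restarting the NFS daemon.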

I initially used read-only export rules, but now I have to find a better way: NFS offers neither authentication nor encryption by itself. To have a secure and private NFS, I have to set up a VPN.

I avoided this as much as possible through years. But now, it is time. I am ready.

And in fact, setting up a self-hosted OpenVPN server isn&#39;t that bad. The trickiest part is to understand, prepare, and carefully sign the certificates. The basic idea is to give OpenVPN the CA certificate, so that it can check that clients use certificates signed by the same CA.

On Alpine, we can install easy-rsa to help with this step. Then we rsync the folder /usr/share/easy-rsa/ to a specific folder: one for each client, and one for the CA. Ideally this should be done on separate systems, so that the private keys never leak.

I won&#39;t dig into too many details; here is a good step-by-step guide:

=&gt; https://github.com/OpenVPN/easy-rsa/blob/v3.0.0-rc1/README.quickstart.md

With all the client and CA certificates signed, I was pleased to discover that the rest of the OpenVPN config is very simple. Configuring clients is also very straightforward. Start every daemon, and check with `ip a` that it works.

Now that all of my machines talk to each other in a secure way, I can mount my NFS folder through the VPN. The FS-Cache works as expected and the cache grows as my system reads the files. If I never listen to a song, it never gets downloaded. If storage becomes scarce, the least used cached files will be culled from local storage. It is perfect!


That&#39;s all for this month! Have a nice weekend! Thanks for reading me.
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/status-update-2023-02/</id>
	<title>Status update feb 2023</title>
	<updated>2023-02-25T00:00:00Z</updated>
	<link href="gemini://missbanal.net/status-update-2023-02/" rel="alternate"/>
	<content type="html">Hey there! Let&#39;s try to recap what I&#39;ve done this month.

## Professional situation

I&#39;ll keep it short this time, promise! I finally resigned. This situation was painful for me, and invisible to my employers. It was not getting any better, so I decided I would choose how and when it ends.

I still have a two-month notice period until my contract actually ends. It will continue to be painful, but at least I can see the end now.

Fortunately for me, my lifestyle and personal situation don&#39;t require a large income. So I will have months, maybe a year, before it really becomes precarious. I&#39;ll have even more time if I manage to obtain the unemployment benefits I am due.

And if nothing works, well, I will just knock on some company doors and see if they need another developer for another proprietary ERP web suite.

I would say my mental health is way better :)

## Sxmo support Sway upstream!

My monthly spam in the #sway-devel channel finally worked! Two Sway folks reviewed the touch seat operation implementation, and after some iterations, we got it merged.

=&gt; https://github.com/swaywm/sway/pull/6455 Sway Merge Request - Implement seatop_touch

To close this topic, let&#39;s recap what it does, and why we needed it so much:

On Wayland compositors, applications only receive events that concern their own surfaces. A program that is not currently focused does not receive keyboard key presses, nor does it receive touch or pointer events outside its &#34;window territory&#34;.

When I worked on sxmo&#39;s Sway support, I had to choose which dmenu-like program we would use on this compositor. I chose bemenu because it was the most mature and complete solution. But bemenu did not support pointer or touch events yet. I had to implement those!

While coding it, I quickly noticed that the bemenu Wayland backend stopped receiving my touch motion events as soon as I left its surface. This was a problem because the behaviors I intended to implement required those events. In sxmo menus, if you hold a touch contact and leave the surface by the top/bottom edge, releasing should take you to the previous/next page.

What was missing: if you initiate a touch from a surface, future events related to this touch (motion or release) should continue to be delivered to the initial surface. The touch position should be relative to the surface. Also, x and y can be negative, or greater than the surface width or height.

In the Sway nomenclature, this is called a seat operation. Touching a surface should initiate this kind of operation and make subsequent events behave differently. This is what the Sway merge request is all about!

## About bemenu

This week I also took time to test the master branch of bemenu. I saw they implemented two features that we will really appreciate in sxmo: borders and fixed height.

I haven&#39;t much to say about borders. They make the bemenu surface more visible when above other terminals. I just noticed they somehow failed to render the right border correctly. It took me some minutes to figure out that they forgot to compute the width using the correct scaling value, which means the right border was actually far outside the surface.

Now let&#39;s talk about the fixed height. Previously, the bemenu height was recomputed as the entry count shrank with filtering. It was impractical on desktops because the filter line and the results moved on almost every keystroke. And on mobile it was also impractical because filtering shrank the height so much that it became painful to select an entry by touch. Enforcing a fixed height makes the bemenu positions predictable.

While trying this with sxmo on my Pinephone, I noticed that the newly generated blank space above the last entry was causing behavior issues with pointers and touches. I pushed a fix to make it more pleasant to use on mobile.

I am now waiting for a bemenu release, and then I&#39;ll add the needed configuration to sxmo-utils.

Peter is also working on stripping some parts of the sxmo-dwm status bar so that it will &#34;make dwm and sway look identical&#34;. I think he will be pleased to see that bemenu now looks almost exactly like dmenu.

## Sxmo suspension overhaul

Aren, a prolific sxmo contributor, brought up this topic some weeks ago. Thanks to him, and to long debates, implementation attempts, and iterations, we almost completely re-implemented how and when we suspend in sxmo. A bit of context:

In sxmo we have three states: &#34;unlocked&#34;, &#34;locked&#34;, and &#34;screenoff&#34;. While in screenoff, we want to suspend basically as soon as possible. But some rules can restrain this: if an ssh session is connected, a player is running, or a modem-related action is in progress, we have to hold off suspension until there is no remaining &#34;reason&#34;.

We had a script named `sxmo_mutex.sh` serving this purpose. It could `lock` or `unlock` to maintain the list of reasons, and `hold` until the reason list was empty. Additionally, we have an extendable hook to periodically check a bunch of conditions and update the reasons (ssh, mpris player, etc).

The fun part is that we could actually rely on the kernel itself to implement this.

If Linux is compiled with the capability enabled (CONFIG_PM_WAKELOCKS), it exposes some interfaces to user space to manipulate wake locks.

Reading `/sys/power/wake_lock` gives the list of the current wake locks. Writing to this file registers a new one, with an optional &lt;timeout&gt; argument in nanoseconds after which the wake lock automatically expires. Writing to `/sys/power/wake_unlock` removes a wake lock.

We replaced our `sxmo_mutex.sh` calls with a new `sxmo_wakelock.sh` script that exposes a simple API to manage wake locks. Then we had to choose how to use this to actually suspend.

The first option was to write &#34;mem&#34; to `/sys/power/autosleep`. This enables opportunistic suspension: when no wake locks remain, the kernel suspends by itself.

To keep some kind of control, we just have to toggle a &#34;not_screenoff&#34; wake lock while switching from one state to another.

Unfortunately, we still have some edge-case features that prevent us from relying on this. In sxmo we make it possible to ask for a specific suspension time, which is mostly useful to wake up on time for a cron job, for example. For now, we chose to keep some kind of control over suspensions.

Instead, we chose to use `/sys/power/wakeup_count`. Reading this file blocks until there is no remaining wake lock. Writing an integer to it fails if the number does not match the current wakeup event count. If the write succeeds, the kernel will abort the following suspension if new wakeup events have been registered in between.

With this, we implemented a very simple `sxmo_autosuspend.sh` daemon that just loops and tries to suspend when no wake lock remains:

=&gt; https://git.sr.ht/~mil/sxmo-utils/tree/master/item/scripts/core/sxmo_autosuspend.sh sxmo_autosuspend.sh

More documentation here:

=&gt; https://github.com/torvalds/linux/blob/db77b8502a4071a59c9424d95f87fe20bdb52c3a/Documentation/ABI/testing/sysfs-power Linux ABI - sysfs-power

List of the benefits:

- Less code on our side to manage the list (and hopefully more robust code in the kernel)
- We decoupled suspension from the screenoff hook itself. `sxmo_autosuspend.sh` is now a daemon supervised by superd.
- We use wake lock &lt;timeout&gt; values everywhere except in some specific cases (&#34;not_screenoff&#34;, for example). Hopefully, this will fix all the &#34;my phone was awake when I found it&#34; issues.
- The kernel gives useful wake lock statistics with `/sys/kernel/debug/wakeup_sources`
- This opens the possibility for other programs and daemons to manage their own wake locks so that sxmo does not suspend while they are busy. We should now implement this in eg25-manager and ModemManager, in addition to their elogind support.

## The other stuff

I have done a bunch of other stuff this month, mostly for sxmo. I won&#39;t cover everything in depth, but here are some tl;drs:

Peter initiated some status bar icon improvements. Recent releases of Nerd Fonts added a lot of cool new icons that we really needed. Say &#34;bye bye&#34; to the thermometer that displayed the modem signal strength. We are waiting for the nerd-fonts update to be merged into aports so that we can merge everything into sxmo-utils. Peter also has additional patches ready to be applied later.

I worked on sxmo-dwm to drop the multi-key support patch, so that I could write a new one supporting key-up and key-down events. This now allows us to connect sxmo-dwm to Bonsai and thus unify our Sway and dwm environments. Now I can grow more complex Bonsai trees, to handle more complex situations than &#34;sequential hardware button clicks&#34;.

Fossbill didn&#39;t get much of my attention this month. Still, I took some time to adapt the bill workflow to manage an extra &#34;quote&#34; state between &#34;draft&#34; and &#34;bill&#34;. I also configured and integrated SQL migrations with Alembic, so that future changes will be easier to make.

And finally, I managed to configure automatic builds of my personal aports. There is also a recipe to cross-compile aarch64 packages using qemu-user. This allows me to skip compiling software on my underpowered devices. It is such a pleasure to push nightly recipes, and then upgrade all my devices to install them.


I took more time, and used tools to help me write this. Hopefully with fewer English mistakes! If you notice any remaining problems, feel free to contact me. Thanks for reading!
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/status-update-2023-01/</id>
	<title>Status update jan 2023</title>
	<updated>2023-01-31T00:00:00Z</updated>
	<link href="gemini://missbanal.net/status-update-2023-01/" rel="alternate"/>
	<content type="html">Hello there! I expected this status update to be shorter than the previous one, but then I started to tell my life&#39;s tale. So, let&#39;s get started!

## Personal situation and backstory

In those early 2023 days, I went to my ex-boss and explained that I had to leave; that my values are too misaligned with those of the company. Here is the tl;dr:

I had been working for three years at Synbioz, a French web-development company that focuses on Ruby on Rails. The company markets values that are aligned with mine. Things weren&#39;t perfect, but it was fine enough. We were using the Google toolsuite for everything, from mail to meetings, but we were also self-hosting Mattermost and some other okay-tier programs. We were hosting client services on AWS, but we were using GitLab and signing our emails with gpg. It was barely good enough for me to stay, but I had other, more important concerns on my personal agenda.

The company was recently sold to Ouidou, another French company, founded just 4 years ago, that already staffs 150 employees in every corner of France. Its fast growth already worried me: I didn&#39;t choose to work for a company with a headcount of about 10 just to end up in a mega-corporation. The toolset rapidly and dramatically changed, moving from Google (already bad) to Microsoft… Outlook never allowed me to use email in my setup. Our GitLab plan was stopped, breaking some CI jobs and removing some features. Our access to our password solution was closed. Even in the marketing corner, there is no place left for the remaining Synbioz values.

So, I knew I would have to leave at some point.

I asked politely to leave; I planned to start my own business. To help with that project, the safest approach for me was to end my work contract with what we call in France a &#34;Rupture conventionnelle&#34;: both the employee and the company agree that they should split. This is neither a firing nor a resignation, and it lets the employee collect some unemployment insurance. It would also have granted me some financial stability to start a fresh project.

My ex-boss agreed, but explained that he isn&#39;t the one making this kind of decision anymore. He asked the HRM, who rapidly refused. The answer was that it would cut into their margins. I could leave anytime, because they basically don&#39;t need me; I just have to resign.

I felt betrayed, because I had been put in a precarious situation without any notice. This made me angry, sad and frustrated, and really affected my mental health. At this point, I am still unsure of what I should do and how.

In a perfect world, I would begin my self-employment and work on projects I love. I would code good software that will last, and build a stable ecosystem of partnerships and self-hostable SaaS solutions. I would do some consulting and missions, on the condition that they concern FOSS projects. I would take more time to seriously study the Linux kernel and low-level development (I have more appetite for those than for web development today). I would work from home, on my own schedule, relying on a FOSS toolsuite of self-hosted or partnered solutions whose values I share.

I&#39;m terrified that I might not be capable of building this stable lifestyle, but I know that I should at least try to!

## Fossbill

I was thinking about the software I would need as a freelancer, and the biggest missing piece for me was a billing tool. I asked the Fediverse what FOSS freelancers use, and found no clear answer. This was a perfect occasion to play with Flask again!

I started to code for fossbill.org, a simple and fast billing solution with a very tight scope. I need to be able to:

- generate drafts and bills easily
- compute invoice amounts and taxes
- send drafts and bills by email
- keep a log of activity (creation date, sent date)
- mark bills as paid

I think this should cover most of my micro-freelancer needs, at least while starting out. It is a great occasion for me to try to build a SaaS service and gain some experience with that business model. Even if it doesn&#39;t take root, it made me build a real-world Flask &#43; SQLAlchemy tool from scratch.
This project is still under development and some domain features are still lacking. Some other features I still have to implement cover the business plan:

- data import/export
- user payment registration
- customer invoice generation (for the SaaS service itself)
- user customisation configuration

## Following sxmobar - Sxmo status reforged

In the last status update I wrote that I had initiated an overhaul of the sxmo status bar component display. This work is now complete and is waiting for its dependencies to be released. Here are some details:

The design is simple: sxmobar only manages the bar components. There is no battery, network, or any other kind of built-in monitoring. You can use it like so:

```sh
sxmobar -w -o pango|tty|plain # display and follow the bar on updates
sxmobar -a a-component-name 10 &#34;My component&#34; # add (and override) a component &#34;My component&#34; with order priority 10
sxmobar -d a-component-name # Delete the component
```

You then use monitoring tools that trigger updates to the bar components.

This implementation has to be robust and keep the component state consistent. The processes do NOT communicate through IPC nor sockets: concurrent additions, updates and deletions share a single state file. Rewriting this state file triggers a redraw of the status bar in every sxmobar process that watches it. The bar can be displayed as plain text, tty or pango markup, which means it can be shown as swaybar content and as an ssh tmux status line at the same time. Bar components have a foreground and background color, a style and a weight, all specified when adding them.

I had to add some missing syscall glue code to Hare for inotify and flock, to respectively watch for file rewrites and prevent concurrent reads and writes.

## See yaa

Mhh, and I think this covers my productive time this month. I was less active than usual, as some difficulties prevented me from focusing on interesting problems.

I still don&#39;t have the structure or freedom I want yet, but if you want to hire me for some cool projects, I am always open to propositions!

Willow, out.
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/status-update-2022-12/</id>
	<title>Status update dec 2022</title>
	<updated>2023-01-03T00:00:00Z</updated>
	<link href="gemini://missbanal.net/status-update-2022-12/" rel="alternate"/>
	<content type="html">Here is my first status update post. I would like to write these periodically from now on, just to have a public report of the topics I&#39;m working on and the things I&#39;ve done.

## Personal situation

This end of year was a perfect occasion for me to take holidays. It had been years since I last took 3 weeks off. It was a perfect moment to clear my todo list, apply some important sxmo patches, and code for fun.

## sxmo-sway rebased over 1.8

Unfortunately we will have to keep rebasing the touch seat operation patch on top of sway releases for some time. I periodically ping the sway folks for it to be reviewed and land in master. The rebase wasn&#39;t hard; only some minor fixups were necessary. Nothing broke in the process, and some weeks have now passed without issues.

## Hare developments

The development of hare-ev arrived at a perfect moment for me to play with it. Writing bonsai, the finite state machine we use in sxmo to trigger multikey actions, taught me to hate polling over a list of file descriptors. I still lack some understanding of kernel basics, but I am slowly closing the gap.

Hare-ev lets you write Hare programs that respond to events, requests and responses in a very easy way. It basically does what you would expect: register files, connect actions to writes and reads. Plus it comes with some sugar to handle socket connections, timer triggers, etc.

Still, I struggled with some issues. It took me some time to report them cleanly enough to be analysed upstream. Now that most problems are fixed and merged, I was able to re-implement some things in bonsai.

=&gt; https://git.sr.ht/~sircmpwn/hare-ev hare-ev

## Bonsai improvements

The Bonsai daemon and clients communicate over a unix socket. The daemon also has to handle delayed transitions asynchronously. I was doing this in a very ugly way: forking a sleeping process that would eventually be killed, or would wake up and notify the main process, through a pipe I was barely cleaning up afterwards, that the state had to be updated.

Rewriting all of this with hare-ev was a real pleasure and simplified all of that crap. I also took time to learn in more detail how to manage some memory allocations cleanly. The v1 was full of big memory leaks…

I still have to wait for some dependencies to be released to finalize this work. Until then, I keep running master on my pinephone and haven&#39;t noticed any problem for days now. The daemon is way more stable and lighter, so I&#39;m very impatient to release it.

=&gt; https://sr.ht/~stacyharper/bonsai/ Bonsai

## Sxmo status reforged

In sxmo we used a very simple script, sxmo_status.sh, to show the bar, manage components, or tail it to the Sway bar. Over time we added some pango markup to show colors and font styles. Pango markup doesn&#39;t work yet on the dwm bar, nor on a plain tty.

I initially tried to handle it with a special component nomenclature that would be stripped accordingly in those environments. The problem is that it made sxmo_status.sh too complex for a shell script. And worse, the hooks that compute the components became horrible to write and maintain. It was very easy to forget some pango components and break the global structure. It also broke the simple separator logic. In every way, it wasn&#39;t the right approach.

Some days ago I initiated a Hare program (again) to manage those use cases cleanly. The component structure has a name, a priority, some content, a style enum, and fg and bg color enums. The tail daemon outputs plain text, tty or pango markup. Multiple daemons can tail accordingly into the Sway bar, a tmux status line or whatever.

I hope to finish this work this week so that I can try it on sxmo very soon. I&#39;ll have to beta test it a lot, because a daemon crash would completely break the status bar, with no easy way to restart it.

## Conky 1.16 released with Wayland support

A very surprising piece of news came from the Conky folks recently. I received a mail signalling activity on a ticket from 2014 about Wayland support for conky. In sxmo we had built a dedicated solution to display things on the Sway desktop, because we weren&#39;t expecting this to ever land.

I immediately tried master to give early feedback. One of the main devs took time to clean up the builds, and I updated the recipe on the Alpine side for it to land correctly. There are still some minor problems with colors and rescaling, but the team seems very responsive to my tickets. I think this could be stable very soon.

=&gt; https://github.com/brndnmtthws/conky/releases/tag/v1.16.0 Conky 1.16.0 release message

## Plans for the future

This year will not be easy for me. I&#39;m planning to quit my job, because I would like to live from FOSS development. Or at least to try to. I&#39;ve been doing this in my free time for years, and I now feel a real dichotomy with my professional life. My company was bought recently, and I&#39;m now in a structure I really don&#39;t like, using tools I never would have used before.

This is also why I took time to clean up my websites and refresh this blog. I want to start giving public feedback on what I&#39;m doing, and to detail why I think it matters.

Happy new year to you :) see ya!
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/how-to-manage-shell-daemons/</id>
	<title>How to manage shell daemons</title>
	<updated>2022-01-14T00:00:00Z</updated>
	<link href="gemini://missbanal.net/how-to-manage-shell-daemons/" rel="alternate"/>
	<content type="html">While writing shell scripts, it is pretty common to need to manage some daemons.

Here is an example inspired by sxmo-utils, where the sxmo_modemmonitor.sh script listens to dbus signals to dispatch notifications:

```
#!/bin/sh

dbus-monitor --system &#34;interface=&#39;org.freedesktop.ModemManager1.Modem.Voice&#39;,type=&#39;signal&#39;,member=&#39;CallAdded&#39;&#34; | \
	while read -r line; do
		notify-send &#34;$line&#34;
	done &amp;

dbus-monitor --system &#34;interface=&#39;org.freedesktop.ModemManager1.Modem.Messaging&#39;,type=&#39;signal&#39;,member=&#39;Added&#39;&#34; | \
	while read -r line; do
		notify-send &#34;$line&#34;
	done &amp;
```

This script has a huge issue: it will exit by itself after reaching the end of the file. That means you cannot control the two dbus-monitor subprocesses anymore; you&#39;ll have to kill each of them manually.

```
#!/bin/sh

dbus-monitor --system &#34;interface=&#39;org.freedesktop.ModemManager1.Modem.Voice&#39;,type=&#39;signal&#39;,member=&#39;CallAdded&#39;&#34; | \
	while read -r line; do
		notify-send &#34;$line&#34;
	done &amp;

dbus-monitor --system &#34;interface=&#39;org.freedesktop.ModemManager1.Modem.Messaging&#39;,type=&#39;signal&#39;,member=&#39;Added&#39;&#34; | \
	while read -r line; do
		notify-send &#34;$line&#34;
	done &amp;

wait
wait
```

This is a better idea. The `wait` calls make the script wait for the background jobs to finish (the first `wait`, having no arguments, already waits for all of them). But we still don&#39;t manage the subprocesses.

If you want to be able to clean up the subprocesses with Ctrl&#43;c or with a kill signal, you will do something like this:

```
#!/bin/sh

dbus-monitor --system &#34;interface=&#39;org.freedesktop.ModemManager1.Modem.Voice&#39;,type=&#39;signal&#39;,member=&#39;CallAdded&#39;&#34; | \
	while read -r line; do
		notify-send &#34;$line&#34;
	done &amp;
PID1=$!

dbus-monitor --system &#34;interface=&#39;org.freedesktop.ModemManager1.Modem.Messaging&#39;,type=&#39;signal&#39;,member=&#39;Added&#39;&#34; | \
	while read -r line; do
		notify-send &#34;$line&#34;
	done &amp;
PID2=$!

gracefulexit() {
	kill &#34;$PID1&#34;
	kill &#34;$PID2&#34;
	exit 0
}
trap &#34;gracefulexit&#34; INT TERM

wait &#34;$PID1&#34;
wait &#34;$PID2&#34;
```

This is the most common way to handle subprocesses in shell scripts. But it has one big issue:
`$!` is the PID of the while loop only, not of `dbus-monitor`, and not of both of them.

As a consequence, killing those PIDs in the trap handler will leave the dbus-monitor subprocesses running, unmanaged.
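
You can verify this with a tiny experiment: the last stage of a backgrounded pipeline prints its own PID, and it matches `$!`, while the command on the left runs under a different PID.

```sh
# $! of a backgrounded pipeline is the PID of its LAST stage only.
sleep 2 | sh -c &#39;echo $$&#39; &amp;
echo &#34;$!&#34; # same number as printed by the sh above; not the PID of sleep
wait
```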

So how do we manage those? This is the clean way:

```
#!/bin/sh

FIFO1=&#34;/tmp/fifo1&#34;
mkfifo &#34;$FIFO1&#34;

dbus-monitor --system &#34;interface=&#39;org.freedesktop.ModemManager1.Modem.Voice&#39;,type=&#39;signal&#39;,member=&#39;CallAdded&#39;&#34; \
	&gt;&gt; &#34;$FIFO1&#34; &amp;
PID1=$!

while read -r line; do
	notify-send &#34;$line&#34;
done &lt; &#34;$FIFO1&#34; &amp;
PID2=$!

FIFO2=&#34;/tmp/fifo2&#34;
mkfifo &#34;$FIFO2&#34;

dbus-monitor --system &#34;interface=&#39;org.freedesktop.ModemManager1.Modem.Messaging&#39;,type=&#39;signal&#39;,member=&#39;Added&#39;&#34; \
	&gt;&gt; &#34;$FIFO2&#34; &amp;
PID3=$!

while read -r line; do
	notify-send &#34;$line&#34;
done &lt; &#34;$FIFO2&#34; &amp;
PID4=$!

gracefulexit() {
	kill &#34;$PID1&#34;
	kill &#34;$PID2&#34;
	kill &#34;$PID3&#34;
	kill &#34;$PID4&#34;
	rm &#34;$FIFO1&#34;
	rm &#34;$FIFO2&#34;
	exit 0
}
trap &#34;gracefulexit&#34; INT TERM EXIT

wait &#34;$PID1&#34;
wait &#34;$PID2&#34;
wait &#34;$PID3&#34;
wait &#34;$PID4&#34;
```

This uses named pipes (FIFO files) to separate the two commands, so we can grab each PID.

Here are two tips to make it simpler and cleaner:

* Use a PIDS variable to aggregate them, then a for loop to wait for or kill them.
* Use a start_daemon abstraction for external daemon programs, to avoid the named pipes

```
#!/bin/sh

daemon_pids_cache=&#34;$(mktemp)&#34;
start_daemon() {
	&#34;$@&#34; &amp;
	printf &#34;%s\n&#34; &#34;$!&#34; &gt;&gt; &#34;$daemon_pids_cache&#34;
}
stop_daemons() {
	while read -r PID; do
		kill &#34;$PID&#34;
	done &lt; &#34;$daemon_pids_cache&#34;
	rm &#34;$daemon_pids_cache&#34;
}

PIDS=&#34;&#34;

# start_daemon records the dbus-monitor PID in the cache file;
# $! after the backgrounded pipeline is the while loop&#39;s PID
start_daemon dbus-monitor --system &#34;interface=&#39;org.freedesktop.ModemManager1.Modem.Voice&#39;,type=&#39;signal&#39;,member=&#39;CallAdded&#39;&#34; | \
	while read -r line; do
		notify-send &#34;$line&#34;
	done &amp;
PIDS=&#34;$PIDS $!&#34;

start_daemon dbus-monitor --system &#34;interface=&#39;org.freedesktop.ModemManager1.Modem.Messaging&#39;,type=&#39;signal&#39;,member=&#39;Added&#39;&#34; | \
	while read -r line; do
		notify-send &#34;$line&#34;
	done &amp;
PIDS=&#34;$PIDS $!&#34;

gracefulexit() {
	stop_daemons
	for PID in $PIDS; do
		kill &#34;$PID&#34;
	done
	exit 0
}
trap &#34;gracefulexit&#34; INT TERM EXIT

for PID in $PIDS; do
	wait &#34;$PID&#34;
done
```
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/perfect-znc-setup/</id>
	<title>Perfect ZNC Multi-Client Setup</title>
	<updated>2021-05-28T00:00:00Z</updated>
	<link href="gemini://missbanal.net/perfect-znc-setup/" rel="alternate"/>
	<content type="html">By default, ZNC handles playback: when no client is connected, it stores missed messages and plays them back when you log in with an IRC client.

But if you have multiple IRC clients on multiple devices, logging in with one of them means the others will never receive those messages.

clientbuffer is a module that fixes that! It tracks which messages each of your IRC clients has received.

## Configure clientbuffer

We have to compile it ourselves, as it is not a base module:

```
$ apt install unzip znc-dev
$ curl -Lo v1.0.48.zip https://github.com/CyberShadow/znc-clientbuffer/archive/refs/tags/v1.0.48.zip
$ unzip v1.0.48.zip
$ cd znc-clientbuffer-1.0.48/
$ znc-buildmod clientbuffer.cpp
$ mkdir -p /var/lib/znc/modules/
$ cp clientbuffer.so /var/lib/znc/modules/
```

## Connect to ZNC

Use this as your username: &#34;username@CLIENTNAME/network&#34;

Here, CLIENTNAME will be used by clientbuffer to identify this client. Make it unique per client.
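
With WeeChat, for instance, this could look as follows (the server name, the host and the &#34;laptop&#34; client label are made up):

```
/server add libera znc.example.org/6697 -ssl
/set irc.server.libera.username &#34;willow@laptop/libera&#34;
/set irc.server.libera.password &#34;your-znc-password&#34;
```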

## Enable clientbuffer on each network

```
/msg *controlpanel set AutoClearChanBuffer $me False
/msg *controlpanel set AutoClearQueryBuffer $me False
/msg *status LoadMod clientbuffer autoadd timelimit=86400
```

* AutoClearChanBuffer and AutoClearQueryBuffer must be set to False. This stops ZNC from marking every message as read, since we want clientbuffer to handle that.
* autoadd makes clientbuffer automatically track a client when a new one connects.
* timelimit caps the history size at one day. You can remove it if you want the full history on all of your clients.
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/i-am-a-mobile-developer/</id>
	<title>Am I a mobile developer? (staceee SXMO co-maintenership)</title>
	<updated>2021-05-24T00:00:00Z</updated>
	<link href="gemini://missbanal.net/i-am-a-mobile-developer/" rel="alternate"/>
	<content type="html">I bought the Pinephone almost a year ago. The only environment I really wanted to use was SXMO. It was, and still is, the OS that best fits my idea of an ideal phone operating system. At first it was really unpleasant to use. The virtual keyboard caused me a lot of misclicks. The phone was not configured to wake up from suspend on incoming calls or sms messages. The UI was not scaled to be readable. The applications lacked a lot of the shortcuts or dedicated configuration needed to be pleasant to use.

=&gt; https://sr.ht/~mil/Sxmo/ SXMO: Simple X Mobile

Anyway, I felt that the core concept was right:

* Use existing software and make it easier to use on a phone. No dedicated Android-like &#34;applications&#34;.
* Use simple scripts and programs. This is inspired by suckless software.
* Be a Linux phone. We target power users who want the terminal close at hand.
* Be easy to develop, to maintain, to share.

I rapidly took the code in hand and started fixing the issues I encountered. I began sending patches to the SourceHut mailing list. These were my first real-life contributions with this brilliant forge (I love you drew…). The patches started to pile up, as Miles (~mil), the initial developer, had other things to take care of.

Some months later, other contributors contacted Miles to ask whether co-maintainership was doable. Miles decided that Maarten (~proycon) and Anjandev (~anjan) would be given access to the repositories to co-maintain the project.

This was a relief: the patches on the mailing list were now getting unstacked. As months passed, major features, required reworks, and issue fixes were merged into the main repos.

I continued to contribute a lot and to answer questions in the irc channel and the mailing list.

The main notable works I remember:

* Svkbd, the virtual keyboard, was enhanced. The layout was reworked to be easier to use. It looks way better and offers cool features needed for daily usage.
* Crust, the thing that wakes the phone from suspend on a modem notice, now works. The modem monitoring script fetches sms and incoming calls correctly. The phone could be used as a phone!
* Xorg HiDPI was configured. All the main components (menus, status bar, terminal) are now customizable. Our phone is now sexy (mine is pink).
* Some core components, such as dmenu (menus) and St (terminal), can now react to gestures.
* The incoming call pickup behavior was reworked to make it easier to perform.

Time passed, and SXMO got better and better! It was so exciting to contribute with this crew.

On Saturday 15 May 2021, ~proycon and ~anjan gave a talk about SXMO at Alpine Conf. They covered the main features and answered a lot of questions. The project really is starting to attract attention (I hope).

=&gt; https://diode.zone/videos/watch/b52e7c40-87cb-4479-a4cc-c11b1bfa8806 Video of SXMO at Alpine Conf 2021

Some weeks ago, I received a mail from ~proycon and ~anjan. They asked me if I was interested in co-maintaining SXMO with them.

Of course, I replied yes :D

So I&#39;ll continue to be present in #sxmo (moved to oftc), to reply to mails, to ask for patches to be improved, and to bring this project to the skies.

Cheers!
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/i-just-built-a-keyboard/</id>
	<title>I just built a keyboard</title>
	<updated>2021-01-25T00:00:00Z</updated>
	<link href="gemini://missbanal.net/i-just-built-a-keyboard/" rel="alternate"/>
	<content type="html">I got hyped after watching some Youtube videos in which hackers explained how to build a keyboard.

This slowly became one of my todo items. I had never done any soldering or electronics but, meh, this was a good starting point.

=&gt; /img/my_keyboard.jpg Here is the result. I&#39;m so proud!

After my first keyboard (and first mistakes), here is what I learnt.

## Tools

* Soldering station (mandatory)
* Little screwdriver (magnetic one if possible)
* Keycap remover (very useful)

## Material

I bought all the keyboard parts from

=&gt; https://kbdfans.com/ KBDFans

You need:

* Keyboard PCB

The main board of the keyboard.

* USB-C cable

To connect the PCB to the computer.

* A Case

Basically the ground and border of the keyboard.

* Keycaps

The things you actually put your finger on.

* Switches

The toggles under the keycaps. There are lots of different colors for different feelings. The most common are brown cherries.

* A plate

It goes on top of the PCB. We will clip the switches onto it.

* Stabilizers

There is only one switch under Enter or Space, but the keycap must not tilt from one side to the other. That is what stabilizers are for!


You should double-check your different parts, keeping in mind the keycap count and ensuring everything will fit the plate and the PCB.

## Recipe

You cannot put switches or stabilizers in the wrong orientation. Grouped holes on the PCB are for the same key; using one or the other only impacts the key layout.

* Inspect your future layout and place the stabilizers on the PCB where needed.
* Put the switches on the plate, then slowly place the plate on the PCB. Some switch pins may be twisted; slowly straighten them back if needed.
* Move the switches one by one until all the pins behind the switches fit correctly and are visible from behind the PCB.
* Do not solder yet. Place some keycaps on the switches, mainly on the borders. The middle keys are hard to misplace, but the border ones can have multiple positions. You do not want to desolder at the end (but it will maybe happen, and that&#39;s ok).
* Solder once you are sure of the switch positions.
* Remove the keycaps that cover the screws.
* Screw the PCB onto the case.
* Add the missing keycaps.
* Enjoy your keyboard, Yay!

## Soldering

This was definitely the funniest part. Soldering really is trivial and you should not fear it.

You should set your soldering iron to something between 360°C and 400°C.

Place the tip on the pin. You should be in contact with both the pin and the PCB.

Touch the tip with your solder wire. The wire will melt, and you have to wait until it flows. It should look like a pyramid, almost covering the pin.

Be careful not to touch the other PCB components, or you could destroy them.

After two or three pins, you&#39;ll be comfortable with this. Stay patient and enjoy the work.

## Software

The keyboard should just work out of the box (not something you get to say often!). Anyway, you can still write, hack, and then flash the firmware on the PCB.

For my PCB, let&#39;s read

=&gt;  https://docs.qmk.fm/#/ the QMK documentation

or use

=&gt; https://config.qmk.fm/ the online QMK graphical configurator

## More to see?

Some keyboard builders talk only about lube. I do not really care about this, but it somehow feels important (or not?).

I should look into it and see how it gives a better feeling.

Maybe using different switches for some keys could be cool. Having a different feeling for Esc or Space, for example.
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/what-the-push/</id>
	<title>What&#39;s the push?</title>
	<updated>2021-01-22T00:00:00Z</updated>
	<link href="gemini://missbanal.net/what-the-push/" rel="alternate"/>
	<content type="html">To know what a force push will actually do, you can run this:

```
git push origin d575e61a:a-super-dev -fn
```

Notice the `-fn` flags (`--force --dry-run`). The remote repository will then answer with something like:

```
To git.sr.ht:~stacyharper/a-good-project.git
 &#43; cfc907c4...d575e61a d575e61a -&gt; a-super-dev (forced update)
```

You can now run `git diff cfc907c4..d575e61a` (two dots, where the push output shows three) to display the difference.

If it looks correct, you can then use the same git push command, dropping the `n` flag.
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/why-gemini/</id>
	<title>The web is dead</title>
	<updated>2020-09-19T00:00:00Z</updated>
	<link href="gemini://missbanal.net/why-gemini/" rel="alternate"/>
	<content type="html">Mozilla Firefox is falling and becoming a sponsorship showcase. The only mainstream survivors are Google Chrome and the other Chromium derivatives, Safari, which has never been a real competitor, and minor alternatives such as the WebEngine-based ones.

At the time of writing, Chrome owns 81% of web browser usage statistics. This huge monopoly gives Google real power over the evolution of the web: pushing new features as standards forces competitors to follow the implementations or become obsolete.

The WWW tends towards complexity. At this point, it is no longer possible for an individual, or even a company, to build a new web browser. The scope of features a web browser has to provide is now way too large to be developed, let alone specified.

So the only web browsers that exist, and that will exist in the future, are the ones that already exist today.

That was a quick overview of the state of the world wide web at the time of writing.
However, the WWW originally was a way to easily share content written in a simple markup language, HTML, over HTTP. A simple but complete web browser could be written by a small group in a pretty decent amount of time. This gave the web its decentralized, unowned spirit.

Somehow people lost control over the web, and we are now in a state where:

* browsing the web on a low-performance device is horrible
* browsing the web on a smartphone is a pain in the ass
* we have to try hard and stay paranoid to protect our privacy
* we are stuck in a shitty lose/lose situation where we have to consent to cookies on every single website we browse
* our hardware is a thousand times more powerful and we read the exact same kind of content, yet the web is way less responsive than before

To summarize all this crap: we lost the balance between features provided and resource cost. And it seems what is done can&#39;t be undone.

## Restart all, learn from errors

Gemini is a way to share content over a pretty simple protocol, with a low-feature markup language. Its name obviously comes from the Gemini space program, after Mercury and before Apollo.

The Gemini program&#39;s goal was to test and master all the steps required to bring a human from Earth to the Moon and back. The launcher was cheap but reliable. The program&#39;s cost stayed close to Mercury&#39;s, while the capabilities of the launchers and modules were closer to Apollo&#39;s.

The idea is that Gemini, as a content sharing protocol, provides a minimal, scoped feature set. Its goal is not to go to the moon, but neither is it lacking important features:

* A simple header line carrying the mime-type and a return code
* A markup language inspired by markdown, with fewer features
* SSL/TLS usage by design

As you can see:

* no styling language
* no interactive front end scripting
* no cookies
* no request headers

As a consequence:

* no privacy issues, as you never send any user data other than the resource you want
* no performance problems
* easy implementation (a simple but usable browser can be written in an afternoon)
* content can be read without special rendering

## The Gemini markup language

It is inspired by markdown, but it is not markdown. The reason is that Gemini tries to prevent implementations from supporting extra, unwanted features that markdown could bring.

Here is a Gemini markup example:

```
# A title 1

## A title 2

### A title 3

=&gt; /foo An url

=&gt; gemini://my.capsule.com/foo Another url

* A list item
* Another list item

&#39;&#39;&#39;
  a  pre-formated
     content
&#39;&#39;&#39;
```

Everything not listed here is not allowed by the language. As you can see, only 3 title levels exist. How to display them is up to the client; it is not the content owner that chooses how the content is rendered.

## Conclusion

The Gemini community is still pretty new but very active. People take pleasure in writing personal capsules (the Gemini name for websites) and other blog-like content. People use this environment to share thoughts easily. There are already two search engines and aggregated feeds to browse content from the Geminiverse.
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/unit-tests-the-curse-worse-than-the-causes/</id>
	<title>Unit tests, the curse worse than the cause</title>
	<updated>2020-08-12T00:00:00Z</updated>
	<link href="gemini://missbanal.net/unit-tests-the-curse-worse-than-the-causes/" rel="alternate"/>
	<content type="html">Criticizing unit tests is uncommon. I'll try anyway to explain why I think we should stop writing them.

Unit testing is a practice that generally follows when you learn about the SOLID principles. Having a single responsibility per subject is commonly said to simplify testing.

```
class FooRepository
  def get_foos
    ...
  end
end

class GetFoosUseCase
  def initialize(foo_repo)
    @foo_repo = foo_repo
  end

  def call
    ...
  end
end
```

One approach to testing this code is to write unit tests that check every class independently. The idea is that if every test passes, then the whole project should be fine. This structure, with injected dependencies, lets the tests create dedicated contexts around the tested subjects.

```
class TestGetFoosUseCase
  mocked_repo = Mock.new.to(&#34;get_foos&#34;).reply([&#34;a mocked foo&#34;])

  subject = GetFoosUseCase.new(mocked_repo)

  def test_get_foos
    assert subject.call == [&#34;a mocked foo&#34;]
  end
end
```

Do you think this test is stupid and tautological? I think most mocked tests actually are. Anyway, to write this kind of test you have to know how `GetFoosUseCase` works, since you have to know that a call to `@foo_repo.get_foos` happens somewhere. That is why I think the ideology of "write tests first" is a myth in the unit-test world.
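The same tautology is easy to reproduce with a real mocking library. Here is a sketch in Python using unittest.mock, a hypothetical translation of the pseudocode above:

```
from unittest import mock

# A hypothetical translation of the pseudocode above: the use
# case just delegates to its injected repository.
class GetFoosUseCase:
    def __init__(self, foo_repo):
        self.foo_repo = foo_repo

    def call(self):
        return self.foo_repo.get_foos()

# The "unit" test: the mock replays exactly what we configured,
# so the assertion can only restate the mock's configuration.
mocked_repo = mock.Mock()
mocked_repo.get_foos.return_value = ["a mocked foo"]
subject = GetFoosUseCase(mocked_repo)
assert subject.call() == ["a mocked foo"]
```

The assertion can only ever restate what the mock was told to reply; it tells you nothing about the real repository.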

So here is my point: you thought very hard to write SOLID classes and methods, limiting coupling as much as you could, and THEN you wrote a completely coupled test suite, since the tests have to know how the context around the subject works.

* Refactoring the application
* Changing a method's API
* Adding a dependency
* Writing a single line of code somewhere

Any of these will, most of the time, force you to rewrite the mocks in every test that uses them. This kind of test costs a lot of time and is insane to maintain.

And this is not the only trade-off of unit tests.

Since you have to mimic the context, you will write a lot of code, generally way more than the code you actually want to check. A common "scientific" estimate is that a developer adds 3 bugs every 100 lines of code. Tests are code, so if a test does not pass, which do you trust more: the code or the test? The SOLID code or the other one?

I think we should definitely stop thinking about code coverage and think more about code value. What is the value of this code that mocks a dependency? Is it maintainable? Will this code bring robustness to my project? And at what cost?

Furthermore, unit tests do not bring more robustness than functional ones.

```
class TestGetFoosUseCase
  def test_get_foos
    assert container(&#34;GetFoosUseCase&#34;).call == [&#34;a fixtured foo&#34;]
  end
end
```

I think this test has huge value, way more than the previous one. It covers way more code, as it goes through both the `GetFoosUseCase` and `FooRepository` classes. It stays the same if you refactor both of those classes, so it is easier to maintain. You can add features to `GetFoosUseCase` and this test will still be valid, telling you whether it still returns the fixtured foo.

Is this functional test perfect? Not at all. But it perfectly serves the purpose of an automated test suite:

* it is quick to write and easy to maintain
* it detects common errors
* it gives rapid feedback about the code, detecting refactoring mistakes

Both approaches only test a ridiculously small portion of all the possible code flows. It is impossible to write tests that cover all cases. Having 100% code coverage DOES NOT MEAN that every case is covered! Don't be delusional about your tests. They will fail you, because bugs are bugs and you can't predict them. Write simple code. Evaluate your code value. It can be positive: bringing features, being maintainable, being debuggable. Or it can be negative: having a single, almost useless purpose, being hard to understand, requiring regular rewrites.
</content>
	<author><name>Willow Barraco</name></author>
</entry>
<entry>
	<id>gemini://missbanal.net/from-vim-to-kakoune/</id>
	<title>From Vim to Kakoune</title>
	<updated>2020-08-02T00:00:00Z</updated>
	<link href="gemini://missbanal.net/from-vim-to-kakoune/" rel="alternate"/>
	<content type="html">I recently switched from Vim to Kakoune.

## The problem with Vim

Trying to stick to the _worse is better_ philosophy leads me to think regularly about my tool stack. One of the most complex tools I used every day was the venerable Vim. Each time I thought about it, Vim did not really strike me as strictly following the Unix philosophy.

Vim has a lot of features outside the necessary scope of file editing:

* Directory management
* Split and tab management
* An embedded terminal
* Embedded ctags management
* File-inclusion browsing and definition searching

I once said: "Vim is awesome, I have used it for years and I still regularly learn new things about it". Today I think something is wrong with this simple fact.

From this observation, I started searching for alternatives. My goals were:

* The text-editing features that Vim provides
* None of the out-of-scope features that Vim provides

In short, a worse-is-better, Vim-like text editor.

## Discovering Kakoune

I quickly found a tool named Kakoune. I jumped to the design-doc page and what I read hooked me on the spot:

&gt; Kakoune is a code editor. It is not an IDE, not a file browser, not a word processor and not a window manager. It should be very efficient at editing code, and should, as a side effect, be very efficient at editing text in general.

&gt; Being limited in scope to code edition should not isolate Kakoune from its environment. On the contrary, Kakoune is expected to run on a Unix-like system, along with a lot of text-based tools, and should make it easy to interact with these tools.

&gt; Kakoune should be fast, fast to use, as in a lot of editing in a few keystrokes, and fast to execute.

&gt; Kakoune is inspired by Vim, and should try to keep its commands close to Vim’s if there are no compelling reasons to change. However self-consistency is more important than Vim compatibility.

I jumped aboard and used it for a few days to take a better look. Here are the differences from Vim:

## Visual is the new Normal mode

In Kakoune you don't have a visual mode. In fact, the normal mode is the visual mode. You always have a selection, and it is, at minimum, your cursor. So in Kakoune you don't `x` to delete the character below your cursor. You just `d` (which is honestly logical).

When you move your cursor with `hjkl` or with `wb`, you in fact select the given subject. As a consequence, replacing the next word is `wc`. You extend your selection with capitalized subjects: you can `wWWW` to select the next 4 words.

In Kakoune you don't `dd` to delete the current line. I always found the `dd` logic inconsistent anyway. In Kakoune you `xd`, where `x` is the way to select a line.

## Subject &#43; Verb: a lesson from Verb &#43; Subject

One of the most destabilizing things about Kakoune is the way operations are done.

If you want to remove the next 4 words with Vim, you `d4w`. But guess what: you miscounted, and you only had to remove 3 words. So you have to `u`, then `d3w` again.

Kakoune uses a more interactive way to do that. In Kakoune you `4W`, then, seeing you miscounted, you `B` to drop the last word from the selection, then `d`.

This is a more visual editor. This way is more interactive and less programmatic. There are some advantages to this approach:

* You make fewer mistakes, as you can see the selected range
* Someone behind you can understand the actions you are doing, as they see them

## Multi-selection by nature

Kakoune is designed around multi-selection at its core. This feature allows powerful possibilities. Let's take an example to show how Kakoune makes things easier than Vim.

You want to change a variable name from &#34;foo&#34; to &#34;bar&#34; in your buffer.

In Vim you do it in one operation: `:%s/foo/bar/g`.

In Kakoune, you do it with multiple operations, with more feedback, but in short: `%sfoo&lt;Cr&gt;cbar&lt;Esc&gt;`.

First you `%`, which selects the whole buffer content. Then you `s` to "select", which splits the selection on a pattern. You type `foo&lt;Cr&gt;` and now only the "foo" words are selected in the buffer. Then you press `c` to change those selected parts and you type `bar&lt;Esc&gt;`. The keystroke count is lower than in Vim; in general, Kakoune scores better than Vim at Vimgolf. Plus, you have less chance of an unexpected impact on the content, as you see your commands interactively. Plus, you can chain "select" operations to narrow down your next action. Plus, after your change, you can still `i` content or do other actions, as your selections persist.

## No out of scope features

Having no netrw and no splits was hard to manage at first, because my workflows were not used to it. But Kakoune has something I would have really loved to have in Vim, and it closes the gap: Kakoune is client-server based.

When you run `kak`, you in fact start a server and connect a client instance to it. You can read the session id in the bottom-right corner. You can then `kak -c session-id` to connect another Kakoune client. All of a session's clients share buffers, clipboard content, macros, etc. Having splits in Kakoune is like having multiple terminals in the same session; it is your tmux's or WM's role to manage the layout. In fact, you will commonly `:new` from a Kakoune client to spawn a new client in another window. Plus, if you are in a tmux context, `:new` automatically connects the new client in another split.

From this point, I wanted to wrap a common file browser such as nnn around kak, to have a full, independent file-browser tool to browse my projects and open files in Kakoune. Fortunately someone did that before me, and this person did it right!

=&gt; https://github.com/alexherbo2/connect.kak Connect

Connect is a set of scripts that let you `:connect-terminal`, which opens a new terminal instance with a specific environment. For example, $EDITOR will be "edit", which is `.config/kak/autoload/connect/paths/commands/edit` and opens files in the linked kak instance. You then just `nnn` and `e` the files you want to edit in Kakoune. Connect also has modules to `:nnn` directly from Kakoune, which opens a linked nnn instance in the current buffer's directory.

## Conclusion?

I really love the Kakoune approach. We are truly using the one-software-for-one-task approach here. We have a simple and powerful text editor, and we plug simple and powerful tools into it. Maintaining these independent pieces of software is saner, and it guarantees those projects will last longer.

I really recommend you give Kakoune a try, as I think it is feature-complete for Vim-style text editing, and it is a saner project.
</content>
	<author><name>Willow Barraco</name></author>
</entry>
</feed>
