2025-07-03 fail2ban some more

This is a continuation of 2025-06-16 Ban autonomous systems.

I kept wondering why the "recidive" jail never found any repeated offenders from the "butlerian-jihad" jail. I think I know why, now. The "recidive" jail uses the following:
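
From memory, the relevant `failregex` in `/etc/fail2ban/filter.d/recidive.conf` boils down to this (the stock filter anchors on a longer log prefix; this is a simplified sketch):

```ini
[Definition]
# match ban lines in fail2ban's own log,
# excluding bans issued by the recidive jail itself
failregex = NOTICE\s+\[(?!recidive\])[^\]]+\]\s+Ban\s+<HOST>\s*$
```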

Far to the right, it uses `<HOST>` and that only matches a single IP number. If you examine the generated regular expression and scroll far enough to the right, you'll see the named groups `<ip4>` and `<ip6>`.

I decided to create an additional jail.

In my own `/etc/fail2ban/jail.d/alex.conf` I added a second jail:
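
Reconstructed as a sketch; the ban times and `findtime` are the values I mention below, but treat the rest as assumptions:

```ini
[butlerian-jihad]
# short-term jail; entries are added by the cron job, not by a failregex
# (a dummy logpath may be needed; omitted here)
enabled  = true
filter   = butlerian-jihad
bantime  = 1h
findtime = 1d

[butlerian-jihad-week]
# long-term jail: watch fail2ban's own log for repeat offenders
enabled  = true
filter   = butlerian-jihad-week
logpath  = /var/log/fail2ban.log
bantime  = 1w
findtime = 1d
```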

The first one uses the filter `/etc/fail2ban/filter.d/butlerian-jihad.conf` which remains empty. Remember, entries are added to this jail via a cron job discussed in an earlier post.

The second one uses a new filter `/etc/fail2ban/filter.d/butlerian-jihad-week.conf` defining the date pattern and the regular expression to detect "failures" (i.e. a hit).
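
A sketch of that filter; the date pattern matches fail2ban's own log format, give or take the milliseconds (note the doubled `%` that fail2ban's config interpolation requires):

```ini
[Definition]
# fail2ban log lines look like:
# 2025-07-03 12:34:56,789 fail2ban.actions [123]: NOTICE [butlerian-jihad] Ban 198.51.100.0/24
datepattern = ^%%Y-%%m-%%d %%H:%%M:%%S
failregex = NOTICE\s+\[butlerian-jihad\]\s+Ban\s+<SUBNET>\s*$
```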

The important part is that this uses `<SUBNET>` instead of `<HOST>`. If you scroll over to the right, you'll find a new `<cidr>` group:
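
Heavily abbreviated, since the real character classes are much longer, the tail of the generated expression has this shape:

```
(?:(?P<ip4>…)|(?P<ip6>…))(?:/(?P<cidr>\d+))?
```

To see the full expansion and test the filter against the log, `fail2ban-regex /var/log/fail2ban.log butlerian-jihad-week` does the trick.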

And it seems to be working.

The Munin graph shows how the butlerian-jihad-week jail immediately jumps to 3000 members

I had to restart this particular jail a few times. Using `--unban` makes sense because anything deserving of a new ban will be rediscovered immediately, since `findtime` is set to one day, as shown above.
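
That is, something like:

```sh
fail2ban-client restart --unban butlerian-jihad-week
```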

#Administration #Butlerian Jihad #fail2ban

**2025-07-05**. Two days later.

(image: 2025-07-03-fail2ban-some-more-2.jpg)

**2025-07-06**. Hm. I made a change to Emacs Wiki search, hoping to get rid of the DuckDuckGo dependency:

I was hoping that it would have very little effect. At about the same time, however, load started creeping up. The question is whether all those search requests are the cause. There aren't many search requests in the logs, and the process monitors don't show unusual activity for the Emacs Wiki processes. Therefore I think the problem lies elsewhere. But where?

Somewhere around the 3rd of July the load minimum seems to rise from 0.5 to 1.0

This virtual server has two cores so load should remain below 2.0, ideally.

Somewhere around the 3rd of July the number of hosts banned for a week goes up from 2000 to more than 7000

Is it the processing of all the bans? I don't think so, since the firewall had many thousands of banned networks before.

Is it the extra cron jobs monitoring the logs? I don't think so, because there's no 15-minute or 20-minute periodicity to be seen.

And note how load does come back down to 0.5 for a very short moment around midnight from the 4th to the 5th and in the early morning hours of the 6th.

How strange.

**2025-07-07**. Maybe just a fluke. I mean, if these defences actually worked the way I'd want them to, then an actual attack would feel like a fluke, right? 😄

The load graph shows that the current value is 0.5 although the average is still 1.6.

Also of note: The number of banned-for-a-week IP numbers and networks is up to 7900.

**2025-07-08**. And just now I found out the hard way that things weren't working as well as they ought to.

Around 18:00 Munin just stops working.

Load was over 140 when I checked in.

I had over 80 processes attempting to serve Community Wiki requests.

In the last two hours, I had 6629 requests and 3939 of them were for dynamically generated Recent Changes and RSS feeds.

For example:
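
Something of this shape, with the IP number and parameters made up:

```
203.0.113.7 - - [08/Jul/2025:17:42:03 +0200] "GET /wiki?action=rss;days=7;full=1 HTTP/1.1" 200 8452
```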

Once I had confirmed that the victim was `/home/alex/communitywiki2.pl`, I killed them all:
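
Along the lines of:

```sh
pkill -f communitywiki2.pl
```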

Also stopped respawns:
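
How exactly depends on how the processes get spawned; if they get started as CGI scripts, one crude stopgap is to make the script non-executable for a while:

```sh
chmod -x /home/alex/communitywiki2.pl
```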

All right, time to launch some scripts.
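
At the heart of those scripts: feed the offending networks to the short-term jail. A minimal sketch, with `networks.txt` standing in for the output of the asncounter pipeline:

```sh
# ban each offending network in the short-term jail
while read -r net; do
  fail2ban-client set butlerian-jihad banip "$net"
done < networks.txt
```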

1180 new entries, banned!

Another 553 banned.

And 1305 more.

Hm, strange. 🤔 Why aren't they all banned in one go? Ah! I think I see: `asncounter` only prints the top 10 autonomous systems by default!

So I'm going to add a new line to my `/etc/cron.d/butlerian-jihad`, all on one line, with appropriate time expressions, excluding my own IP numbers, just in case, and so on. You know the drill.
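
The shape of the entry, with the schedule, script name, and paths all invented for illustration:

```
# /etc/cron.d/butlerian-jihad (hypothetical names and schedule)
*/20 * * * * root /usr/local/bin/ban-by-asn --limit 30 --exclude /etc/fail2ban/my-own-ips.txt >> /var/log/ban-by-asn.log 2>&1
```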

Now if only Munin would start graphing again. Looking at `/var/log/munin/munin-update.log` I guess I need to `rm /var/run/munin/munin-update.lock`. Let's see if that helps. 😄

Nearly 3000 entries added to the short-term butlerian-jihad jail (1 hour ban).

Sadly, load started climbing again. 40. 50. In total, 44 processes were trying to serve Community Wiki.

The banning by autonomous system doesn't seem all that effective any more: looking at the last 20 suspicious entries for Community Wiki, every one of them came from a different autonomous system.

I've decided to lower the limit from 30 down to 10 expensive requests per ASN! 🫣

And with that, 6922 networks are now banned.

**2025-07-09**. As I was trying to start my netnews client (`tin`), I got a message saying that it wouldn't connect to the server as load was too high (over 17). Wow! Now here's a client that respects the server's needs!

I lowered my limit from 10 to 5 and manually ran my command without waiting for the cron job:
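
Using the same hypothetical invocation as above:

```sh
/usr/local/bin/ban-by-asn --limit 5
```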

Ran it a while ago: 2187 banned. Ran it again just now: 430 banned.

The distribution was very international. My limit is checked against that first number, the count of requests per autonomous system.

At least we can all agree that it's no longer just Emacs Wiki and China! Remember 2024-11-25 Emacs Wiki and it's still China. Now it's Community Wiki and the USA, Brazil, Uzbekistan, the Philippines, Vietnam, New Zealand, Iraq, India, the Arab Emirates, Morocco, Bolivia, Uruguay, France, Saudi Arabia, Egypt, Argentina, Mexico, Jordan, Great Britain, Pakistan, South Africa, Bahrain, Russia, Taiwan, Indonesia, Kazakhstan.

With apologies to Mercutio: *A plague on all your houses!*

**2025-07-11**. Here's something that confuses me: CPU is around 30% and yet load average is at 10. My best guess: processes blocked on disk I/O count towards the load average without using any CPU.

A screenshot of htop shows that CPU is slightly over 30% but load average is over 10.

I added the following to my `/etc/apache2/conf-enabled/blocklist.conf`:
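
Something along these lines; the user agent list here is illustrative, not my actual list:

```apache
# deny requests from known AI crawlers based on their user agent
<If "%{HTTP_USER_AGENT} =~ /GPTBot|ClaudeBot|Bytespider|Amazonbot/i">
  Require all denied
</If>
```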

I'm really starting to think that I have to rewrite my applications because of these AI scrapers. One more example of how they impose costs on all of society.

The wiki, which has been working fine since 2003, would need to protect expensive endpoints behind POST requests even though most of them do not involve "posting" any edits:
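
A guess at the typical candidates, in Oddmuse terms:

```
/wiki?action=rc      # recent changes, recomputed on every hit
/wiki?action=rss     # the feed
/wiki?search=term    # full text search
```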