llms

A collection of notes about LLMs, one tech worker's take. Nothing crazy here and very likely nothing you haven't read or heard before. I'm keeping things here for a historical record and just to retain some memories.

Early 2026

"This is becoming a mess very fast"

My coworker recently told me that he had been an AI optimist for a very long time: he believed in how much more productive engineers would become and how much more empowered non-tech folks would be to create things that would otherwise have taken a long time. And in a way, this is exactly what he's observing, just not in the way he imagined.

What is really happening now is that several designers and product managers created a channel where they vibe the next version of a prototype, which is then sold to the executive team as the vision and the future of the core product (it's a small company: one brand, one product). They follow a process where each vibes something in isolation, then brings it to the group, and the other members react with rocket and sailboat emojis. But it's a collection of people, loosely related through their roles in the org, who build essentially in isolation and feed the group's anxiety with what seems to be a never-ending, rapid-fire stream of updates that read like "look ma, I made it, I don't know how it works or how to rebase on main, but Claude will figure it out".

At the same time, code from that vibing group is seeping into the real system in the form of vibed PRs that try to replicate the ideas of the living prototype. There are, of course, several problems with that.

The first and main problem is that the system has been in development for decades. It's not that the stack is old; admittedly, the stack is very modern. It's just that the system is massive, so any vibed change carries a greater risk of missing some edge case than it adds benefit by solving an actual problem with more code (which is rarely a good solution to begin with). As of mid-day Monday, there were some fifty open PRs waiting for review, created over the weekend (because, you know, anxiety) and on Monday morning. And the stream never ends.

The second problem is that the people producing such slop don't necessarily agree, or truly believe, that they should stop. They accept the argument that they don't know the system and therefore can't be responsible for making a change, let alone maintaining it. Moreover, none of this is anywhere near the expectations for their role. But their argument is always "let's just ship it and see what customers say". Except the engineers know the customers will point out the lack of compliance with a range of non-functional properties of the product, be it stability, user preferences, accessibility, or anything in between.

The third (and nowhere near final) problem is that the executives see the progress on the greenfield prototype and get excited. They then see significantly slower progress on the real, living system and get sad panda. What do they do? They "try to fix the problem" by letting what I believe is a bunch of people picked for the wrong job empower everyone in the org, without ever talking to the engineers, to create the next awesome thing. Which essentially means throwing poorly generated code that no one has read over the fence into engineering, which then must maintain it in production and keep up with the anxiety.

So it's a hype cycle, compressed into less than a day, that feeds on itself, and I can totally understand why some of the engineers who were originally happy about the promise of AI are no longer feeling it. The LLMs are now used to build more features that don't comply with what is promised to the customers in terms of reliability, stability, security, privacy, accessibility, usability, you name it, all while those concerns are increasingly sidelined. My coworker and I wish it were the other way around, but nope.
