2020-12-12 Computer Competency
Recently, @hisham_hm wrote: “We need dumb tech and smart users, and not the other way around.” He expanded on that on his blog: Smart tech – smart for whom? He talks about the distinction between smart devices and computers and picks game consoles as an example:
… they are not universal machines for you, the owner. For me, my Nintendo Switch is just a game console. For Nintendo, it is a computer: they can install any kind of software in it in their next software update. … From Nintendo’s perspective, the Switch is a universal machine, but not from mine.
At the time, I was more interested in the concept of smart users. @phoe asked: “Is there any industry standard for ensuring that we get smart users? Any best practices to follow?”
What do you think enables smart users? Good question! I’d say letting people use a tool without a simplified interface, and letting them share both data (files, URLs) and behavioural changes (Excel macros, configuration files, Emacs Lisp files, and so on) are two enablers of independent expertise growth. People can figure something out, add functionality in some way, and pass that improvement on to others without having to ask anybody for their blessing. You don’t have to recompile the tool, and the tool provides a way to extend itself in a shareable way. Expertise can develop, and because it transfers from person to person, domain-specific expertise can develop, too. You can adapt the editor for your team. You can write Excel sheets for your department.
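To make that concrete, here is a minimal sketch of such an extension mechanism, in Python rather than Emacs Lisp, with a hypothetical command registry; the “init file” a colleague might send you is inlined as a string for the sake of the example:

```python
# Hypothetical extensible tool: commands live in a registry,
# and any user-supplied init file can add to it.
commands = {}

def register(name):
    """Decorator that adds a function to the command registry."""
    def wrap(fn):
        commands[name] = fn
        return fn
    return wrap

# Built-in behaviour shipped with the tool.
@register("upper")
def upper(text):
    return text.upper()

# A colleague's shareable "init file", inlined as a string here;
# in a real tool it would be a plain file they sent you.
init_file = '''
@register("shout")
def shout(text):
    return text.upper() + "!!!"
'''
exec(init_file)  # the tool loads the extension without recompiling

print(commands["shout"]("hello"))  # prints HELLO!!!
```

The point is the social mechanism, not the implementation: the extension is an ordinary file, so it can travel from person to person without anybody’s blessing.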
@dredmorbius wrote something related about the minimal viable user on Reddit. It’s not the same thing, but it’s related. There, he explores the problems that arise in software development. One of them is complexity. A solution should be as simple as possible, but no simpler. Conversely, a complex problem requires a complex solution. You can’t cut every Gordian Knot. And yes, there are always places where complexity arises by necessity: whenever we interface with complex domains: shells, editors, development environments, databases, email.
Rereading that collection of thoughts brings back the OECD report. It’s devastating, and it raises the question of what “smart users” might actually mean. The Nielsen Norman Group has a great summary. They count four levels of proficiency for those who can use a computer at all. This matters, because a full 26% of the adult population was unable to use a computer. A quarter!
- Below level 1: 14% of the adult population. They can perform a simple, straightforward task like “delete this email message.” Together with the non-users, that’s 40%.
- Level 1: 29%. They can use a widely used tool like email software or a web browser and perform straightforward tasks like “find all emails from John Smith.” That’s 69% so far.
- Level 2: 26%. They can perform multi-step tasks like “find a sustainability-related document that was sent to you by John Smith in October last year.” That’s 95%.
- Level 3: only the remaining 5% can solve problems that involve setting sub-goals and assessing progress, evaluating relevance, reliability, and so on. The example task provided is to determine “what percentage of the emails sent by John Smith last month were about sustainability.”
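That last example task shows why programmers underestimate the problem: to a programmer it is a few lines of code, to 95% of adults it is out of reach. A minimal sketch, assuming a hypothetical inbox of (sender, date, subject) tuples rather than a real mail API:

```python
from datetime import date

# Hypothetical inbox: (sender, date, subject) tuples, not a real mail API.
inbox = [
    ("John Smith", date(2020, 11, 3), "Sustainability report draft"),
    ("John Smith", date(2020, 11, 10), "Lunch on Friday?"),
    ("John Smith", date(2020, 11, 24), "Re: sustainability targets"),
    ("Jane Doe", date(2020, 11, 5), "Sustainability survey"),
]

# "What percentage of the emails sent by John Smith last month
# were about sustainability?"
last_month = [m for m in inbox
              if m[0] == "John Smith" and m[1].month == 11]
about = [m for m in last_month if "sustainability" in m[2].lower()]
percentage = 100 * len(about) / len(last_month)
print(percentage)  # 2 of 3 emails: 66.66666666666667
```

Trivial for us, and yet it requires exactly the sub-goals and relevance judgments the report describes: filter by sender, filter by time, classify by topic, compute a ratio.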
5%. This is underappreciated. I certainly did not appreciate this.
To me, this means I’ve made peace with the fact that there will forever be different tech stacks, sadly. There is no point in getting people to use GNU/Linux and Emacs and all that, unless they’re extremely simplified. I’m not saying that Windows or macOS are any better: they’re also hard to use. These kinds of general machines are hard to use. All of them. These people are confused by the note-taking app on your phone because it magically involves your email account via IMAP. Even I find that confusing!
What makes it fundamentally impossible to solve this problem? Why is computing so much harder than driving a car? @yaaps said, “computer technologies have actively sabotaged the capacities of the user base.” And that is true. But that’s not the only problem. A computer is not a car. Many people know how to drive a car. Is it because of a grand unified user interface, good manuals, the ability to tinker with cars? Not at all.
In my experience, everything other than the pedals is random: manual transmission or not, where the lights and the windshield wipers are, how to drive backwards, and so on. I remember sitting in a rental car with my wife in France, unable to leave the parking lot. A certain sequence of actions was required to start the car and we didn’t know it. And yet, the number of controls in a car is minuscule compared to a computer.
The computer is more complex than a car, and people have much less experience with it. There is an “embodiment” to the car driving experience: here you are, in the car. Turn the wheel, make a curve. Here’s the road. Here’s another car. Here’s a parking lot. All these things we know from walking around, from play, from life as kids. They relate to each other through space and physics, and we can observe their interactions. We can infer the rules of speed, of momentum, of braking and turning, from experience, from our body reacting to physical forces. On a computer, we all start without that. Or at least my generation did. Older people are worse off, and I’m not convinced that people get better.
Turning back to the OECD report on computer skill levels: computers are being designed like simple tools, dumbed down, so how much of a gain in computer skill levels can we expect from changing that design? 7% instead of 5% would be a 40% gain! But what about all the people who don’t know how to use a computer at all? They aren’t being helped. How would they get the kind of experience people get with roads and cars, whether they want it or not? I don’t think there is a way. Not any more. These people have lives and jobs, families and responsibilities, and they don’t need computers, they don’t want computers, and they don’t benefit from computers.
Maybe it would work if we made people fear computers spying on them, or forced them to use computers to partake in civil life, the way some parts of the world force you to have a car to go shopping. Sadly, we’re slowly getting there, and I don’t like it.
That is why I end up being OK with simple devices for people with other priorities in life, and old-style personal computers (universal machines) for people who want and need them.
And we can have all these elements at play, all at the same time. I love text. I love programming. That’s why I use a laptop with GNU/Linux and Emacs. I don’t love tinkering with graphics cards and I don’t like upgrading my computers. That’s why I buy a gaming console every decade or two and use it to play games. I think I stopped gaming on the PC after … Wing Commander II or something like that! 😄
That reminds me of something @rafial recently posted:
「Random insight of the night: every couple years, someone stands up and bemoans the fact that programming is still primarily done through the medium of text. And surely with all the power of modern graphical systems there must be a better way. But consider:
- the most powerful tool we have as humans for handling abstract concepts is language
- our brains have several hundred millennia of optimizations for processing language
- we have about 5 millennia of experimenting with ways to represent language outside our heads, using media (paper, parchment, clay, cave walls) that don’t prejudice any particular form of representation, at least in two dimensions
- the most wildly successful and enduring scheme we have stuck with over all that time is linear strings of symbols. Which is text.
So it is no great surprise that text is well adapted to our latest adventure in encoding and manipulating abstract concepts.」
So true! And it brings us back to the discussion of the limitations of graphical user interfaces in the essay about the “minimal viable user”. Interesting discussions all around!
#Programming #Philosophy
Comments
(Please contact me if you want to remove your comment.)
@dredmorbius added the following tidbit worth remembering regarding computer skills: they depend on literacy skills, and those challenges are actually well understood. He writes:
「Most advanced countries have basic literacy rates of 95–100%. But basic literacy is simply the baseline. The US has a four-grade rating:
- Proficient: 13%
- Intermediate: 44%
- Basic: 29%
- Below Basic: 14%
Source: 2003 National Assessment of Adult Literacy.
One third of US adults are at or below “basic” prose literacy.
Mind: a fair portion of these are nonnative speakers of English. Some border regions, especially in Texas, have remarkably low English literacy, though residents may be proficient in other languages.
But that’s a third of the population with a major impediment to significant computer proficiency, on what is a principally text-and-language-based interface.
Keep in mind that secondary school graduation rates have been well above 90% since the 1950s. Educational access shouldn’t be a major driver.」
I agree: if people can’t read and write well enough, and we seem incapable of raising that number, then that puts an upper limit on what we can expect in terms of computer literacy.
As for what is possible, @clacke has a different take on the numbers:
「What I’m seeing is that 60% have reasonable to amazing literacy and yet they aren’t capable of combining simple programs into slightly less simple programs.
I blame mostly the programs and how we combine them.」
That goes back to a point @yaaps was making:
「… computer technologies have actively sabotaged the capacities of the user base … People aren’t stupid and computing isn’t intrinsically hard. We’ve just created a computing environment hostile to learning … digital technology is hyper-fuckery struggling to achieve interplanetary scale」
Somewhere in here is our wiggle room.
– 2020-12-13 10:59 UTC