We had a nice discussion over dinner yesterday; the gist was whether, and why, new tech that makes one's job easier and better can actually end up making life worse for the worker.
Let's start with a happy story. Economist Ha-Joon Chang, amongst other commentators, has a very interesting theory that the washing machine was far more revolutionary and life-changing than the internet.
Doing laundry was a labor-intensive, grueling, boring, repetitive job that acted like a ball-and-chain for innumerable women. With the advent of the washing machine, the effort it took was slashed. Women eventually entered the workforce en masse, so the machine may well have played a key role in making society unrecognizable. An old lady who did a question-and-answer session online seems to vouch for the theory:
At the age of 95, what is your favorite invention? For example, my great grandmother once explained about when she first saw presliced bread. Thank you so much for doing this AMA.
The most important invention is the washing machine. Any other technologies come second. It's so amazing what kind of technologies got invented.
My great grandmother who was born in Victorian England and passed in 1963 said the same thing about the washing machine.
So, a horrible job is dead, women have more options, and things are plainly better. However, even though doing laundry became an "after-work" task that anybody can do, it remains overwhelmingly a gendered task. Many women now have to work a regular 9-5, then get home and take care of the laundry (and dishes, and kids). It remains a bullet point in their job description.
I'm a programmer, and my partner is an architect. We both agree that rapid-prototyping in our industries has been both a curse and a blessing.
In architecture, a client used to expect that at a given point in the project, draftsmen would produce time-intensive illustrations of what the finished product might look like. Today, the expectation is periodic 3D renderings, and the future seems to be inching toward immersive virtual-reality experiences. We all agree these improvements are both cheaper and in many ways better, but there are two issues: 1) the job of draftsman wasn't refined into the job of 3D renderer; instead, any architect is now expected to be proficient enough in software to produce renderings; 2) the renderings are cheap enough that the client will demand periodic updates, obsessing over presentational details and forcing the architect to make various "placeholder" decisions, such as materials or finish, very prematurely.
My field's prototyping tools have also gotten pretty good, and mockups are hardly distinguishable from "the real thing". The problem is that, while I'm concerned with ensuring correctness and turning userland requirements into programming objectives, clients are obsessing over the colors of tables or the behavior of checkboxes. I have no problem pushing back, but elsewhere I've heard senior programmers warn juniors to strike the right balance when presenting a prototype: expectations are high, so do it well, but don't do it too well, or you may be forced to work off of a mockup instead of a solid foundation for the rest of the project.
It's delightful that Rhino and CSS Grid allow even beginners to do so much, but in many workplaces this means dedicated draftsmen and designers have lost their jobs, and that previously-dedicated programmers and architects now have to spend non-trivial portions of their workday half-assing presentational work.
I'm very fond of another, somewhat goofy example, involving storytellers in the medium of video-games.
Consider the fully-realized scene experienced by the player of a game as divisible by the effort different people put into realizing it. In an early text-adventure game, the writer may say nothing more than "shrewd germanic lady", ponying up maybe 20% of the work, while every single reader ponied up the remaining 80% rendering it in their imagination. With graphics, the split becomes maybe 60/40. And with the insane fidelity of graphics today, it's maybe 95/5. After we are done consuming the book or game, though, what's left in our imaginations is often not so different. Imagination could and did do a ton of leg-work in the past. I don't remember the exchanges that were read and grunted any differently from the ones that were fully voice-acted.
If you're the kind of person who believes actions reveal preferences, and that whatever people buy is what they know is good for them, there's no argument here. Games used to be niche; now they're a titanic industry. More people are buying, so new games must be better. Cool. However, I've played a lot of videogames, and I'm not so sure.
We're appealing to laziness here to a large extent, maybe making things worse for both writers and readers. Technology shapes our expectations. It makes the polishing layer easier to add when it needs to be added, but whereas we used to expect polish to be applied once, at the end, now we require it over and over at every stage of the project. Maybe the new burden of repeatedly making small presentational tweaks significantly eats into the time saved. Things that used to be fine, that let our imaginations run wild, now seem cheap and insufficient.
If many people read this, I can imagine what some of them are thinking. The market finds an equilibrium, and we are told that since this happens in an "evolutionary" fashion, there's not much point in hand-wringing about whether that equilibrium is good or not. The charge of "luddite" comes up a lot. "Technological progress is unstoppable!"
However, it isn't really true that technology is unstoppable.
Professions that wield societal power, such as medicine, have been better able to manage the pressures technology exerts on their workflows. Whether they are correct or not, doctors have (had?) the freedom to push back on the burden of making electronic records available, and of automatically reporting performance statistics to be aggregated and displayed (to help patients pick doctors). They were allowed to argue, rightly, that there are hidden, pernicious incentives: implement a saved-life ratio, and many doctors will stop taking risky patients in order to maintain high scores.
Another example is file-sharing. We have the technology today to clone and freely distribute media in an absolutely magical fashion, via the BitTorrent protocol and peer-to-peer networks. Research papers, movies, music, and videogames can be sent around the planet at essentially zero cost, consumable instantaneously by even the poorest individual with an internet connection. And yet, instead of this mythical digital Library of Alexandria, we have Netflix, Steam, Spotify, etc., with their wealthy-westerner access fees and limited selection. oink.cd was shut down, and so was what.cd. What we have available today in exchange is a paltry shadow.
Perhaps more interesting is the case of academic publishing, where corporations like Elsevier charge both writers and readers of academic work, offering absolutely nothing in exchange. Alternatives such as the spartan arXiv used by physicists and mathematicians, or SciHub maintained by "Science's Pirate Queen" Alexandra Elbakyan, do exist. However, technology has not proved "inevitable" or ushered them into dominance.
Factory workers have no control. Programmers, architects, and writers have little control. Doctors are somewhat in control. Copyright-owners seem to have an awful lot of control.
And that, ultimately, is the point of these discussions. Pushing back against defeatism, against end-of-history-ism, against "the customer is always right". Letting workers (re-)realize that they can band together in unions and push back against technology they deem to be making their lives worse, not better. Maybe they'll get it wrong from time to time, but that's better than submitting to market logic without a fight.