The sky is falling.
The value of all of man’s incorporeal creations has just taken an unprecedented turn in the general direction of “approaching zero”. Song, visual art, software, and the written word - all purportedly reduced to the value represented by the effort required to craft the correct prompt, to the machines.
The written word, in particular, is under attack from every angle. Large-language-model integrations are being rolled out for every last text-processing system under the sun - from workplace communications, to software development tooling, to God only knows what else - and it’s clear that this is only the beginning.
On the surface, the upside seems to be self-evident: Tell the machine that you’d like it to compose an email advertising an all-hands meeting for next Wednesday; it’s mandatory but use a light tone so that nobody suspects that it’s a layoff announcement. Copy, paste, send to the “everyone” mailing list, and get back to your preferred brand of endless scrolling - in record time.
As if composing the message were truly the embodiment of the work itself. The map continues not to be the territory; written communication is not a product with inherent value - it is merely an artifact. An artifact of the process of organizing information, not only within the context of fact and reason, but also within the social dynamic of the relationship between the writer and the reader. I maintain that it is within this process of organizing information that the artifact actually gains any value to which it may lay claim.
If you remove the active process of writing - of organizing information - what are you left with? The artifact. What value does the artifact represent? Consider the reader’s position in the reader/writer relationship; a piece of written communication is valued based on the reader’s perception of the writer’s position - as a lover, as an employer, as a government agency, as a Nigerian prince in need of urgent assistance. This allows the reader to frame the communication within the context of their relationship with the writer, to interpret it accordingly, and to respond if necessary. Any framing, interpretation, and response will be tailored to the reader’s relationship with the writer - which, in some cases, can itself be changed by the contents of the written artifact. But it’s not the artifact itself that transmits this value - the words mean nothing - the value exists in the abstract organization of concepts. The medium may in fact be the message, but the message is… just a medium.
It is with this in mind that I’d like to politely call attention to the unpopular idea that it’s completely and totally bananastown for us as a society to reduce the idea of “human-to-human communication” to “a five-word prompt and let the machine handle the rest of it”; to disavow responsibility for the organization and communication of our thoughts, and to settle for whatever fifth-grade-reading-level schlock the machines can hallucinate.
I’m a software engineer. I spend most of my conscious hours interacting with the written word, in forms varied and sundry - from technical documentation, to Slack messages, to emails, to the code itself - for which the paradigm of “written language except with a bunch of aggravating infix operators and reserved keywords” still reigns supreme.
The code is not the actual work product - it is merely an artifact. The “work product” - the value - comes from a precise understanding of the context and domain of the problem, carefully encoded into a format that can undergo a lossless transformation into instructions for a computer processor. The work product gains even more long-term value if it has been created in such a way that it is possible - not even necessarily easy, possible is more than enough - for future engineers to read, understand, maintain, and extend it. Our employers don’t pay us to write code - they pay us to think - they just fire us if we don’t write the code afterwards. In fact, the process of “writing the code” is generally not even the “hard part”; the “hard part” is translating the requirements for a given feature or bugfix into the context of the existing application’s code, constrained by the environment in which the application resides and operates. The unparalleled beauty and richness of the English language make it a poor candidate for “a format that can undergo a lossless transformation into instructions for a computer processor”, as the computer processor can generally only be trusted to do very basic arithmetic, very very fast - so even English-language communication specifying in absolute terms which operations are to be performed tends to differ - semantically, conceptually, or both - from a finished work product that does in fact correctly perform the desired operations.
It is with this in mind that we shall consider the implications of adding large-language-model-based tooling to our software engineering processes. A common argument in favor of augmenting software engineering processes with LLM-based assistants is that engineers claim to feel “more productive” when they are able to offload the burden of some classes of repetitive code-writing tasks to an LLM assistant. I don’t buy it. I have yet to meet an engineer whose output is bounded by their words-per-minute at the keyboard. And it may be the case that some of the mental burden is lifted by outsourcing some tasks to an LLM; but for every hundred repetitive lines of code that I write, I spend an order of magnitude more time agonizing over ten or twenty. And every time I write a line of code - no matter how trivial or boilerplate - I am gaining value from having put my body - my hands and my eyes and my mind - through the process of creating that artifact; of organizing my thoughts in that precise manner; of paying attention to the dull, agonizing grind of making sure that I didn’t bring the “Address1” field in twice, when I really should have brought in “Address1” and “Address2”. With every line of code that I author, I increase my own muscle memory that much more.
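For the curious, that particular flavor of slip looks something like this - a minimal, hypothetical sketch; the record shape and the field names are invented for illustration:

```python
# A minimal, hypothetical sketch of the copy-paste slip described above;
# the upstream record shape and field names are invented for illustration.

def map_address_fields(source: dict) -> dict:
    """Copy address fields from an upstream record into our own schema."""
    return {
        "address_line_1": source["Address1"],
        "address_line_2": source["Address2"],  # the easy slip: pasting "Address1" here a second time
    }
```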
In a job that can feel so abstract, so draining, and at times downright pointless - at the very least, when the day comes to a close, I’ve spent my time putting my mind and body through the motions of creating something. And for every day that I wrestle with those problems and create those things, I improve my skills - even if only by a little - even if only by doing just enough to keep them from atrophying. I’ve had enough closed-head injuries that sometimes, a day where I don’t feel like I’ve taken a step backwards is enough. I refuse to outsource that opportunity to gain that value, to the machines.
And as above, so below - for every email that I draft, for every Slack message that I write and rewrite until the phrasing is just so, the formatting is perfect, and the balance between clarity and terseness has been struck - I am improving those skills. I am gaining the deep value that comes from developing a mastery over one’s ability to organize one’s own thoughts, and to communicate them effectively to others. Even now, as I write this essay, I am wrestling with myself: a ship lost at sea, crashing beneath the waves of these abstract concepts that I am working to reify - as I try to wring some sort of insight from this torrent of idle thoughts that I’ve had over the past few months, collect them into a thimble, and communicate them to you - and all of this without wasting anyone’s time or embarrassing myself in the process.
And so, what will come of this? If everything I read is primarily a fluffed-up prompt from a machine, what impetus have I to read it? First come the LLM-based writing assistants; then come the LLM-based “summarizers”, to deserialize LLM-based text back into a sort of prompt form - we’ll need this technology, no doubt, for as the effort required to create a given piece of communication drops to “approaching zero”, the volume of said communication will increase in inverse proportion, but the requirement that I act on “all of it” will remain constant. All of this, and for… what? I suppose that one could begin to craft additional LLM-based “assistants” that can respond to other LLMs on my behalf, calling to my attention only what’s most urgent. (This is a weak line of reasoning, to be fair, but it was obvious and I’m convinced that it, or something else equally obvious, is imminent.) If you want a picture of the future, imagine a bunch of LLMs making decisions on all of our behalves, forever.
Our words are powerful; or at least, they can be. But even when they aren’t - our words are, now more than ever, our primary interface to the world around us. We conduct the lion’s share of our lives through the dehumanizing abstraction of the screen; and yet, it is as though we cannot disavow this last shambling vestige of our remaining humanity quickly enough. As if it weren’t sufficient to be freed from the burden of having to inhabit the same physical space to share experiences and communicate with others - now we’ve glimpsed a light at the end of the tunnel, one where we don’t even have to put any thought into the formerly-human act of communication, and we’re tying ourselves to the tracks and praying for the train to come as fast as it can.
And this may be the end. Maybe I’ll lose my job, to be replaced by a twelfth-generation LLM that doesn’t ask hard questions or try to create dissent among the ranks. Maybe writing is well and truly over; maybe I’m a fool for starting this Substack. Maybe when the machines take over, they’ll come for me first, for this brazen anti-paean to Roko’s basilisk.
But I will not go silently; and it matters not if none hear my final words. It only matters that they are mine.
The machines will not speak for me.
"In the beginning was the Word, and the Word was with God, and the Word was God."
Pick your God carefully.
"If you want a picture of the future, imagine a bunch of LLMs making decisions on all of our behalves, forever."
Will they be making actual decisions for us? Or will they be having endless pointless conversations with each other on our behalves, emails full of bland vocabulary arranged into ambiguous meanings?