
When my girlfriend and I arrived in North Devon for a short break with her parents last week, I said I didn’t want to talk about AI. I suppose lately I’ve been feeling several different forces coming together in my work life, and I feel so terribly exhausted by the rhetoric around the use of LLMs. I’m so tired of being seen as the unreasonable one for asking for things I don’t think are unreasonable. To be absolutely clear, I do not want the future society appears to be sleepwalking into. I didn’t ask for any of this.
In my despair it was then so refreshing to read Kyle Kingsbury’s essay series “The Future of Everything is Lies, I Guess”. In my favourite of the series, “Where do we go from here?”, he writes:
The personal automobile reshaped streets, all but extinguished urban horses and their waste, supplanted local transit and interurban railways, germinated new building typologies, decentralized cities, created exurban sprawl, reduced incidental social contact, gave rise to the Interstate Highway System (bulldozing Black communities in the process), gave everyone lead poisoning, and became a leading cause of death among young people. Many parts of the US are highly car-dependent, even though a third of us don’t drive. As a driver, cyclist, transit rider, and pedestrian, I think about this legacy every day: how so much of our lives is shaped by the technology of personal automobiles, and the specific way the US uses them.
I want you to think about “AI” in this sense.
Some of our possible futures are grim, but manageable. Others are downright terrifying, in which large numbers of people lose their homes, health, or lives. I don’t have a strong sense of what will happen, but the space of possible futures feels much broader in 2026 than it did in 2022, and most of those futures feel bad.
I do appreciate the fabulous utility of this technology.
Last week I got Claude to write me a script that took my saved articles (as markdown files with front matter), saved them to Readwise, then deleted the files. It took ~20 minutes and saved me hours. It wrote me a Python script, and I don’t know Python particularly well, not that it mattered. It’s wonderful to be able to do boring things so easily (I never really cared about the ins and outs of the Readwise API), but I do feel a great disconnect between what I’m doing and what I’m learning.
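For what it’s worth, the shape of that script is easy to imagine. Here is a minimal Python sketch, under my own assumptions rather than taken from the script Claude wrote: the front-matter parsing is the only part that runs offline, and the Readwise call is a stub whose endpoint and payload are my guesses for illustration, not verified API details.

```python
import re


def parse_front_matter(text: str) -> tuple[dict, str]:
    """Split a markdown document into (front-matter fields, body).

    Assumes simple `key: value` YAML front matter fenced by `---` lines.
    """
    match = re.match(r"^---\n(.*?)\n---\n?(.*)$", text, re.DOTALL)
    if not match:
        return {}, text  # no front matter: the whole file is body
    fields = {}
    for line in match.group(1).splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip()] = value.strip()
    return fields, match.group(2)


def save_to_readwise(fields: dict, token: str) -> None:
    # Hypothetical call to a Readwise "save" endpoint; the URL and the
    # payload shape here are assumptions, not taken from Readwise's docs.
    import json
    import urllib.request

    request = urllib.request.Request(
        "https://readwise.io/api/v3/save/",
        data=json.dumps(
            {"url": fields.get("url"), "title": fields.get("title")}
        ).encode(),
        headers={"Authorization": f"Token {token}",
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)
```

The last step, deleting each source file (say, with `pathlib.Path.unlink`), would only run once the upload had succeeded.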
As Christopher Meiklejohn writes in “Rift”:
This is engineering. I keep being told that. It is engineering and it is the future and it is more leveraged than what I used to do. All of that is probably true. But it is not the work I have been doing for thirty years. The shape of it is different. The rhythm is different. The way it sits in the day is different.
…
I don’t know how to be honest about this without sounding like I am complaining about progress, but I can’t pretend that something hasn’t been taken. The flow state I had for thirty years is not part of my workday now. The creativity that lived inside it is not there either. I do useful things. I do not feel what I used to feel while doing them.
Why do conversations about one aspect or nuance of the use of LLMs so often seem to devolve into conversations about whether LLMs are, on the whole, useful or not? I want to believe it’s because we’re still in the process of deciding. For some, it seems more like an ideological position, like making billionaires pay more tax, than one grounded in truth. They were quick to set out their position, where they will remain. I suspect a lot of those who have only recently started admitting to using LLMs publicly were long using them in secret.
A friend of mine, James, another software engineer, tells me the company he works for acquired Claude licenses for the engineering staff about a month ago. He recently met with some of his team to discuss concerns about the way he suspects LLMs are being used in communication. He says many of them use LLMs for programming, which he’s not opposed to. We both agree that we’re willing to take a risk on not using LLMs in our work, because we think (and some research seems to suggest) that using them will make us worse programmers. I think being knowledgeable about programming will still be useful in ~five years, while I don’t think that using Claude to write my code is a skill I’ll miss out on if I abstain. Maybe I’ll be wrong, but I’m willing to gamble on myself, which in the past has never led me astray.
James thinks his colleagues are using LLMs in various “subversive” ways.
His manager recently wrote a spike (a call for investigation) on a particular issue James had some knowledge of. James spent some time describing some of the challenges in that domain and giving feedback on some of the approaches his manager (the LLM) had suggested. He has reason to believe that his manager merely took his feedback and fed it back into the LLM. The so-called “findings” from the spike were just rebuttals to James’s ideas.
He suspects another colleague is using their GitLab access token to have Claude read PR feedback, make changes, and reply to the comments. James takes time to review his colleague’s work but suspects his input is merely being funnelled into another stream of pseudo-consciousness. When James asked his colleague what they thought about his feedback, they said it was “all fine”.
Naturally James asked his colleagues for more clarity about when they were using LLMs in their work. He suggested a label on PRs, or perhaps if he asked them, they would be candid about it. But to his surprise even his manager thought that was unreasonable. For the record, he wasn’t asking that they change their use of LLMs in any way, just that they tell him when they do.
I told him I’d had a similar experience. I would always feel dishonest taking credit for someone else’s work, or presenting ideas I’d read online as my own without citing them. I’m not talking about intellectual property, though that’s another discrete concern; I’m talking about a failure to engage with one’s work in any sort of creative or intellectual capacity. How can you say you “take responsibility” for your work when it’s an agent that speaks on your behalf? A continuous failure like this, a failure to think and to communicate with candour, at least to me, signals a lack of creativity, innovation and desire. Perhaps James’s colleagues feel it is simply not their concern.
Sadly, there’s no “smoking gun”. James can’t prove any of what he has good reason to believe. He knows (having worked with them for years) that his colleagues don’t really write like that. He knows he’s not really messaging them, that what he’s reading isn’t really them. But how can any of it really be proven? And why should he bother to engage with the ideas of someone who can’t be bothered to engage with his?