While out drinking with my programmer friends at the weekend, the subject of ChatGPT naturally arose. We laughed about it, comparing the way you can ask it to generate random pictures to Celery Man. But beneath the laughter there’s a slightly eerie feeling at the back of my mind that perhaps I shouldn’t be laughing. That at some point it might not seem so funny.
As someone “in the know”, I often feel differently about hot-button technology subjects than my non-technical friends and family do. Take WhatsApp being asked to create backdoors so that government agencies can more easily hunt down terrorists and child abusers: I can totally see why you’d be concerned about those things, but knowing what I do about the benefits of encryption, I’m aware of the other risks. I’m not very knowledgeable about AI or what it’s capable of, other than knowing that its current limitation is that it requires human input, so we’re still some way away from something that “thinks” for itself.
One of my programmer friends thinks there is nothing to stop the proliferation of AI. “It’s unstoppable. It’s inevitable that some people will lose their jobs”, he says. He also works in tech and makes good money. The people working on the cutting edge of AI (and with vested interests in it) are hugely wealthy and have likely spent most of their lives believing that their work will push humanity forward. I wonder how much sympathy they have for the working classes, for the customer service representative with no university education. Attempts to automate entry-level jobs like these (jobs our economy thrives on) with bots might seem silly now, but I’d hazard a guess they soon won’t be. Those of us who work in tech are incredibly lucky; I think about this every day.
I don’t think I have any reason to worry about my own job. The application I work on is highly complex and configurable; by the time you’d explained what it does to a machine in English, you’d have built the entire thing from scratch anyway. English might not even be the ideal way to explain it, but at least it’s how we humans think (and dream) about applications for now. I could describe a service and some endpoints to ChatGPT and it would probably produce an echo server with working code, but its accuracy is limited by how much detail I’m willing to go into.
A subculture seems to have appeared on parts of Reddit and Hacker News recently: crafting the prompts from which AI art is generated. I find it totally absurd that anyone would consider this a “skill”, but people already do, as though choosing 10 words requires anything like the level of skill required to draw or paint well. Thankfully we do consider skill when we judge the quality of art, which is why a Rembrandt knockoff doesn’t cost anywhere near as much as the real thing.
I imagine the future (at least for programmers) will be some augmentation of what we currently do: auto-generated code snippets. I can’t imagine a self-checking, self-moderating machine that turns Jira tickets into an application you never touch or see. There will still be code, edited and validated by humans.
But I don’t see people losing their jobs to this technology as an “inevitability” either. It certainly doesn’t have to be, unless those in charge let it be. We’ve developed a much better understanding of how to live with technology than we had ~170 years ago. I only hope we don’t give in to the lazy big-tech optimism that got us into so much trouble in the past. Let’s stop pretending the big-tech CEOs and SV investors of this world care about the interests of the common person like they say they do; they don’t.