As a writer and someone learning to code, I stand on two fronts that are constantly advertised as being under threat of displacement by AI. The speed of innovation, combined with investor incentive, creates a rising tide of concern and uproar. The most popular form of AI, language models (LMs), provokes the greatest concern. The good news for writers, artists, software developers, and the rest of us is that language models are too dumb to be a credible threat.
I've found much more success with LMs when I use them as tools that compel me to think critically. Instead of telling an LM to "write an insightful post that is less than 300 characters", the prompt becomes: "Give me an insightful idea to write about". The difference is that the former produces an unimaginative solution; the latter produces no usable output, but acts as a prompt to create the work through my own efforts. The result is a post I can be proud of making, less worry about writer's block, and no doubt about the originality of my own output.
Word-by-word creativity is just as important as the big picture. As a writer develops their voice, those small decisions become more intricate. I don't condemn people for writing entirely with an LM, but there are levels of depth and complexity that LMs are incapable of producing, at least at this stage. Even when LMs can write poetry, as with AI-generated art, it's only because LMs excel at mimicry. A healthy relationship with a work isn't just the act of consuming it, but the intimate knowledge that it was created by another person. This artist-audience relationship will not be replaced. That's a relief for anyone getting into the creative process, since the aim should be to produce work that is truly interesting. Only artists with little interest in producing work of enduring value have reason to worry. The hardest part of these transitory stages is the premature rush to automate away labor and the scars it can leave on the workforce.
For contrast, in software development, learning to produce code you don't understand doesn't make you more knowledgeable; it makes you more of a liability, and that is the default mode of using LM-generated code as a beginner. Many business problems solved with code are hugely consequential. Systems that provide critical functions require the developer to understand the code from top to bottom. Normalizing an inability to understand what's written ensures that the burden of comprehension falls more heavily on experienced developers. The stakes only rise when security and privacy are involved.
This is why efforts to get new software developers to normalize the use of LMs can be perilous to their long-term growth. I am not hard-line against LMs, but their use has to support the learning process. Socratic self-prompting is one way to leverage them effectively for learning. Khanmigo from Khan Academy and Boots from Boot.dev show how to do this right: both chatbots help someone break down a problem Socratically, thinking through it step by step in a question-driven way, without providing an answer.
Every answer an LM produces is simply a pattern-matched solution drawn from a sea of use cases that may be entirely irrelevant to what the user is attempting to do. The more code a novice generates this way, the more they rob themselves of the critical thinking required to solve ever harder and more complex problems. Developer Rudis Muiznieks puts it succinctly:
Serial script kiddies want to be big boy hackers, but they'll never accomplish that by running scripts. The real big boy hackers are the ones writing those scripts—the ones exploring, probing, and truly understanding the vulnerabilities being exploited. Serial AI coders may want to be big boy developers, but by letting predictive text engines write code for them they're hurting the chances of that ever happening. At least for now, the real big boy developers are the ones writing code that those predictive text engines are training on.
I've been there. Nothing is more humiliating than feeling more capable than you actually are, by routinely scumming solutions with AI, only to have it all crash down when you're put to the test without those tools. Most beginners, when they hit that wall, will tuck their tail between their legs and give up on learning altogether. The result is a sea of people who quit prematurely, never to return. That same group of learners could have stuck with it, had they simply assumed it would be hard in the first place. The only reason I've stuck with it is that I did just enough learning before the popularity of LMs to discover a genuine passion for coding as a whole. I'm still very early in my development, but meeting each greater challenge with minimal or no AI assistance affirms my ability to actually do the work.
Many will be inclined not to appreciate these ways of thinking and working, but the work itself is the means to its own end. The process of creating changes the people who create. You don't need to be advanced in your abilities to benefit from learning the hard way. Responsible education about AI usage shouldn't take a back seat to the industry's speed of innovation, and it needn't be an impediment to progress. Innovators should be encouraged to intervene on behalf of learners, to ensure that the joy of creating itself is preserved.