Many people online have taken an iffy view when talking about AI. Not in the fun sense - in the boring, qualified “AI is incredibly powerful, if you XYZ” sense, where XYZ can equal “study prompt engineering”, “just practice a lot with it”, “don’t investigate it too hard”, “trust but verify”, “investigate it in detail”, “learn about kernels”, “USE CAPS LOCK FOR ANYTHING YOU DON’T WANT IT TO DO”. (For future readers, “AI” here means 2023-level large language models, such as GPT-3.5 and Github Copilot, which I’ve written about using in the terminal. I have also written about a concrete economic mechanism to stop AI progress at scale, which may be more up your alley if the future ends up more Yudkowsky than Venkatesh.)
All of these claims are of course true by default. This is because AI improves outcomes with no increase in skill. There is no “if” you could append to the statement “AI is incredibly powerful” that could make it false.
If you sat two complete beginners down at a terminal session and asked both of them to code FizzBuzz however they saw fit, with the sole environmental difference being that one of them had Github Copilot integrated into nano and one of them did not, the former would almost certainly figure it out faster. If you sat two experienced professionals down, matched for skill, there would at worst be no significant difference in time. But I would wager there would probably be a significant speed improvement even then, because it is easier to notice flaws than to create from whole cloth. All artists know this.
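(For anyone who hasn’t run into it, FizzBuzz is the classic screening exercise: print the numbers 1 through 100, substituting “Fizz” for multiples of 3, “Buzz” for multiples of 5, and “FizzBuzz” for multiples of both. A minimal Python sketch of the sort of solution either party might land on - illustrative only, not a claim about what Copilot would actually emit:)

```python
# FizzBuzz: print 1..100, replacing multiples of 3 with "Fizz",
# multiples of 5 with "Buzz", and multiples of both with "FizzBuzz".
for n in range(1, 101):
    if n % 15 == 0:      # divisible by both 3 and 5
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)
```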
So why do people feel the need to append these ‘if’s? I wager that it is because, while AI vastly improves both the average amount and the average quality of code one can generate, it doesn’t do enough to reduce the variance on either of those metrics. AI usually guesses right, but when it guesses wrong, oh boy can it guess wrong. As software professionals, the thing we sell is not, and has never been, “85% of the time I am an absolute genius, and 15% of the time I am the most confidently wrong person you will ever meet.” What we sell is, “I show up on time and deliver what you need, in a reasonably fast time frame. I don’t pick fights, I hedge my own epistemic confidence, I am something you can rely on and build around yourself as a known element.” That’s where the ‘if’s come from – the implicit injection of professional norms into the conversation.
All the same, children don’t give a shit about professional norms.