The AI mediocrity era

The poor souls who cared for us while we were growing up, trying to instill values like “effort” in us, would explain the consequences of cheating on an exam by warning us that we would end up a lesser version of ourselves: “you can always cheat and pass the exam, but what will you do when you really need that skill and realize you don’t have it?”.
I’m not saying that all the skills and knowledge being taught are necessary or should be mandatory, or even that exam grades are a good measure of competence. But when you care about something, when you care about quality and want to make something good and avoid mediocrity, you need expertise and experience. Cheating is not an option.
Making something good requires effort, critical thinking, understanding and reasoning. So does even discerning whether something is good or not. And that’s the main problem with the mass adoption of generative AI: all of that critical thinking will be, and already is, absent from most uses of AI, and the outputs will mostly be mediocre, as there’s no expertise to vouch for the generated result.
I understand the need to go fast in certain situations, and that we will care more or less, even about the same thing, depending on the day. Even before generative AI existed, the dilemma was there: shall we invest in quality or not? Is it important enough to care about, and how much? What’s the budget, the estimates, etc.? And of course there is always a nudge to do things as fast as possible, which normally sacrifices quality.
With AI in the picture, the needle will swing further towards the “let’s go fast” side. “Now there’s even quality when going fast, thanks to AI”. That, unfortunately, is not the case, as most people can’t really discern, lacking the expertise, whether AI’s output is good or not. Good products and good work will become rarer and rarer, and mediocrity will become the norm, as most people won’t care enough to doubt the AI-generated output, or won’t know how to.
It’s a little bit scary that nobody seems to be pointing this out: the general assumption is that AI will replace expertise, when instead of replacing it, it’s just removing it from the equation.
There’s an anecdote about Pablo Picasso; to make sure I don’t garble any bit of it, I am copying it from this source, and it goes like this:
Legend has it that Picasso was at a Paris market when an admirer approached and asked if he could do a quick sketch on a paper napkin for her.
Picasso politely agreed, promptly created a drawing, and handed back the napkin — but not before asking for a million Francs.
The lady was shocked: “How can you ask for so much? It took you five minutes to draw this!”
“No”, Picasso replied, “It took me 40 years to draw this in five minutes.”
I don’t want to focus here on the value of expertise and mastery. I just want to take this anecdote further. Let’s say the woman had instead used AI to generate a Picasso for her. But as the anecdote points out, that woman had no idea what it means to do something good in art. She would get the result, and she, like me, who has no idea either, would apply a useless judgement (probably a “hmm, looks good”) to decide whether that piece of work is good, a proper candidate for a Picasso forgery, or just, well, mediocre.
Generative AI can’t substitute for expertise. By design: it produces the most probable output according to the training data. Nothing else matters. “Prompt engineering”? Apparently the results are much better if you are careful with how you prompt. Still, if you have no expertise, you can’t tell whether what AI gives you is bullshit or genius. Can you make it produce a better “Picasso”? Probably you already need to know a little bit about art, to have some expertise, in order to prompt better.
Focusing more on my field, software engineering, there’s also the constant fight to do things properly, adopting the industry’s good practices (which, as I wrote here, are now more important than ever!), as opposed to going fast and ugly, with a dreaded permanent “temporary solution”. But those practices aren’t used as widely as they should be. New people enter the workforce blinded by the hype of the next JavaScript framework or the new programming language, forgetting the principles that are agnostic to those. Good software engineering was already rare before. With generative AI it will become even rarer.
Now there’s “vibe coding” (the practice of using AI to generate the code), which, by the way, makes no difference whatsoever to what actual coding was before, or should be. It makes me think that people getting excited about “vibe coding” have never worked in a team: we do pair programming, code reviews, and refinement sessions to discuss how to solve a problem, design a solution and implement it. Now AI joins the party at all those steps. And it is nothing less than natural, in software engineering, a field where we are used to copying, pasting, reusing, debating and discussing, that we integrate generative AI seamlessly into our project development cycles. We let AI do the tedious work, but we have the expertise to validate the output. We can ask for exactly what is needed according to the design and solution we have already thought through.
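To make the “we have the expertise to validate the output” part concrete, here is a minimal sketch. Everything in it is hypothetical, invented for illustration: imagine AI drafted a small `slugify` helper for us, and the reviewing engineer, before accepting it, writes the edge-case tests that only experience tells you to write (punctuation runs, leading/trailing noise, degenerate input).

```python
import re
import unittest

# Hypothetical AI-drafted helper: turn a title into a URL slug.
def slugify(title: str) -> str:
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse runs of non-alphanumerics
    return slug.strip("-")                   # drop leading/trailing dashes

# The expert's contribution: probing the edge cases before accepting the code.
class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Vibe Coding 101"), "vibe-coding-101")

    def test_punctuation_and_spaces(self):
        self.assertEqual(slugify("  Go fast -- & break things?  "),
                         "go-fast-break-things")

    def test_degenerate_input(self):
        self.assertEqual(slugify("!!!"), "")

if __name__ == "__main__":
    unittest.main()
```

The helper itself is trivial; the point is where the expertise lives. The generation step is cheap, and the validation step, deciding which inputs matter and what correct output looks like, is the part AI does not remove.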
I don’t think this practice deserves its own name. At best, it just confuses people about what proper software engineering practice should be, misleading them into thinking that simply accepting whatever AI outputs is a valid way to work.
The reality, though, is that for most people it is the only way. AI’s output will be good enough to have something working. It can be the most terrible design, solution, or code, but there’s a chance it will work, and there’s the certainty that the prompter will have neither the expertise nor the critical thinking to decide whether the solution is good or not.
Without the expertise to control the output, there will be yet another mediocre piece of work out in the wild, for the world to see. A mediocre example, to be used to train new AI models, so that their output becomes even more mediocre. The gates have been opened, and mediocrity has been unleashed.
Only you, caring about quality, refusing to swallow uncritically whatever output generative AI vomits, can save us.


