AI, the new great leveller. Really?
Be careful what you wish for.
The hype that large language models (LLMs) will “empower” everyone smacks of the same promise once made by social media: universal liberation through a new technology. Indeed, AI can lower barriers for some, but without foundational knowledge, those lowered barriers become traps.
In reality, only a small minority who truly understand how LLMs learn, and who can craft precise prompts, reap genuine benefits. The developers on Nostr seem to have a good handle on the game: the few expert hand-coders using AI as a tool, not a crutch, are the architects of the well-publicised successes, built fast and well. The vibe among amateur coders and enthusiastic weekend hackers is probably less positive; they avoid any and every app related to money, for fear that one wrong prompt wipes out their kids’ inheritance. But we are getting loads of useful productivity apps relevant to three people.
And pass around the handkerchiefs for the unwashed masses trying to save their jobs or retrain, throwing themselves into AI full-time after work, paying for subscriptions with no idea how credits even work, let alone how queries are metered and billed for input and output tokens.
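For anyone puzzled by those "in/out fees": most LLM APIs charge separately for input (prompt) tokens and output (response) tokens, usually quoted per million tokens. A minimal sketch of the arithmetic, with made-up prices purely for illustration (real providers publish their own rates):

```python
# Hypothetical per-million-token prices, for illustration only.
INPUT_PRICE_PER_M = 3.00    # USD per 1M input ("in") tokens
OUTPUT_PRICE_PER_M = 15.00  # USD per 1M output ("out") tokens

def query_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one LLM query from its token counts."""
    cost = (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000
    return round(cost, 6)

# A long, vague prompt plus a rambling answer costs real money,
# and output tokens are typically several times dearer than input:
print(query_cost(2_000, 1_500))  # → 0.0285
```

Fractions of a cent per query sounds harmless, until vague prompting turns one question into fifty retries.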
Putting aside for one moment the personal misery of the majority, spending endless hours feeding in vague queries and receiving generic answers while feeling dumber by the day, what about the data they’re feeding in? What’s happening to the prompt logs, the usage patterns, even the names of friends they mention? Short answer: privacy havoc in the making.
If tech history is anything to go by, you can bet your bottom Bitcoin that VCs are making their enslaved project leaders run as fast as they possibly can before legislators catch up to their scammy data‑harvesting and monetisation games.
How can we regain some sanity and safety? Education, innit!
The old IT axiom Garbage‑In, Garbage‑Out still holds. A machine processes zeros and ones; it does not possess intelligence. Without solid domain knowledge and a big‑picture view, users cannot ask the right questions, so AI delivers average, “tailored” responses that mask their irrelevance, or worse, buries them in professor‑emeritus‑level detail they cannot evaluate. True empowerment therefore requires education before automation: forget high‑tech, get back to basics. Learn the fundamentals of the field (rocket science, scaffolding, law, hmmm, JavaScript), watch real‑world practitioners on platforms like YouTube, and only then use AI as a specialist assistant to refine or accelerate work you actually understand.
If this educational gap persists, AI will repeat social media’s pattern of addiction, manipulation, and economic displacement, driving the average Joe and Joanna ever more batty and widening inequality. The path forward is clear: treat AI as a tool, not a cure‑all; demand transparency about data use; and prioritise real learning so that prompts become intelligent signal, not merely noise.
Right, time to get back to coding up a plug‑in to attract thousands of WordPress website owners over to Nostr, the only true decentralised social‑media platform of signal, not noise. At least, that’s what Claude told me Nostr is.
AI will empower the few who understand it and exploit the many who don’t. If we want AI to be a true leveller, we must first level up our own knowledge. That sounds like boring advice, but that’s how the world’s brain-boxes reached this point.