Forget a Terminator-style apocalypse: ‘Cognitive Offloading’ is the true risk of Artificial Intelligence.
Poor decision-making on a global scale is far more likely than a rogue AI plotting mass destruction.
Several years on from the widespread introduction of LLMs, it is becoming increasingly clear that the sci-fi trope of machines passing judgement on humanity and unleashing mass destruction isn’t a likely outcome. After all, rather than plug these nascent systems into military mainframes, technology companies opted to plug them into the public web. That seemingly benign choice has certainly driven awareness and adoption, but it carries a more insidious risk to human critical thinking.
While they have some widely discussed flaws, AI tools have proven successful at tasks such as rapidly analysing text and data, organising documents, drafting foundational concepts and translating between languages. But unchecked dependence on these tools for deeper problem-solving, ideation and analysis risks making us lazy and incompetent. Nor is dependence the only issue: the informational feedback loop that AI relies on could rapidly and repeatedly amplify errors, resulting in decisions that run hard into the brick wall of reality.
Market forces are powerful, and you only have to look at the outsourcing of the past few decades to see that businesses and institutions will certainly be offloading and externalising functions to AI over the coming years, all in the name of efficiency and profit. Aside from the drastic impact of mass job cuts on societies and economies, the unchecked outcome will likely be catastrophically bad decision-making from a poorly informed remaining workforce, and technology that builds on the foundation of its own mistakes.
How do we protect ourselves from this? Well, for a start, we need to insist that AI technologies are tools to help solve problems rather than surrogates for decision-making. We understand tools as things that can break or be badly made or configured, but that understanding is harder to hold onto with an entity we perceive as human and relatable. Resisting tech companies’ efforts to humanise their AI tech (yes, I mean you, Siri and Alexa) is a solid start, and I expect we’ve only seen the beginning of their attempts to do this, so let’s keep an eye out for that kind of distracting anthropomorphism.
Beyond that, though, we need to take a break from our fascination with shiny new tech and relearn how to value and respect our own wonderful capacity to both create and resolve. We’ve lasted millennia by making good use of our own grey matter, and we should last millennia more if we continue to do so.

