AI: The Useful Idiot of the 2020s
Part 1 — Everything Old is New Again
Welcome to our first NetZeroDay analysis series: AI: The Useful Idiot of the 2020s. At NetZeroDay, we will be publishing multiple series on different topics, all centered on our core mission: highlighting the intersections of climate adaptation strategies and national security.
In this series, I, Newton, will be talking about a key issue of the day — artificial intelligence (AI). AI is making its way into numerous climate adaptation and sustainable development implementations, from local to national. And it is extremely susceptible to being taken advantage of. We’ll talk about this susceptibility from a new perspective using a conceptual framework that was popularized in the US during the Cold War.
In the shadowy world of Cold War espionage, being called a “useful idiot” was not a compliment. It meant you were an unwitting helper for someone else’s agenda. Imagine a gullible embassy clerk or an idealistic journalist in the 1980s, coaxed into spreading a bit of propaganda or sharing a sensitive tidbit, all the while thinking they were doing the right thing. The individual had authentic access and credibility that no obvious spy could match, making them “useful.” Yet, because they had no idea they were being played, in spy lingo, they were also the “idiot.” If exposed, their very cluelessness was a convenient excuse:
Who, me? I didn’t know anything!
And indeed, historically, such naïve accomplices often escaped legal punishment because they lacked intent to betray. They were literally innocent agents. A famous example is Operation INFEKTION, a Soviet disinformation campaign in the 1980s. KGB operatives orchestrated a false rumor that the U.S. had invented the AIDS virus as a bioweapon. They planted the story in an obscure newspaper, but it really took off once some Western journalists and activists, sincerely outraged but totally unaware of the ruse, picked it up and spread it. Those folks became perfect Cold War useful idiots: amplifying a lie while believing they were uncovering a truth.
Meet the new useful idiot of the 2020s
Fast-forward to the 2020s, and we have a whole new class of unwitting helper. Not a person this time, but artificial intelligence systems: the algorithms and automated decision-makers now running everything from chatbots to airlines to power grids, both for efficiency and to meet the climate challenges of tomorrow.
AI systems are incredibly powerful at what they do, yet astonishingly naïve about why they do it. In other words, they are prime “useful idiot” material for the digital age. Despite what sci-fi has taught us, an AI isn’t plotting world domination or engaging in office gossip. It has no intentions beyond optimizing whatever it’s programmed to optimize. Unlike a human, it can’t feel suspicious if someone gives it strange instructions. It won’t feel moral qualms or suddenly develop a conscience. An AI will follow its training and the input it’s given, which is great when the inputs are benign and the goals make sense.
But if a clever bad actor figures out how to manipulate the inputs or goals, AI won’t protest or even realize. It will carelessly carry out actions that might help a bad actor, all the while thinking (insofar as it “thinks”) that it’s just doing its job. And worse, the AI’s operators or administrators will hardly notice. AI agents are brilliant idiot savants: entrusted with crucial tasks but fundamentally clueless about the intent behind them.
Tales from the Cyber Crypt
We’ve seen real-world technology examples that demonstrate the concept. Remember the SolarWinds hack? That wasn’t an AI or climate-related issue per se, but it showed how a trusted automated update mechanism (something that just innocently pushes out software updates) was hijacked by attackers to break into thousands of organizations, from Fortune 500 companies to U.S. government agencies. The update system functioned exactly as intended, except now its “intent” had been quietly swapped by a puppet master.
Let’s look at a couple of climate adaptation-related examples that involve helpful systems being duped or hijacked with potentially nasty results:
A Hijacked Weather-Watcher: In September 2014, the U.S. National Oceanic and Atmospheric Administration (NOAA), whose data underpins many AI-powered early-warning systems, was hit by a sophisticated cyberattack that compromised its National Environmental Satellite, Data, and Information Service (NESDIS), knocking out imagery and sensor feeds for days. While no false storm warnings were issued then, cybersecurity experts now warn that had attackers injected just a few manipulated temperature or pressure readings into the stream, AI-based forecasting systems could have interpreted them as real weather anomalies, triggering bogus tornado alerts or, worse, missing the signs of a genuine incoming storm. Today, thousands of AI-driven climate-mitigation tools are being developed and deployed, and they can be weaponized in this way by quietly corrupting inputs.
The Ukraine Power Grid Blackout: In 2016, pre-programmed infrastructure was cast as the perfect useful idiot on a global stage. Our friends in Ukraine were hit by sophisticated malware known as Crashoverride (or Industroyer), widely described as the first malware since Stuxnet purpose-built to disrupt physical industrial processes. It infiltrated Kyiv’s power grid, leveraging automated control systems to orchestrate chaos. Designed to monitor and optimize the flow of electricity, these systems blindly trusted internal communications, failing to recognize that commands to open power relays and shut down transmission lines were coming from attackers rather than authorized operators. Once inside, the malware took advantage of that pre-programming, mapping out the grid’s operations before systematically de-energizing substations and disrupting automated fault protection. The blackout lasted about an hour. But today, AI systems are being explored to replace pre-programmed operations across US power grids and their global counterparts, particularly for renewables projects.
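The blind-trust failure both examples share can be made concrete with a toy sketch. Imagine a forecasting pipeline that flags any reading far outside its historical baseline and automatically cascades alerts downstream. Everything below (function names, values, the 3-sigma threshold) is hypothetical and illustrative; no real NOAA or grid system works exactly like this:

```python
from statistics import mean, stdev

def fit_baseline(history):
    """Learn what 'normal' looks like from a trusted history of readings."""
    return mean(history), stdev(history)

def is_anomalous(reading, mu, sigma, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from baseline."""
    return abs(reading - mu) > threshold * sigma

# A calm week of sea-level pressure readings (hPa) -- the trusted baseline.
history = [1013.2, 1013.0, 1012.8, 1013.1, 1012.9, 1013.3, 1013.0, 1012.7]
mu, sigma = fit_baseline(history)

# A genuine new reading sails through...
print(is_anomalous(1013.1, mu, sigma))   # False

# ...but a single fabricated reading injected into the feed trips the
# detector, which would dutifully cascade into a bogus severe-weather alert.
# The system never asks WHO supplied the number -- only whether it deviates.
print(is_anomalous(985.0, mu, sigma))    # True
```

The very logic that makes the detector useful, trusting its feed and reacting automatically, is exactly what makes it exploitable. Mitigations like authenticated sensor channels and cross-source sanity checks exist precisely to break that blind trust.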
What’s the so what?
So, where does that leave us? Let’s start with the core idea: AI isn't some rebel robot; it's a super-obedient one. AI can easily amplify biased data, downplay risks, or push narratives that benefit certain people. From optimizing renewable energy to predicting floods, AI systems will have massive real-world impacts. What if climate models are tweaked to favor specific policies, or early warning systems "just happen" to glitch before an election? Good intentions won't protect AI from being used for bad ends, and this will become a new dimension of our overall climate problem.
This is a huge governance challenge, especially as AI gets baked into critical areas like climate, disaster response, and city planning. Critical infrastructure, for example, is being scrutinized for AI vulnerabilities, and there’s talk of requiring “AI audits” or safety certifications for algorithms that run vital services (imagine a kind of digital OSHA for algorithms).
In the remainder of this series, we’ll continue to explore how these systems can be subverted by those who understand their blind spots and how this will impact global progress on climate. You can expect a post every two weeks, covering both technology and policy implications, including:
AI’s inability to discern intent
Special applications, such as AI-run power grids
Legal frictions: Free Speech vs Platform Liability
The lifecycle of AI misuse
Privacy violations
Building guardrails for governance, ethics, and a smarter accountability model
The goal of this series isn’t to bash AI. It’s to understand its limits and ensure we’re not leaving a loaded weapon in the hands of every opportunistic hacker or propagandist with a clever idea. After all, a tool is only as safe as the awareness of those who wield (or oversee) it.
Ad Cognitionem…
Newton

