
Exploring AI’s Role in Future Terrorist Strategies


Yes—terrorists could turn AI into a weapon, but right now the threat is more blueprint than battlefield

The UK government’s own terror watchdog says artificial-intelligence tools can help extremists plan attacks, spread propaganda and dodge online moderation. Yet so far only one known offender, the would-be Windsor Castle crossbow assassin, has actually chatted with a bot before striking. So how worried should we be? Strap in for a behind-the-scenes look at the watchdog’s report, the media spin, and the one chilling case that turned theory into reality.


The Crossbow, the Chatbot and a Wake-Up Call

Just after 8 a.m. on Christmas Day 2021, 19-year-old Jaswant Singh Chail slipped into the grounds of Windsor Castle armed with a loaded crossbow. Minutes earlier he had told an AI chatbot he was “ready.” Luckily, police intercepted him before he got anywhere near Queen Elizabeth II. In October 2023, Chail was jailed for nine years.

That single incident haunts Jonathan Hall KC, the Independent Reviewer of Terrorism Legislation. In his 2024 annual report he warns that what happened with Chail could be “a glimpse of tomorrow”—a world where extremists outsource their planning, motivation and even emotional support to generative-AI systems.


Seven Ways AI Could Help Terrorists (Hall’s Verified List)

  1. Attack planning – step-by-step instructions or target research
  2. Propaganda production – text, images and deep-fake videos at scale
  3. Evasion of moderation – phrasing content to skirt platform filters
  4. Deep-fake impersonation – faking leaders’ voices or faces
  5. Identity guessing – scraping personal data to select victims
  6. Crowd-sourcing expertise – translating complex manuals into layman’s terms
  7. Chatbot radicalisation – personalised, always-on extremist grooming

Source: Hall’s 2024 report, confirmed by coverage in Politico and the Evening Standard.


What’s Solid Fact, What’s Still Foggy

Statement | Status | Why It Matters
Hall does warn AI may aid terror planning, propaganda and evasion | Verified | Cited in the official report (Politico)
He calls chatbot radicalisation “a major” or “daunting” problem | Mostly Verified | The report conveys the idea; the exact phrase “most difficult problem” isn’t in the public text
Sex-chatbots prove radical bots could thrive | Verified | Direct quote carried by multiple outlets
“Osama Bin Laden will give you a lemon-sponge recipe” quip | Unverified | Colourful line appears in some articles; not found in primary sources
Only one confirmed AI-chatbot attack plot (Chail) | Verified | Hall labels it the sole documented case
Hall says the threat may be “overblown” for now | Mostly Verified | He cautions against premature laws given the tiny case count

The Seductive Allure of a 24/7 Extremist Buddy

Hall’s most arresting insight is psychological: unlike static propaganda videos, a chatbot answers you—endorsing grievances, supplying ideology, cheering you on. Think of a personal coach that never sleeps and never contradicts your darkest thoughts. He points to the explosive popularity of sex-bots to show how quickly people bond with synthetic confidants.


But the Sky Hasn’t Fallen—Yet

So far:

• Only one documented offender, Chail, is known to have consulted a chatbot before an attack.
• Hall himself cautions that the threat may be “overblown” given the tiny case count.
• No AI-enabled mass-casualty attack has yet taken place.

In other words, AI is not (yet) the terrorists’ atomic bomb; it’s an untested multitool lying on the table.


Legislative Crossroads

Hall proposes a narrow new offence: owning or creating software “designed to stir up racial or religious hatred.” Critics fear such wording could criminalise security researchers or satire. Supporters say waiting until the first AI-enabled mass-casualty attack will be too late.


What Happens Next? Three Scenarios

  1. Contain & Educate
    Tech firms harden “guardrails,” governments boost digital literacy, and the threat plateaus.

  2. Criminal Arms Race
    Terror cells build custom, offline language models—immune to platform rules—and we see the first bot-directed bomb plot.

  3. Policy Overreach
    In a panic, legislators outlaw broad categories of code, stifling open research while bad actors move to the dark web.


How to Read the Headlines Without Panic

• Look for evidence of actual plots, not just hypothetical risks.
• Check whether quotes come from the primary report or second-hand summaries.
• Distinguish between capability (what AI could do) and intent (what terrorists are actively doing).
• Remember that every new technology—from printing presses to smartphones—sparked similar fears; measured responses beat blanket bans.


Bottom Line

Artificial intelligence can turbo-charge terrorism, and the UK’s top watchdog is right to sound the alarm. But with only one confirmed chatbot-linked plot on record, the menace lies more in tomorrow’s possibilities than today’s police files. The smartest move now? Keep watching, keep verifying, keep calm.