Every day our businesses and government organisations are being clobbered by cyber attacks. So what’s the last thing we want them to do? Probably make the attacks easier, and park the most valuable secrets in front of the house with a “take what you want” sign.

As companies experiment with artificial intelligence (AI), they are flinging open doors that really should be locked very tight – and in many cases, they don’t even know they’ve done it.

To picture the potential damage, security expert Michael Bargury talks me through an example using Microsoft’s own demonstration site for Microsoft 365. He quietly alters the business’s bank details; staff are oblivious to the change.

Bargury, chief technology officer of Tel Aviv-based security firm Zenity, is one of the leading experts in exploring how business AI can be used for mischief.

The attacks exploit one of AI’s key selling points to business: automating repetitive tasks. Previously, getting a hack to work required knowledge of a scripting language. Now, anyone can create a bot with a couple of clicks – and it’s turning hacking into a public sport.

In the past, many hacks also required hundreds of hours of social engineering – tricking an individual into clicking on something. But with Microsoft’s Copilot and other business AI bots, people can simply say a set of words and open a Pandora’s box. Bargury calls it “promptware”.

The typical Fortune 500 company already runs around 3,000 Copilot AI bots, Zenity found, and some 63pc of private business chatbots can be operated by the public. “All of the defaults are insecure,” an astonished Bargury discovered.

Things are about to get much worse.

While Microsoft has changed the defaults, the underlying problem is not fixable, which is that AI can’t distinguish between data and computer instructions. I may send you a one-line message wishing you happy birthday that contains hidden hacking instructions – and the AI will obliviously let it happen.

Microsoft says it is constantly revising the “guard rails” on its large language models, but Bargury isn’t impressed. “Guard rails aren’t enough because it’s not a solvable problem,” he tells me.

No wonder Copilot has earned the nickname “Coparrot” – it simply repeats what it hears.

Worse, companies are being encouraged to pour everything they have into the pot. An AI model devours everything it can: supplier contracts, employee salaries, redundancy lists, strategy papers, or the directors’ very private Teams chats.

AI breaks down boundaries we have traditionally maintained in the offline world, where information was privileged, and was only shared on a need-to-know basis. Now we’re making everything accessible to anyone.

Rather like the cartoon monster in the Beatles’ movie Yellow Submarine, the Suckophant, AI devours all the other monsters, then the Beatles, and eventually the screen itself.

In summary, it’s a lethal combination: we’re allowing far more people to do more stupid things far more easily, while exposing far more private information to the bad people.

Incredibly, Microsoft is also capturing private information we have not intentionally committed to a system, just so it can all be fed to an AI model.

A new feature of Windows called Recall silently takes a snapshot of your computer screen every few seconds and stashes it away. Recall doesn’t care if what’s on your screen is your sales forecasts, a banking app, passwords or pornography – it remembers everything.

In truth, the technology industry has never been very good at respecting boundaries, or even basic security, and this goes back a long way.

One industry veteran who helps define international security standards told me: “The people building software and networks do not think security is something valuable – it gets in their way, and they won’t do it.

“Enterprises are left picking up the pieces. The attitude is very much “f--- security, f--- intellectual property, and let’s fix it with lawyers afterwards”.

Google’s former chief executive Eric Schmidt – now a major investor in AI and one of the most influential figures in US science policy – admitted as much in a talk to undergraduates at Stanford, according to reports last month.

Quizzed about the ethics of stealing IP to build a start-up, Schmidt advised students to go ahead and worry later: just “hire a whole bunch of lawyers to go clean the mess up”. Schmidt was merely “saying the quiet part out loud”, noted the tech publication The Verge. The video of the talk has been deleted.

Former Google chief Eric Schmidt has perpetuated the ‘let lawyers fix it afterwards’ mentality that's rife in the tech industry Credit: Mike Blake/Reuters

Ministers need to stop fantasising that AI automation will save the NHS and improve public service efficiency. The House of Representatives has already banned Copilot, deeming it “a risk to users due to the threat of leaking House data”.

The MIT economist Daron Acemoglu, author of the book Why Nations Fail, was recently asked to predict the potential impact of AI on business and society on a scale of nought to 10.

It could go up to seven he thought, if all went well, but on its current trajectory, it was heading for a “minus six”.

His interviewer, who moves in rarified high status policy and media circles, sounded astonished. Well, he said, your mistake is only looking at the good stuff AI can do, but you’ve failed to look at any of the bad stuff. Or count the costs.

Perhaps his next book can be called “Why Civilisations Fail”. We’re doing all we can to hasten their demise.

Disclaimer: The copyright of this article belongs to the original author. Reposting this article is solely for the purpose of information dissemination and does not constitute any investment advice. If there is any infringement, please contact us immediately. We will make corrections or deletions as necessary. Thank you.