What happens when AI makes a safety call and gets it wrong?
- RiskSTOP
- Jun 24

From ‘smart’ machines to AI-powered safety systems, more decisions at work are now being made by algorithms. However, if something goes wrong, who’s responsible? And how do you keep risk under control when the decision-maker isn’t human?
The Health and Safety Executive (HSE) has just published guidance on how AI should be managed in the workplace, and the message is simple: the rules haven't changed. If you're introducing AI, it needs to be assessed, understood and controlled just like any other risk.
According to HSE, existing health and safety law already applies to AI. That means employers still need to carry out risk assessments, put controls in place, and manage any new hazards introduced by AI.
HSE’s remit here covers three areas:
- AI used in workplaces, where it acts as the enforcing authority
- AI involved in the design and supply of workplace equipment
- Other areas like building safety and chemical regulation, where it also has a role
The update forms part of a wider move by government to regulate AI in a way that supports innovation. HSE will be applying principles like safety, transparency and accountability in a way that makes sense for the industries it regulates.
However, it’s not all about what could go wrong. HSE also recognises that AI could bring real safety benefits if used properly — and it’s investing in its own knowledge, working with other regulators and industry bodies, and supporting practical tools like its Industrial Safetytech Regulatory Sandbox.
“AI has a role to play in risk management, but only if it’s applied in the right way,” says Johnny Thomson, Head of Strategic Planning at RiskSTOP.
“That’s exactly the thinking behind our own tools. We’re not trying to replace people – we’re using data and automation where it makes sense, so we can help more businesses, more quickly, with the right kind of support.”
You can read the full article from HSE here.