
Global push for responsible AI gains momentum with new laws and frameworks

From Brussels to Washington, landmark rules are rewriting how AI is built and deployed. Can stricter oversight prevent bias, misinformation, and unchecked risks?


Governments and businesses are stepping up efforts to ensure AI is developed and used responsibly. New laws and guidelines now shape how companies build, test, and deploy artificial intelligence. The goal is to make AI safer, more transparent, and less prone to issues like bias or misinformation.

In recent months, major frameworks and regulations have emerged on both sides of the Atlantic. These include the EU’s landmark AI Act and updated US guidelines, alongside voluntary standards from organisations like NIST and ISO.

The European Union took a leading role in August 2024, when the EU AI Act, the first comprehensive legal framework for AI regulation, entered into force. The law sets strict rules on transparency, accountability, and risk management for AI systems used in the EU.

Across the Atlantic, the US government has also pushed for safer AI practices. An Executive Order (EO 14110) laid the foundation for AI risk management, while the National Institute of Standards and Technology (NIST) released its AI Risk Management Framework (RMF). This voluntary guideline helps organisations assess and mitigate risks in AI development. In July 2024, NIST expanded it with the Generative AI Profile (NIST AI 600-1), offering specific advice for generative AI tools.

Beyond legal requirements, companies are adopting their own responsible AI frameworks. Key steps include verifying the source of AI models, checking their ethical principles, and ensuring training data is handled securely. Businesses must also guard against AI hallucinations—where models generate false or misleading information—by implementing strict output verification processes.
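
What such checks might look like in practice is sketched below, assuming a Python-based pipeline. The checksum comparison, the approved-source list, and the function names are illustrative, not part of any cited framework.

```python
import hashlib
from pathlib import Path

def verify_model_source(model_path: str, expected_sha256: str) -> bool:
    """Confirm a downloaded model artifact matches the checksum published by its vendor."""
    digest = hashlib.sha256(Path(model_path).read_bytes()).hexdigest()
    return digest == expected_sha256

def needs_human_review(answer: str, approved_sources: list[str]) -> bool:
    """Flag outputs that cite none of the organisation's approved sources,
    so a reviewer can check them for hallucinated claims before release."""
    return not any(source.lower() in answer.lower() for source in approved_sources)

if __name__ == "__main__":
    answer = "Revenue grew 12% according to the 2024 annual report."
    if needs_human_review(answer, approved_sources=["2024 annual report"]):
        print("Flagged: no approved source cited, route to human review.")
    else:
        print("Answer references an approved source.")
```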

The International Organisation for Standardisation (ISO) frames responsible AI as balancing ethical and legal considerations. A core principle is keeping a human in the loop (HITL), so that people retain oversight for safety, reliability, and compliance. Companies are also urged to prevent confidential data from leaking through AI systems by controlling how inputs and outputs are managed.
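
A minimal sketch of such input and output controls follows, again assuming a Python pipeline. The redaction patterns are illustrative only, and the commented-out call_model step is a hypothetical placeholder for whatever model the organisation actually uses.

```python
import re

# Illustrative patterns only; a real deployment would use a vetted PII/secret scanner.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Strip likely confidential values before the prompt leaves the organisation."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

def human_approval(output: str) -> bool:
    """Human-in-the-loop gate: a reviewer must sign off before the output is released."""
    print(f"Model output:\n{output}\n")
    return input("Approve for release? [y/N] ").strip().lower() == "y"

if __name__ == "__main__":
    prompt = "Summarise the complaint from jane.doe@example.com about card 4111 1111 1111 1111."
    print("Prompt sent to model:", redact(prompt))
    # output = call_model(redact(prompt))   # hypothetical model call
    # released = human_approval(output)
```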

The push for responsible AI now combines legal enforcement, voluntary guidelines, and corporate self-regulation. The EU AI Act and NIST’s updated frameworks provide clear benchmarks for developers and users. Companies that adopt these standards can reduce risks, improve transparency, and build trust in their AI systems.

Moving forward, businesses will need to stay updated on evolving regulations while embedding ethical practices into their AI workflows. This includes rigorous vetting of models, protecting sensitive data, and maintaining human oversight at every stage.
