Hamburg court limits worker rights over private AI tools like ChatGPT

From ChatGPT to high-risk systems, AI is transforming work—but who controls its use? A landmark ruling and the EU's new laws force businesses to rethink policies.

A recent ruling by Hamburg's labour court has set new limits on employee co-determination rights when workers use AI tools like ChatGPT on private accounts. The decision comes as companies and unions grapple with broader legal uncertainties around AI adoption in the workplace. Meanwhile, the EU's AI Act is set to introduce stricter rules for high-risk systems, requiring businesses to adjust their policies.

The rapid adoption of AI in workplaces has sparked debates over efficiency gains and worker protections. Employers see AI as a way to streamline operations, while trade unions push for stronger safeguards so that the risks do not fall one-sidedly on employees. Under Germany's Works Constitution Act, works councils already hold co-determination rights when surveillance technology is introduced, but the Hamburg ruling narrows these rights for voluntarily used private AI tools.

The EU's AI Act takes a risk-based approach and will soon require companies to inform employee representatives and affected staff before high-risk AI systems are deployed in the workplace. This adds pressure on businesses to establish clear compliance frameworks. Legal experts now recommend voluntary works agreements that define rules on AI use, data protection, and training before disputes arise.

Yet questions remain over liability. Employers are responsible for damages caused by in-house AI systems, but shifting political developments at the EU level have left gaps in accountability. Unions continue to demand greater involvement for worker representatives, arguing that current protections fall short of addressing AI's evolving risks.

As the AI Act's full implementation nears, companies must adapt their processes to meet the new obligations. The lack of clarity on liability frameworks—particularly after the EU Commission's withdrawal of its earlier draft rules on AI liability—has left businesses and unions searching for alternative legal solutions.

The Hamburg court's ruling and the upcoming EU AI Act will reshape how companies manage AI in the workplace. Businesses must now balance efficiency goals with stricter compliance and worker protections. Without clearer liability rules, however, legal uncertainties are likely to persist for both employers and employees.
