“AI is not infallible, it is made by people – and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people's rights both in the development and use of AI,” says FRA Director Michael O’Flaherty. “We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them.”
The FRA report ‘Getting the future right – Artificial intelligence and fundamental rights in the EU’ identifies pitfalls in the use of AI, for example in predictive policing, medical diagnoses, social services, and targeted advertising. It calls on the EU and EU countries to:
The report is part of FRA’s project on artificial intelligence and big data. It draws on over 100 interviews with public and private organisations already using AI. These include observations from experts involved in monitoring potential fundamental rights violations. Its analysis is based on real uses of AI from Estonia, Finland, France, the Netherlands and Spain.
On 14 December, FRA and the German Presidency of the Council of the EU are organising the conference ‘Doing AI the European way: Protecting fundamental rights in an era of artificial intelligence’.
For more, please contact: media@fra.europa.eu / Tel.: +43 1 580 30 653