Reasons Europe Should Be Regulating Artificial Intelligence, By Nimesh Shah, COO Of Feel Good Contacts

Artificial intelligence (AI) is a powerful tool that can increase efficiency by automating tasks and analysing huge amounts of data. AI can accomplish a great deal, but it also has the potential to cause harm. As the use of AI becomes more commonplace, the need for proper regulation has become even more apparent.

On March 13, 2024, the European Parliament approved the Artificial Intelligence Act (AI Act), widely considered the first comprehensive legal framework for AI. Other jurisdictions around the world are also considering AI regulation, and it’s not difficult to see why it is needed.

AI can produce gender-biased results

When AI systems learn from biased data, it’s not surprising that they produce biased results, and many AI outputs reflect racial and gender bias. In 2018, Amazon scrapped its AI hiring tool because it favoured men over women. The tool was designed to analyse job candidates and score them from one to five. However, it gave preference to men for many roles, because it had been trained to spot patterns in CVs submitted to Amazon over a 10-year period, and most of those CVs came from men. Other companies looking to automate their hiring processes with AI should consider the risk of gender bias.

AI can be trained on racially biased data

The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system used AI to predict whether defendants in the US would re-offend. The system was far more likely to predict that black defendants would re-offend, even when prior offences, age, and gender were taken into account. White defendants, by contrast, were often mislabelled as low-risk.

AI makes impersonation easier

Deepfakes are becoming increasingly common, and as the technology improves, the videos become more realistic. Many people have seen convincing but fake videos of celebrities circulating on social media. In February 2024, US lawmakers called for new legislation to stop people from distributing deepfakes without consent. European Union negotiators also agreed on a bill criminalising the distribution of non-consensual deepfake content within the EU.

AI can be useful when used in the correct way

Automating services using AI is a brilliant way to improve efficiency, but the risks outlined above must be considered before such measures are put in place. All AI systems should be rigorously tested before they are released to the public. Assigning liability to the creators of these algorithms could also help: creators would have a stronger incentive to remove dangerous and biased systems from the public. This would also go some way towards ensuring harmful technologies aren’t created in the first place, by encouraging a form of self-regulation.
