Differences in AI Regulation: U.S. vs EU

By Cloey Callahan – December 5, 2023

Ivy Liu

This post was originally published by Digiday sister site WorkLife

The Global Race for Artificial Intelligence

It’s not just a race. It’s a global one, and the prize is artificial intelligence. So far, the U.S. appears to be in the lead.

Big tech companies like OpenAI, Microsoft, Google, and Meta call the United States home, and they are major contributors to the country’s lead in the artificial intelligence race. The absence of comprehensive federal legislation has also allowed the country to move faster. The most notable AI-specific requirement on the books so far is New York City’s Local Law 144, a city measure that mandates bias audits for automated hiring tools. Other states, including California and New Jersey, are in the process of drafting their own AI legislation.

This doesn’t mean the country is devoid of oversight. The White House has issued an Executive Order on AI underlining the importance of safe, secure, and trustworthy AI, and it has published a Blueprint for an AI Bill of Rights. The Equal Employment Opportunity Commission (EEOC) has also committed to enforcing Title VII of the Civil Rights Act to prevent discrimination against job applicants and employees, whether it originates from a human or an automated system.

Meanwhile, the U.S. and the European Union have taken very different regulatory approaches to artificial intelligence.

The EU’s preventative approach and the U.S.’s more decentralized method

The U.S. has taken a more decentralized, sector-specific approach to AI regulation, while the EU has pursued a more comprehensive, preventative one. The EU AI Act, for example, is centered on categorizing AI systems based on their potential risk. The European Parliament adopted its position on the legislation in June 2023, and the final text is expected to be agreed before the Parliament elections in June 2024. Once finalized, it will impose obligations proportionate to the level of risk an AI system presents. The act will require AI systems to be safe, transparent, traceable, non-discriminatory, and environmentally friendly; it will also require human oversight of AI systems and establish a uniform definition of AI that applies to existing and future systems.

Both the U.S. and the EU are crucial stakeholders in the future of international AI governance, as they set the standard for AI risk-management regulation. European tech start-ups, however, have expressed concern over the potential impact of the EU’s stricter legislation, worrying that it might hinder their growth and cause them to fall behind their U.S. counterparts.

Companies like France-based AI business Mistral have lobbied for less stringent regulation, arguing that heavy-handed rules create an uneven playing field in the global AI innovation race.

“We don’t want to paralyze our winner, right?”

The U.K., which is no longer part of the EU, currently occupies a position midway between the two approaches. As it develops its own AI framework, it is seeking a middle ground between the EU and the U.S., much as it did in its handling of the General Data Protection Regulation.

