EU and US agree to chart common course on AI regulation

“AI regulation necessitates joint efforts from the international community and governments to agree a set of regulatory processes and agencies,” Angelo Cangelosi, professor of machine learning and robotics at the University of Manchester in England, told CIO.com.

“The latest UK-US agreement is a good step in this direction, though details on the practical steps are not fully clear at this stage, but we hope that this will continue at a wider international level, for example with integration with the EU AI agencies, as well as in the wider UN framework,” he added.

Risks of AI misuse

Dr Kjell Carlsson, head of AI strategy at Domino Data Lab, argued that focusing on the regulation of commercial AI offerings loses sight of the real and growing threat: the misuse of artificial intelligence by criminals to create deepfakes and more convincing phishing scams.

“Unfortunately, few of the proposed AI regulations, such as the EU AI Act, are designed to effectively tackle these threats as they mostly focus on commercial AI offerings that criminals do not use,” Carlsson said. “As such, many of these regulatory efforts will damage innovation and increase costs, while doing little to improve actual safety.”

“At this stage in the development of AI, investment in testing and safety is far more effective than regulation,” Carlsson argued.

Research on how to effectively test AI models, mitigate their risks and ensure their safety, carried out through new AI Safety Institutes, represents an “excellent public investment” in ensuring safety whilst fostering the competitiveness of AI developers, Carlsson said.

Many mainstream companies are using AI to analyze, transform, and even produce data – developments that are already throwing up legal challenges on myriad fronts.

Ben Travers, a partner at legal firm Knights who specializes in AI, IP and IT issues, explained: “Businesses should have an AI policy, which dovetails with other relevant policies, such as those relating to data protection, IP and IT procurement. The policy should set out the rules on which employees can (or cannot) engage with AI.”

Recent instances have raised awareness of the risks to employers when employees upload otherwise protected or confidential information to AI tools, while the technology also poses issues in areas such as copyright infringement.

“Businesses need to decide how they are going to address these risks, reflect these in relevant policies and communicate these policies to their teams,” Travers concluded.
