The US and UK are teaming up to test the safety of AI models

OpenAI, Google, Anthropic and other companies developing AI continue to improve their technology and release bigger and better language models. To create a shared approach for independently evaluating the safety of these models as they are released, the UK and US governments have signed a Memorandum of Understanding. Together, the UK’s AI Safety Institute and its US counterpart — which Vice President Kamala Harris announced but which has yet to begin its work — will develop a series of tests to assess the risks and ensure the safety of advanced AI models.

As part of the partnership, they plan to share technical knowledge, information and personnel, and one of their initial goals appears to be conducting a joint testing exercise on a publicly accessible model. UK Science Minister Michelle Donelan, who signed the agreement, told The Financial Times that they “really have to act quickly” because a new generation of AI models is expected to come out within the next year. They believe these models could be “complete game-changers,” and they still don’t know what they will be capable of.

According to The Times, this is the world’s first bilateral agreement on AI safety, though both the US and the UK intend to partner with other countries in the future. “AI is the defining technology of our generation. This partnership is going to accelerate both of our institutes’ work across the full spectrum of risks, whether to our national security or to our broader society,” said US Commerce Secretary Gina Raimondo. “Our partnership makes clear that we aren’t running away from these concerns — we’re running at them. Because of our collaboration, our institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance.”

While this particular agreement is focused on testing and evaluation, governments around the world are also drafting regulations to keep AI tools in check. Back in March, the White House issued an executive order to ensure that federal agencies only use AI tools that “do not endanger the rights and safety of the American people.” A few weeks before that, the European Parliament approved sweeping legislation to regulate artificial intelligence. It will ban “AI that manipulates human behavior or exploits people’s vulnerabilities,” “biometric categorization systems based on sensitive characteristics,” and the “untargeted scraping” of faces from CCTV footage and the web to create facial recognition databases. In addition, deepfakes and other AI-generated images, videos and audio will need to be clearly labeled under its rules.
