The White House lays out extensive AI guidelines for the federal government


It’s been five months since President Joe Biden signed an executive order (EO) to address the rapid advancement of artificial intelligence. The White House is now taking a further step to implement that EO with a policy aimed at overseeing the federal government’s use of AI. Among other things, agencies will need to have safeguards in place to reduce the risk of algorithmic bias.

“I believe that all leaders from government, civil society and the private sector have a duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone is able to enjoy its benefits,” Vice President Kamala Harris told reporters on a press call.

Harris announced three binding requirements under the new Office of Management and Budget (OMB) policy. First, agencies will need to ensure that any AI tools they use “do not compromise the rights and safety of the American people.” They have until December 1 to demonstrate that they have “concrete safeguards” in place ensuring that the AI systems they employ do not affect Americans’ safety or rights. Otherwise, the agency will have to stop using an AI system unless its leaders can justify that dropping it would have an “unacceptable” impact on critical operations.

Protecting Americans’ rights and safety

According to the policy, an AI system is deemed to impact safety if it “is used or expected to be used, in real-world conditions, to control or significantly influence the outcomes of” certain activities and decisions. Those include maintaining the integrity of elections and voting; controlling safety-critical functions of infrastructure such as water systems, emergency services and electrical grids; operating autonomous vehicles; and controlling the physical movements of robots in “a workplace, school, housing, transportation, medical or law enforcement setting.”

Agencies will also have to stop using AI systems that infringe on Americans’ rights unless appropriate safeguards are in place or the use can otherwise be justified. Purposes the policy presumes to impact rights include predictive policing; social media monitoring for compliance purposes; detecting plagiarism in schools; blocking or limiting protected speech; detecting or measuring human emotions; pre-employment screening; and “replicating a person’s likeness or voice without express consent.”

When it comes to generative AI, the policy states that agencies should assess its potential benefits. They are also all required to “establish adequate safeguards and oversight mechanisms that allow generative AI to be used in the agency without posing undue risk.”

Transparency requirements

The second requirement will force agencies to be transparent about the AI systems they are using. “Today, President Biden and I are requiring that every year, US government agencies publish online a list of their AI systems, an assessment of the risks those systems might pose and how those risks are being managed,” Harris said.

As part of these efforts, agencies will be required to publish government-owned AI code, models and data, as long as doing so does not harm the public or government operations. If an agency cannot disclose specific AI use cases for sensitivity reasons, it will have to report metrics instead.

[Photo: Vice President Kamala Harris speaks at a campaign event with President Joe Biden in Raleigh, NC, Tuesday, March 26, 2024. (AP Photo/Stephanie Scarbrough)]


Finally, federal agencies will need internal oversight of their use of AI. That includes each department designating a chief AI officer to oversee all of the agency’s AI activities. “This is about ensuring that AI is used responsibly, understanding that we need senior leaders in our government who are specifically tasked with overseeing AI adoption and use,” Harris said. Many agencies will also need to have AI governance boards in place by May 27.

The vice president added that prominent figures in the public and private sectors (including human rights leaders and computer scientists) helped develop the plan along with business leaders and legal experts.

The OMB suggests that, under the required safeguards, the Transportation Security Administration would have to let air travelers opt out of facial recognition scans without losing their place in line or facing delays. It also suggests there should be human oversight of things like AI-based fraud detection and diagnostic decisions in the federal healthcare system.

As you might imagine, government agencies are already using AI systems in a variety of ways. The National Oceanic and Atmospheric Administration is developing artificial intelligence models to help it more accurately forecast severe weather, floods and wildfires, while the Federal Aviation Administration is using a system to help manage air traffic in major metropolitan areas.

“AI presents not only risks, but also a tremendous opportunity to improve public services and make progress on societal challenges like addressing climate change, improving public health and advancing equitable economic opportunity,” OMB Director Shalanda Young told reporters. “When used and overseen responsibly, AI can help agencies reduce wait times for critical government services, improve accuracy and expand access to essential services.”

The policy is the latest in a string of efforts to regulate the rapidly growing field of AI. While the European Union has passed a sweeping set of rules governing AI use in the bloc, and there are federal bills in the pipeline, efforts to regulate AI in the US have so far taken more of a patchwork approach at the state level. This month, Utah enacted a law to protect consumers from AI fraud. In Tennessee, the Ensuring Likeness Voice and Image Security Act (aka the ELVIS Act, seriously) aims to protect musicians from deepfakes, i.e. having their voices cloned without permission.



