White House Puts New Limits on Government Use of AI

The US government issued new rules on Thursday requiring greater transparency and accountability from federal agencies that use artificial intelligence, saying safeguards are needed to protect people as AI advances. The new policy also includes provisions meant to build AI expertise inside agencies so the technology can be used to help people.

The US hopes to emerge as a global leader in government use of AI with its new policy. Vice President Kamala Harris said at a press briefing ahead of the announcement that the administration designed the policy to “serve as a model for global action.” She added that the US “will continue to call on all nations to follow our lead and put the public interest first when it comes to government use of AI.”

The new policy, issued by the White House Office of Management and Budget, will govern the use of AI across the federal government. It demands greater transparency about how the government uses AI and calls for further development of the technology within agencies. The policy reflects regulators trying to balance two goals: mitigating risks from wider use of AI, the extent of which is still unknown, and deploying AI tools to address existing threats such as climate change and disease.

The announcement is the latest in a series of moves by the Biden administration to both embrace and rein in AI. In October, President Biden signed a sweeping executive order on AI that promotes government adoption of the technology while also requiring developers of large AI models to share information about their work with the government, in the interest of national security.

In November, the US joined the UK, China, and EU members in signing a declaration that acknowledged the dangers of rapid AI advances and called for international cooperation. That same week, Harris unveiled a nonbinding declaration on the military use of AI, signed by 31 nations. It establishes rudimentary safeguards and calls for deactivating systems that engage in “unintended behavior.”

The policy for government use of AI announced on Thursday asks agencies to take several steps to prevent unintended consequences of AI deployments. To start, agencies must verify that the AI tools they use do not put Americans at risk. For example, for the Department of Veterans Affairs to use AI in its hospitals, it must confirm that the technology does not produce discriminatory diagnoses. Research has found that AI systems and other algorithms used to diagnose disease or decide which patients receive care can reinforce historical patterns of discrimination.

If an agency cannot guarantee such safeguards, it must stop using the AI system or justify its continued use. US agencies face a December 1 deadline to comply with the new requirements.

The policy also calls for more transparency about government AI systems, requiring agencies to release government-owned AI models, data, and code, as long as doing so does not threaten the public or government operations. Agencies must publicly report each year how they are using AI, the risks the systems pose, and how those risks are being managed.

The new rules also require agencies to strengthen their AI leadership, mandating that each appoint a chief AI officer to oversee all AI used within the agency. The role is focused on promoting AI innovation while managing its risks.

Officials say the changes will also remove some barriers to AI use in federal agencies, a move that could enable more responsible experimentation with the technology. AI could help agencies assess damage after natural disasters, forecast extreme weather, produce maps, and manage air traffic.

Countries around the world are moving to regulate AI. The EU voted in December to pass its AI Act, a measure governing the creation and use of AI technologies, and formally adopted it earlier this month. China, too, is working on comprehensive AI regulation.
