
Harris: US agencies must show their AI tools aren’t harming people’s safety or rights

U.S. federal agencies must show that their artificial intelligence tools aren’t harming the public, or stop using them, under new rules unveiled by the White House on Thursday. “When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people,” Vice President Kamala Harris told reporters ahead of the announcement.

Quick Read

  • The White House is requiring U.S. federal agencies to ensure their AI tools do not harm the public, setting a December deadline to put concrete safeguards in place.
  • Vice President Kamala Harris highlighted the importance of verifying that AI tools used by government agencies do not endanger the rights and safety of Americans.
  • The policy directive, part of President Biden’s AI executive order, covers AI applications in various sectors, including healthcare, immigration, and housing.
  • Agencies must cease using AI systems that cannot meet the new safeguards, unless exceptions are justified by agency leadership.
  • The directive requires federal agencies to appoint a chief AI officer and annually publish an inventory of AI systems, assessing potential risks.
  • Intelligence agencies and the Department of Defense are exempt from some of the rules, with the Pentagon holding a separate debate over autonomous weapons.
  • The initiative aims to promote responsible use of AI in government, improving efficiency and expanding access to public services.

The Associated Press has the story:

Newslooks- (AP)

Each agency by December must have a set of concrete safeguards that guide everything from facial recognition screenings at airports to AI tools that help control the electric grid or determine mortgages and home insurance.

The new policy directive being issued to agency heads Thursday by the White House’s Office of Management and Budget is part of the more sweeping AI executive order signed by President Joe Biden in October.


While Biden’s broader order also attempts to safeguard the more advanced commercial AI systems made by leading technology companies, such as those powering generative AI chatbots, Thursday’s directive targets AI tools that government agencies have been using for years to help with decisions about immigration, housing, child welfare and a range of other services.

As an example, Harris said, “If the Veterans Administration wants to use AI in VA hospitals to help doctors diagnose patients, they would first have to demonstrate that AI does not produce racially biased diagnoses.”

Agencies that can’t apply the safeguards “must cease using the AI system, unless agency leadership justifies why doing so would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations,” according to a White House announcement.


The new policy also calls for two other “binding requirements,” Harris said. One is that federal agencies must hire a chief AI officer with the “experience, expertise and authority” to oversee all of the AI technologies used by that agency, she said. The other is that each year, agencies must make public an inventory of their AI systems that includes an assessment of the risks they might pose.

Some rules exempt intelligence agencies and the Department of Defense, which is having a separate debate about the use of autonomous weapons.

Shalanda Young, the director of the Office of Management and Budget, said the new requirements are also meant to strengthen positive uses of AI by the U.S. government.

“When used and overseen responsibly, AI can help agencies to reduce wait times for critical government services, improve accuracy and expand access to essential public services,” Young said.
