The Department of Homeland Security (DHS) on Thursday unveiled new guardrails for its use of artificial intelligence in carrying out its mission to secure the border.
The new policies were developed by the DHS Artificial Intelligence Task Force (AITF), which DHS Secretary Alejandro Mayorkas created in April.
In announcing these new policies, DHS noted that AI has been critical to its missions, including combating fentanyl trafficking, strengthening supply chain security, countering sexual exploitation, and protecting critical infrastructure.
Mayorkas writes in the AI policy memo, expected to be released later Thursday, that the US must ensure AI is “rigorously tested to be effective [and] safeguards privacy, civil rights, and civil liberties while avoiding inappropriate biases.”
DHS has already used AI technology extensively on the southern border, most notably with the use of more than 200 surveillance cameras to detect and flag where human crossings occur.
DHS says it has appointed Chief Information Officer (CIO) Eric Hysen as the Department’s first Chief AI Officer. Hysen, who was set to appear before Congress Thursday, will promote AI innovation and safety within the Department, DHS said.
“I think the potential for unintended harm from the use of AI exists in any federal agency and in any use of AI,” Hysen said. “We interact with more people on a daily basis than any other federal agency. And when we interact with people, it can be during some of the most critical times of their lives.”
Academics have long warned that AI poses risks of racial profiling, since the technology can still make errors when identifying relationships in complex data.
Under the new policy, Americans will be able to decline the use of facial recognition technology in a variety of situations, including during air travel check-ins.
DHS’ new guidelines will also require that facial recognition matches discovered using AI technology be manually reviewed by human analysts to ensure their accuracy, according to a new directive that the agency plans to release alongside the AI memo.
During a congressional hearing, Hysen planned to highlight a recent case at California’s San Ysidro Port of Entry, where agents with Customs and Border Protection had used advanced machine learning (ML) models to flag an otherwise unremarkable car driving north from Mexico for exhibiting a “potentially suspicious pattern.”
Agents later discovered 75 kilograms of drugs in the car’s gas tank and rear quarter panels.
Reuters contributed to this report.