The Department of Homeland Security has seen the benefits and dangers of artificial intelligence firsthand. It located a trafficking victim years after the crime by using an A.I. tool that generated an image of the child a decade older. But it has also been tricked into opening investigations by deepfake images created with A.I.
Now, the department is set to become the first federal agency to adopt the technology with a plan to integrate generative A.I. models across various divisions. In collaboration with OpenAI, Anthropic, and Meta, it will initiate pilot programs that utilize chatbots and other tools to combat drug and human trafficking crimes, train immigration officials, and prepare for emergencies nationwide.
The rush to implement the still unproven technology is part of a broader effort to keep pace with the changes facilitated by generative A.I., which can create extremely realistic images and videos and mimic human speech.
“One cannot ignore it,” stated Alejandro Mayorkas, Secretary of the Department of Homeland Security, in an interview. “And if one isn’t forward-leaning in recognizing and being prepared to address its potential for good and its potential for harm, it will be too late and that’s why we’re moving quickly.”
The plan to incorporate generative A.I. agency-wide is the latest example of how new technology like OpenAI’s ChatGPT is compelling even traditional industries to reconsider how they operate. Even so, government agencies like D.H.S. are likely to face intense scrutiny over their use of the technology, which has set off heated debate because the tools can at times be unreliable and discriminatory.
Agencies across the federal government rushed to devise plans after President Biden’s executive order last year mandated the creation of safety standards for A.I. and its adoption across the government.
The D.H.S., which has 260,000 employees, was created after the Sept. 11 terror attacks and is charged with protecting Americans within the country’s borders, including by combating human and drug trafficking, defending critical infrastructure, responding to disasters, and policing the border.
As part of its initiative, the agency intends to recruit 50 A.I. experts to develop solutions to protect the nation’s critical infrastructure from A.I.-related threats and counter the use of the technology in generating child sexual abuse material and producing biological weapons.
In the $5 million pilot programs, the agency will use A.I. models like ChatGPT to aid investigations of child sexual abuse material and human and drug trafficking. It will also work with the companies to comb through the agency’s vast troves of text-based data and surface patterns that could help investigators. A detective looking for a suspect driving a blue pickup truck, for example, will be able to search homeland security investigations for the same type of vehicle for the first time.
D.H.S. will also use chatbots to train immigration officials, who have typically practiced interviews with other personnel and contractors posing as refugees and asylum seekers. The A.I. tools will let officials get additional practice through simulated interviews. Chatbots will also comb through information about communities across the country to help officials create disaster relief plans.
The results of the pilot programs will be reported by the end of the year, according to Eric Hysen, the department’s chief information officer and A.I. head.
The agency selected OpenAI, Anthropic, and Meta to experiment with various tools and will utilize cloud providers Microsoft, Google, and Amazon in its pilot programs. “We cannot do this alone,” he said. “We need to collaborate with the private sector to establish guidelines for responsible use of generative A.I.”