How a Trump Win Could Unleash Dangerous AI

The reporting requirements are essential for alerting the government to potentially dangerous new capabilities in increasingly powerful AI models, says a US government official who works on AI issues. The official, who requested anonymity to speak freely, points to OpenAI’s admission about its latest model’s “inconsistent refusal of requests to synthesize nerve agents.”

The official says the reporting requirement isn’t overly burdensome. They argue that, unlike AI regulations in the European Union and China, Biden’s EO reflects “a very broad, light-touch approach that continues to foster innovation.”

Nick Reese, who served as the Department of Homeland Security’s first director of emerging technology from 2019 to 2023, rejects conservative claims that the reporting requirement will jeopardize companies’ intellectual property. And he says it could actually benefit startups by encouraging them to develop “more computationally efficient,” less data-heavy AI models that fall under the reporting threshold.

AI’s power makes government oversight imperative, says Ami Fields-Meyer, who helped draft Biden’s EO as a White House tech official.

“We’re talking about companies that say they’re building the most powerful systems in the history of the world,” Fields-Meyer says. “The government’s first obligation is to protect people. ‘Trust me, we’ve got this’ is not an especially compelling argument.”

Experts praise NIST’s security guidance as a vital resource for building protections into new technology. They note that flawed AI models can produce serious social harms, including rental and lending discrimination and improper loss of government benefits.

Trump’s own first-term AI order required federal AI systems to respect civil rights, a mandate that itself will require research into AI’s social harms.

The AI industry has largely welcomed Biden’s safety agenda. “What we’re hearing is that it’s broadly useful to have this stuff spelled out,” the US official says. For new companies with small teams, “it expands the capacity of their folks to address these concerns.”

Rolling back Biden’s EO would send an alarming signal that “the US government is going to take a hands-off approach to AI safety,” says Michael Daniel, a former presidential cyber adviser who now leads the Cyber Threat Alliance, an information-sharing nonprofit.

As for competition with China, the EO’s defenders say safety rules will actually help America prevail by ensuring that US AI models work better than their Chinese rivals and are protected from Beijing’s economic espionage.

Two Very Different Paths

If Trump wins the White House next month, expect a sea change in how the government approaches AI safety.

Republicans want to prevent AI harms by applying “existing tort and statutory laws” as opposed to enacting broad new restrictions on the technology, Helberg says, and they favor “much greater focus on maximizing the opportunity afforded by AI, rather than overly focusing on risk mitigation.” That would likely spell doom for the reporting requirement and possibly some of the NIST guidance.

The reporting requirement could also face legal challenges now that the Supreme Court has weakened the deference that courts long gave agencies when evaluating their regulations.

And GOP pushback could even jeopardize NIST’s voluntary AI testing partnerships with leading companies. “What happens to those commitments in a new administration?” the US official asks.

This polarization around AI has frustrated technologists who worry that Trump will undermine the quest for safer models.

“Alongside the promises of AI are perils,” says Nicol Turner Lee, the director of the Brookings Institution’s Center for Technology Innovation, “and it is vital that the next president continue to ensure the safety and security of these systems.”


