3 ways we tried to outwit AI last week: Legislation, preparation, intervention


Current models of artificial intelligence (AI) aren’t ready as instruments for monetary policy, but the technology could lead to human extinction if governments do not put the necessary safeguards in place, according to new reports. And intervene is exactly what the European Union (EU) did last week. 

Also: The 3 biggest risks from generative AI – and how to deal with them

The European Parliament on Wednesday passed the EU AI Act into law, marking the first major wide-reaching AI legislation to be established globally. The law sorts AI applications into three risk tiers, banning outright those that pose “unacceptable risk”, such as the government-run social scoring indexes used in China. 

“The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorization systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases,” the European Parliament said. “Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behavior or exploits people’s vulnerabilities will also be forbidden.”

Applications identified as “high risk”, such as resume-scanning tools that rank job applicants, must adhere to specific legal requirements. Applications not listed as high risk or explicitly banned are left largely unregulated. 

There are some exemptions for law enforcement, which can use real-time biometric identification systems if “strict safeguards” are met, including limits on their use in time and geographic scope. For instance, these systems can be used to facilitate a targeted search for a missing person or to prevent a terrorist attack. 

Operators of high-risk AI systems, such as those in critical infrastructure, education, and essential private and public services, including healthcare and banking, must assess and mitigate risks, as well as maintain use logs and transparency. These operators must also ensure human oversight and data accuracy. 

Also: As AI agents spread, so do the risks, scholars say

Citizens also have the right to submit complaints about AI systems and be given explanations about decisions based on high-risk AI systems that affect their rights. 

General-purpose AI systems and the models on which they are based must adhere to certain transparency requirements, including complying with EU copyright law and publishing summaries of the content used for training. More powerful models that could pose systemic risks face additional requirements, including performing model evaluations and reporting incidents.

Furthermore, artificial or manipulated images, audio, and video content, including deepfakes, must be clearly labeled as such.

“AI applications influence what information you see online by predicting what content is engaging to you, capture and analyze data from faces to enforce laws or personalise advertisements, and are used to diagnose and treat cancer,” the EU said. “In other words, AI affects many parts of your life.”

Also: Employees input sensitive data into generative AI tools despite the risks

Brando Benifei of Italy, co-rapporteur for the EU’s internal market committee, said: “We finally have the world’s first binding law on AI to reduce risks, create opportunities, combat discrimination, and bring transparency. Unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected.” 

Benifei added that an AI Office will be set up to support companies in complying with the rules before they enter into force. 

The regulations are subject to a final check by lawyers and formal endorsement by the European Council. The AI Act will enter into force 20 days after its publication in the official journal and be fully applicable two years later, with some exceptions: bans on prohibited practices will apply six months after the entry-into-force date, codes of practice nine months after, and general-purpose AI rules, including governance, 12 months after. Obligations for high-risk systems will take effect three years after the law enters into force.

A new tool has been developed to help European small and midsize businesses (SMBs) and startups understand how they may be affected by the AI Act. The EU AI Act site notes, though, that the tool remains a “work in progress” and recommends organizations seek legal assistance. 

Also: AI is supercharging collaboration between developers and business users

“The AI Act ensures Europeans can trust what AI has to offer,” the EU said. “While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that we must address to avoid undesirable outcomes. For example, it is often not possible to find out why an AI system has made a decision or prediction and taken a particular action. So, it may become difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or in an application for a public benefit scheme.”

Among other things, the new legislation identifies high-risk applications and requires a standard assessment before such an AI system is put into service or placed on the market. 

The EU is hoping its AI Act will become a global standard, as its General Data Protection Regulation (GDPR) did.

AI could lead to human extinction without government intervention

In the United States, a new report has called for governmental intervention before AI systems develop into dangerous weapons and lead to “catastrophic” events, including human extinction. 

Released by Gladstone AI, the report was commissioned and “produced for review” by the US Department of State, though its contents do not reflect the views of the government agency, according to the authors. 

The report noted the accelerated progress of advanced AI, which has presented both opportunities and new categories of “weapons of mass destruction-like” risks. Such risks have been largely fueled by competition among AI labs to build the most advanced systems capable of achieving human-level and superhuman artificial general intelligence (AGI).

Also: Is humanity really doomed? Consider AI’s Achilles heel

These developments are driving risks that are global in scale, have deeply technical origins, and are evolving quickly, Gladstone AI said. “As a result, policymakers face a diminishing opportunity to introduce technically informed safeguards that can balance these considerations and ensure advanced AI is developed and adopted responsibly,” it said. “These safeguards are essential to address the critical national security gaps that are rapidly emerging as this technology progresses.” 

The report pointed to major AI players, including Google, OpenAI, and Microsoft, which have acknowledged the potential risks, and noted that the “prospect of inadequate security” at AI labs adds to the risk that “advanced AI systems could be stolen from their US developers and weaponized against US interests”.

These leading AI labs have also highlighted the possibility of losing control of the AI systems they are developing, which could have “potentially devastating consequences” for global security, Gladstone AI said. 

Also: I fell under the spell of an AI psychologist. Then things got a little weird

“Given the growing risk to national security posed by rapidly expanding AI capabilities from weaponization and loss of control, and particularly, the fact that the ongoing proliferation of these capabilities serves to amplify both risks — there is a clear and urgent need for the US government to intervene,” the report noted. 

It called for an action plan that includes interim safeguards to stabilize advanced AI development, such as export controls on the associated supply chain. The US government should also develop basic regulatory oversight, strengthen its capacity for later stages, and move toward a domestic legal regime for responsible AI use, with a new regulatory agency set up to provide oversight. This should later be extended to multilateral and international domains, according to the report. 

The regulatory agency should have rule-making and licensing powers to oversee AI development and deployment, Gladstone AI added. A criminal and civil liability regime should also define responsibility for AI-induced damages and determine the extent of culpability for AI accidents and weaponization across all levels of the AI supply chain. 

AI is not ready to drive monetary policy

Elsewhere, in Singapore, the central bank mulled over the collective failure of global economies to predict the persistence of inflation following the pandemic. 

Faced with questions about the effectiveness of existing models, economists have been asked whether they should look to advances in data analytics and AI technologies to improve their forecasts and models, said Edward S. Robinson, deputy managing director of economic policy and chief economist at the Monetary Authority of Singapore (MAS). 

Also: Meet Copilot for Finance, Microsoft’s latest AI chatbot – here’s how to preview it

Traditional big data and machine learning techniques are already widely used in the sector, with central banks adopting them in various areas, noted Robinson, who was speaking at the 2024 Advanced Workshop for Central Banks held earlier last week. These areas include financial supervision and macroeconomic monitoring, where AI and machine learning are used, for instance, to identify anomalous financial transactions. 
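
As a rough illustration of that kind of anomaly flagging (not MAS’s actual tooling; the library, features, and thresholds below are assumptions for the sketch), an unsupervised outlier detector such as scikit-learn’s IsolationForest can be trained on transaction features and asked to flag unusual ones:

```python
# Minimal sketch of ML-based transaction monitoring, as an illustration of the
# approach Robinson describes -- not MAS's actual tooling. Assumes scikit-learn;
# the features and contamination rate are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic features: [amount in SGD, hour of day, transactions in past 24h]
normal = rng.normal(loc=[200, 14, 5], scale=[80, 4, 2], size=(1000, 3))
suspicious = np.array([[25_000, 3, 40], [18_000, 2, 55]])  # large, odd-hour bursts
transactions = np.vstack([normal, suspicious])

# Isolation forests flag points that are easy to isolate, i.e., outliers.
model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = model.predict(transactions)  # -1 = anomalous, 1 = normal

print(f"{(flags == -1).sum()} transactions flagged for review")
```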

Current AI models, however, are still not ready as instruments for monetary policies, he said. 

“A key strength of AI and machine learning modeling approaches in predictive tasks is their ability to let the data flexibly determine the functional form of the model,” he explained. This allows the models to capture non-linearities in economic dynamics, mimicking the judgment of human experts. 
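
A toy example makes that contrast concrete. In the sketch below (synthetic data; scikit-learn assumed), a linear regression imposes a straight-line functional form and misses a non-linear relationship, while a gradient-boosted model recovers the shape from the data alone:

```python
# Toy illustration of "letting the data determine the functional form".
# Synthetic data; the "economy" here is just a sine curve plus noise.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(2 * x).ravel() + rng.normal(0, 0.1, size=500)  # non-linear relationship

# The linear model imposes a fixed straight-line form; the boosted ensemble
# of trees lets the data carve out the shape of the relationship.
linear = LinearRegression().fit(x, y)
boosted = GradientBoostingRegressor(random_state=1).fit(x, y)

for name, model in (("linear", linear), ("boosted", boosted)):
    print(name, round(mean_squared_error(y, model.predict(x)), 4))
# Expect the boosted model's error to be far lower: it captures the
# non-linearity that the fixed linear form cannot.
```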

Recent advancements in generative AI (GenAI) take this further, he said, with large language models (LLMs) trained on vast volumes of data that can generate alternative scenarios. These models can specify and simulate basic economic models, and have surpassed human experts at forecasting inflation.

Also: AI adoption and innovation will add trillions of dollars in economic value

The flexibility of LLMs, though, is also a drawback, Robinson said. Noting that these AI models can be fragile, he said their output is often sensitive to the choice of the model’s parameters or the prompts used. 
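
That sensitivity is straightforward to probe. A hypothetical harness along these lines (assuming the OpenAI Python SDK, an API key in the environment, and an illustrative model name) re-asks the same forecasting question under different phrasings and temperatures and compares the answers:

```python
# Hypothetical sketch of the fragility Robinson describes: the same forecasting
# question, varied only in phrasing and sampling temperature, can yield
# different answers. Assumes the OpenAI Python SDK with OPENAI_API_KEY set;
# the model name and prompts are illustrative, not from the source.
from openai import OpenAI

client = OpenAI()

prompts = [
    "Forecast euro-area inflation for next year as a single percentage.",
    "As a central bank economist, project next year's euro-area CPI inflation.",
]

for prompt in prompts:
    for temperature in (0.0, 1.0):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
        )
        print(temperature, prompt[:40], "->", reply.choices[0].message.content)
# Divergent outputs across phrasings and temperatures illustrate the
# sensitivity to prompts and parameters that makes LLMs hard to rely on.
```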

LLMs are also opaque, he added, making it difficult to parse the underlying drivers of the process being modeled. “Despite their impressive capabilities, current LLMs struggle with logic puzzles and mathematical operations,” he said. “[It suggests] they are not yet capable of providing credible explanations for their own predictions.”

Today’s AI models lack the clarity of structure that makes existing models useful to monetary policymakers, he added. Unable to articulate how the economy works or to discriminate between competing narratives, AI models cannot yet replace structural models at central banks, he said.

However, preparation is needed for the day GenAI evolves into a general-purpose technology (GPT), Robinson said. 




