The UK government’s AI Safety Institute will open its first overseas office in San Francisco this summer, Technology Secretary Michelle Donelan has confirmed.
Announced in a press release, the AI Safety Institute (AISI)’s first foreign office will recruit an initial team of technical staff led by a research director, complementing the Institute’s growing London headquarters, which already houses more than 30 experts.
The expansion is intended to tap into the tech talent of the San Francisco Bay Area, where many of the world’s leading tech and AI firms are based.
British AI Safety Institute to go global
The London office will continue to scale risk assessments of advanced AI systems, while the new San Francisco office is set to facilitate close collaboration between the two nations.
Both the UK and the US have committed to similar AI safety agreements, including signing the Bletchley Declaration alongside 25 other countries and the European Union at the UK-hosted summit at Bletchley Park.
Donelan commented: “[The expansion] is a pivotal moment in the UK’s ability to study both the risks and potential of AI from a global lens, strengthening our partnership with the US and paving the way for other countries to tap into our expertise as we continue to lead the world on AI safety.”
The UK and Canada have also committed to a partnership aimed at enhancing AI safety research.
The announcement coincides with the release of recent AI safety test results by the AISI. Besides highlighting some of the technical limitations of large language models, the study also noted that “All tested models remain highly vulnerable to basic ‘jailbreaks’, and some will produce harmful outputs even without dedicated attempts to circumvent safeguards.”
Ian Hogarth, AISI’s Chair, commented: “AI safety is still a very young and emerging field… Our ambition is to continue pushing the frontier of this field by developing state-of-the-art evaluations, with an emphasis on national security related risks.”
The announcement also precedes the AI Seoul Summit 2024, widely seen as a successor to the UK’s Bletchley Park summit held in November 2023. Governments, AI companies, academics, and civil society groups are expected to come together to continue discussions on AI safety.