Anna Gausen

AI Safety Researcher

I am a research scientist working at the intersection of AI evaluation and human–AI relationships at the UK AI Security Institute. My work focuses on building empirical and methodological foundations, including evaluation frameworks, large-scale experiments, and longitudinal usage analyses, to understand how increasingly capable AI systems influence human agency, behaviour and well-being. I hold a PhD in Computer Science from Imperial College London, focused on AI safety.

Experience

Research Scientist

UK AI Security Institute

September 2024 – Present

Research Associate

The Alan Turing Institute

September 2023 – July 2024

PhD Researcher

Microsoft Research

June 2023 – September 2023

Education

PhD in Computer Science

Imperial College London

September 2020 – September 2024

Thesis: Improving Transparency of Social Media Algorithms Using Agent-Based Modelling

Media Coverage

  • AI chatbot test for smart capabilities may be exaggerated, flawed↗

    NBC News

  • Experts find flaws in hundreds of tests to check AI safety and effectiveness↗

    The Guardian

  • One in three using AI for emotional support and conversation↗

    BBC

  • The claims about ever-smarter AI models? More vibe than science↗

    De Correspondent

Links

  • X↗
  • Google Scholar↗
  • LinkedIn↗
Selected Work

FarAI Presentation

Dec 2025 · Talk · FarAI Alignment Workshop (NeurIPS)

Publication · AISI Report

Frontier AI Trends Report

Summarised trends from evaluations of frontier AI systems at the UK AI Security Institute, highlighting rapid capability growth and emerging safety risks.

Dec 2025 · Policy

The Future of AI Safety

April 2025 · Panel · CETAS 2025

Publication · CETAS

Sociotechnical Approaches to AI Evaluation

Organised an international workshop on sociotechnical AI evaluation, examining how generative AI could amplify malicious capabilities and identifying best practices for risk-focused safety assessment across policy, security, and research communities.

Dec 2024 · Research

Paper · AI and Ethics (Springer)

An approach to sociotechnical transparency of social media algorithms

Proposed a sociotechnical transparency approach for social media recommendation systems, using an empirically validated agent-based model to reveal how algorithms prioritise content signals and to compare platform behaviour with public and policy expectations.

July 2024 · Research

Publication · CETAS Report

The Rapid Rise of Generative AI

Contributed to a comprehensive UK study on generative AI and national security, analysing how the technology amplifies existing threats (e.g., disinformation, fraud) while assessing reliability limits, misuse risks, and implications for defensive adoption.

Dec 2023 · Research

Paper · NeurIPS'25

Measuring What Matters

Reviewed 445 large language model (LLM) benchmarks, identifying gaps in how AI safety and robustness are measured and proposing practical guidelines to improve evaluation standards.

Dec 2025 · Research

Data is God

Sept 2025 · Collaboration · Mørning

Talk · AI UK 2025

Evaluating Malicious AI Capabilities

A presentation and panel on evaluating malicious AI capabilities at The Alan Turing Institute's AI UK 2025.

March 2025 · Presentation

Publication · CETAS

Evaluating Malicious Generative AI Capabilities

Reported on how generative AI could amplify malicious activities (e.g., cyberattacks, radicalisation, and weapons planning), identifying inflection points in risk and advocating a sociotechnical evaluation approach for national security and law enforcement.

July 2024 · Research

Paper · FAccT'24

A Framework for Exploring the Consequences of AI-Mediated Enterprise Knowledge Access and Identifying Risks to Workers

Developed a framework to assess worker risks in AI-mediated enterprise knowledge systems, connecting system mechanisms to harms such as commodification, appropriation, concentration of power, and marginalisation to inform responsible design and deployment.

June 2024 · Research

Colonialism and AI

Nov 2023 · Collaboration · Accessible AI
