AI Governance · AI Safety · Mechanical Engineering

Rujuta Karekar

AI governance and safety researcher. Mechanical engineer by training. Focused on military AI governance, safety frameworks, and the structural gaps in how frontier AI is constrained.

I studied mechanical engineering and worked as an engineer. That's where I learned to think about how things actually break — not how they're supposed to work on a whiteboard.

The move to AI governance came from noticing that the hardest problems in this space are structural. Who controls deployment. Where oversight degrades. What happens when systems behave outside their design parameters. Engineering doesn't hand you the answers, but it gives you the right instincts for the questions.

Activities

Stop Killer Robots — Youth Network

Advocacy for meaningful human control over autonomous weapons: a UNIDIR poster, UNSG recommendations, a blog post on digital colonialism, a youth advocacy toolkit, and a humanitarian concerns report in progress.

AI Safety Coordination Hub

Co-founder. Building a coordination space for AI safety researchers and practitioners.

Also: education volunteering across several organisations, and structured AI safety coursework (alignment, governance, AGI safety). Details on the CV.

I'm building a body of work at the intersection of engineering thinking and governance research. Interested in collaboration — and in being told where the framing is wrong.

When I'm not doing any of this, I write fiction — mostly romance, mostly for fun.

Research & Writing

Frameworks, posters, policy inputs, and longer analytical writing.

Open questions

· What governance mechanisms apply when a company's deployment policy constrains military AI use more than international law does?

· Can red lines for AI systems be made concrete and measurable — not just aspirational?

· How does loss of control actually unfold — structurally, gradually, through institutional erosion?

· Beyond job displacement: what are we losing in literacy, camaraderie, and collective capacity?

Outputs & Presentations

Framework

RED30: 30 Indicators for AI Red Lines

Apart Research Technical Governance Hackathon

30 indicators, four severity tiers, grounded in international treaties. Interactive dashboard.

Tool

AI Red Lines Tracker Dashboard

Interactive assessment tool

Evaluates AI systems against ethical and legal thresholds.

Poster

Meaningful Human Control and AI in Military Use

UNIDIR Conference

Structural requirements for human control over autonomous weapons.

Poster

Virtue Ethics Approaches to AI Safety

Tokyo AI Safety Conference 2025

Alignment-as-character vs alignment-as-constraint.

Blog

Autonomous Systems as Digital Colonialism

Stop Killer Robots

Autonomous weapons development as a reproduction of colonial power asymmetry.

Presentation

AI Red Lines Tracker and RED30 Demo

Apart Research Technical Governance Hackathon

Framework and dashboard demonstration.

Report

Youth and Humanitarian Concerns with AI Weapons

Stop Killer Robots (in progress)

Youth perspectives on humanitarian implications of AI weapons development.

Essays & Analysis

Essays coming soon.

In the meantime, read here →

Policy Interests

Three areas that organise my research. Each represents a question I consider both important and insufficiently addressed.

1. Military AI Governance & Autonomous Weapons

Who governs military AI when the companies building the models make deployment decisions that no treaty or procurement regulation accounts for? The civil-military distinction most frameworks rely on has already broken down. I work on this through research at Sentient Future and advocacy with Stop Killer Robots.

2. AI Safety Frameworks & Red Lines

Governance standards mean nothing if they can't be operationalised. RED30 is my attempt to build concrete, measurable indicators grounded in international law. Separately, I'm researching virtue ethics as an alternative alignment paradigm with Prof Mizumoto.

3. Loss of Control & Structural Risk

Loss of control is a process, not an event. Institutional drift, compounding oversight failures, gradual erosion of human authority. My engineering training — thinking about cascading failures and system degradation — directly applies. This is the focus of my work at Sentient Future.

Now

Current Focus

Last updated: April 2026

Research

· AGI loss-of-control pathways at Sentient Future
· Virtue ethics and AI safety with Prof Mizumoto
· Developing RED30 into a more robust assessment tool
· Gradual disempowerment — Effective Thesis Accelerator

Policy & Advocacy

· Youth advocacy toolkit for autonomous weapons — Stop Killer Robots
· Humanitarian concerns report on AI weapons (in progress)

Community

· AI Safety Coordination Hub — co-founder

Open to

· Research fellowships in AI governance and policy
· RA roles at think tanks, labs, or policy organisations
· Collaboration on military AI, autonomous weapons, or structural AI risk

↓ Download CV as PDF

I'm interested in working with people across governance, safety, and military AI. If there's overlap — or disagreement — reach out. → email