About

SAHAS: The Science and Art of Human-AI Systems

साहस • (sāhas) — stem noun: courage, adventure, boldness, daring

Our view is that AI is a new kind of general-purpose cognitive tool: flexible, generative, and unpredictable. To guide it toward expanding human potential in the long run, we must rethink both the capabilities we build and how humans interact with them.

We treat this as both a scientific and a design problem. Scientifically, we ask: What kinds of representations, capabilities, and behaviors make AI systems fundamentally useful to human minds? From a design perspective, we ask: What interfaces and workflows help humans think more deeply, explore more broadly, or act more precisely?

We call this dual focus the science and art of human-AI systems.

Keywords: Machine Learning, Human-Computer Interaction, Human Behavior, Creativity, Generative Models, Interactive Systems, Interpretability, Steerability, Human-AI Collaboration, Augmented Intelligence, Cognitive Tools, Human-Centered AI, Human-AI Co-Evolution, Human-in-the-Loop Systems

Research Vision

We work across the stack of human-AI systems:

Diagram: the SAHAS research vision, showing three anchor points (Human-Compatible Capabilities in AI, Channels for Human-AI Communication, Designing for Human Benefit) overlaid on a cyclical relationship between humans and AI drawn with arc-shaped arrows.
1. Human-Compatible Capabilities in AI

What kinds of capabilities must AI systems develop to be useful to people?

2. Channels for Human-AI Communication

How can humans meaningfully shape the behavior of complex AI systems?

3. Designing for Human Benefit

What mechanisms lead to self-improving human-AI systems over time?