About
SAHAS: The Science and Art of Human-AI Systems
साहस • (sāhas) — stem noun: courage, adventure, boldness, daring
We seek to computationally expand the ways people think, discover, and create. We do this by building {systems, methods, experiments} that show how AI can support new forms of interaction & invention. We're design-informed but epistemically motivated.
We treat this as both a scientific and a design problem. Scientifically, we ask: What capabilities make an AI system useful to a human mind? Not just in terms of efficiency, but in how it might fundamentally extend users' observation and action spaces. Design-wise, we ask: What kinds of interaction paradigms might stretch human-AI systems to their conceptual limits, so that we can discover what is possible and worth investigating?
We call this dual focus the science and art of human-AI systems.
Some of our work looks like empirical machine learning. Some of it looks like human-computer interaction or design. Often it falls in between.
Research Vision
Our work falls into three primary research areas:


Human-Compatible Capabilities in AI
What kinds of capabilities must AI systems develop to be useful to people?
A prerequisite for learning something useful from our AI counterparts is that they have learned something useful themselves. We are interested in developing AI systems with functionally human-relevant capabilities, such as perception, inference, and creative generation in the kinds of rich, multimodal environments we tend to work in. We see these as preconditions for alignment with human experience.
Channels for Human-AI Communication
How can humans meaningfully shape the behavior of complex AI systems?
We study how to make the internal structure of AI models accessible and manipulable, with an eye towards insight rather than verification. This includes developing interpretable generative pipelines, tools for discovery, and human-in-the-loop interfaces. Our approach treats interpretability and steerability as design problems as well as technical ones. Ultimately, we ask how to build systems that humans can trust and collaborate with.
Designing for Human Benefit
What mechanisms lead to self-improving, growing human-AI systems over time?
Our long-term goal is to create virtuous cycles between human and machine intelligence. This spans interaction-level design problems, e.g. building direct-manipulation interfaces or agentic collaborators with human goals embedded deeply in their design, as well as meta-level questions about how we might scientifically study human-AI systems at scale.