Professor Shi Feng's research focuses on AI safety and alignment. Concentrating on future AI systems that may be more capable than humans, he aims to strengthen human oversight and inform policy decisions. To these ends, he designs new theories, algorithms, and user interfaces that augment human decision-making with and around AI systems. Most recently, he has focused on identifying the safety risks of using LLMs themselves to evaluate and monitor LLM systems.