Stuart Armstrong’s research at the Future of Humanity Institute centers on the safety and possibilities of Artificial Intelligence (AI): how to define the potential goals of AI, how to map humanity’s partially defined values into it, and the long-term potential for intelligent life across the reachable universe. He has been working with people at FHI and other organizations, such as DeepMind, to formalize AI safety desiderata in general models so that AI designers can include these safety methods in their designs. His collaboration with DeepMind on “Interruptibility” has been mentioned in over 100 media articles.
Stuart Armstrong’s past research interests include the comparison of existential risks in general, including their probability and their interactions; anthropic probability (how the fact that we exist affects our probability estimates around that key fact); decision theories that are stable under self-reflection and anthropic considerations; negotiation theory and how to deal with uncertainty about your own preferences; computational biochemistry; fast ligand screening; and parabolic geometry. His Oxford DPhil was on the holonomy of projective and conformal Cartan geometries.