Ever since our ancestors discovered how to make sharp stones more than two and a half million years ago, our mastery of tools has driven our success as a species. But as our tools become more powerful, we put ourselves at risk should they fall into the wrong hands, or should humanity lose control of them altogether. Worried about bioengineered viruses, unchecked climate change, or runaway artificial intelligence? These are the challenges the Centre for the Study of Existential Risk (CSER) was founded to grapple with.
My feature article for the Future of Life Institute tells the story of how CSER was established and describes its ongoing work on catastrophic risk. This area of study is growing fast to keep pace with advances in AI and bioengineering. Rows continue over the NIH's review of research into dangerous pathogens, the outcome of which will determine whether such work resumes in the US after a one-year moratorium. Scientists are upset that the formal risk assessment will be conducted by a private firm, a move that looks rushed and secretive.
So it's a hot topic, and CSER and academic institutes like it are seeking advice from industry leaders to inform their risk analysis: not to halt innovation, but to steer it in the right direction. One of the best things about covering this story was getting to interview Sir Martin Rees, one of the UK's scientific greats. Here's my selfie with him. What a star!