Dr. Shiri Dori-Hacohen is an Assistant Professor in the School of Computing at the University of Connecticut, where she leads the Reducing Information Ecosystem Threats (RIET) Lab. Her research focuses on threats to the online information ecosystem and the sociotechnical AI alignment problem, and she fosters transdisciplinary collaborations with experts spanning medicine, public health, the social sciences, and the humanities. Her research has been funded by the National Science Foundation (NSF) and Google, among others; she has served as PI or Co-PI on $7.7M in federal funding from the NSF. Her career spans both academia and industry, including Google, Facebook, and the University of Massachusetts Amherst. Dr. Dori-Hacohen is the recipient of several prestigious awards: her AI safety and ethics work won the AI Risk Analysis Award at the NeurIPS ML Safety workshop and was cited in the March 2023 AI Open Letter calling for a pause on AI development, and she was named to the 2023 D-30 Disability Impact List.
Prof. Dori-Hacohen was interviewed about her AI safety expertise after her work was cited in the March 2023 Open Letter.
"Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, also took issue with her work being mentioned in the letter. She last year co-authored a research paper arguing the widespread use of AI already posed serious risks.
Her research argued the present-day use of AI systems could influence decision-making in relation to climate change, nuclear war, and other existential threats.
She told Reuters: 'AI does not need to reach human-level intelligence to exacerbate those risks.'
'There are non-existential risks that are really, really important, but don’t receive the same kind of Hollywood-level attention.'"
Letter signed by Elon Musk demanding AI research pause sparks controversy. The Guardian. https://www.theguardian.com/technology/2023/mar/31/ai-research-pause-elon-musk-chatgpt
"[Dori-Hacohen's] research argued the present-day use of AI systems could influence decision-making in relation to climate change, nuclear war, and other existential threats. She told Reuters: 'AI does not need to reach human-level intelligence to exacerbate those risks.'"
AI experts disown Musk-backed campaign citing their research. Reuters. https://www.reuters.com/technology/ai-experts-disown-musk-backed-campaign-citing-their-research-2023-03-31/