Brian Patrick Green is the director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University and teaches AI ethics in Santa Clara University’s Graduate School of Engineering. His work focuses on AI and ethics, technology ethics in corporations, the ethics of space exploration and use, the ethics of technological manipulation of humans, the ethics of mitigation of and adaptation to risky emerging technologies, and various aspects of the impact of technology and engineering on human life and society, including the relationship of technology and religion (particularly the Catholic Church).
Green is the author of the book Space Ethics (2021), co-author of Ethics in the Age of Disruptive Technologies: An Operational Roadmap (The ITEC Handbook) (2023), co-author of the Ethics in Technology Practice (2018) corporate technology ethics resources, a contributing author to Encountering AI: Ethical and Anthropological Investigations (forthcoming, 2023), co-editor of the book Religious Transhumanism and Its Critics (2022), and co-editor of a special issue of the Journal of Moral Theology on AI (2022).
Green has been a lead contributor on three World Economic Forum case studies on ethical practices at Microsoft, Salesforce, and IBM, and has worked with the Partnership on AI, the Vatican’s Dicastery for Culture and Education, and technology companies ranging from startups to the largest firms. He also supervises undergraduate fellowships at Santa Clara in technology ethics and environmental ethics.
Green has doctoral and master's degrees in ethics and social theory from the Graduate Theological Union in Berkeley, and his undergraduate degree is in genetics from the University of California, Davis. Between college and graduate school, he served for two years in the Jesuit Volunteers International teaching high school in the Marshall Islands, where he saw first-hand the devastating impacts of unethically used technologies (nuclear weapons and fossil fuels) on people and their nation.
Green has been published, interviewed, or mentioned in media including America, Ars Technica, The Atlantic, Axios, BigThink, CNN.com, FiveThirtyEight, Forbes.com, Fortune.com, KCBS, NBC Bay Area, NPR, Nature, The San Jose Mercury News, Smithsonian.com, The Wall Street Journal, WIRED Magazine, the World Economic Forum website, and WNYC.
The ethical implications of AI often fall by the wayside in favor of faster and more efficient deployment, especially as competition in the space remains high, said Brian Green, director of technology ethics. But enterprises building ethics into their own AI strategies could help prevent lasting reputational damage, both for themselves and the industry at large, Green said.
“The leadership has to decide whether they’re actually going to take ethics seriously,” said Green. “If the leadership is not on board, everything is going to look like a cost instead of a benefit.”
"...ethical AI use involves understanding AI’s role in your organization—considering who uses it, its purposes, its beneficiaries, and those it might harm."
“Bots can extract a lot of information out of you if they’re acting like one of your deceased loved ones […] as a friend, [or] a significant relation,” says Brian Green, director of technology ethics. “Who’s actually in control of the bot? Is the information private? Even if it’s private, is it being securely stored?”