Inspired by the potential of intelligent technologies to improve our lives, I began my M.S. in Computer Science at the University of Pennsylvania (UPenn) in 2020. There, I took CIS 522 Deep Learning, which repeatedly raised the question of how we – as developers – can design more ethical algorithms. Curious about how industry approaches the topic, I reached out to former colleagues and contacts and asked how their companies manage the ethical implications of AI. Surprisingly, no one had an answer: they weren't managing adverse outcomes at all because they didn't know how. The problem was confirmed when I dove into the research on the topic: compared to the pace of technical advancement in AI, technical AI safety is significantly understudied. Novel, complex autonomous systems are being developed without enough attention to their potential negative implications or to how developers can mitigate them. Given the increasing use of such systems throughout society, this sparked my interest in contributing to research in responsible AI.
Since then, I have found more questions than answers: Can – and should – autonomous agents learn from human beings what it means to act ethically? How can we equip these agents with an ethical intuition while avoiding complex computations? And how can we design solutions that companies can adopt easily? With my research, I aim to answer some of these questions and make responsible AI more understandable, practical, and accessible.
So far, I have mostly been exposed to the technical aspects of AI safety. However, I have come to realize that technical AI safety alone won't be sufficient: we also need effective AI governance mechanisms to ensure the safe deployment of the technology. Beyond my technical research, I'm therefore committed to improving collaboration between these two fields, so that we not only build safe AI technologies but also establish effective governance frameworks to guide their development and deployment.
Schuett, J.*, Reuel, A.*, & Carlier, A. (2023). How to Design an AI Ethics Board. arXiv preprint arXiv:2304.07249.
Lamparth, M., & Reuel, A. (2023). Analyzing and Editing Inner Mechanisms of Backdoored Language Models. arXiv preprint arXiv:2303.12461.
Reuel, A., Koren, M., Corso, A., & Kochenderfer, M. (2022). Using Adaptive Stress Testing to Identify Paths to Ethical Dilemmas in Autonomous Systems. Proceedings of the AAAI-22 Workshop on Artificial Intelligence Safety.
Reuel, A., Peralta, S., Sedoc, J., Sherman, G., & Ungar, L. (2022). Measuring the Language of Self-Disclosure across Corpora. Findings of the Association for Computational Linguistics: ACL 2022.