Researchers discover a vulnerability in an implantable medical device that cannot be patched because the manufacturer is out of business. What should they do? Assuming there is zero chance of the vulnerability being exploited even if adversaries know about it, should they disclose the vulnerability to the government and the public, thereby respecting patients’ right to be informed? Alternatively, knowing that harm from adversaries would never materialize but that knowledge of the vulnerability could itself harm patients, should they withhold disclosure, thereby respecting the principle of “beneficence” and the avoidance of harm?
Yasemin Acar, a research assistant professor in the computer science department, and her co-authors created this computer security moral dilemma for their paper, “Ethical Frameworks and Computer Security Trolley Problems: Foundations for Conversations.” While the scenario is hypothetical, the computer security research community regularly tackles ethical questions like these, and universal agreement on what an adequate moral response should be cannot always be reached.
Through the creation and analysis of trolley-problem-like computer security moral dilemmas, the researchers aimed to contribute to the discussion about how the computer security research field considers and converses about ethical questions, and to offer the community considerations grounded in philosophy. They presented their findings and continued the conversation around ethical frameworks at the 32nd USENIX Security Symposium, where they won a distinguished paper award!
“We’re really enjoying this interdisciplinary collaboration and are excited that the security research community cares deeply about discussing ethics in research,” Acar stated.
The team turned to moral philosophy as a starting point for these considerations. They argue that the processes the community currently uses to work through such challenges could be strengthened by philosophy’s understanding of the different approaches people take to ethics. The two frameworks they drew from philosophy are deontological and consequentialist ethics.
When the team evaluated the medical device dilemma with deontological ethics, they concluded the morally correct decision was to disclose the vulnerability: under this framework, people’s right to informed consent must be respected, even if the knowledge might harm them. Under a consequentialist evaluation, they concluded the morally correct decision was not to disclose the vulnerability, because telling patients would harm them both physically, since removing the device or forgoing one because of the vulnerability would shorten their life expectancy, and psychologically, since they would live in fear of a security incident.
In the real world, there is far more uncertainty, which could change these conclusions. At the end of the paper, the team reflected on that uncertainty and on how ethical frameworks can provide tools for thought and discussion. They also reflected on how shifting moral considerations earlier in the research process can help prevent scenarios like the medical device dilemma from arising in the first place; for example, a code escrow might enable the patching of devices even after the manufacturer ceases operation.
The researchers hope their findings will be broadly useful to the computer security community. In particular, they hope to have given program committees the language needed to discuss morality, helped researchers have more methodical and informed discussions about research paths when moral tensions exist, and encouraged community conversations and education around morality. Being able to have productive conversations about ethics in computer security is critical to improving how the community navigates such complex moral dilemmas.