This paper explores growing concerns in computer science research, and in particular computer security research, about its relationship with the committees that review human subjects research. It offers cases that review boards are likely to confront, and provides a context for appropriate consideration of such research, as issues of bots, clouds, and worms enter the discourse of human subjects review.

As students learn to protect
network services, they necessarily learn to attack network services, disguise
their identities, et cetera. While a few colleges have gone as far as running background checks on students before teaching hacking skills, some simple new tactics can motivate students to employ their skills legally and ethically. These tactics lead students to discover what is ethical for them, rather than being told what is or is not ethical.

The Declaration of Helsinki and the Belmont Report
motivated the growth of bioethics alongside traditional biomedical research.
Unfortunately, no equivalently active ethics discipline has paralleled the
growth of computer security research, where serious ethical challenges are
regularly raised by studies of increasingly sophisticated security threats
(such as worms, botnets, and phishing). In this absence, program committees and
funding agencies routinely must judge the acceptability of research studies.
Such judgments are often difficult because of a lack of community consensus on
ethical standards, disagreement about who should enforce standards and how, and
limited experience applying ethical decision-making methods. This article
motivates the need for such a community, touching on the extensive field of
ethical decision making, examining existing ethical guidelines and enforcement
mechanisms used by the computer security research community, and calling the
community to joint action to address this broad challenge.

This rapid growth has
given rise to a variety of ethical challenges for researchers seeking to combat
these threats. For example, if someone has the ability to take control of a
botnet, can they just clean up all the infected hosts? Can we deceive users, if
our goal is to better understand how they are deceived by attackers? Can we
demonstrate the need for better methods, by breaking something that people rely
on today? When one considers the implications of something like botnet clean-up
– the blind modification and possible rebooting of thousands of computers
without their owners' knowledge or consent – this complexity becomes all the
more obvious. To be effective, we must find ways to balance societal needs and
the ethical issues surrounding our efforts, lest we drift to the extremes—
becoming the very thing we deplore, or ceding the Internet to the miscreants
because we fear to act. In this paper, we endeavour to build expertise in
practical decision making, as well as to suggest a path towards development of
community standards and enforcement mechanisms governing basic and applied
computer security research.

However, they differ in that the attacks in the first case were fast-moving and aggressive (impacting availability), whereas the second involved more subtle
and concealed attacks on information and information systems (impacting
integrity and confidentiality). These complementary case studies expose a
These include collecting data from compromised computers with the owners' consent, monitoring the command-and-control (C&C) infrastructure enough to understand the attackers' activities and to enable notification of infected parties at the appropriate time, working with government authorities in multiple jurisdictions to take down the attacker's C&C infrastructure, and storing and handling data securely. In talking about notification and disclosure of information, the
researchers note, “Existing practices in this area are underdeveloped and
largely informal. In part, this reflects the fact that global cyber security is
still an embryonic field.” Unfortunately, although
the rich field of ethics offers us a way to consistently and coherently reason about
specific ethical issues, the gap between these approaches and a practical ethical framework is tremendous. In this work, we seek to be neither proscriptive nor prescriptive, as we believe it presumptuous to propose such a
framework in an area that lacks consensus and shows little active debate.
Instead, our goal here is to raise the issue of community involvement. As such,
our approach is closest to that of Deborah Johnson and Keith Miller [6] in that we
are concerned with building expertise in practical decision-making. Note that we intentionally
separate from this discussion the related, but not identical, issues surrounding law and computer security.