3 Values in technology and disclosive computer ethics

Philip Brey

3.1 Introduction

Is it possible to do an ethical study of computer systems themselves independently of their use by human beings? The theories and approaches in this chapter answer this question affirmatively and hold that such studies should have an important role in computer and information ethics. In doing so, they undermine conventional wisdom that computer ethics, and ethics generally, is concerned solely with human conduct, and they open up new directions for computer ethics, as well as for the design of computer systems.

As our starting point for this chapter, let us consider some typical examples of ethical questions that are raised in relation to computers and information technology, such as can be found throughout this book:

• Is it wrong for a system operator to disclose the content of employee email messages to employers or other third parties?

• Should individuals have the freedom to post discriminatory, degrading and defamatory messages on the Internet?

• Is it wrong for companies to use data-mining techniques to generate consumer profiles based on purchasing behaviour, and should they be allowed to do so?

• Should governments design policies to overcome the digital divide between skilled and unskilled computer users?

As these examples show, ethical questions regarding information and communication technology typically focus on the morality of particular ways of using the technology or the morally right way to regulate such uses.

Taken for granted in such questions, however, are the computer systems and software that are used. Could there, however, not also be valid ethical questions that concern the technology itself? Could there be an ethics of computer systems separate from the ethics of using computer systems? The embedded values approach in computer ethics, formulated initially by Helen Nissenbaum (1998; Flanagan, Howe and Nissenbaum 2008) and since adopted by many authors in the field, answers these questions affirmatively, and aims to develop a theory and methodology for moral reflection on computer systems themselves, independently of particular ways of using them.

The embedded values approach holds that computer systems and software are not morally neutral and that it is possible to identify tendencies in them to promote or demote particular moral values and norms. It holds, for example, that computer programs can be supportive of privacy, freedom of information, or property rights or, instead, work against the realization of these values. Such tendencies in computer systems are called ‘embedded’, ‘embodied’ or ‘built-in’ moral values or norms. They are built-in in the sense that they can be identified and studied largely or wholly independently of actual uses of the system, although they manifest themselves in a variety of uses of the system. The embedded values approach aims to identify such tendencies and to morally evaluate them. By claiming that computer systems may incorporate and manifest values, the embedded values approach is not claiming that computer systems engage in moral actions, that they are morally praiseworthy or blameworthy, or that they bear moral responsibility (Johnson 2006). It is claiming, however, that the design and operation of computer systems have moral consequences and therefore should be subjected to ethical analysis.

If the embedded values approach is right, then the scope of computer ethics is broadened considerably. Computer ethics should not just study ethical issues in the use of computer technology, but also in the technology itself. And if computer systems and software are indeed value-laden, then many new ethical issues emerge for their design. Moreover, it suggests that design practices and methodologies, particularly those in information systems design and software engineering, can be changed to include the consideration of embedded values.

In the following section, Section 3.2, the case will be made for the embedded values approach, and some common objections against it will be discussed. Section 3.3 will then turn to an exposition of a particular approach in computer ethics that incorporates the embedded values approach, disclosive computer ethics, proposed by the author (Brey 2000). Disclosive computer ethics is an attempt to incorporate the notion of embedded values into a comprehensive approach to computer ethics. Section 3.4 considers value-sensitive design (VSD), an approach to design developed by computer scientist Batya Friedman and her associates, which incorporates notions of the embedded values approach (Friedman, Kahn and Borning 2006). The VSD approach is not an approach within ethics but within computer science, specifically within information systems design and software engineering. It aims to account for values in a comprehensive manner in the design process, and makes use of insights of the embedded values approach for this purpose. In a concluding section, the state of the art in these different approaches is evaluated and some suggestions are made for future research.

3.2 How technology embodies values

The existing literature on embedded values in computer technology is still young, and has perhaps focused more on case studies and applications for design than on theoretical underpinnings. The idea that technology embodies values has been inspired by work in the interdisciplinary field of science and technology studies, which investigates the development of science and technology and their interaction with society. Authors in this field agree that technology is not neutral but shaped by society. Some have argued, specifically, that technological artefacts (products or systems) issue constraints on the world surrounding them (Latour 1992) and that they can harbour political consequences (Winner 1980). Authors in the embedded values approach have taken these ideas and applied them to ethics, arguing that technological artefacts are not morally neutral but value-laden. However, what it means for an artefact to have an embedded value remains somewhat vague.

In this section a more precise description of what it means for a technological artefact to have embedded values is articulated and defended. The position taken here is in line with existing accounts of embedded values, although their authors need not agree with all of the claims made in this section. The idea of embedded values is best understood as a claim that technological artefacts (and in particular computer systems and software) have built-in tendencies to promote or demote the realization of particular values. Defined in this way, a built-in value is a special sort of built-in consequence. In this section a defence of the thesis that technological artefacts are capable of having built-in consequences is first discussed. Then tendencies for the promotion of values are identified as special kinds of built-in consequences of technological artefacts. The section is concluded by a brief review of the literature on values in information technology, and a discussion of how values come to be embedded in technology.

3.2.1 Consequences built into technology

The embedded values approach promotes the idea that technology can have built-in tendencies to promote or demote particular values. This idea, however, runs counter to a frequently held belief about technology, the idea that technology itself is neutral with respect to consequences. Let us call this the neutrality thesis. The neutrality thesis holds that there are no consequences that are inherent to technological artefacts, but rather that artefacts can always be used in a variety of different ways, and that each of these uses comes with its own consequences. For example, a hammer can be used to hammer nails, but also to break objects, to kill someone, to flatten dough, to keep a pile of paper in place or to conduct electricity. These uses have radically different effects on the world, and it is difficult to point to any single effect that is constant in all of them.

The hammer example, and other examples like it (a similar example could be given for a laptop), suggest strongly that the neutrality thesis is true. If so, this would have important consequences for an ethics of technology. It would follow that ethics should not pay much attention to technological artefacts themselves, because they in themselves do not ‘do’ anything. Rather, ethics should focus on their usage alone.

This conclusion holds only if one assumes that the notion of embedded values requires that there are consequences that manifest themselves in each and every use of an artefact. But this strong claim need not be made. A weaker claim is that artefacts may have built-in consequences in that there are recurring consequences that manifest themselves in a wide range of uses of the artefact, though not in all uses. If such recurring consequences can be associated with technological artefacts, this may be sufficient to falsify the strong claim of the neutrality thesis that each use of a technological artefact comes with its own consequences. And a good case can be made that at least some artefacts can be associated with such recurring consequences.

An ordinary gas-engine automobile, for example, can evidently be used in many different ways: for commuter traffic, for leisure driving, to taxi passengers or cargo, for hit jobs, for auto racing, but also as a museum piece, as a temporary shelter for the rain or as a barricade. Whereas there is no single consequence that results from all of these uses, there are several consequences that result from a large number of these uses: in all but the last three uses, gasoline is used up, greenhouse gases and other pollutants are being released, noise is being generated, and at least one person (the driver) is being moved around at high speeds. These uses, moreover, have something in common: they are all central uses of automobiles, in that they are accepted uses that are frequent in society and that account for the continued production and usage of automobiles. The other three uses are peripheral in that they are less dominant uses that depend for their continued existence on these central uses, because it is these central uses that account for the continued production and consumption of automobiles. Central uses of the automobile make use of its capacity for driving, and when it is used in this capacity, certain consequences are very likely to occur. Generalizing from this example, a case can be made that technological artefacts are capable of having built-in consequences in the sense that particular consequences may manifest themselves in all of the central uses of the artefact.

It may be objected that, even with this restriction, the idea of built-in consequences employs a too deterministic conception of technology. It suggests that, when technological artefacts are used, particular consequences are necessary or unavoidable. In reality, there are usually ways to avoid particular consequences. For example, a gas-fuelled automobile need not emit greenhouse gases into the atmosphere if a greenbox device is attached to it, which captures carbon dioxide and nitrous oxide and converts them into bio-oil. To avoid this objection, it may be claimed that the notion of built-in consequences does not refer to necessary, unavoidable consequences but rather to strong tendencies towards certain consequences. The claim is that these consequences are normally realized whenever the technology is used, unless it is used in a context that is highly unusual or if extraordinary steps are taken to avoid particular consequences. Built-in consequences are therefore never absolute but always relative to a set of typical uses and contexts of use, outside of which the consequences may not occur.

Do many artefacts have built-in consequences in the way defined above? The extent to which technological artefacts have built-in consequences can be correlated with two factors: the extent to which they are capable of exerting force or behaviour autonomously, and the extent to which they are embedded in a fixed context of use. As for the first parameter, some artefacts seem to depend strongly on users for their consequences, whereas others seem to be able to generate effects on their own. Mechanical and electrical devices, in particular, are capable of displaying all kinds of behaviours on their own, ranging from simple processes, like the consumption of fuel or the emission of steam, to complex actions, like those of robots and artificial agents. Elements of infrastructure, like buildings, bridges, canals and railway tracks, may not behave autonomously but, by their mere presence, they do impose significant constraints on their environment, including the actions and movements of people, and in this way engender their own consequences. Artefacts that are not mechanical, electrical or infrastructural, like simple hand-held tools and utensils, tend to have fewer consequences of their own and their consequences tend to be more dependent on the uses to which they are put.

As for the second parameter, it is easier to attribute built-in consequences to technological artefacts that are placed in a fixed context of use than to those that are used in many different contexts. Adapting an example by Winner (1980), an overpass that is 180 cm (6 ft) high has as a generic built-in consequence that it prevents traffic from going through that is more than 180 cm high. But when such an overpass is built over the main access road to an island from a city in which automobiles are generally less than 180 cm high and buses are taller, then it acquires a more specific built-in consequence, which is that buses are being prevented from going to the island whereas automobiles do have access. When, in addition, it is the case that buses are the primary means of transportation for black citizens, whereas most white citizens own automobiles, then the more specific consequence of the overpass is that it allows easy access to the island for one racial group, while denying it to another. When the context of use of an artefact is relatively fixed, the immediate, physical consequences associated with a technology can often be translated into social consequences because there are reliable correlations between the physical and the social (for example between prevention of access to buses and prevention of access to blacks) that are present (Latour 1992).

3.2.2 From consequences to values

Let us now turn from built-in consequences to embedded values. An embedded value is a special kind of built-in consequence. It has already been explained how technological artefacts can have built-in consequences. What needs to be explained now is how some of these built-in consequences can be associated with values. To be able to make this case, let us first consider what a value is.

Although the notion of a value remains somewhat ambiguous in philosophy, some agreements seem to have emerged (Frankena 1973). First, philosophers tend to agree that values depend on valuation. Valuation is the act of valuing something, or finding it valuable, and to find something valuable is to find it good in some way. People find all kinds of things valuable, both abstract and concrete, real and unreal, general and specific. Those things that people find valuable that are both ideal and general, like justice and generosity, are called values, with disvalues being those general qualities that are considered to be bad or evil, like injustice and avarice. Values, then, correspond to idealized qualities or conditions in the world that people find good. For example, the value of justice corresponds to some idealized, general condition of the world in which all persons are treated fairly and rewarded rightly.

To have a value is to want it to be realized. A value is realized if the ideal conditions defined by it are matched by conditions in the actual world. For example, the value of freedom is fully realized if everyone in the world is completely free. Often, though, a full realization of the ideal conditions expressed in a value is not possible. It may not be possible for everyone to be completely free, as there are always at least some constraints and limitations that keep people from a state of complete freedom. Therefore, values can generally be realized only to a degree.

The use of a technological artefact may result in the partial realization of a value. For instance, the use of software that has been designed not to make one’s personal information accessible to others helps to realize the value of privacy. The use of an artefact may also hinder the realization of a value or promote the realization of a disvalue. For instance, the use of software that contains spyware or otherwise leaks personal data to third parties harms the realization of the value of privacy. Technological artefacts are hence capable of either promoting or harming the realization of values when they are used. When this occurs systematically, in all of its central uses, we may say that the artefact embodies a special kind of built-in consequence, which is a built-in tendency to promote or harm the realization of a value. Such a built-in tendency may be called, in short, an embedded value or disvalue. For example, spyware-laden software has a tendency to harm privacy in all of its typical uses, and may therefore be claimed to have harm to privacy as an embedded disvalue.

Embedded values approaches often focus on moral values. Moral values are ideals about how people ought to behave in relation to others and themselves and how society should be organized so as to promote the right course of action. Examples of moral values are justice, freedom, privacy and honesty. Next to moral values, there are different kinds of non-moral values, for example, aesthetic, economic, (non-moral) social and personal values, such as beauty, efficiency, social harmony and friendliness.

Values should be distinguished from norms, which can also be embedded in technology. Norms are rules that prescribe which kinds of actions or states of affairs are forbidden, obligatory or allowed. They are often based on values that provide a rationale for them. Moral norms prescribe which actions are forbidden, obligatory or allowed from the point of view of morality. Examples of moral norms are ‘do not steal’ and ‘personal information should not be provided to third parties unless the bearer has consented to such distribution’. Examples of non-moral norms are ‘pedestrians should walk on the right side of the street’ and ‘fish products should not contain more than 10 mg histamines per 100 grams’. Just as technological artefacts can promote the realization of values, they can also promote the enforcement of norms. Embedded norms are a special kind of built-in consequence. They are tendencies to effectuate norms by bringing it about that the environment behaves or is organized according to the norm. For example, web browsers can be set not to accept cookies from websites, thereby enforcing the norm that websites should not collect information about their users. By enforcing a norm, artefacts thereby also promote the corresponding value, if any (e.g., privacy in the example).
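
To make the browser example concrete, the following sketch (in Python, purely for illustration; it is not taken from the chapter, and real browsers implement this differently) shows a client configured with a cookie policy that refuses all cookies, so that the no-tracking norm is enforced by the technology itself rather than by the conduct of its user.

import http.cookiejar
import urllib.request

class RefuseAllCookies(http.cookiejar.DefaultCookiePolicy):
    """A cookie policy that rejects every cookie a website tries to set."""
    def set_ok(self, cookie, request):
        return False   # never store cookies sent by a server
    def return_ok(self, cookie, request):
        return False   # never send stored cookies back to a server

jar = http.cookiejar.CookieJar(policy=RefuseAllCookies())
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
# Every request made through this opener embodies the norm that websites
# should not collect information about their users via cookies.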

So far we have seen that technological artefacts may have embedded values understood as special kinds of built-in consequences. Because this conception relates values to causal capacities of artefacts to affect their environment, it may be called the causalist conception of embedded values. In the literature on embedded values, other conceptions have been presented as well. Notably, Flanagan, Howe and Nissenbaum (2008) and Johnson (1997) discuss what they call an expressive conception of embedded values. Artefacts may be said to be expressive of values in that they incorporate or contain symbolic meanings that refer to values. For example, a particular brand of computer may symbolize or represent status and success, or the representation of characters and events in a computer game may reveal racial prejudices or patriarchal values. Expressive embedded values in artefacts represent the values of designers or users of the artefact. This does not imply, however, that they also function to realize these values. It is conceivable that the values expressed in artefacts cause people to adopt these values and thereby contribute to their own realization. Whether this happens frequently remains an open question. In any case, whereas the expressive conception of embedded values merits further philosophical reflection, the remainder of this chapter will be focused on the causalist conception.

3.2.3 Values in information technology

The embedded values approach within computer ethics studies embedded values in computer systems and software and their emergence, and provides moral evaluations of them. The study of embedded values in Information and Communication Technology (ICT) began with a seminal paper by Batya Friedman and Helen Nissenbaum in which they consider bias in computer systems (Friedman and Nissenbaum 1996). A biased computer system or program is defined by them as one that systematically and unfairly discriminates against certain individuals or groups, who may be users or other stakeholders of the system. Examples include educational programs that have much more appeal to boys than to girls, loan approval software that gives negative recommendations for loans to individuals with ethnic surnames, and databases for matching organ donors with potential transplant recipients that systematically favour individuals retrieved and displayed immediately on the first screen over individuals displayed on later screens. Building on their work, I have distinguished user biases that discriminate against (groups of) users of an information system, and information biases that discriminate against stakeholders represented by the system (Brey 1998). I have discussed various kinds of user bias, such as user exclusion and the selective penalization of users, as well as different kinds of information bias, including bias in information content, data selection, categorization, search and matching algorithms and the display of information.
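
To illustrate how a display-order bias of the kind found in the donor-matching example can arise and be countered, consider the following toy sketch; the names, fields and ordering criteria are invented for illustration and are not drawn from Friedman and Nissenbaum’s study.

import random

candidates = [
    {"name": "Aarons", "compatibility": 0.71, "days_waiting": 320},
    {"name": "Zhou",   "compatibility": 0.93, "days_waiting": 610},
    {"name": "Mbeki",  "compatibility": 0.93, "days_waiting": 540},
]

# Biased presentation: ordering by an ethically irrelevant attribute
# (here, surname) systematically favours whoever happens to sort first.
biased_order = sorted(candidates, key=lambda c: c["name"])

# Less biased presentation: order by medically and ethically relevant
# criteria, breaking remaining ties randomly rather than by position
# in the database or by name.
fair_order = sorted(
    candidates,
    key=lambda c: (-c["compatibility"], -c["days_waiting"], random.random()),
)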

After their study of bias in computer systems, Friedman and Nissenbaum went on to consider consequences of software agents for the autonomy of users. Software agents are small programs that act on behalf of the user to perform tasks. Friedman and Nissenbaum (1997) argue that software agents can undermine user autonomy in various ways – for example by having only limited capabilities to perform wanted tasks or by not making relevant information available to the user – and argue that it is important that software agents are designed so as to enhance user autonomy. The issue of user autonomy is also taken up in Brey (1998, 1999c), in which I argue that computer systems can undermine autonomy by supporting monitoring by third parties, by imposing their own operational logic on the user, thus limiting creativity and choice, or by making users dependent on systems operators or others for maintenance or access to systems functions.

Deborah Johnson (1997) considers the claim that the Internet is an inherently democratic technology. Some have claimed that the Internet, because of its distributed and nonhierarchical nature, promotes democratic processes by empowering individuals and stimulating democratic dialogue and decision-making (see Chapter 10). Johnson subscribes to this democratic potential. She cautions, however, that these democratic tendencies may be limited if the Internet is subjected to filtering systems that only give a small group of individuals control over the flow of information on the Internet. She hence identifies both democratic and undemocratic tendencies in the technology that may become dominant depending on future use and development.

Other studies, within the embedded values approach, have focused on specific values, such as privacy, trust, community, moral accountability and informed consent, or on specific technologies. Introna and Nissenbaum (2000) consider biases in the algorithms of search engines, which, they argue, favour websites with a popular and broad subject matter over specialized sites, and the powerful over the less powerful. Introna (2007) argues that existing plagiarism detection software creates an artificial distinction between alleged plagiarists and non-plagiarists, which is unfair. Introna (2005) considers values embedded in facial recognition systems. Camp (1999) analyses the implications of Internet protocols for democracy. Flanagan, Howe and Nissenbaum (2005) study values in computer games, and Brey (1999b, 2008) studies them in computer games, computer simulations and virtual reality applications. Agre and Mailloux (1997) reveal the implications for privacy of Intelligent Vehicle-Highway Systems, Tavani (1999) analyses the implications of data-mining techniques for privacy and Fleischmann (2007) considers values embedded in digital libraries.
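
The kind of search-engine bias that Introna and Nissenbaum describe can be illustrated with a toy ranking function. This is a hypothetical sketch only; the field names and weights are invented, and it is not the algorithm of any actual search engine. When popularity is weighted heavily, a highly relevant but specialized site can be pushed below a broadly popular one.

def rank(results, popularity_weight=0.8):
    """Order search results by a blend of topical relevance and popularity.

    Each result is assumed to be a dict with a 'relevance' score and a
    normalized 'inbound_links' count, both between 0 and 1 (invented fields).
    """
    def score(r):
        return ((1 - popularity_weight) * r["relevance"]
                + popularity_weight * r["inbound_links"])
    return sorted(results, key=score, reverse=True)

results = [
    {"url": "specialist-archive.example", "relevance": 0.95, "inbound_links": 0.10},
    {"url": "popular-portal.example",     "relevance": 0.40, "inbound_links": 0.90},
]
# With popularity_weight=0.8 the popular portal outranks the more relevant
# specialist site; lowering the weight reverses the ordering.
print(rank(results))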

3.2.4 The emergence of values in information technology

What has not been discussed so far is how technological artefacts and systems acquire embedded values. This issue has been ably taken up by Friedman and Nissenbaum (1996). They analyse the different ways in which biases (injustices) can emerge in computer systems. Although their focus is on biases, their analysis can easily be generalized to values in general. Biases, they argue, can have three different types of origins. Preexisting biases arise from values and attitudes that exist prior to the design of a system. They can either be individual, resulting from the values of those who have a significant input into the design of the systems, or societal, resulting from organizations, institutions or the general culture that constitute the context in which the system is developed. Examples are racial biases of designers that become embedded in loan approval software, and overall gender biases in society that lead to the development of computer games that are more appealing to boys than to girls. Friedman and Nissenbaum note that preexisting biases can be embedded in systems intentionally, through conscious efforts of individuals or institutions, or unintentionally and unconsciously.

A second type is technical bias, which arises from technical constraints or considerations. The design of computer systems includes all kinds of technical limitations and assumptions that are perhaps not value-laden in themselves but that could result in value-laden designs, for example because limited screen sizes cannot display all results of a search process, thereby privileging those results that are displayed first, or because computer algorithms or models contain formalized, simplified representations of reality that introduce biases or limit the autonomy of users, or because software engineering techniques do not allow for adequate security, leading to systematic breaches of privacy. A third and final type is emergent bias, which arises when the social context in which the system is used is not the one intended by its designers. In the new context, the system may not adequately support the capabilities, values or interests of some user groups or the interests of other stakeholders. For example, an ATM that relies heavily on written instructions may be installed in a neighbourhood with a predominantly illiterate population.

Friedman and Nissenbaum’s classification can easily be extended to embedded values in general. Embedded values may hence be identified as preexisting, technical or emergent. What this classification shows is that embedded values are not necessarily a reflection of the values of designers. When they are, moreover, their embedding often has not been intentional. However, their embedding can be an intentional act. If designers are aware of the way in which values are embedded into artefacts, and if they can sufficiently anticipate future uses of an artefact and its future context(s) of use, then they are in a position to intentionally design artefacts to support particular values. Several approaches have been proposed in recent years that aim to make considerations of value part of the design process. In Section 3.4, the most influential of these approaches, called value-sensitive design, is discussed. But first, let us consider a more philosophical approach that also adopts the notion of embedded values.

3.3 Disclosive computer ethics

The approach of disclosive computer ethics (Brey 2000, 1999a) intends to make the embedded values approach part of a comprehensive approach to computer ethics. It is widely accepted that the aim of computer ethics is to morally evaluate practices that involve computer technology and to devise ethical policies for these practices. The practices in question are activities of designing, using and managing computer technology by individuals, groups or organizations. Some of these practices are already widely recognized in society as morally controversial. For example, it is widely recognized that copying patented software and filtering Internet information are morally controversial practices. Such practices may be called morally transparent because the practice is known and it is roughly understood what moral values are at stake in relation to it.

In other computer-related practices, the moral issues that are involved may not be sufficiently recognized. This may be the case because the practices themselves are not well known beyond a circle of specialists, or because they are well known but not recognized as morally charged because they have a false appearance of moral neutrality. Practices of this type may be called morally opaque, meaning that it is not generally understood that the practice raises ethical questions or what these questions may be. For example, the practice of browser tracking is morally opaque because it is not well known or well understood by many people, and the practice of search engine use is morally opaque because, although the practice is well known, it is not well known that the search algorithms involved in the practice contain biases and raise ethical questions.

Computer ethics has mostly focused on morally transparent practices, and specifically on practices of using computer systems. Such approaches may be called mainstream computer ethics. In mainstream computer ethics, a typical study begins by identifying a morally controversial practice, like software theft, hacking, electronic monitoring or Internet pornography. Next, the practice is described and analysed in descriptive terms, and finally, moral principles and judgements are applied to it and moral deliberation takes place, resulting in a moral evaluation of the practice and, possibly, a set of policy recommendations. As Jim Moor has summed up this approach, ‘A typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used’ (1985, p. 266).

The approach of disclosive computer ethics focuses instead on morally opaque practices. Many practices involving computer technology are morally opaque because they include operations of technological systems that are very complex and difficult to understand for laypersons and that are often hidden from view for the average user. Additionally, practices are often morally opaque because they involve distant actions over computer networks by system operators, providers, website owners and hackers and remain hidden from view from users and from the public at large. The aim of disclosive ethics is to identify such morally opaque practices, describe and analyse them, so as to bring them into view, and to identify and reflect on any problematic moral features in them. Although mainstream and disclosive computer ethics are different approaches, they are not rival approaches but are rather complementary. They are also not completely separable, because the moral opacity of practices is always a matter of degree, and because a complex practice may include both morally transparent and opaque dimensions, and thus require both approaches.

Many computer-related practices that are morally opaque are so because they depend on operations of computer systems that are value-laden without it being known. Many morally opaque practices, though not all, are the result of undisclosed embedded values and norms in computer technology. A large part of the work in disclosive computer ethics, therefore, focuses on the identification and moral evaluation of such embedded values.

3.3.1 Methodology: multi-disciplinary and multi-level

Research typically focuses on an (alleged) morally opaque practice (e.g., plagiarism detection) and optionally on a morally opaque computer system or software program involved in this practice (e.g., plagiarism detection software). The aim of the investigation usually is to reveal hidden morally problematic features in the practice and to provide ethical reflections on these features, optionally resulting in specific moral judgements or policy recommendations. To achieve this aim, research should include three different kinds of research activities, which take place at different levels of analysis. First, there is the disclosure level. At this level, morally opaque practices and computer systems are analysed from the point of view of one or more relevant moral values, like privacy or justice. It is investigated whether and how the practice or system tends to promote or demote the relevant value. At this point, very little moral theory is introduced into the analysis, and only a coarse definition of the value in question is used, which can be refined later on in the research.

Second, there is the theoretical level at which moral theory is developed and refined. As Jim Moor (1985) has pointed out, the changing settings and practices that emerge with new computer technology may yield new values, as well as require the reconsideration of old values. There may also be new moral dilemmas because of conflicting values that suddenly clash when brought together in new settings and practices. It may then be found that existing moral theory has not adequately theorized these values and value conflicts. Privacy, for example, is now recognized by many computer ethicists as requiring more attention than it has previously received in moral theory. In part, this is due to reconceptualizations of the private and public sphere, brought about by the use of computer technology, which has resulted in inadequacies in existing moral theory about privacy. It is part of the task of computer ethics to further develop and modify existing moral theory when, as in the case of privacy, existing theory is insufficient or inadequate in light of new demands generated by new practices involving computer technology.

Third, there is the application level, in which, in varying degrees of specificity and concreteness, moral theory is applied to analyses that are the outcome of research at the disclosure level. For example, the question of what amount of protection should be granted to software developers against the copying of their programs may be answered by applying consequentialist or natural law theories of property; and the question of what actions governments should take in helping citizens have access to computers may be answered by applying Rawls’ principles of justice. The application level is where moral deliberation takes place. Usually, this involves the joint consideration of moral theory, moral judgements or intuitions and background facts or theories, rather than a slavish application of preexisting moral rules.

Disclosive ethics should not just be multi-level; ideally, it should also be a multi-disciplinary endeavour, involving ethicists, computer scientists and social scientists. The disclosure level, particularly, is best approached in a multi-disciplinary fashion because research at this level often requires considerable knowledge of the technological aspects of the system or practice that is studied and may also require expertise in social science for the analysis of the way in which the functioning of systems is dependent on human actions, rules and institutions. Ideally, research at the disclosure level, and perhaps also at the application level, is best approached as a cooperative venture between computer scientists, social scientists and philosophers. If this cannot be attained, it should at least be carried out by researchers with an adequate interdisciplinary background.

3.3.2 Focus on public values

The importance of disclosive computer ethics is that it makes transparent moral features of practices and technologies that would otherwise remain hidden, thus making them available for ethical analysis and moral decision-making. In this way, it supplements mainstream computer ethics, which runs the risk of limiting itself to the more obvious ethical dilemmas in computing. An additional benefit is that it can point to novel solutions to moral dilemmas in mainstream computer ethics. Mainstream approaches tend to seek solutions for moral dilemmas through norms and policies that regulate usage. But some of these moral dilemmas can also be solved by redesigning, replacing or removing the technology that is used, or by modifying problematic background practices that condition usage. Disclosive ethics can bring these options into view. It thus reveals a broader arena for moral action, in which different parties responsible for the design, adoption, use and regulation of computer technology share responsibility for the moral consequences of using it, and in which the technology itself is made part of the equation.

In Brey (2000) I have proposed a set of values that disclosive computer ethics should focus on. This list included justice (fairness, non-discrimination), freedom (of speech, of assembly), autonomy, privacy and democracy. Many other values could be added, like trust, community, human dignity and moral accountability. These are all public values, which are moral and social values that are widely accepted in society. An emphasis on public values makes it more likely that analyses in disclosive ethics can find acceptance in society and that they stimulate better policies, design practices or practices of using technology. Of course, analysts will still have disagreements on the proper definition or operationalization of public values and the proper way of balancing them against each other and against other constraints like cost and usability, but such disagreements are inherent to ethics.

The choice of a particular set of values prior to analysis has been criticized by Introna (2005), who argues that disclosive computer ethics should rather focus on the revealing of hidden politics, interests and values in technological systems and practices, without prioritizing which values ought to be realized. This suggests a more descriptive approach to disclosive computer ethics, as opposed to the more normative approach proposed in Brey (2000).

3.4 Value-sensitive design

The idea that computer systems harbour values has stimulated research into the question of how considerations of value can be made part of the design process (Flanagan, Howe and Nissenbaum 2008). Various authors have made proposals for incorporating considerations of value into design methodology. Value-sensitive design (VSD) is the most elaborate and influential of these approaches. VSD has been developed by computer scientist Batya Friedman and her associates (Friedman, Kahn and Borning 2006, Friedman and Kahn 2003) and is an approach to the design of computer systems and software that aims to account for and incorporate human values in a comprehensive manner throughout the design process. The theoretical foundation of value-sensitive design is provided in part by the embedded values approach, although it is emphasized that values can result from both design and the social context in which the technology is used, and usually emerge in the interaction between the two.

The VSD approach proposes investigations into values, designs, contexts of use and stakeholders with the aim of designing systems that incorporate and balance the values of different stakeholders. It aims to offer a set of methods, tools and procedures for designers by which they can systematically account for values in the design process. VSD builds on previous work in various fields, including computer ethics, social informatics (the study of information and communication tools in cultural and institutional contexts), computer-supported cooperative work (the study of how interdependent group work can be supported by means of computer systems) and participatory design (an approach to design that attempts to actively involve users in the design process to help ensure that products meet their needs and are usable). The focus of VSD is on ‘human values with ethical import’, such as privacy, freedom from bias, autonomy, trust, accountability, identity, universal usability, ownership and human welfare (Friedman and Kahn 2003, p. 1187).

VSD places much emphasis on the values and needs of stakeholders. Stakeholders are persons, groups or organizations whose interests can be affected by the use of an artefact. A distinction is made between direct and indirect stakeholders. Direct stakeholders are parties who interact directly with the computer system or its output. That is, they function in some way as users of the system. Indirect stakeholders include all other parties who are affected by the system. The VSD approach proposes that the values and interests of stakeholders are carefully balanced against each other in the design process. At the same time, it wants to maintain that the human and moral values it considers have standing independently of whether a particular person or group upholds them (Friedman and Kahn 2003, p. 1186). This stance poses a possible dilemma for the VSD approach: how to proceed if the values of stakeholders are at odds with supposedly universal moral values that the analyst independently brings to the table? This problem has perhaps not been sufficiently addressed in current work in VSD. In practice, fortunately, there will often be at least one stakeholder who has an interest in upholding a particular moral value that appears to be at stake. Still, this fact does not provide a principled solution for this problem.

3.4.1 VSD methodology

VSD often focuses on a technological system that is to be designed and investigates how human values can be accounted for in its design. However, designers may also focus on a particular value and explore its implications for the design of various systems, or on a particular context of use, and explore values and technologies that may play a role in it. With one of these three aims in mind, VSD then utilizes a tripartite methodology that involves three kinds of investigations: conceptual, empirical and technical. These investigations are undertaken congruently and are ultimately integrated with each other within the context of a particular case study.

Conceptual investigations aim to conceptualize and describe the values implicated in a design, as well as the stakeholders affected by it, and consider the appropriate trade-off between implicated values, including both moral and non-moral values. Empirical investigations focus on the human context in which the technological artefact is to be situated, so as to better anticipate this context and to evaluate the success of particular designs. They include empirical studies of human behaviour, physiology, attitudes, values and needs of users and other stakeholders, and may also consider the organizational context in which the technology is used. Empirical investigations are important in order to assess what the values and needs of stakeholders are, how technological artefacts can be expected to be used, and how they can be expected to affect users and other stakeholders. Technical investigations, finally, study how properties of technological artefacts support or hinder human values and how computer systems and software may be designed proactively in order to support specific values that have been found important in the conceptual investigation.

Friedman, Kahn and Borning (2006) propose a series of steps that may be taken in VSD case studies. They are, respectively, the identification of the topic of investigation (a technological system, value or context of use), the identification of direct and indirect stakeholders, the identification of benefits and harms for each group, the mapping of these benefits and harms onto corresponding values, the conduct of a conceptual investigation of key values, the identification of potential value conflicts and the proposal of solutions for them, and the integration of resulting value considerations with the larger objectives of the organization(s) that have a stake in the design.
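
Purely as an illustration of how these steps structure a case study, they might be recorded as a simple checklist; the field names below are invented for this sketch and are not part of the published VSD methodology.

from dataclasses import dataclass, field

@dataclass
class VSDCaseStudy:
    # Step 1: the topic of investigation (a system, a value or a context of use)
    topic: str
    # Step 2: direct and indirect stakeholders
    direct_stakeholders: list = field(default_factory=list)
    indirect_stakeholders: list = field(default_factory=list)
    # Steps 3-4: benefits and harms per group, mapped onto corresponding values
    benefits_and_harms: dict = field(default_factory=dict)
    implicated_values: dict = field(default_factory=dict)
    # Steps 5-7: conceptual investigation of key values, value conflicts and
    # proposed resolutions, and integration with organizational objectives
    key_value_definitions: dict = field(default_factory=dict)
    value_conflicts: list = field(default_factory=list)
    organizational_objectives: list = field(default_factory=list)

study = VSDCaseStudy(topic="cookie handling in a web browser")
study.direct_stakeholders.append("end-users of the browser")
study.indirect_stakeholders.append("websites that rely on cookies for their services")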

3.4.2 VSD in practice

A substantial number of case studies within the VSD framework have been completed, covering a broad range of technologies and values (see Friedman and Freier 2005 for references). To see how VSD is brought into practice, two case studies will now be described in brief.

In one study, Friedman, Howe and Felten (2002) analyse how the value of informed consent (in relation to online interactions of end-users) might be better implemented in the Mozilla browser, which is an open-source browser. They first undertook an initial conceptual investigation of the notion of informed consent, outlining real-world conditions that would have to be met for it, like disclosure of benefits and risks, voluntariness of choice and clear communication in a language understood by the user. They then considered the extent to which features of existing browsers already supported these conditions. Next, they identified conditions that were supported insufficiently by these features, and defined new design goals to attain this support. For example, they found that users should have a better global understanding of cookie uses and benefits and harms, and should have a better ability to manage cookies with minimal distraction. Finally, they attempted to come up with designs of new features that satisfied these goals, and proceeded to implement them into the Mozilla browser.
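
As an indication of the kind of per-site cookie decision such design goals point to, here is a minimal hypothetical sketch; it is not the code that Friedman, Howe and Felten added to Mozilla, and the function and variable names are invented.

site_choices = {}   # domain -> "accept" or "reject", remembered across visits

def cookie_allowed(domain, purpose, ask_user):
    """Return True only if the user has given (or now gives) informed consent.

    `ask_user` is a callable that displays a plain-language question and
    returns the user's yes/no answer.
    """
    if domain not in site_choices:
        # Disclosure and voluntariness: state what the cookie is for in
        # plain language and make refusal as easy as acceptance.
        answer = ask_user(
            f"{domain} wants to store a cookie used for: {purpose}. Allow it?"
        )
        site_choices[domain] = "accept" if answer else "reject"
    return site_choices[domain] == "accept"

# Example use: consent is asked once per site and the choice is remembered,
# addressing the goal of managing cookies with minimal distraction.
decision = cookie_allowed("news.example", "remembering your login", lambda q: False)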

In a second study, reported in Friedman, Kahn and Borning (2006), Kahn, Friedman and their colleagues consider the design of a system consisting of a plasma display and a high-definition TV camera. The display is to be hung in interior offices and the camera is to be located outside, aimed at a natural landscape. The display was to function as an ‘augmented window’ on nature that was to increase emotional well-being, physical health and creativity in workers. In their VSD investigation, they operationalized some of these values and sought to investigate in a laboratory context whether they were realized in office workers, and found that they were. They then also identified indirect stakeholders of the system. These included those individuals who were unwittingly filmed by the camera. Further research indicated that many of them felt that the system violated their privacy. The authors concluded that if the system is to be further developed and used, this privacy issue must first be solved. It may be noted, in passing, that, whilst in these two examples only a few values appear to be at stake, other case studies consider a much larger number of values, and identify many more stakeholders.

3.5 Conclusion

This chapter focused on the embedded values approach, which holds that computer systems and software are capable of harbouring embedded or ‘built-in’ values, and on two derivative approaches, disclosive computer ethics and value-sensitive design. It has been argued that, in spite of powerful arguments for the neutrality of technology, a good case can be made that technological artefacts, including computer systems, can be value-laden. The notion of an embedded value was defined as a built-in tendency in an artefact to promote or harm the realization of a value that manifests itself across the central uses of an artefact in ordinary contexts of use. Examples of such values in information technology were provided, and it was argued that such values can emerge because they are held by designers or society at large, because of technical constraints or considerations, or because of a changing context of use.

Next, the discussion shifted to disclosive computer ethics, which was described as an attempt to incorporate the notion of embedded values into a comprehensive approach to computer ethics. Disclosive computer ethics focuses on morally opaque practices in computing and aims to identify, analyse and morally evaluate such practices. Many practices in computing are morally opaque because they depend on computer systems that contain embedded values that are not recognized as such. Therefore, disclosive ethics frequently focuses on such embedded values. Finally, value-sensitive design was discussed. This is a framework for accounting for values in a comprehensive manner in the design of systems and software. The approach was related to the embedded values approach and its main assumptions and methodological principles were discussed.

Much work still remains to be done within the three approaches. The embedded values approach could still benefit from more theoretical and conceptual work, particularly regarding the very notion of an embedded value and its relation to both the material features of artefacts and their context of use. Disclosive computer ethics could benefit from further elaboration of its central concepts and assumptions, a better integration with mainstream computer ethics and more case studies. And VSD could still benefit from further development of its methodology, its integration with accepted methodologies in information systems design and software engineering, and more case studies. In addition, more attention needs to be devoted to the problematic tension between the values of stakeholders and supposedly universal moral values brought in by analysts. Yet these constitute exciting new approaches in the fields of computer ethics and computer science. In ethics, they represent an interesting shift in focus from human agency to technological artefacts and systems. In computer science, they represent an interesting shift from utilitarian and economic concerns to a concern for human values in design. As a result, they promise both a better and more complete computer ethics as well as improved design practices in both computer science and engineering that may result in technology that better lives up to our moral and public values.
