Two arguments against human-friendly AI

forthcoming in AI and Ethics

Link to paper: https://link.springer.com/article/10.1007/s43681-021-00051-6

Abstract

The past few decades have seen a substantial increase in the focus on the myriad ethical implications of artificial intelligence. Included amongst the numerous issues is the existential risk that some believe could arise from the development of artificial general intelligence (AGI), an as-yet hypothetical form of AI that is able to perform all the same intellectual feats as humans. This has led to extensive research into how humans can avoid losing control of an AI that is at least as intelligent as the best of us. This ‘control problem’ has given rise to research into the development of ‘friendly AI’, a highly competent AGI that will benefit, or at the very least not be hostile toward, humans. Though my question concerns AI, ethics and the value of friendliness, I want to question the pursuit of human-friendly AI (hereafter FAI). In other words, we might ask whether worries regarding harm to humans are sufficient reason to develop FAI rather than impartially ethical AGI, that is, an AGI designed to take the interests of all moral patients, both human and non-human, into consideration. I argue that, given that we are capable of developing AGI, it ought to be developed with impartial, species-neutral values rather than values prioritizing friendliness to humans above all else.

Introduction

The past few decades have seen a geometric increase in the number of pages dedicated to the ethical implications of artificial intelligence, and rightly so. Included amongst the numerous issues is the existential risk that some believe could arise from the development of artificial general intelligence (AGI), an as-yet hypothetical form of artificial intelligence that is able to perform all the same intellectual tasks as humans. This has led to extensive research into how humans can avoid losing control of an AI that is at least as intelligent as the best of us. This is often known as the ‘control problem’, and it has given rise to research into the more specific areas of ‘friendly AI’1 and value-alignment (i.e., the aligning of AGI’s values, or at least its behaviors, with those of humans). ‘Friendly AI’ refers to highly competent AGI that will benefit, or at the very least not be hostile toward, humans. In this paper, I want to question the pursuit of human-friendly AI (hereafter FAI).

Worries regarding the failure to successfully develop FAI have led to a significant focus on issues of ‘motivation control’ (e.g., via machine ethics,2 value-alignment,3 Oracle AI,4 etc.). But it remains reasonable to ask whether worries regarding harm to humans are sufficient reason to develop FAI rather than impartially ethical AGI, or AGI designed to take the interests of all moral patients into consideration. I will argue that, given that we are capable of developing AGI, it ought to be developed with impartial, species-neutral values rather than those prioritizing friendliness to humans above all else.5

Before proceeding, some brief definitions will help set the stage.

  • By ‘artificial general intelligence’ or ‘AGI’ I mean a non-biological intelligence that possesses all of the same intellectual capabilities as a mentally competent adult human.6
  • By ‘artificial superintelligence’ or ‘ASI’ I intend a non-biological intelligence that far outstrips even the brightest of human minds across all intellectual domains and capacities.7
  • By ‘Friendly AI’ or ‘FAI’ I intend artificial general intelligence which will benefit or, at the very least, not be hostile toward or bring harm to human beings. In particular, FAI will make decisions based upon the assumption that human interests alone are intrinsically valuable.8
  • By ‘Impartial AI’ or ‘IAI’ I intend AGI that is developed so that its decision-making procedures consider the interests9 of all moral patients (i.e., any being that can be harmed or benefitted10) to be intrinsically valuable. Moreover, in being truly impartial, such procedures will be species-neutral rather than human-centered in nature. That is, IAI will not favor any entity simply because it’s a member of a particular species or exists in the present.
  • By a ‘moral patient’ I mean any entity that is worthy of any level of moral consideration for its own sake. While there might be significant disagreement regarding exactly which entities are worthy of such consideration, I take it to be uncontroversial that the capacity to suffer is a sufficient condition for being worthy of consideration. So, at minimum, all beings that can suffer will enter into the moral calculus of IAI.

Assumptions

For the purposes of this paper, I will assume the following:

  1. We are capable of creating AGI. We are, or soon enough will be, capable of creating artificial general intelligence.11
  2. AGI will become ASI. The emergence of AGI will eventually, and possibly quite quickly, lead to artificial superintelligence (ASI).12
  3. We can create either impartial AI (IAI) or human-friendly AI (FAI). AGI will learn, accept, retain and act upon the values that it’s given. Because of its vast intelligence, together with its lack of emotional and cognitive shortcomings, ASI will be able to discern then act according to either (a) impartial and species-neutral ethical values13; or (b) human-friendly ethical values (i.e., it will either give moral consideration only to humans or it will favor humans in any situations in which there are conflicting interests).14
  4. IAI may pose a substantial, even existential threat to humans. An ethically impartial artificial intelligence may, in the course of acting on impartial values, pose a substantial, even existential threat to human beings.15

While none of these assumptions are uncontroversial, they are accepted in some form or other by those searching for ways to align the values of AGI and humans as well as those who fret about the control problem more generally.

The central question of this paper is as follows: given the conjunction of these assumptions, would we be morally obligated to create IAI rather than FAI? Or, would it be morally permissible for humans to create FAI, where FAI [1] is generally intelligent, [2] will become superintelligent, [3] is programmed in accordance with, and even causally determined by, values focused exclusively on friendliness to humans; even if [4] such values are not consistent with an impartial, non-speciesist perspective16—and may, as a result, lead to significant suffering for non-human moral patients17 or reduce the possibility of other intelligent, sentient species that might have otherwise evolved from those we harm or extinguish? I will argue that, given the assumptions listed above, we are obligated to create IAI rather than FAI, even if IAI poses a significant, even existential, threat to humans.

I will provide two arguments for favoring IAI over FAI, each based upon our moral responsibilities to human beings as well as non-human moral patients. The first (Sect. 3) will focus upon our responsibilities to actual, currently existing beings while the second (Sect. 4.2) will focus upon our responsibilities to possible18 beings other than humans. Before doing so, I will expand upon the distinction between FAI and IAI, then provide a brief discussion of speciesism to clarify the species-neutral approach I will be defending.

Friendly AI (FAI) vs ethically impartial AI (IAI)

Friendly AI is artificial general (or, super-) intelligence that will respect the interests of, or at the very least, not be hostile toward humanity. It’s likely to be the case that friendliness is a necessary condition for humanity to enjoy the potential benefits that a general or superintelligence might provide (e.g., ending world hunger, curing diseases, solving the challenges of global warming and issues of social organization more generally). Of course, none of this is beneficial for humans if ASI views us as something to be rid of. A being of such vastly superior intelligence, if given goals that turn out to be inconsistent with human interests, could view us as competition or mere obstacles to be done away with.19 As intelligence is the primary tool by which humans have come to their place at the top of the terrestrial hierarchy, the superior intelligence of a hostile ASI will make our survival highly unlikely and our thriving even less so.

To avoid such disastrous results, AGI might be developed in a way that prohibits or disables its capacity to harm humans, or that constrains it to do only what humans command (excluding commands that would bring harm to humans). There are at least two different things that researchers might mean when they speak of ‘friendly AI’. First, one might intend AI that will act according to what humans believe to be in the interest of humans. Second, one might intend AI that will act according to what is, in fact, in the interest of humans. These are certainly different, as it might be that we are mistaken about what is in our best interest.20 And while there are problems with either of these approaches, even apart from there existing a possible alternative in IAI, this is not the place to develop them in any detail.21

As opposed to FAI, the idea of ‘ethically impartial’ AI (or, IAI) assumes, first, that there exists a set of impartial, species-neutral moral facts.22 Second, due to its vast intelligence, IAI will have the ability to discover and act according to these facts, in addition to having a far superior ability to accurately calculate the consequences of any action. Third, because of its lack of akratic emotions, cognitive biases and weakness of will, together with its superintelligence and impartial, non-speciesist goal-orientation, it will be far more likely to act effectively according to this set of moral facts than humans likely ever could.
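To make the contrast vivid, here is a minimal illustrative sketch in Python. Nothing in it comes from the literature discussed here: the names (`MoralPatient`, `fai_value`, `iai_value`) and the numbers are my own placeholder assumptions. The only point it encodes is the one just made: an exclusively human-friendly objective counts non-human interests at most instrumentally, while an impartial objective counts every moral patient's interests intrinsically.

```python
from dataclasses import dataclass

@dataclass
class MoralPatient:
    species: str
    interest_satisfaction: float   # how well a given outcome serves this being's own interests
    instrumental_to_humans: float  # how much serving this being happens to benefit humans

def fai_value(patients: list[MoralPatient]) -> float:
    """Human-friendly objective: only human interests count in themselves;
    non-human interests matter only insofar as serving them benefits humans."""
    return sum(
        p.interest_satisfaction if p.species == "human"
        else p.interest_satisfaction * p.instrumental_to_humans
        for p in patients
    )

def iai_value(patients: list[MoralPatient]) -> float:
    """Impartial objective: every moral patient's interests count intrinsically,
    whatever its species."""
    return sum(p.interest_satisfaction for p in patients)

# A toy outcome: humans do well, a sentient non-human fares badly and is of no
# instrumental use to humans. The human-friendly objective ranks the outcome
# highly; the impartial one does not.
outcome = [
    MoralPatient("human", 1.0, 1.0),
    MoralPatient("chimpanzee", -0.75, 0.0),
]
print(fai_value(outcome), iai_value(outcome))  # 1.0 vs 0.25
```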

Existing comparable literature

Though there is little, I believe, that has been written about Impartial Artificial Intelligence, there do exist views of FAI that may be consistent with IAI depending on how they are filled out. In particular, I have in mind Muehlhauser and Bostrom [12] as well as Yudkowsky [29].23 According to the former:

The problem of encoding human (or at least humane) values into an AI’s utility function is a challenging one, but it may be possible. If we can build such a ‘Friendly AI,’ we may not only avert catastrophe but also use the powers of machine superintelligence to do enormous good.24

Yudkowsky [29] characterizes the “problem statement of Friendly AI” as to “[e]nsure that the creation of a generally intelligent, self-improving, eventually superintelligent system realizes a positive outcome”.25

The inclusion of ‘humane’ and ‘enormous good’ in the Muehlhauser and Bostrom [12] characterization is important but, depending on how one interprets these terms, this characterization may or may not imply a ‘species-neutral’ approach to the development of AGI. One way to read this paper is as an argument for why we ought to interpret these terms so as to have this implication. The same can be said for the use of ‘positive outcome’ in Yudkowsky [29].

Simply put, if one includes a commitment to species-neutrality in one’s view of FAI then the distinction between FAI and IAI collapses. Nonetheless, my point still stands insofar as there would then be two distinct interpretations of FAI (exclusively human-friendly and species-neutral), and I should be read as supporting the more inclusive, species-neutral interpretation.

I turn now to a brief discussion of the moral status of non-humans to specify what I mean by a ‘species-neutral’ approach to ethics.

Speciesism and the moral standing of non-humans

The classic presentation of, and objections to, speciesism can be found in Peter Singer’s Animal Liberation.26 He characterizes speciesism as “a prejudice or attitude of bias in favor of the interests of members of one’s own species and against those of members of other species”.27

I will understand speciesism—meant to be analogous with other ‘isms’ such as racism and sexism—as favoring members of one’s own species on the basis of their being members of one’s own species.

I will also understand the rejection of speciesism as the acceptance of the view that sentient non-humans have at least some intrinsic moral status. That is, sentient non-humans have interests that are morally significant in their own right and not simply as instrumental in furthering human interests.

To clarify the position I’m proposing, it will be helpful to appeal to DeGrazia’s [7] distinction between Singer’s ‘equal-consideration framework’ and the ‘sliding-scale model’.28 According to the equal-consideration approach:

No matter what the nature of the being, the principle of equality requires that its suffering be counted equally with the like suffering—insofar as rough comparisons can be made—of any other being.29

As it concerns pain, all animals are equal on the equal-consideration view. But, as Singer notes, equal consideration of suffering does not imply the equal value of lives. Certain characteristics can make a being’s life more valuable even if they do not make that being’s suffering of greater significance. He suggests self-awareness, the capacity to think about one’s future, the capacity for abstract thought, meaningful relations with others and having goals amongst the relevant “mental powers”.30

On the ‘sliding-scale’ approach:

Beings at the very top have the highest moral status and deserve full consideration. Beings somewhat lower deserve very serious consideration but less than what the beings on top deserve. As one moves down this scale of moral status or moral consideration, the amount of consideration one owes to beings at a particular level decreases. At some point, one reaches beings who deserve just a little consideration. Their interests have direct moral significance, but not much, so where their interests conflict with those with much higher moral status, the former usually lose out.31

This sliding-scale approach is what I intend by an ‘impartial, species-neutral approach’ to moral consideration, both as it relates to the significance of suffering and lives.32 Like Singer’s equal consideration approach, this view rejects speciesism insofar as it appeals to morally relevant features rather than species membership when grounding moral judgments with regard to both the treatment and consideration owed to beings. But it does not require that all beings be given equal consideration when it comes to pain.

Just as membership in a sub-class of humans defined by superficial features such as gender, skin color or age is not in itself morally relevant, membership in a species is likewise not morally relevant. On the other hand, features that contribute to one’s ability to experience or enjoy one’s life more fully—such as intelligence, awareness of one’s past and future, or meaningful communication with others—are morally relevant and also serve to ground our commonsense moral judgments. Features of this sort, as well as the capacity to suffer, are clearly relevant to how a being ought to be treated in ways that its biological or genetic makeup is not.
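What follows is a minimal sketch, offered only as an illustration of the sliding-scale, species-neutral idea; the function name, the particular features, and the additive scoring are my own assumptions rather than anything proposed by Singer or DeGrazia. Its sole point is that consideration is grounded in morally relevant capacities, with species membership deliberately absent from the inputs.

```python
def moral_weight(capacity_to_suffer: bool,
                 self_awareness: float,
                 future_directedness: float,
                 meaningful_relations: float) -> float:
    """Illustrative sliding-scale weight: grounded only in morally relevant
    capacities; species membership is deliberately not a parameter."""
    if not capacity_to_suffer:
        return 0.0              # on the working assumption, not a moral patient
    base = 1.0                  # the capacity to suffer alone secures some consideration
    # Capacities that, on the sliding-scale view, raise the value of a life
    return base + self_awareness + future_directedness + meaningful_relations

# Identical inputs yield identical weights, whatever the species of the being scored:
print(moral_weight(True, 1.0, 1.0, 1.0))     # e.g., a typical adult human: 4.0
print(moral_weight(True, 0.5, 0.25, 0.25))   # e.g., a chimpanzee on some made-up scoring: 2.0
```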

As for those who still believe that our biological humanity (or membership in the species Homo sapiens) makes us morally superior to other creatures, consider a non-human species with all the same intellectual, perceptual and phenomenological features possessed by typical humans. The members of this species are self-aware and rational; they can plan for the future and form goals as well as hopes; and they feel pleasure and pain of various physical, emotional and intellectual sorts. The extreme speciesist is committed to the view that such beings are not worthy of consideration (or are worthy of less consideration) simply because they are not human. I take it that this view is untenable.

If we accept that certain cognitive features allow us to make principled moral distinctions in choosing differential treatment of certain beings, then we should accept that some sentient non-humans deserve an elevated level of consideration relative to others in light of evidence that some non-humans possess self-awareness, altruism, tool and language-use, the capacity to reciprocate, varying forms of communication, empathy and a sense of fairness, among other morally relevant features.33

With this said, I will proceed on the assumption that the view that only humans are worthy of moral consideration simply because of their ‘humanness’ is false, and that a being ought to be given moral consideration on the basis of its morally relevant features.

The argument from non-human sentients

  1. As moral agents, humans have responsibilities to all sentients (i.e., moral patients) that can be affected by our actions.
  2. Given humans are capable of creating more than one kind of AI (and given that we will create at least one kind), if one of these will better respect the interests of all sentients, then ceteris paribus we ought, morally, to create that kind of AI.
  3. IAI will, ceteris paribus, better respect the interests of all sentients than FAI.
  4. Humans are capable of creating IAI or FAI (this is assumption 3 in the introductory section above).
  5. ∴ Given the option to create either IAI or FAI, then ceteris paribus we ought, morally, to create IAI.

Premise 1 (i.e., humans have responsibilities toward all sentients that can be affected by our actions): Accepting this premise does not require accepting equal treatment or equal consideration of all moral patients. It only requires that all moral patients are worthy of moral consideration and that their interests ought to be respected. In fact, it is consistent with premise 1 that humans are exceptional and worthy of greater moral consideration, but this doesn’t imply that we are to be favored, or even protected, at all cost! The enormity of some costs to non-human moral patients may override the justification of protecting human interests, especially when the interests in question are uncontroversially trivial in nature (e.g., our interest in sport-killing, palate-pleasures, or believing ourselves to be dominant and superior).

Premise 2 (If we can create something that will better respect the interests of all sentients, then, all else being equal, we ought morally to do so): This follows from premise 1. Suppose humans are on the verge of creating AGI. Now suppose that we have a choice between creating a particular version of AGI which will wreak havoc and cause massive suffering to all non-humans and a different version which will leave things as they are (i.e., it will cause no more suffering than currently exists for both humans and non-humans). I take it that, all else being equal, we clearly ought to opt for the latter.

Premise 3 (IAI will better respect the interests of all sentients than FAI): To support this premise I will begin by explaining how FAI will not be equivalent to IAI. I will then explain how IAI will better respect the interests of all sentients. This will involve both how IAI will, itself, better respect all relevant interests as well as the positive consequences this fact will have on human actions. More explicitly, the development of IAI will lead to better consequences for all moral patients collectively because: [a] IAI will better respect the interests of all moral patients than FAI, and [b] the impact of IAI upon human behavior will have better consequences for non-humans than the impact of FAI upon human behavior.

First, note that FAI’s consideration of non-human interests will be restricted to cases in which such consideration is, or such interests are, instrumentally valuable. Otherwise, such interests will be disregarded entirely. This is implied by the notion of artificial intelligence that is solely focused upon friendliness to humans. While there may be cases where friendliness to humans implies granting consideration to non-humans, this will only be because such consideration is in the interest of humans. Were FAI to grant non-human interests more than instrumental weight, it would simply be equivalent to IAI.34

While there are sure to be many cases where the interests of non-humans coincide with those of humans (e.g., cases where the value of biodiversity, and its importance to humans, leads a purely rational FAI to preserve and respect some species that are otherwise dangerous to humans) there are countless others in which this is unlikely. For example, FAI with the extraordinary capabilities often attributed to AGI/ASI35 could manufacture alternative ways of preserving the human value of biodiversity while extinguishing some sentient species that are dangerous to humans. For instance, it might eradicate aggressive species of bears, dogs, lions, tigers, rhinos, sharks, snakes, primates, etc. and replace them with non-aggressive versions (or, to preserve the ‘food chain’, versions that are only aggressive toward their natural, non-human prey).36 Such cases do not constitute instances where human interests are consistent with those of existing non-human sentients.

While the replacement of aggressive species with non-aggressive versions would certainly benefit humans, it would not constitute consideration of the interests of such species. In fact, preserving non-aggressive versions of such species exemplifies the view that their consideration is merely instrumentally valuable. For this reason, FAI would not, in fact, amount to, or be consistent with, IAI.

Second, being imbued with impartial moral values will make IAI less likely than both humans and FAI to harm sentient non-humans to protect relatively trivial interests of humans (e.g., ‘palate-pleasures’, ‘sport-killing’, feelings of superiority and dominance, etc.). It will also be more likely to benefit non-humans directly as well as discover and pursue strategies for optimizing the interests of all species simultaneously whenever possible.

Moreover, FAI, being human-centric in its behavior, may also encourage and reinforce already widespread human tendencies to disregard or trivialize non-human interests. This is because the awe and deference with which we are likely to interpret the actions and the commands of FAI are liable to lead us to see such actions as justified and, in turn, exacerbate human harm to non-humans.

Numerous classic studies suggest a strong human tendency to defer to authority (e.g., the Stanford prison experiment, Milgram’s obedience studies, Asch’s conformity experiments). The Nuremberg trials also saw many Nazi defendants claim that they were only ‘following orders’ out of fear of punishment. A being as intelligent and powerful as FAI is at least as likely as any charismatic human to induce a sense of authority, even awe, in us as it pursues its goals. And given that convincing humans to consider non-human interests would, all else being equal, yield consequences preferable to those of their remaining species-centered, it seems, at the very least, incredibly likely that IAI would convince humans of this. And if charisma is deemed to be a necessary condition for persuading humans, AGI will acquire charisma as well.

Relatedly, it’s important not to underestimate or overlook FAI’s (or any ASI’s) powers of persuasion. By logical influence, effective appeals to emotional weakness, threats, brute force or some combination, FAI will convince us to aid in the achievement of its goals. Just as AGI/ASI would employ inanimate resources in the pursuit of its goals, there’s no good reason to believe that it wouldn’t also use living resources (especially relatively competent ones such as ourselves) for the same reasons.37 Such powers are likely to lead to our deferring to FAI when making moral choices. Just as a child tends to imitate the adults it observes, we are likely to imitate and internalize the values of an FAI, the intelligence of which will surpass our own to an extent far beyond that of an adult relative to a child. Of course, if the AI’s values are human-centric this will lead to widespread dismissal of non-human interests, or consideration only where it’s instrumentally valuable for us.

On the other hand, IAI, as impartial, will not reinforce these shortcomings. Observing a superior moral actor (or one that can surely convince us that it’s a superior moral actor) is likely to encourage better practices in humans.

Much of what was said in support of premise 3 can be used here to support the likelihood that IAI will also affect our beliefs as well as our behavioral tendencies. It’s again important to avoid overlooking the persuasive power IAI is sure to possess. Its immense intelligence will allow it to quickly and easily persuade us of whatever it wants to persuade us of.38 In fact, it may also employ force to persuade us to act in accordance with its species-neutral goals. As noted above, if it would use technology and inanimate resources to pursue its goals, it’s hard to fathom why it wouldn’t also persuade (or force) us to assist in the achievement of its goals. As is the case for FAI, IAI’s powers of persuasion are likely to lead to our deferring to it when making moral choices. Just as children tend to imitate the adults they observe, we are likely to imitate and, over time, internalize the values of IAI as its intelligence and authority becomes ever clearer. This will more than likely result in far greater consideration for non-human moral patients than would result from the FAI scenario.

While the foregoing focuses upon our responsibilities to actual beings beyond humans, I turn now to our responsibilities to possible39 beings other than humans.

The argument from future species

Even if one rejects that any currently existing non-humans are worthy of moral consideration, one ought to accept that any species morally equal, or even superior, to humans ought to be granted, at the very least, consideration equal to that which we grant to humans. This approach requires only the weakest anti-speciesist position, one that I expect will be very widely accepted.40

Preliminaries

Evolutionary theory suggests that humans exist on a biological continuum that just so happens to have us at the intellectual and, perhaps, moral peak for the time being. Darwin himself believed that the difference in ‘mental powers’ between humans and non-humans is a matter of degree rather than kind.

If no organic being excepting man had possessed any mental power, or if his powers had been of a wholly different nature from those of the lower animals, then we should never have been able to convince ourselves that our high faculties had been gradually developed. But it can be clearly shewn that there is no fundamental difference of this kind. We must also admit that there is a much wider interval in mental power between one of the lowest fishes…and one of the higher apes, than between an ape and man; yet this immense interval is filled up by numberless gradations…There is no fundamental difference between man and the higher mammals in their mental faculties.41

This notion of a biological continuum raises a further issue for the tendency to overemphasize friendliness to humans in AGI development. I will argue that if the reasons put forward against speciesism in Sect. 2.1 succeed in the very weakest sense, then there are further reasons for favoring IAI over FAI beyond those given in Sect. 3. Moreover, if my reasoning is correct, there is a less obvious implication—that IAI ought to decide whether to become an existential threat to humans or a more limited threat to the freedom of humans—that, as I will argue, also follows.

Regarding the latter point, I will argue that IAI should be developed to determine the likelihood of a ‘superior’ or even equal species emerging from currently existing (or soon to exist) species. It ought also to be designed to determine the global moral value (i.e., the value to all moral patients, including itself) of such species emerging, together with the chance of their emerging. Finally, IAI ought to be designed to determine whether, if the continued existence of humans lessens the chances of superior species emerging, eliminating humans would be morally preferable to allowing our continued existence but with sufficiently restricted freedom of choice and action.

The argument

Given our continued existence, humans are sure to destroy (or contribute to the destruction of) the majority of species on the planet.42 This is nothing more than a well-founded extrapolation from the extinctions we’ve already contributed to, together with the likelihood that the reasons for these extinctions (e.g., human singlemindedness, shortsightedness, weakness of will, time preference, temporal discounting, selfishness, etc.) will persist. In so doing, humans are also, in essence, destroying all of the species that would have otherwise evolved from each of these extinguished species. In fact, it’s quite possible that we will extinguish, or perhaps already have extinguished, species that would have given rise to one or more species morally superior to ourselves (according to any plausible moral metric). For whatever species-independent features one might believe underlie the supposed current moral superiority of humans, there is no good reason to believe that such features could not evolve, even to a greater degree, in non-humans. Nonetheless, the above-noted widely shared human features (i.e., singlemindedness, shortsightedness, etc.) suggest that even if humans could calculate the likelihood that some of these species would be morally superior to themselves, they are unlikely to submit to the relevant changes that would be required to allow for the emergence of such species.

With that said, epistemic and cognitive limitations suggest that humans are not in a position to calculate the likelihood of such species emerging. On the other hand, if we could develop something that could [1] calculate whether any such species would be morally superior (or equal) to humans, [2] calculate the likelihood of such species emerging, and [3] act impartially on the basis of such calculations, then we ought to do it. Note that this implication holds because of the weakest non-speciesist version of premise 1 in the argument in Sect. 3 (i.e., as moral agents, humans have responsibilities to all moral patients that can be affected by our actions) as well as for the very reasons that one might believe humans to be currently morally superior to non-humans (e.g., rationality, greater future-directed preferences, capacity for complex acts of communication, etc.). Of course, IAI could accomplish [1–3]. At the very least, it would be far more likely to be able to do so than humans (and far more likely to actually do so than FAI).
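The structure of the calculation described in [1]–[3] can be sketched schematically. Every name and number below is a placeholder of my own (the probabilities and values are not estimates of anything); the sketch only illustrates the kind of probability-weighted, impartial comparison that an IAI, unlike us, would be positioned to carry out with real inputs.

```python
# Schematic only: all names, probabilities and values below are illustrative
# placeholders, not estimates of anything in the world.

def expected_future_value(candidate_lineages: list[dict]) -> float:
    """Probability-weighted moral value of species that might yet emerge,
    conditional on the relevant present-day species not being extinguished."""
    return sum(lineage["p_emergence"] * lineage["moral_value_if_emerges"]
               for lineage in candidate_lineages)

def evaluate_policy(value_to_existing_patients: float,
                    candidate_lineages: list[dict]) -> float:
    """Impartial score of a policy: value to all existing moral patients plus
    the expected value contributed by possible future species."""
    return value_to_existing_patients + expected_future_value(candidate_lineages)

# Toy comparison of two policies an impartial agent might weigh:
unconstrained_humans = evaluate_policy(
    value_to_existing_patients=10.0,
    candidate_lineages=[{"p_emergence": 0.01, "moral_value_if_emerges": 50.0}],
)
restricted_humans = evaluate_policy(
    value_to_existing_patients=8.0,
    candidate_lineages=[{"p_emergence": 0.20, "moral_value_if_emerges": 50.0}],
)
print(unconstrained_humans, restricted_humans)  # 10.5 vs 18.0
```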

Beyond this, IAI should be created to decide whether humans continue to exist at all, and, if it decides that we do, it should also decide what kind of latitude (e.g., freedom, privacy, property, etc.) we are to be allowed. This is consistent with premise 1 from my argument in Sect. 3 for the following reasons:

  1. IAI will know (or, eventually come to know) the impartial, species-neutral moral facts.
  2. IAI will be capable of coming to possess a superior understanding of adaptation and natural selection. This could be accomplished by way of training via relevant materials (e.g., raw data, the internet, scientific journals, Darwin’s On the Origin of Species, the ability to run countless simulations, etc.) together with its superior reasoning capabilities.
  3. As impartial, IAI will be motivated to possess and apply this understanding of adaptation and natural selection on the basis of its judgment that morality is species-neutral and that, if this judgment is correct, future beings are morally relevant.
  4. Given 1 and 2, IAI will be well-positioned (and far better so than humans) to determine whether the species that may evolve from those we are likely to endanger will be equal or morally superior to humans.

Given 1–4, IAI ought to be created in order to decide whether to become an existential threat to human beings, or merely a threat to the freedom of humans.43 This claim is distinct from the claims in the previous section insofar as there I argued that we ought to create IAI rather than FAI. Here I’m making the further claim that not only should we opt for IAI rather than FAI, but we also ought to develop an impartial ASI so that it can decide what ought to be done with us.44

Why create AGI at all?

While one might agree that IAI is morally preferable to FAI, one might still question the permissibility of developing AGI to begin with. In other words, while I’ve been assuming that we can and will create AGI, one might reject the idea that we ought to do so.45

I must admit that I am somewhat sympathetic to this objection. In fact, assuming that my thesis is correct, it seems that creating IAI where the interests of all sentients are taken into consideration—rather than just humans—will be far more difficult. Assuming that we eventually come upon the technical expertise to develop such a thing, how are we to know just how to balance the interests of all sentient beings, or to develop artificial beings with the capacity and direction to do so? I take it that it isn’t a rare occurrence for humans to be stumped by how best to balance their own interests with those of other humans who will be affected by their actions—even when they have a sincere desire to do so. Given this, how are we to know how best to balance the interests of all moral patients such that any relevant technical expertise could effectively be put to use?

Nevertheless, whether or not we ought to create AGI (be it IAI, FAI or any other AGI for that matter), I expect that humanity will, in fact, do so if it can. The reasons are many, but they surely include the pursuit of profit, power (i.e., military advantage), and simple scientific curiosity.46

But with all this said, given that intelligence is, at the very least, the primary reason for the emergence of science and all of its innovations, it’s important to keep in mind the range of potential benefits that a superior intelligence might provide (e.g., curing diseases, resolving the dangers of global warming, addressing the erosion of democracy, fostering an environment more conducive to the harmony of interests overall, as well as a plethora of benefits that might be beyond our powers of conceiving). If our understanding of intelligence is adequate to develop sufficiently beneficial AGI then, all things considered, the foregoing provides two arguments for developing the sort of AGI that will be suitably impartial. And if a suitably impartial AGI can calculate the likelihood of future species that are morally superior to humans then, morally speaking, we have reason—one might even say that we have a duty—to develop such an AGI.

And while one might respond by claiming that we humans have a right of self-defense and therefore a right to not produce that which will potentially lead to our destruction or a significant reduction in our freedoms, it should be noted that the possession of rights doesn’t, by itself, imply that such rights are absolute or limitless. Presumably, there is some point at which the amount of good that is likely to result from an action swamps the rights that might be overridden by that action.

Finally, if AGI is to be developed recklessly, without a sufficient understanding of the workings of artificial intelligences, then we’ll be left hoping that, somewhere in an accessible part of the universe, a more enlightened species has developed AGI in time to reverse the inevitable damage.

Conclusion

I’ve argued that, given certain assumptions regarding artificial general intelligence, as well as certain facts about human beings, AGI development ought to aim toward impartial AGI rather than the human-centric sort that dominates current research and literature on AI and existential risk. My reasons rest upon [1] the claim, argued for in Sect. 2.1, that humans are not the only beings worthy of moral consideration, and [2] the fact that humans have likely destroyed, and are likely to destroy, species that could very well evolve into species that are morally equal, or even superior, to ourselves (Sect. 4). So if it turns out that humans are as special as we seem to think we are, and if we are, in fact, headed toward the development of AGI, then we have very good reason to develop an AI that is impartial in its moral orientation, so that we might be more likely to facilitate beings with equivalent or improved versions of just this sort of specialness. At the very least, the issue of exactly whose interests should be given consideration, and why, should receive more attention than it currently does.

Notes

  1. See, for example, Yudkowsky [27].
  2. See, for example, Tarleton [22], Allen et al. [1], Anderson and Anderson [2], and Wallach et al. [26].
  3. See, for example, Omohundro [16], Bostrom [4], ch. 12; Taylor et al. [23], Soares [21], and Russell [18].
  4. See Armstrong et al. [3] and Bostrom [4], pp. 177–181.
  5. As an example of a company aiming at the latter, see https://openai.com/charter/.
  6. While ‘intelligence’ is notoriously difficult to define, Russell [18], p. 9 claims that entities are intelligent “to the extent that their actions can be expected to achieve their objectives”. According to Tegmark [24], p. 50, intelligence is the “ability to accomplish complex goals”. And Yudkowsky [25]: intelligence is “an evolutionary advantage” that “enables us to model, predict, and manipulate regularities in reality”.
  7. Central to explaining AGI’s move to ASI is ‘recursive self-improvement’ described in Omohundro [14].
  8. This is consistent with Yudkowsky [27], p. 2, according to which: “The term ‘Friendly AI’ refers to the production of human-benefiting, non-human-harming actions in Artificial Intelligence systems that have advanced to the point of making real-world plans in pursuit of goals”.
  9. With ‘considers the interests’ I’m anthropomorphizing for simplicity. I expect it to be a matter of controversy whether AGI of any sort can consider the interests of anything whatsoever.
  10. See Regan [17], chapter 5 for a discussion of the notions of ‘moral patient’ and ‘moral agent’.
  11. For opinions regarding when AGI will be attained see Bostrom [4], pp. 23–24 and Müller and Bostrom [13].
  12. See, for example, Bostrom [4], Kurzweil [11], Yudkowsky [7], Chalmers [5], Vinge [25], Good [9]. There are differing views on the timelines involved in the move from AGI to ASI. For a discussion of the differences between ‘hard’ and ‘soft takeoffs’ see, for example, Bostrom [4] chapter 4 (especially pp. 75–80), Yudkowsky [25], Yudkowsky [30], and Tegmark [24], pp. 150–157.
  13. IAI may favor particular species if species-neutral values dictate favoring some species over others. For example, it may be the case that while all animals are worthy of moral consideration, some species are worthy of a greater level of consideration than others.
  14. Of course, another possibility is that AGI develops hostile values in which case issues of human and non-human interests are likely moot.
  15. Of course, it should be noted that while IAI may not be consistent with FAI, it is at least possible that IAI will be consistent with FAI. I take it that we are not in a position to know which is more likely with any degree of certainty.
  16. The term ‘speciesism’, coined by Ryder [19], is meant to express a bias toward the interests of one’s own species and against those of other species.
  17. By ‘moral patient’ I mean anything which is sentient or conscious and can be harmed or benefitted. A moral patient is anything toward which moral agents (i.e., those entities that bear moral responsibilities) can have responsibilities for its own sake. For present purposes, I will take the capacity to suffer as a reasonable sufficient (and possibly necessary) condition for being a moral patient.
  18. By ‘possible’ here I don’t intend a distant, modal sense according to which there exists some possible world in which the relevant beings exist. I mean that, in this world, such beings could very well actually exist in the future given that we don’t exterminate the preceding species or beings.
  19. Even if the goals, as specified, are consistent with human interests, ASI might take unintended paths toward the accomplishing of these goals, or it may develop subgoals (or, instrumental goals) that are ultimately inconsistent with human interests. For the latter issue, see Omohundro [14, 15] and Bostrom [4], ch. 7.
  20. I acknowledge that there is a debate to be had regarding what is ‘in the interest’ of a species. Nonetheless, I do not see the plausibility of my thesis turning on the choices one might make here.
  21. In terms of FAI based upon values we believe to be consistent with human interests, the main problem involves the widely discussed ‘unintended consequences’. The worry stems from our inability to foresee the possible ways in which AGI might pursue the goals we provide it with. Granting that it will become significantly more intelligent than the brightest humans, it’s unlikely that we’ll be capable of discerning the full range of possible paths cognitively available to AGI for pursuing whatever goal we provide it. In light of this, something as powerful as AGI might produce especially catastrophic scenarios (see, for example, Bostrom [4] ch. 8 and Omohundro [15]). As for FAI based upon what are, in fact, human-centric values, an initial problem arises when we consider that what we believe is in our interest and what is actually in our interest might be quite distinct. If so, how could we possibly go about developing such an AI? It seems that any hopeful approach to such an FAI would require our discovering the correct theory of human wellbeing, whatever that might happen to be. Nonetheless, for the purposes of this paper I want to grant that we are, in fact, capable of developing such an objectively human-friendly AI.
  22. By ‘a set of impartial, species-neutral moral facts’ I mean simply that, given the assumption that the interests of all moral patients are valuable, there is a set of moral facts that follow. Basically, there is a set of facts that determines rightness and wrongness in any possible situation given the moral value of all moral patients, where this is understood in a non-speciesist (i.e., based upon morally relevant features rather than species-membership) way.
  23. I thank an anonymous reviewer for this point.
  24. Muehlhauser and Bostrom [12], p. 43.
  25. Yudkowsky [29], p. 388.
  26. Singer [20].
  27. Singer [20], p. 6.
  28. DeGrazia [7], p. 36.
  29. Singer [20], p. 8.
  30. See Singer [20], p. 20.
  31. DeGrazia [7], pp. 35–36.
  32. The arguments in the remainder of the paper will clearly still follow for proponents of the ‘equal consideration approach’. In fact, my conclusions may still follow on an even weaker anti-speciesist view according to which we ought to treat species as morally equal to humans (or of even greater moral worth than humans) if such beings evolve from current species (see Sect. 4 below).
  33. See, for example, De Waal [8].
  34. In addition, it’s also likely that there will be many cases in which, despite non-human interests receiving no consideration, such interests will remain consistent with human interests. I happily admit this. The point I’m making is that there will be cases where non-human interests will not be consistent with human interests and therefore will be disregarded by FAI.
  35. See, for example, Bostrom [4], Yudkowsky [31], Omohundro [14, 15], Häggström [10], and Russell [18].
  36. This might be accomplished by harvesting and altering their genetic information then producing the new ‘versions’ via in vitro fertilization. This is outlandish, of course, but no more so than the scenarios suggested by many AI researchers regarding existential threats to humanity via unintended consequences.
  37. See Omohundro [15] for a discussion of ‘basic AI drives’. Of these, the most relevant to the current point is ‘resource acquisition’. ‘Efficiency’ is another relevant subgoal, as AGI/ASI will become more efficient with regard to the pursuit of its goals as well as its use of resources.
  38. It’s also important to recall that there’s every reason to believe that IAI, like FAI, will develop the basic AI drives presented in Omohundro [15].
  39. I remind the reader that by ‘possible’ beings here I intend those that could very well actually exist in the future given that we don’t exterminate the relevant preceding beings and not some logically distant, modal sense of beings.
  40. In addition, given that such species could develop from currently existing species, it is not a major leap to accept that we ought to develop AGI with them in mind as well, even if one denies that currently existing species are now worthy of consideration.
  41. Darwin [6], pp. 34–35.
  42. See, for example, https://www.theguardian.com/environment/2018/oct/30/humanity-wiped-out-animals-since-1970-major-report-finds, https://www.ipbes.net/news/Media-Release-Global-Assessment and https://www.forbes.com/sites/trevornace/2018/10/16/humans-are-exterminating-animal-species-faster-than-evolution-can-keep-up/#451b4d6415f3.
  43. I would suggest that this is analogous to cases in which, when presented with a moral dilemma, children should defer to suitable adults to make decisions that will have morally relevant consequences.
  44. In fact, it seems that beyond all of the foregoing, a sufficiently competent and powerful ASI could well fit the environment of the earth, as well as the universe beyond, to the most morally superior of possible biological beings. If it turns out that the optimal moral scenario is one in which the highest of possible moral beings exists and has its interests maximized, then we ought to develop IAI to bring about just this scenario, regardless of whether we are included in such a scenario. On the other hand, if we’re supposed to, morally speaking, develop that which will most benefit humans, then we are left not only scrambling to do so, but also hoping that there are no smarter beings somewhere in the universe working on the analogous project.
  45. I thank an anonymous reviewer for this point as well.
  46. Unfortunately, there is precedent in past human behavior for this attitude. For example, I expect that, with the benefit of hindsight, many believe that nuclear weapons ought not have been created. The same can be said for the development of substances and practices employed in processes that continue to contribute to climate change. Nonetheless, the global dismantling of nuclear weapons and the move away from practices that proliferate greenhouse gases remain far-off hopes. If this is correct, then I would suggest not only that the foregoing provides support for the preferability of species-neutral AGI but that the scope of interests to be considered by AGI ought to be given far more attention than it currently receives.

References

  1. Allen, C., Smit, I., Wallach, W.: Artificial morality: top-down, bottom-up, and hybrid approaches. Ethics Inf. Technol. 7, 149–155 (2006)
  2. Anderson, M., Anderson, S.: Machine ethics: creating an ethical intelligent agent. AI Mag. 28(4), 15–26 (2007)
  3. Armstrong, S., Sandberg, A., Bostrom, N.: Thinking inside the box: controlling and using an oracle AI. Minds Mach. 22, 299–324 (2011)
  4. Bostrom, N.: Superintelligence. Oxford University Press, Oxford (2014)
  5. Chalmers, D.: The singularity: a philosophical analysis. J. Conscious. Stud. 17(9–10), 7–65 (2010)
  6. Darwin, C.: The Descent of Man, and Selection in Relation to Sex. John Murray, London (1871)
  7. DeGrazia, D.: Animal Rights: A Very Short Introduction. Oxford University Press, New York, NY (2002)
  8. De Waal, F.: Chimpanzee Politics. Johns Hopkins University Press, Baltimore, MD (1998)
  9. Good, I.J.: Speculations concerning the first ultraintelligent machine. In: Alt, F.L., Rubinoff, M. (eds.) Advances in Computers, vol. 6, pp. 31–88. Academic Press, New York (1965)
  10. Häggström, O.: Challenges to the Omohundro–Bostrom framework for AI motivations. Foresight 21(1), 153–166 (2019)
  11. Kurzweil, R.: The Singularity is Near: When Humans Transcend Biology. Penguin Books, New York (2005)
  12. Muehlhauser, L., Bostrom, N.: Why we need friendly AI. Think 13(36), 41–47 (2014)
  13. Müller, V., Bostrom, N.: Future progress in artificial intelligence: a survey of expert opinion. In: Müller, V. (ed.) Fundamental Issues of Artificial Intelligence, pp. 555–572. Springer, Cham (2016)
  14. Omohundro, S.: The nature of self-improving artificial intelligence [steveomohundro.com/scientific-contributions/] (2007)
  15. Omohundro, S.: The basic AI drives. In: Wang, P., Goertzel, B., Franklin, S. (eds.) Artificial General Intelligence 2008: Proceedings of the First AGI Conference, pp. 483–492. IOS, Amsterdam (2008)
  16. Omohundro, S.: Autonomous technology and the greater human good. J. Exp. Theor. Artif. Intell. 26(3), 303–315 (2014). https://doi.org/10.1080/0952813X.2014.895111
  17. Regan, T.: The Case for Animal Rights. University of California Press, Berkeley, CA (2004)
  18. Russell, S.: Human Compatible: Artificial Intelligence and the Problem of Control. Viking, New York (2019)
  19. Ryder, R.: Speciesism again: the original leaflet (2010). http://www.criticalsocietyjournal.org.uk/Archives_files/1.SpeciesismAgain.pdf
  20. Singer, P.: Animal Liberation. HarperCollins, New York, NY (2002)
  21. Soares, N.: The value learning problem. In: Ethics for Artificial Intelligence Workshop at the 25th International Joint Conference on Artificial Intelligence (IJCAI-2016), New York, NY, USA, 9–15 July 2016 (2016)
  22. Tarleton, N.: Coherent Extrapolated Volition: A Meta-Level Approach to Machine Ethics. The Singularity Institute, San Francisco, CA (2010)
  23. Taylor, J., Yudkowsky, E., LaVictoire, P., Critch, A.: Alignment for Advanced Machine Learning Systems. Machine Intelligence Research Institute, Berkeley, CA (2016)
  24. Tegmark, M.: Life 3.0: Being Human in the Age of Artificial Intelligence. Alfred A. Knopf, New York, NY (2017)
  25. Vinge, V.: The coming technological singularity: how to survive in the post-human era. Whole Earth Rev. 77 (1993)
  26. Wallach, W., Allen, C., Smit, I.: Machine morality: bottom-up and top-down approaches for modelling human moral faculties. AI Soc. 22(4), 565–582 (2008). https://doi.org/10.1007/s00146-007-0099-0
  27. Yudkowsky, E.: Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures. The Singularity Institute, San Francisco, CA (2001)
  28. Yudkowsky, E.: Artificial intelligence as a positive and negative factor in global risk. In: Bostrom, N., Cirkovic, M. (eds.) Global Catastrophic Risks, pp. 308–345. Oxford University Press, Oxford (2008)
  29. Yudkowsky, E.: Complex value systems in friendly AI. In: Schmidhuber, J., Thórisson, K.R., Looks, M. (eds.) Artificial General Intelligence: 4th International Conference, AGI 2011. LNAI, vol. 6830, pp. 388–393. Springer, Berlin (2011)
  30. Yudkowsky, E.: Intelligence Explosion Microeconomics. Technical Report 2013-1. Machine Intelligence Research Institute, Berkeley, CA (2013)
  31. Yudkowsky, E.: There’s No Fire Alarm for Artificial General Intelligence (2017). https://intelligence.org/2017/10/13/fire-alarm/

Facebook’s Distorting Lens: The Danger of Social Deference

               Recently I got over my revulsion for Facebook and once again activated an account.  I did it in part because, though I dislike the platform for some obvious reasons, I feel it’s important to engage with something that is so monumentally influential.  It’s important to know firsthand just what the atmosphere is like, what sorts of effects it has on its users, and what sorts of changes happen in the environment and what effects they seem to have.  I’m quite familiar with the way in which it creates echo chambers and epistemic bubbles, and with the draining effect it tends to have on my psyche, but in my recent interactions I feel most upset by what seems to be a lack of autonomy in the social realm.  I feel shuffled from post to post without knowing why and without having any sense that I can control what and who I see.  It’s all the more distressing that on Facebook my social interactions are being governed by unknown algorithms.  I am troubled by what seems to be an integral part of Facebook, something I’ll call social deference.

               It’s impossible to live in the modern world without deferring to others about a good deal of things.  We simply don’t have the ability to know firsthand and from the ground up all the information we need to know.  The most obvious form of deference is deference about facts.  When I accept someone’s word on something, I’m taking on what they say as my belief.  We defer to doctors about the safety of medications and treatments, to engineers about the safety of our planes and bridges, and to news organizations about the events of the day.  This sort of thing is both commonplace and necessary: it would be difficult to get out of bed without trusting other people to sort some of our facts for us.

               There are, on the other hand, facts about which it seems peculiar to defer.  Several years ago, I proposed the following thought experiment.  Suppose that Google offered an app called Google Morals.  You could enter in any question about morality—should I be a vegetarian? Is it permissible to lie to achieve one’s ends? Is abortion permissible?—and Google Morals would give you the answer.  Set aside for the moment that it would be unclear just how the app would work and how it would have access to the moral truths.  Suppose we had reason to believe it did.  Nevertheless, I maintain, there is something peculiar about deferring to Google Morals, something that isn’t peculiar about deferring to Google Maps in order to learn how to get from Springfield to Capital City.  There is a way in which one is shirking one’s responsibility as a person when one simply takes Google’s word when it comes to moral matters.

               A good part of the problem with moral deference is that we don’t have access to why Google provides the answers it does.  It wouldn’t be a problem if we could “see the work” and understand why Google provides the verdicts it does.  In that case it’s likely we wouldn’t simply be deferring—we wouldn’t be accepting Google’s verdict simply because of Google’s output, we would be altering our beliefs because we understood the reasons why Google said what it said.  Understanding why something is true, being able to articulate the ins and outs, is important when it comes to some of our beliefs—namely the moral beliefs that make us who we are.

               OK, so suppose this is right; what does this have to do with Facebook?  It strikes me that Facebook, too, encourages a sort of deference, one that is likely as problematic as moral deference.  Call it social deference.

               Suppose that you systematically deferred to others about who was a good friend.  Instead of evaluating someone based on their merits, based on how they treated you, you simply asked a friend expert, a “friendspert,” whether someone was a good friend.  It’s not just that the friendspert recommends you check someone out because they might be a good friend, but that you adopt the belief that the person is your friend based on the friendspert’s advice and you organize your life accordingly.  This is a sort of social deference—one is allowing one’s social circle to be determined based on the say-so of another.  In some sense one is shirking one’s duties as a friend and offloading important work onto others that really should be done by each of us—evaluating people based on their perceived merits and demerits and befriending them based on how they treat you.  There would be something wrong if someone asked “why are you my friend?” and your answer was “because the friendspert told me to be.”  Acting that way depreciates friendship to the point that it’s not clear that one really has a friend at all.

               The friendspert is an extreme case, and though it’s tempting to say that Facebook, with its friend suggestions, is acting like a friendspert, that’s probably not quite right.  There is perhaps a little truth to this, but it almost certainly overestimates what is really going on when we “friend” someone on Facebook.  It’s not as though when we click that blue button the person actually becomes our friend in any robust sense, and it’s not as though we shut down our independent evaluation of that person and just defer to Facebook’s algorithm.  We form beliefs about the person and make attachments based on what we see on our feed or how we interact with them.

               There is, though, a type of social deference involved in Facebook that might be even more insidious.  We are deferring in this case to an algorithm that affects how our friends and social circles appear to us.  Who we see and which posts we see are determined by a system that is unknown to us.  To the degree that we let our attachments be shaped by those algorithms we are guilty of social deference.  We are allowing our connections to other people to be shaped based on decisions and frameworks that are not our own.  In doing so we are ceding our social autonomy and we’re allowing one of the most essential parts of ourselves—the social part—to be molded by a third party.

               Most of us know, at least after adolescence, that we should not judge people simply by what others report about them.  Even if those reports are accurate, the intermediary in this case is apt to distort our picture of other people, thereby shaping our judgments about them.  It is important, indeed it’s our responsibility, to judge people as much as we can without intermediaries shaping our perception of them.  The problem isn’t just that falsehoods and misrepresentations enter the mix.  Even supposing they don’t, it is our responsibility to form our interpersonal relationships—especially our friendships—ourselves.  Forming and nourishing friendships requires a subtle navigation between revealing too much about oneself and not enough, foregrounding some features and not others.  This isn’t dishonest, it’s a recognition that not every fact is relevant to every relationship, and sometimes the order and emphasis of what one reveals about oneself says as much about oneself as the information revealed.  (If I start every conversation announcing my religion or political affiliation, that fact will tell you as much about me as whatever you learn about my faith or politics.)

When we use Facebook, we are ultimately introducing an intermediary between us and our social world and placing trust in it to provide an accurate picture of that world.  In fact, what we get is a distorting lens that highlights some parts of our friends at the cost of others.  Importantly, the algorithms that determine which posts we see are not interested in generating or preserving true friendship, nor are they interested in showing us the truth about people.  They are interested in what keeps us clicking, and as such they tend to show us the most provocative parts of our social sphere.  People’s most outrageous opinions are foregrounded, while the features that are relevant to true friendship are pushed out of view.
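To make the contrast concrete, here is a deliberately crude sketch in Python of two ways of ordering the same handful of posts: one scores them by predicted engagement, the other by something closer to friendship.  The posts, fields and weights are all invented for illustration; nothing here is Facebook’s actual (and proprietary) ranking system.

    # A deliberately crude contrast between two ways of ordering a feed.
    # The posts, fields, and weights are all invented; this is not any platform's real algorithm.
    posts = [
        {"author": "old friend", "content": "quiet update about a new job", "outrage": 0.1, "clickiness": 0.2},
        {"author": "acquaintance", "content": "inflammatory political hot take", "outrage": 0.9, "clickiness": 0.8},
        {"author": "close friend", "content": "asking for help moving this weekend", "outrage": 0.0, "clickiness": 0.1},
    ]

    def engagement_score(post):
        # what an attention-driven ranker optimizes: predicted clicks, boosted by provocation
        return 0.6 * post["clickiness"] + 0.4 * post["outrage"]

    def friendship_score(post):
        # what a friendship-driven ranker might optimize: closeness, penalizing provocation
        closeness = {"close friend": 1.0, "old friend": 0.7, "acquaintance": 0.3}[post["author"]]
        return closeness - 0.5 * post["outrage"]

    print("Ranked for engagement:", [p["author"] for p in sorted(posts, key=engagement_score, reverse=True)])
    print("Ranked for friendship:", [p["author"] for p in sorted(posts, key=friendship_score, reverse=True)])

Even with toy numbers, the orderings invert: the inflammatory acquaintance rises to the top of the engagement ranking, while the close friend who needs help sinks to the bottom.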

               We needn’t rest with abstractions to see the point.  How many of us have seen the political posts of our family members and changed forever how we see them?  How many of us have seen the posts of our friends only to resent them for their self-righteousness or for what might appear to be their self-obsession?  Our perspective on our social world is being shaped by the hidden algorithms that lead users to spend time on the site, not by anything that matters to friendship.  This is a kind of social deference, and by engaging in it we are handing over responsibility for our relationships to a source we all know is untrustworthy.  The result is a weakening and cheapening of our relationships, but we can’t just blame Facebook.  It’s our decision to give a third party the power to distort and mediate our relationships, and to that degree we deserve a large share of the blame for abandoning our responsibilities to our friends and our social sphere.

Why We Shouldn’t Be Allowed to Waive our Privacy Rights

There is little doubt that privacy clauses and terms of service agreements don’t support the moral burden they are meant to carry.  All too often they are designed to provide political cover rather than to generate informed consent.   Not only does no one read them, but even if someone did and had the attention span and intelligence to follow them, it’s doubtful that they would find all the policies hidden in documents several clicks deep. Interesting fact: If the average American actually read all the policies they encountered, they would lose 76 full workdays in the process. The cost to productivity if all Americans were so conscientious would approach $1 trillion.
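For what it’s worth, the arithmetic behind estimates like these is easy to reproduce.  Here is a rough back-of-envelope sketch in Python; the number of policies, the minutes per policy and the population figure are illustrative assumptions rather than figures from any particular study, with the federal minimum wage used as a deliberately conservative value of a person’s time.

    # Back-of-envelope sketch of the collective cost of actually reading what we "agree" to.
    # The first three inputs are illustrative assumptions, not figures from any study.
    policies_per_year = 1400       # assumed: policies and terms-of-service documents encountered annually
    minutes_per_policy = 25        # assumed: time to read one such document with care
    us_adults = 240_000_000        # assumed: rough number of American adults
    value_of_time = 7.25           # federal minimum wage, a conservative dollar value per hour

    hours_per_person = policies_per_year * minutes_per_policy / 60
    workdays_per_person = hours_per_person / 8          # eight-hour workdays
    aggregate_cost = hours_per_person * value_of_time * us_adults

    print(f"Hours per person per year: {hours_per_person:,.0f}")        # roughly 583
    print(f"Workdays per person per year: {workdays_per_person:,.0f}")  # roughly 73
    print(f"Aggregate opportunity cost: ${aggregate_cost / 1e12:.2f} trillion")  # roughly $1 trillion

Different assumptions obviously move the outputs around, but it takes no heroic inputs to land in the neighborhood of the figures above.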

There is no arguing it, really: clicking on an AGREE button no more means that you agree with the content of a terms of service agreement than politely nodding your head during a mumbled conversation in a noisy bar means you are agreeing with the opinion you aren’t really hearing.

              This is a big problem with the way we are doing things, but there is another, more fundamental issue that few have recognized: our privacy rights aren’t ours to waive. 

              That sounds paradoxical, but there are other rights we intuitively can’t waive—I cannot waive my right to self-determination by selling myself into bondage, for example, and I can’t waive my right to my body by selling myself to a cannibal for Thanksgiving Dinner.  It’s not plausible, though, that privacy violations inflict such extreme harms, so those probably aren’t the best places to look for analogues. 

A closer analogy to privacy rights is voting rights.  I cannot waive my right to vote.  I can choose not to exercise it, but I cannot waive it.  I cannot exchange my right to vote for internet access or for a cushy job. I certainly can’t transfer my right to you, no matter how much you want to pay me. It’s my right, but that doesn’t mean I can give it up. That’s because my right to vote doesn’t only protect me—it protects my fellow citizens and the institution of democracy we collectively cherish. 

If I have the right to sell my vote, it endangers the entire democratic franchise.  It is likely to make your vote less valuable in comparison to someone else’s—plug in your favorite malevolent billionaire here for a scenario in which electoral outcomes are determined by the mass purchase of voting rights.  We cannot waive our right to vote because that right doesn’t primarily prevent a harm to us as individuals; it prevents a harm to an institution that undergirds the rights of others.

              I suggest privacy rights are like voting rights in this respect.  While we can suffer individual harm if someone knows our political preferences or gains access to the subtle triggers that sway us for or against a product or a candidate, the more important harm comes with the threat to the valuable institutions we collectively constitute. 

If I have the ability to waive my privacy rights, so does everyone else.  If we all waive those rights, we enable the collection of data that makes possible significant control over the electorate as a whole.  Given enough information about the thoughts and behaviors of voters, propaganda and advertising can be extremely effective in swaying enough attitudes to change the outcome of an election. Though votes aren’t being bought, the result is similar: each individual vote is now outweighed by the statistically certain outcome of a data-informed campaign of voter manipulation.
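The arithmetic behind that worry is not complicated.  Here is a toy Python calculation; every number in it is a made-up assumption, chosen only to show how a modest persuasion rate applied to a large microtargeted group can dwarf the margin of a close election.

    # Toy illustration: a small persuasion rate over a large targeted group vs. a close margin.
    # All three inputs are made-up assumptions for illustration only.
    targeted_voters = 10_000_000   # assumed: persuadable voters identified from harvested data
    persuasion_rate = 0.01         # assumed: 1% of those targeted actually change their behavior
    decisive_margin = 80_000       # assumed: combined margin of victory in the pivotal states

    votes_moved = int(targeted_voters * persuasion_rate)
    print(f"Votes moved by the targeted campaign: {votes_moved:,}")
    print(f"Margin needed to flip the result:     {decisive_margin:,}")
    print("Enough to flip the outcome." if votes_moved > decisive_margin else "Not enough on its own.")

No individual ballot is bought or altered; the leverage comes entirely from knowing which attitudes to nudge, and whose.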

              If this is right, we’ve largely been looking in the wrong direction both for the harms of privacy rights violations and for the harms involved in our wanton disregard of those rights.  In an age where data-analytics can discern surprising connections between different elements of human personality and behavior, our data is not our own.  By giving up our own data, we are essentially informing on those like us and enabling their manipulation.  We shouldn’t do that just because we have an itch to play Clash of Kings.

So where does this leave us?  I like to play Clash of Kings as much as the next guy and frankly, when I think of it in terms of the harms likely to come to me, Clash of Kings can win pretty easily.  When I realize that my own visceral reaction to privacy harms really isn’t to the point, I’m a little less cavalier about parting with my data.  The truth is, though, that this is a place for governmental regulation, just as it is in the case of voting rights.  In today’s political climate I won’t hold my breath, but the way we all think of these issues needs to undergo a shift away from our worries about our own individual private lives.  As important as each of us is as an individual, some of the most worrisome harms come from the effect on the groups to which we belong.  We need to shift our focus toward the harm these privacy violations cause all of us by enabling the manipulation of the public and the vitiation of our democracy.

Originally appeared in The Hill

Emotional Manipulation, Moving Too Fast and Profiting on the Broken Things

The task of keeping up with tech news has become rather harrowing as of late. The avalanche of information keeps the constant replacement of stories flowing and our attention overloaded. This has become so clearly the case that it’s easy to forget what happened just a few weeks ago. Facebook’s weak stance on political ads quickly became Google’s acquisition of our medical records before both companies announced they would marginally increase the minimum number of profiles required for targeted ads. In fact, I expect companies like Facebook bake our forgetting into their internal, day-to-day practices.

This hurtling forward coupled with our inability to keep up with the resulting scandals has allowed for the actualizing of the oft-derided ‘move fast and break things’ motto. While one takeaway might be that our attention spans have contracted due to informational overload, it’s certainly not the only possibility. One might suspect that we are incapable of focusing on any particular tech scandal, not because of the shrinking of our attention spans but because of the ever-evolving techno-scandal culture we now inhabit. To recognize the ease with which we forget, one need only revisit one particularly troubling example of ‘breaking things’ from just a handful of years ago.

In 2012, many people were unknowing subjects in a social experiment run by Facebook. Curious about whether the influence it had acquired could allow it to change the moods of its users, Facebook manipulated their News Feeds. For some users it filtered out much of the negative, depressing content, leaving a more cheerful feed; for others it filtered out the positive, uplifting content. Its hypothesis was confirmed, and the resulting paper was published by a prestigious peer-reviewed scientific journal (The Proceedings of the National Academy of Sciences, June 17, 2014, V. 111(24), p. 8788–90). It’s worth having a look at the abstract for the paper:

Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. Emotional contagion is well established in laboratory experiments, with people transferring positive and negative emotions to others. Data from a large real-world social network, collected over a 20-y period suggests that longer-lasting moods (e.g., depression, happiness) can be transferred through networks [Fowler JH, Christakis NA (2008) BMJ 337: a2338], although the results are controversial. In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks. This work also suggests that, in contrast to prevailing assumptions, in-person interaction and nonverbal cues are not strictly necessary for emotional contagion, and that the observation of others’ positive experiences constitutes a positive experience for people.[1]

And within the first page:

On Facebook, people frequently express emotions, which are later seen by their friends via Facebook’s “News Feed” product. Because people’s friends frequently produce much more content than one person can view, the News Feed filters posts, stories, and activities undertaken by friends. News Feed is the primary manner by which people see content that friends share. Which content is shown or omitted in the News Feed is determined via a ranking algorithm that Facebook continually develops and tests in the interest of showing viewers the content they will find most relevant and engaging. One such test is reported in this study: A test of whether posts with emotional content are more engaging. [italics added][2]

It’s one thing to read this from an academic perspective. It’s an entirely different thing to truly consider the fact that Facebook manipulated the emotions and mental states of hundreds of thousands of people. It’s important to feel the outrage that’s appropriate toward something so outrageous. It’s worth reflecting upon the power that such an ability and the willingness to use it imply. And finally, it’s unnerving but necessary to acknowledge that we now live in a world where this power is wielded by numerous profit-driven companies that have come to dominate a significant portion of the global, online distraction economy.

Concerning such questionable activities, I fear we’re no longer shockable. We see that these companies absorb our health and fitness data, track our purchase and click patterns, and buy our driving, employment, arrest and voting records. All the while, another video of a ‘disenfranchised’, petulant white lady raging at the sight of a black child selling water ‘without a license’ goes viral. Because the latter is more visceral it becomes a more likely object of our fleeting anger, and hence a more likely object of our attention.

In light of all this, it’s natural to wonder, what’s the difference between a state-run media outlet that attempts to placate its citizens with inspirational, dangling kittens and a social media company that manipulates the emotions of its users? While one is powerful, immensely profitable and potentially oppressive, the other is unlikely to be run by a barely grown-up billionaire who stumbled upon too much power after launching a website aimed at rating the ‘hotness’ of women on his college campus.

It’s one thing for these companies to harvest then profit from our data. It’s another thing altogether to experiment on us — without our consent, mind you — while doing so. It’s about time that we ask, at what point does free access to their services no longer suffice as compensation for being unwitting subjects in a social experiment? I expect that our giving this the consideration it deserves would require us to remember the last scandal long enough to recognize that the experiment is ongoing and that many more ‘things’ have been broken.

[1] Adam D. I. Kramer, Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental evidence of massive-scale emotional contagion through social networks”. The Proceedings of the National Academy of Sciences, June 17, 2014, V. 111(24), p. 8788.

[2] Ibid.

Pigeonholing and Personalization of the Online Experience

The personalization of our online experience – a result of the algorithmic extraction of our personal data, the subsequent curation of what we see, and the boundaries of our own clicking behavior – threatens to lead to our being pigeonholed into increasingly narrow categories determined by online platforms and data brokers. Such pigeonholing will further constrain what we encounter online, as with each stage of narrowing we will continue to click on increasingly limited subsets of what is made available to us. While the amount of information we encounter will, I expect, remain as robust as ever, the content of this information will be constrained by the bubbles to which we’re assigned. One troubling implication is that what we encounter will continue to narrow until the original promise of the internet ‘opening’ the world yields the opposite result, leaving us more easily predictable consumers and more easily persuaded actors in an increasingly curated spiral of contracting content.
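The feedback loop described above is simple enough to simulate. The following Python sketch is a toy model under stated assumptions: a user who starts out equally curious about five arbitrary topics, and a platform that shows more of whatever has been clicked before. Even with perfectly uniform interests, the rich-get-richer dynamic typically leaves one topic dominating the feed.

    # Toy model of personalization narrowing what we see. The five topics, the reinforcement
    # rule and all parameters are arbitrary assumptions for illustration, not any real system.
    import random

    random.seed(0)
    topics = ["politics", "sports", "food", "science", "travel"]
    true_interest = {t: 1.0 for t in topics}   # the user starts equally curious about everything
    model = {t: 1.0 for t in topics}           # the platform's running estimate of those interests

    def weighted_choice(weights):
        """Pick a topic with probability proportional to its weight."""
        r = random.uniform(0, sum(weights.values()))
        for topic, w in weights.items():
            r -= w
            if r <= 0:
                return topic
        return topic

    for _ in range(2000):
        shown = weighted_choice(model)                       # content shown in proportion to the model
        click_prob = true_interest[shown] / sum(true_interest.values())
        if random.random() < click_prob:                     # a click reinforces that topic
            model[shown] += 1.0

    total = sum(model.values())
    for share, topic in sorted(((model[t] / total, t) for t in topics), reverse=True):
        print(f"{topic:>8}: {share:.0%} of what the feed now shows")

The user never changed; only the platform’s picture of the user did, and that picture now determines most of what gets shown.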

To see how we are already being categorized, consider one of the many pigeonholing practices of Acxiom, one of the world’s most powerful data brokers:

“Acxiom assigns you a 13-digit code and puts you into one of 70 ‘clusters’ depending on your behavior and demographics… People in cluster 38…are most likely to be African American or Hispanic, working parents of teenage kids, and lower middle class and shop at discount stores. Someone in cluster 48 is likely to be Caucasian, high school educated, rural, family oriented, and interested in hunting, fishing, and watching NASCAR.”[1]

As companies like these persist in selling our data and the content of our online experiences narrows further, we will continue to be treated as categorizable cogs in an increasingly competitive attention economy.

In fact, Acxiom’s own ‘Consumer Data Products Catalog’ gives us a look inside just how they view us:

“Information includes consumers’ interests — derived, the catalog says, “from actual purchases and self-reported surveys” — like “Christian families,” “Dieting/Weight Loss,” “Gaming-Casino,” “Money Seekers” and “Smoking/Tobacco.” Acxiom also sells data about an individual’s race, ethnicity and country of origin. “Our Race model,” the catalog says, “provides information on the major racial category: Caucasians, Hispanics, African-Americans, or Asians.” Competing companies sell similar data.”[2]

It must be admitted that being placed in categories provides us with ads for products we’re more likely to desire, but it’s nonetheless natural to wonder if these benefits can compete with the costs. As each company learns more about you, it more finely shapes what you see. Beyond limiting the range of information we’re exposed to, this may, as noted by Frischmann and Desai[3], lead to the standardization of the individual. In other words, we have become subjects in a massive social engineering project. If companies can determine what we want with ever-increasing precision, they may ultimately be able to (at least partially) determine our online behavior by way of precisely tailoring the options they provide. In short, corporate knowledge of individuals may allow corporations to pigeonhole us psychologically in ways that are conducive to their own ends rather than ours. Consider the following from Frischmann and Desai:

“Suppose we’d like to induce a group of people to behave identically. We might personalize the inducements. For example, if we’re hoping to induce people to contribute $100 to a disaster relief fund, we might personalize the messages we send them. The same applies if we’re hoping to nudge people to visit the doctor for an annual check-up, or if I’m hoping to get them to click on an advertisement. Effective personalized ads produce a rather robotic response—clicks. Simply put, personalized stimuli can be an effective way to produce homogenous responses.”[4]  

A closely related worry involves the emergence of echo chambers and filter bubbles. The personalization and filtering of our online experience can lead to homophily, or the forming of strong connections to, and preferences for, people who share our beliefs and attitudes. While this can be psychologically comforting, it can also reinforce confirmation bias and lead to the dismissal of opposing ideas.[5] Clearly, this phenomenon is problematic on many fronts, one of which involves the erosion of democracy. A vibrant, well-functioning democratic society requires the free, active and willing exchange of diverse ideas. The outright dismissal of opposing ideas yields pernicious polarization that undercuts both the likelihood of these crucial exchanges and the open-mindedness and willingness to truly consider competing opinions.

One finds oneself in a filter bubble when one is presented with limited perspectives on any relevant issue(s).[6] The filtering of our online experience may lead us to mistakenly believe the information we’re receiving is comprehensive while leaving us informationally and epistemically sheltered. Alternative, competing ideas are likely to seem not only foreign but reasonable targets of scorn and dismissal.

The more any entity knows about you, the better able it will be to persuade you to act in particular ways. This, in fact, is the goal of social engineering. And clearly this would be an attractive scenario for any organization seeking results of any kind. We know that companies – and possibly countries – exploited partnerships with Facebook by microtargeting individuals during the 2016 Presidential campaign. In addition to Cambridge Analytica, the Russian Internet Research Agency targeted minorities, amongst others, by creating fake accounts and nudging them either toward voting for a third-party candidate or toward not voting at all. The more companies know about us, the more they can target us (or small, pigeonholed groups of us) directly in order to affect our beliefs and, therefore, our actions.

Nonetheless, the complete determination of our desires, the orchestrated direction of our attention and the erosion of our democracy are not inevitable. It’s important to recognize that we are not without responsibility or control in this brave new world, even if exercising them takes some significant reflection and understanding of the workings of the online universe.

It would be entirely unreasonable for our IRL (i.e., in real life) behavior to be constantly monitored in order to commodify our attention and modify our behavior. While I expect little resistance to this claim, this is the reality when it comes to our online lives. We need to at least consider the possibility that what we see is being limited by organizations seeking to monopolize our attention and direct our behavior. But we also need to realize that our online behavior is part of what is leading to our limited purview. In my very limited wisdom, I would suggest that we seek out opposing viewpoints, alternative news sources, new experiences and attempt to engage with information that transcends what we find comforting and agreeable. Harder still, we need to truly remain open to changing our opinions in the face of new data.


[1] Lori Andrews, I Know Who You Are and I Saw What You Did. p. 35.

[2] https://www.nytimes.com/2012/06/17/technology/acxiom-the-quiet-giant-of-consumer-database-marketing.html

[3] Frischmann, B. and Desai, D. “The Promise and Peril of Personalization”. https://cyberlaw.stanford.edu/blog/2018/11/promise-and-peril-personalization

[4] Ibid.

[5] For further discussion see C Thi Nguyen, “Escape the Echo Chamber”, https://aeon.co/essays/why-its-as-hard-to-escape-an-echo-chamber-as-it-is-to-flee-a-cult

[6] See Eli Pariser’s The Filter Bubble.

The Norms that Undermine Online Privacy

When it comes to dominant online platforms like Google, Amazon and Facebook, the idea of ‘privacy norms’ may bring to mind the default settings that one might tweak if one cares at all about privacy. On the other hand, there are additional ‘norms’ that lurk in the background and provide the foundation for the riches these companies have accumulated.

These norms, developed and perpetuated by the corporate behemoths in question, provided the groundwork for their present-day domination by allowing them to harvest massive quantities of our data. These companies have come to occupy such a ubiquitous presence in our lives that it’s easy to overlook the background assumptions that gave rise to the paradigm shift that made the phrase ‘data is the new oil’ so fitting.

While introducing the topic of privacy and big data to my Tech Ethics students this semester, I mentioned Jaron Lanier’s idea that we ought to be paid for our data. If there’s anything problematic about sweatshop workers being paid next to nothing for their roles in the production of products that are then sold at exorbitant prices (just to mention one category of worker exploitation) then there’s certainly something to Lanier’s claim. These companies are selling us. We generate the data, data that simply would not exist without us. And while one might claim that the same information could be generated by any other platform user, the fact is that these companies profit from the fact that we generated it. Moreover, if anyone else can generate the same data, then why do they need ours? Because, obviously, more data is better than less.

My students’ reactions ranged from chuckles to furrowed brows. And I don’t blame them. I expect that these reactions are a result of the simple fact that we don’t (not that we couldn’t or shouldn’t) get paid for generating this data, the fact that they came of age at a time when these companies have always dominated, and the fact that we have been socialized to acquiesce to certain default assumptions as though they were handed down from a higher power. And no matter how independently plausible Lanier’s idea may be, it’s likely to seem bizarre to most. Once again, there’s reason for this.

The default assumption is that we have little to no rights when it comes to our online data, as though any such rights evaporate when we touch our phones or open our laptops. We forfeit these rights (so goes the assumption), even if we do so unknowingly (or, as I would contend, without consenting) every time we click in the tiny, psychologically insignificant ‘I agree to the terms of service’ box which allows us to get to the next page of uncountably many sites that are central to our everyday lives (e.g., paying credit card bills, uploading checks, renting a car, booking a flight, sending money to a friend, ordering food online, paying our mortgage, signing up for newspaper subscriptions, etc.). Because it’s the ‘default’, we accept it without question. Nevertheless, I believe this assumption is outrageous when considered with any care.

If I went to my local Target and, when given the credit card receipt, signed next to an analogous, analog ‘terms of service’, would this really give Target the right to have someone follow and record me for the rest of my life? What if the cashier somehow affixed a tracking device to me that would accomplish the same thing? What’s the difference between tracking my activities with an algorithm and doing so with a camera or a microphone? While the latter is clearly wrong, we somehow accept the former as an unassailable fact of online life.

It’s as though these companies are driven by impersonal algorithms developed for nothing more than extracting data and the profits they bring (*incredulous look implied), all the while treating the originators of such data – flesh and blood human beings – as mere means to these ends. And the scope of these aims seems to have no boundaries. According to a March 2019 article in The Atlantic:

“Amazon has filed a patent for a voice assistant that would recommend cold and flu medicine if it overhears you coughing.”

And…

“The health-care start-up Kinsa drew sharp criticism from privacy experts last year for selling illness data. Kinsa makes a smart thermometer that takes a user’s temperature, then instantly uploads it to a server along with gender and location information.

Kinsa used this data to create real-time maps of where people were getting sick and to refine its flu predictions for the season, with accuracy levels matching those of the Centers for Disease Control and Prevention. But it also sold the information to Clorox, which beefed up its marketing of disinfecting wipes and similar products in zip codes where thermometers reported a spike in fevers.”[1]

When one considers these examples and others, such as iRobot’s plan to use their Roomba vacuum products to map the layout of users’ homes in order to sell the data to the Googles and Amazons of the world, it becomes difficult to imagine anything being off limits.

Moreover, I’m writing this within weeks of Google’s announcing its acquisition of Fitbit (the makers of one of many WiFi-enabled fitness and movement trackers) and just days after The Wall Street Journal’s reporting that Google, by partnering with Ascension (a Catholic network of hospitals and clinics across 21 states), had harvested medical records from over 2500 hospitals and clinics in a venture codenamed ‘Project Nightingale’. The data includes lab results, doctor’s visits, prescriptions, hospitalization records, diagnoses, as well as patient names and dates of birth. And all of this occurred without informing those whose records were harvested or the doctors who provided the services. Within hours of WSJ’s story breaking, Google and Ascension made their partnership public.

When condemnation from a handful of Senators on both sides of the aisle became public, Google released a statement including the following: “We believe Google’s work with Ascension adheres to industry-wide regulations (including HIPAA) regarding patient data, and comes with strict guidance on data privacy, security, and usage.” Meanwhile, Ascension stated that “all work related to Ascension’s engagement with Google is HIPAA compliant and underpinned by a robust data security and protection effort and adherence to Ascension’s strict requirements for data handling.”

It turns out that they just might be right. According to HIPAA (i.e., the Health Insurance Portability and Accountability Act of 1996), health care providers can disclose health records to third parties if the goal of doing so is to improve the quality of the health care provided. If this is correct, then it highlights a clear case where what is legal and what ought to be legal part ways. If we care at all about privacy, especially when considering the sensitive information that will be extractable from us as technology inevitably continues to advance into the future, then we need to hold these companies to ethical standards and not just legal benchmarks or we risk losing far more control over our own lives.

According to a Whistleblower claiming to have worked on the project: “Patients haven’t been told how Ascension is using their data and have not consented to their data being transferred to the cloud or being used by Google. At the very least patients should be told and be able to opt in or opt out”.[2]

With that said, Google also announced that “under this arrangement, Ascension’s data cannot be used for any other purpose than for providing these services we’re offering under the agreement, and patient data cannot and will not be combined with any Google consumer data”.

This might sound familiar. Before 2012, when Google announced that it would merge profile information across its many platforms (including Google search, Gmail, YouTube and Android OS) without allowing users to opt out, it had said it would not do so. In addition, a recent Wall Street Journal investigation revealed that Google does, in fact, curate its search results, despite stating on its blog that “we do not use human curation to collect or arrange the results on a page.”[3] Such tweaks to its algorithms include those that favor big businesses like Amazon, Facebook and eBay over smaller businesses, as well as “blacklists to remove certain sites or prevent others from surfacing in certain types of results”.[4] Google employs contractors to evaluate its search rankings and, according to some of these contractors, they are informed about ‘the correct ranking of results’. The company then uses the contractors’ evaluations to adjust its algorithms. The overarching point is that Google has often said that it doesn’t or wouldn’t do things that it in fact does or eventually has done. In light of this, one can be forgiven for being skeptical of its claim that “patient data cannot and will not be combined with any Google consumer data”.

It’s worth stressing that, in light of the relatively recent mind-boggling advances in artificial intelligence and extraction algorithms, it may be impossible to conceive of the power of future technologies. As a result, the current importance of our privacy rights over our online data cannot be overstated.


[1] https://www.theatlantic.com/technology/archive/2019/03/flu-google-kinsa-sick-thermometer-smart/584077/

[2] https://www.theguardian.com/technology/2019/nov/12/google-medical-data-project-nightingale-secret-transfer-us-health-information

[3] https://www.blog.google/products/search/how-we-keep-google-search-relevant-and-useful/

[4] https://www.wsj.com/articles/how-google-interferes-with-its-search-algorithms-and-changes-your-results-11573823753?mod=hp_lead_pos7

Five Problems with Google’s Expanding Reach


               This morning, within an hour of my first cup of coffee, I heard an ad for Google’s telephone services, and read news of Google’s foray into health care data processing and its plans to enter the banking sector.  I listened to the news on my Android phone, and not one to buck a trend, I Googled “Google” to see what other news there was about the tech giant.  (Fitbit…Stadia…Quantum Supremacy.)  All this before I checked my email, some of which will come to me through Gmail, and looked at my schedule for the day, courtesy of Google Calendar.  You get the point:  Google is everywhere, and it is on its way to doing just about everything. 

               Shouldn’t we be concerned about a company that is so all encompassing?  It’s not that I think Google is an evil company or that it is bent on a dystopian project of world domination.  Perhaps I’m naïve, but from what I can tell, those who run the various branches  of Google and its parent company Alphabet—which is, really, just Google under a different set of, um, letters—are well intentioned, idealistic people who believe they are part of an unprecedented force for good.  They have some good arguments on their side: their products have made so many important things so much easier for so many people, at a cost to the consumer of approximately zero dollars.  Google appears to be the leader of the pack in artificial intelligence, which will likely lead to incredible developments in medicine, education, communication, engineering, and, well, everything. 

               Yet I think we should be concerned, even if we grant that Google has done everything legally and in accordance with privacy regulations, and even though it might be the case that within any particular industry Google doesn’t constitute a monopoly.  Here are 5 reasons for concern:

1.   Too Big to Fail and Too Powerful to Counter

It was supposed to be a lesson from the Great Recession: when companies become too integral to the workings of the economy, the possibility of failure becomes remote, not just in the minds of the company executives but in actuality.  If a single company becomes too essential, it can virtually be guaranteed that it will be propped up in case of major failure.  Google will not be Lehman Brothers.  At the moment, it seems extremely unlikely that it will face that sort of disaster, but if a major series of mistakes threatens Google, the U.S. Government will almost certainly step in.  This means that Google doesn’t face one of the biggest checks on private corporations—the possibility of failure.  The worry here is not so much that this will lead to financial recklessness, though that too is possible.  The worry is that it lacks a major check on ethical recklessness.  Set aside the fact that its lobbying power is astonishing, and that Google executives or ex-employees wind up having a hand in crafting regulation. If Google violates our trust, there will be little we can do.  Consumers will find it difficult to escape its ecosystem, and even if there is a financial toll for ethical problems there is good reason to believe it will be protected.  Its failure is our failure.

2.  A Single Point of Vulnerability

There is a reason that nature encourages biological diversity, and it’s not just because of the Hapsburg jaw.  It’s because a diverse system is much less likely to be wiped out by single threats.  If our food chain, for example, lacked genetic diversity, we would risk starvation due to a single blight.  (See the Great Famine of Ireland.)  If our economic, social and personal lives are intertwined with a single company, we face a similar threat.  No doubt the bigger they are, the more robust their security and the more established their corporate firewalls.  (I hope so, anyway.)  But given their involvement in every sector of our lives, a major mistake at Google, or a single successful attack, could be utterly disastrous.  Maybe this Titanic won’t sink, but will we bet everything on it?

3.  Power over the People

Google might abide by privacy regulations, but the fact is that these regulations are largely crafted with a poor understanding of the value of privacy.  The main danger of our information being held by a government or private corporation isn’t the possibility of leaks or hacking, it’s the power it gives others to shape our lives.  Google knows this, and intuitively we do too; it’s what enables Google to give us the best search results and deliver excellent products.  But that power is inextricably linked with the power to manipulate users, both individually and as a group.  This power increases with the scope of Google’s data collection: it grows exponentially, one imagines, with knowledge of health records, for example.  It’s not that Google will sell this information to your insurance company, or even  that it will become your insurance company (though don’t bet against it) but that it can influence you and your environment in ways you can’t even comprehend in order to achieve its goals.  This is made all the easier because as individuals who believe in their unassailable free will, we believe ourselves beyond such influence, even though hundreds of studies in social psychology and billions of dollars spent on advertising argue otherwise.

4.  Dominance over Norms

We are subtly shaped by the technology we adopt.  This occurs in obvious ways, such as the default margins and fonts in our word processing client, but it also occurs in more subtle ways, such as which emails make it to a priority inbox and which get relegated for later attention.  Do we memorize phone numbers anymore?  Carry cameras?  Do my students talk to each other during the breaks in class, or are they looking at their phones?  We know we are shaped by our devices and technological environment, but shouldn’t we worry about the fact that more and more our environment is shaped by a single corporation?  This is one of the themes brought out in Brett Frischmann and Evan Selinger’s excellent book Re-engineering Humanity, and though the point is somewhat subtle, it’s extremely important: the ability to shape our technologies comes with the ability to shape our norms, and the shaping of those norms isn’t driven by an abiding concern for our own deepest values.  It’s driven at least in part by profit and market share.  When a company like Google becomes a Leviathan, we have to ask whether that is too much power for one company to wield.

5.  Artificial Intelligence Supremacy

Though Google may not be a monopoly in any particular sector now, it is set up to be a monopoly in the future, with utter dominance of what might be mankind’s most powerful invention: artificial intelligence.  Artificial intelligence thrives on data, and the more domains in which an AI trains, the more powerful it will be.  Alphabet and Google aren’t looking to dominate us in StarCraft and ancient Chinese board games.  They are aiming at leading the way to general artificial intelligence, and the more domains in which they gain traction the more dominant they will be in that field.  If we thought a telecommunication monopoly made Ma Bell too powerful, we had better open our eyes to the worries that will come with a single company dominating artificial intelligence.  It’s not an exaggeration to say that dominance in AI could easily lead to dominance in any field, especially if a singularity-style intelligence ramp-up is a possibility.  If there is such a thing as a company having too much power, that would surely be it.

These are just a few of the worries that come to mind as Google expands its reach.  I don’t claim that Google should be broken up, or that we should block it from new markets.  I’m not certain that the dominance of Google will be a bad thing.  But I do think we need to give it some thought and recognize that old models of the dangers of monopoly might not do justice to the rise of the tech giants.  Things go badly in surprising ways, but the more centralized power becomes, the more we have to lose in our next surprise.

Social Media, Democracy & Citizen Responsibility

In today’s climate of justifiable suspicion about the Googles and Facebooks of the world, it’s easy to overlook the responsibilities of the individuals using these platforms. While I’m always happy to point out the problematic nature of the data harvesting and information dissemination that these companies are built upon, I would also suggest that this does nothing to diminish our own social and moral obligation to make significant efforts to inform ourselves, resist contributing to increasing polarization, and do whatever is necessary to escape our cozy echo chambers and information bubbles.

Being a good citizen in a democracy requires more than many of us seem to think and much more than our actions often suggest. Pointing fingers, even when done in the right directions, is nowhere near enough. We need to wade through the oceans of bullshit emanating from partisan talking heads, fake news peddlers, marketing driven, agenda-suffused cable-news stations and algorithmically curated newsfeeds in order to determine which actions, policies, and candidates best represent our values, as well as the values most conducive to the health of a thriving democratic society.

President Obama, in a recent interview at the Obama Foundation Summit offered the following: “This idea of purity and you’re never compromised, and you’re always politically woke and all that stuff, you should get over that quickly. The world is messy. There are ambiguities. People who do really good stuff have flaws. People who you are fighting may love their kids and share certain things with you.”

The point, I take it, is not to disregard the mistreatment of marginalized groups but to do something beyond mere posturing and attempting to appear ‘woke’.

Too many of us today seem to believe that it’s enough to call out the many who err or those we may simply disagree with. “Then I can sit and feel pretty good about myself”, said Obama, “because, ‘man, you see how woke I was? I called you out.’ That’s not activism. That’s not bringing about change. If all you’re doing is casting stones, you’re probably not going to get that far. That’s easy to do”.

And while I’m quick to agree that platforms like Twitter and Facebook lend themselves to this practice of racing to be the first to spot and out injustice or ignorant speech, we still need to recognize when we’re being lulled into an ineffectual gotcha game of virtue signaling that, though it may provide fleeting feelings of superiority, produces very little in the way of lasting change or dialogue.

—-

The speed with which Facebook and Google have come to play a central role in the everyday life of so many makes it easy to overlook how recent these companies are. Nonetheless their effects are undeniable. As we shared baby photos and searched for information on anything that might spark our curiosity, they’ve been aggregating our offerings and feeding us what they know will keep us coming back.

None of us like to be confronted with the possibility that we’re alone, that our beliefs might be false, or our deeply held values ultimately misguided. So social media curates our experience to provide us with the validation we so often seek. How better to do this than to gift us our own beliefs and values through the words and stories of others? This keeps us clicking and keeps the advertising dollars pouring in for the companies involved. Just like the angry incel savoring the hateful rantings of Donald Trump, we all feel the cozy pull of having our own views echoed back to us.

But, of course, none of this provides anything by way of truth or understanding. And more to the point at issue, none of this is conducive to an open-minded population willing to do the work required to breathe new life into an ailing democracy teetering on the precipice of unbridgeable polarization. While Aristotle, in the first democracy, aptly said (and I’m paraphrasing) it’s the mark of an educated mind to be able to entertain a thought without accepting it, social media has given us the means of reinforcing our own thoughts without subjecting them to the slightest scrutiny. In fact, one might find these two distinct ideas to be fitting bookends for the nearly 2500 year run of democracy.

While this characterization of things may be a bit hyperbolic, the existence of problematic echo chambers and curated tunnel vision is quite real. Fox News acolytes dig in their heels while liberals roll their eyes, and each side drifts further away from the possibility of honestly engaging with the views of the other. (*I refuse to equate the so-called ‘extremes’ on the left with those on the right. There’s a clear moral and epistemic difference between an oblivious (or worse) refusal to acknowledge, for example, the current resurgence of xenophobia and white supremacy and the desire for health care for all or basic income).

The online social media environment, with its intrusive notifications and conduciveness to mindless scrolling and clicking, falls short of providing an optimal arena for informed evaluation and close examination of information. It’s for this reason that I believe we need to slow our online experience. So many of us ‘grown-ups’ impose limits on our children’s technology usage but do so while staring into the void of facile stumpers and bottomless distraction. Maybe a simple break would do us well. Forget that. A break would do us well. Many of us need to take a goddamn walk…without the phones. Look someone in the eye. It might turn out that the ‘idiot Trump supporter’ or the ‘snowflake Socialist’ is just an ordinary, imperfect human like yourself (*hate-filled racist, nationalist misogynists to the side – there is nothing worthy of engaging in such cases).

Moreover, in these days where our every move is harvested and aggregated, and where massive data companies commodify our very lives, it’s crucial that we recognize all of this while avoiding a victim’s mentality. We have an obligation to inform ourselves, evaluate our beliefs, reevaluate when new information arrives, then incorporate or discard where appropriate.

Navigating the world of ideas and agendas has become far more difficult due to social media, the intentions of the all-pervasive corporate giants, the sheer quantity of information which leads to more skimming than careful consumption, the ever-lurking pull of fatigue-based complacency and politically-motivated fake news, amongst countless other factors. But, one way or another, we need to adapt if we’re going to have any hope of preserving democracy. Otherwise, we’re likely to revert to a power-dominated state-of-nature in which the only difference is the fact that this time around it was ushered in by technology.

Peeping Bots vs. Peeping Toms

Why do we care more about violations of privacy by conscious agents?

Most of us know that we have become data production engines, radiating our locations, interests and associations for the benefit of others. A number of us are deeply concerned about that fact. But it seems that people really get outraged when they find out that actual humans are listening to Alexa inputs or that Facebook employees are scoping out private postings. Why is that? We can call it the Peeping Tom effect: we have a visceral reaction to our private lives being observed by living, breathing agents that we lack when the same information is collected by computers. Perhaps this seems too obvious to remark upon, but it deserves some serious scrutiny. One hypothesis, which I advance in a forthcoming paper with my colleague Ken Daley, is that we are likely hard-wired–perhaps evolutionarily–to have alarm bells ring when we think about human agents in our “space” but that we have no such inborn reactions to the impersonal data collectors we have developed in the past fifty years. The fact that alarm bells ring in one instance and not another is not a reason to ignore the silent threat. There’s a good case to be made that the threat of corporate knowledge–even if it doesn’t involve knowledge by a human–is quite a bit more dangerous than the threats we are more inclined to vilify.

Two features of human versus machine knowers stand out. Humans are conscious beings, and they have personal opinions, plans and intentions. It’s hard to swallow the idea that corporations or computer networks are themselves conscious, and it’s therefore hard to think of them as having opinions, plans and intentions. I’m inclined to grant the former–though it’s an interesting thought experiment to imagine if computer networks were, unbeknownst to us, conscious–and for the sake of argument I’ll grant that corporations don’t have opinions, plans or intentions (though we certainly talk as if they do). It’s worth asking what extra threat these features of humans might pose.

It’s admittedly unappealing to think of a Facebook nerd becoming engrossed in the saga of my personal life, but what harm does it cause? Assuming he (pardon the assumption, but I can’t imagine it not being a he) doesn’t go rogue and stake me out and threaten me or my loved ones, why does it matter that he knows that information? From one perspective, assuming he’s enjoying himself, that might even be thought to be a good thing! If the same information is simply in a computer, no one is enjoying themselves, and isn’t more enjoyment better than less? Perhaps we think the privacy violation is impermissible and so the enjoyment doesn’t even start to outweigh that harm. But we’re not really talking about whether or not it’s permissible to violate privacy–presumably it’s just as impermissible if my privacy is violated and the illicit information is stored in a network. We’re asking which is the worse situation–a violation of privacy with enjoyment by a third person or a violation of privacy without. I share the feeling that the former is worse, but I’d like to have something to say in defense of that feeling. Perhaps it’s the fear that the human will go rogue and the computer can’t. But my feeling doesn’t go away when I imagine the human is spending a life in prison, nor does it go away when I realize that computers can go rogue as well, causing me all sorts of harm.

There’s lots more to say and think about here. But for now let’s just let the question simmer: Are violations of privacy more harmful if they involve knowledge by conscious agents, and if so, why?

Facebook’s Free-Speech Charade

Seeing through Zuckerberg’s Talk at Georgetown University

               Mark Zuckerberg spoke in defense of Facebook this week at Georgetown, and then later in the week faced pushback from Democrats in Congress.  The complaint is that by maintaining a hands-off policy with respect to political misinformation, Facebook is setting up our democracy to be hijacked once again by those who would rather confuse than inform.  His defense is to wrap himself and Facebook in the mantle of liberty and trumpet the virtues of free speech.  It’s natural to view Zuckerberg’s position as self-serving.  He certainly flailed under the questioning of Congresswomen Ocasio-Cortez, Waters and Porter.  (It’s also true that his obvious disorientation under scrutiny gives the lie to the idea that he had anything to do with writing the Georgetown speech.  It resembled nothing so much as a student talking to a professor about a paper he’s just plagiarized.)  But set these things aside.  What about the argument itself?  It’s worth a close look, because as is usually the case, the situation isn’t simple.  A lot of what Zuckerberg says is right, and the fact that it’s him saying it—self-serving though he is—doesn’t make him any less right.  The problem is that it’s difficult to get clear on what he’s really arguing for.  It’s not at all obvious that what he’s right about—namely the value of free speech—supports what he’s really defending—namely, Facebook’s permissive approach to political information and advertising. 

               Here’s the gist of Zuckerberg’s argument.  (I encourage you to read the whole thing here.)

               Free, uncensored speech is a necessity in a democratic society, and Facebook is a platform for that speech.  But, Zuckerberg admits, lines have to be drawn. It’s permissible, even desirable, to restrict speech that “puts people in danger” as well as things like pornography which “would make people uncomfortable using our platforms.”  It’s tough to draw the line. In general, he seems to think, it’s best that Facebook avoid regulating the speech on its platform.  There is a need to protect ourselves from the sort of manipulation Russian hackers perpetrated in 2016, but this is best done by requiring user verification:  they now “require you to provide a government ID and prove your location if you want to run political ads or run a large page.”  While Facebook works to weed out viral hoaxes, they want to avoid trying to restrict misinformation in general, which might include satire or the unintentionally wrong views many of us hold.  In the end, we should be careful, because Zuckerberg doesn’t “think most people want to live in a world where you can only post things that tech companies judge to be 100% true.” Facebook allows speech by political figures, even when it is wrong, and doesn’t fact check political ads. “I don’t think it’s right for a private company to censor politicians or the news in a democracy,” Zuckerberg says.  Again, there are difficult lines to be drawn.  If they were to ban political ads, should they also ban ads on political issues?  Zuckerberg seems to think we face a choice: we could constrain free expression on the internet, as it’s done in China, or we can have an internet that privileges open speech.  Zuckerberg’s position is basically that Facebook has “two responsibilities: to remove content when it could cause real danger as effectively as we can, and to fight to uphold as wide a definition of freedom of expression as possible — and not allow the definition of what is considered dangerous to expand beyond what is absolutely necessary.”

               It’s obviously hard to disagree with the general notion that people should be allowed to speak their mind, and that free speech is important to democracy.  That’s the strategic brilliance of Zuckerberg’s speech.  But what exactly is Zuckerberg arguing for?  What is he arguing against?  This matters.  It’s one thing to say that people should be allowed to say and think what they want. It’s another to say that a company should accept money to promote content that is demonstrably false.  The biggest problem with Zuckerberg’s argument is that he makes statements that are true, all things being equal, when applied to society as a whole, but that don’t obviously apply to the behavior of companies like Facebook.  Facebook’s very business model is predicated upon doing things we would strenuously object to if they governed social discourse as a whole.  Would we want democratic discourse to be governed by proprietary algorithms that bring certain voices to our attention and push others to the background?  Would we want democratic discourse to be engineered to addict us to having that discourse in a particular place, for the benefit of a particular company?  Would we want to have to verify our identity and location in order to speak our mind?  Would we want a government to record and store everything we say, only to turn around and market that information to advertisers?  These are all things that Facebook does, and if it’s ok for Facebook to do them, it’s because Facebook is a private business, not a government or a country.  Zuckerberg and Facebook can’t have it both ways: if they embrace the arguments for open democratic discourse, they need to hold themselves to those standards across the board.  You can’t be laissez faire while rigging the circulation of ideas behind the scenes.

               It’s one of the ironies of Zuckerberg’s speech that he almost makes the case for turning Facebook into a public utility.  His arguments are really only plausible if Facebook is such an essential platform for public discourse that restricting speech on the network would be tantamount to censoring free speech.  But if it is such an essential platform, should it really be governed by a company that isn’t democratically representative, that doesn’t answer to the public or the government, and that is driven by a profit motive?

               I don’t expect Facebook to be turned into a public utility anytime soon, and there would be some obvious drawbacks to doing so.  Given that, can’t Zuckerberg make the argument that it should be as free as it can be within the limitations of being a company with a profit motive?  That is, given Facebook is what it is, shouldn’t it avoid restricting speech?

               The fact is, we do restrict certain speech in certain places, not because it risks physical harm but because it is a danger to our democratic institutions.  You cannot stand by a voting booth and make a stump speech (or even brandish an advertisement) because doing so would threaten to corrupt the political process.  There are numerous rules about political advertising that limit what can be done—they must include disclaimers, for example, indicating whether they are affiliated with a particular campaign. Regulations like this arise in part as a response to new forms of media and the particular threats they pose.  We shouldn’t let our love of free speech blind us to the need to make sure our political processes can flourish, and that there are likely to be unique threats posed by new technologies that require considering new rules.  I don’t pretend to know what all of those are, or what the appropriate steps are, but given the potential impact, does it not make sense to err on the side of protecting our political heritage while we find our footing?

               Despite his repeated insistence that these problems are nuanced, Zuckerberg completely fails to recognize that the solutions can be as well.  No one is suggesting that Facebook police Uncle Joe’s posts to make sure he’s got his facts straight—though it’s certainly not morally wrong or anti-democratic to erect a social network that tried to do so.  (Wikipedia, it’s worth saying, has done pretty well holding user generated content to strict standards.)  And while it’s true that there are some tricky distinctions to be drawn between lying political ads, ads with scientifically inaccurate information and ads about issues with controversial truths, that doesn’t mean those distinctions shouldn’t be drawn.  There’s no clear way to draw a line between a kid who is too immature to use Facebook and one who can handle it, but Facebook manages to draw the line anyway, at the rather young age of 13.

               Zuckerberg is right that Facebook needs to tread carefully here.  It has become too influential in our political system for decisions to be made rashly.  That’s some reason to believe that it has simply become too influential period.  In the end, though, taken as a defense of Facebook, Zuckerberg’s paean to free speech is unconvincing.  Facebook’s policies now as ever are justified more by keeping people on the network than by democratic ideals, and Zuckerberg—whose own shares in the company confer super-voting rights many times greater than the typical share—knows this perfectly well.