The Would-Be Technocracy

Evaluating Efforts to Direct Social Change with Internet Protocol Design

This is a preprint of an article I wrote with first author Farzaneh Badii, which is now published in the Journal of Information Policy. The version posted here has a round of final clarifying edits that didn't make it into the published version. Both versions have the same structure and argument.

Introduction

'Values in Design' (ViD) is both a research and a political program. As a research program, ViD scholars investigate how Internet protocols may be designed so that their use alone will promote human rights.[1] As activists, their goal is to reform Internet standards bodies—and the Internet Engineering Task Force (IETF) in particular—such that future protocol design will take these human rights considerations into account. As research and as activism, ViD relies on the assumption that Internet protocols can be designed such that, when deployed as software, these protocols will promote human rights. More specifically, ViD assumes that the rights promoted will be of a specific type (e.g., individual, liberal) and of a specific set (e.g., the Universal Declaration of Human Rights). More generally, this claimed relationship is a special case of a general principle: that through intentional design, technological artifacts can exercise a context-independent political force.

To this end, a group of ViD scholars collaborate through the Human Rights Protocol Considerations (HRPC) Research Group at the Internet Research Task Force. The research group is chartered to determine whether standards and protocols can enable, strengthen, or threaten human rights. Through it, human rights activists, academics, and engineers discuss and publish research on the relationship between Internet protocols, protocol design, and human rights. Much of this paper deals with Request for Comments (RFC) 8280, which contains an overview of their (still valid) core positions.[2]

The "politics" in question carry a broad sociological and humanistic framing: not merely party or geopolitics, but the activity and thought devoted to the conscious organization of human affairs. The term carries no negative or positive connotations, and, much like Claude Shannon's information, it is stripped of semantics and ideology. It is not relativist, but operational. In the case of speech rights, the political encompasses not only human rights laws and regulations, but parliaments and town halls, laptop monitoring programs and Terms of Service contracts. Although it is unlikely that Internet protocols can impact speech rights everywhere, ViD discourse does not put a conceptual limit on the social domain(s) that Internet protocols may affect. Here we should also distinguish a technology from a technological artifact. An artifact is a real instance of a technology, for example a tool or machine that exists in our world. A technology, in contrast, can also refer to designs, specifications, or plans. As we will see below, this distinction is important.

This paper argues against aspects of HRPC thinking, as well as the IETF proposals that result. Normatively, this paper's authors share the HRPC's commitment to human rights as a concept and as politics, and to the use of the Internet to further the cause of human rights. However, there is no empirical or analytical support on offer for the mechanism by which the HRPC expects Internet protocols to advance human rights. We also argue that the "human rights considerations" that the HRPC proposes for the IETF will fail to institutionalize such a mechanism.

We proceed as follows. Next, we identify a long-running history of context-dependent thought on the relationship between technology and politics, and contrast it with the context-independent assumptions common in the HRPC. We then return to HRPC work and extend Mueller and Badiei's prior analysis[3] of the analytical difficulties with such a program, identifying further analytical and social difficulties with the basic proposal. We provide case studies of how ViD has failed in practice, and conclude with both i) what we believe to be the analytical prerequisites of such a program, and ii) the limitations of our own critique.

1.    Political Technologies and Context-Dependence

The relationship between technology and politics can be traced back at least to the origins of the sciences and the social sciences. There is far too much here to synthesize, but we will nonetheless begin by identifying a major trend of "context-dependence" and its commonplace appearance in the history of social and humanistic thought.

Some of the earliest thinkers we associate with the modern west preoccupied themselves, at least in part, with this question. Adam Smith and Karl Marx, early economists of very different stripes, believed that certain technologies, when widespread, would promote certain kinds of politics.[4] Marx believed—or, at least, many Marxists believed and believe—that the productive technologies of industrial capitalism would create new politics and ultimately force a new society.[5] Max Weber, who set much of the program for modern sociology, elaborated a more complex set of relationships between transportation, communication, and polity, but nonetheless saw a role for economic and also bureaucratic technologies as carrying a special social force.[6] Alexis de Tocqueville and his discussion of the relationship between manufacturing and political structures fits into this tradition as well.[7] These basic questions extended to the early sciences, too. Robert Boyle, regarded as one of the first chemists and experimentalists, saw the technologies of the scientific experiment as making possible a new political order.[8]

Although the present-day definition of "technology" did not come into use until the 1930s,[9] the place of tools, machines, and technique in the broader society is an unmistakable theme in modern western thought. In the social sciences, the role of technology has typically been understood as context-dependent, operating and influencing in complex environments or systems. This line of inquiry is not specific to any one thinker or field; it runs through the social sciences and humanities, arising in force alongside the industrial revolution and continuing to the present day.

By the 1980s, a new set of interdisciplinary fields were busy revisiting the question, although not necessarily in response to this intellectual tradition. Of the fields most relevant to our analysis, Science and Technology Studies (STS) placed these questions at its center. Affiliated scholars have undertaken case studies relevant to our inquiry; these include electrical networks, the Internet, scientific research networks, transportation, building architecture, and music.[10] A major methodological camp of STS, identified loosely as the Social Construction of Technology, or SCOT, holds that the social significance of technology is structured heavily by its context; it opposes the "technological determinism" of (what we call) context-independency.[1] In communications studies, the Handbook of New Media provides a useful overview of how scholars in this field have studied the social shaping and consequences of information and communication technologies (ICTs), technologies whose social force is at least partially dependent on their context. This study of "system features" entailed analysis of the two-way causal links between the user and the technology.[11] There is also a branch of scholarship focusing on values "for" design, considering values such as privacy in another context-dependent approach.[12]

Overlapping with both STS and (New) Media Studies, "infrastructure studies" represents a similarly interdisciplinary group of scholars, whose focus is on the social construction, life, and consequences of infrastructures.[13] While highlighting how infrastructures play major roles in social change, infrastructure studies nonetheless emphasizes the societal contexts that shape the creation and diverse experiences of infrastructure. While infrastructures may have long-lasting political impacts, they do so in a historically contingent manner. (Prominent scholars associated with infrastructure studies have also contributed to the sociology of standards, emphasizing how standards and classification regimes are deeply social in their construction. They also argue that abstract conceptualization of technological design that does not consider concrete use cases and users will be insufficient.[14]) Perhaps the most radical response to the question of value in design is the variety of methods known widely as Actor-network Theory (ANT).[15] ANT is better understood not as a theory but as a method, as it does not make predictions, or seek to measure or respond to major debates concerning networks.[16] Methodologically, however, ANT grants agency to nonhuman artifacts, and it is a fundamentally contextual approach, to the extent that it renders incoherent any notion of context-independency.

Another strand of ViD research focuses on data protection by design, identifying both Privacy Enhancing Technologies (PETs) and Privacy by Design (PbD). Traced back at least to a 1997 European Commission report and working group, PETs "involve organising and engineering the design of information and communication systems and technologies with a view to avoiding, or at least, minimising, the use of personal data."[17] The early language surrounding PETs reflects a context-dependent, perhaps even "best effort" philosophy behind minimizing harm. Eventually the PET concept gave way to the Privacy by Design (PbD) tendency, which, while more deterministic, still preserved some of its context-dependent characteristics.[18] Cavoukian, an early PbD advocate, believed that PbD applied not only to IT systems and broader infrastructures but also to creating accountable business practices. In our view, redesigning the context in which a technology operates—such as the practices and organizations through which business is conducted—necessarily makes this a context-dependent approach. PbD later appeared in legislation such as the General Data Protection Regulation (GDPR), but it was not clear how it would work in practice; the privacy-enhancing force of the GDPR lies in the fact that it is regulatory.[19] Koops and Leenes (2014) noted that "privacy regulation cannot be hardcoded" in a system or an architecture. Hence the concept of PbD has not embedded values in technology; rather, policies, laws, and incentives have changed the ways software companies use technology that collects and handles data.

The context-independency sought by ViD scholarship in general, and the HRPC in particular, should be understood in this context: decades- and sometimes centuries-long intellectual and analytical histories that locate the political force of a technology in the conditions of its use.

2.    Limits to Context-Independence

The twentieth century also saw scholarship on comparatively context-independent technologies. This line of thought is often associated with what is identified, normally with pejorative undertones, as “technological determinism.” In this section we begin by separating modes of thought associated with technological determinism from the context-independent model of political technologies offered by ViD and the HRPC. We argue that, in the case of the HRPC, ViD outlines a far more radical vision of political technologies than offered by the ‘standard bearers’ of technological determinism in the social sciences and humanities. We begin with two of its best-known proponents, Lewis Mumford and Langdon Winner.

Mumford argued that technological artifacts and systems fall into two broad categories, democratic and authoritarian. The existence of technological artifacts from one of these camps will, for Mumford, exert a powerful and durable political force on the society it inhabits. But two major differences stand out between Mumford and ViD. First, he was concerned with classes of technology, such as nuclear power or animal husbandry. Under Mumford's framework, a design shift in one system of a nuclear power plant would represent something, but nothing close to guaranteed social change. Indeed, in 1964 he wrote that "[t]he inventors of… computers are the pyramid builders of our age: psychologically inflated by a similar myth of unqualified power" and possessed in particular by "the notion that the system itself must be expanded, at whatever eventual cost to life."[20] The cost here is the destruction of democratic social forms: computers pose a threat because of the large-scale social machines that they both implement and require. This point leads us to the second difference between Mumford and ViD thought, namely, that Mumford was talking about sociotechnical systems, or in his terminology, technics. Even high technologies like nuclear power and computers were, for Mumford, systems whose social force was only in part the result of their specific technological artifacts.

Langdon Winner's canonical "Do Artifacts Have Politics?" was, in part, an elaboration of Mumford's thought. Winner described technological artifacts with the potential to exercise strong or weak determining force, and to impact either those in close proximity to the artifact(s) or society as a whole. They can either act as a political force, or settle a pre-existing issue for a community. (A useful example may be that an industrial paper mill requires regimentation and hierarchy for its workers inside, and a regulatory environment with certain properties on the outside.) Winner does not claim anything for all technologies, and instead focuses on examples that show a technological artifact embodying "specific forms of power and authority."[21] Here we turn to the differences and, indeed, the incompatibility of his thought with ViD.

Winner's defense of context-independency contains three critiques of context-dependent strategies (which he refers to as social constructivist) for understanding technology.[22] For ViD, each critique is cautionary. First, Winner laments that, by privileging inventors and technologists as the locus of change and agency, such work conceals the actual social impact(s) of a technology. Second, it reflects a form of liberal pluralism that adopts the restricted conditions of choice already imposed by engineers and other elites. Third, its focus on design is unable to detect larger technological trends over longer periods of time (a century-long trend toward modular design, or its origins in architecture, for example, is not visible from a localized set of design choices concerning modularity[23]). Winner's preoccupations with design that occurs outside of design groups, and with impacts knowable only socially (and not from the design itself), are criticisms, not affirmations, of the HRPC framework.

Ultimately, in the long history of scholarship on political technologies, the authors cannot identify an analytical or empirical tradition that can be used to support ViD. Instead, these diverse lines of thought stack up a list of reasons why prescriptive design of political technologies is, at best, unworkable as policy.

Mueller and Badiei have analyzed the varying adoption of these context-independent ideas in thinkers and organizations associated with the Internet.[24] Of the groups and trends they identify, two stand out as encompassing the dual claims, described above, that a technological artifact's politics may be context-independent and known in advance. One, the "Code is law" school, originated in the late 1990s with Reidenberg,[25] Lessig,[26] and others. It argues that "code" is one of a small number of major sources of political order. In an argument echoing similar claims by Veblen,[27] Lessig notes that the relative importance of code is increasing vis-à-vis the other sources (law, norms, and the market). For now, however, we turn to recent proposals in Internet standards bodies that take varying aspects of the context-independent view as their starting point; we identify potential problems with the mechanisms being proposed, in the hope of modifying the way forward in the pursuit of knowledge about Internet protocols and human rights.

3.    Method

Our historical method is historicist, applied to a series of short case studies. Modern historicism seeks to understand historical phenomena without anachronism or teleology. It provides a strategy for understanding plans, values, techniques, and even technical architectures in their original, historical terms, rather than through post-hoc evaluation from the present. One goal of historicist research is to identify why certain decisions or designs came to be, and not to take for granted knowledge acquired or opinions standardized after the fact. Thomas Kuhn's The Structure of Scientific Revolutions is an important work in this tradition, as it reveals the different, inner logic of each scientific paradigm rather than portraying the history of science as the gradual reduction of error. Shapin and Schaffer's Leviathan and the Air-Pump is another example. We put this strategy to use in understanding the original motivations, design philosophies, and beliefs concerning a set of core Internet technologies. As our argument is that these ideas have changed significantly over the decades, a historicist approach is important so that present-day understandings do not overwrite historical reality. To take the Domain Name System as an example, it is important for us to understand how its social function was understood in 1982 (before it was called DNS) and in 1992, not only what we now understand it to be.

We draw on three case studies: the Exterior Gateway Protocol (EGP) and the Border Gateway Protocol (BGP), the Domain Name System (DNS), and WHOIS. We selected these cases in order to better understand i) the unobservability of politics in technological design, ii) how the understanding of a technology's core function can shift over time, and iii) how a changing sociohistorical context will alter a technology's perceived (or actual) political function. Empirically, we restrict our sources to primary, contemporaneous materials, or peer-reviewed secondary sources that draw on such primary sources. Here, records from the DARPA Internet Program, which funded and directed EGP and DNS development, are important; so too are standards documents and representative listserv discussions. The conclusions reached from this evidentiary standard, typical of peer-reviewed historical scholarship, can and do diverge from community lore and official histories (compare Russell's analysis of the early Transmission Control Protocol standards-setting process to the Internet Society's recollection-based A Brief History of the Internet[28]).

In what follows, we first provide a brief background on the research group’s work and its evolution. Then we identify empirical and analytical difficulties that we believe are embedded in ViD and related approaches. We then turn to a historical analysis of key Internet protocols to assess their suitability for context-independent thinking and as potential tools of social control. 

4.    Human Rights Protocol Considerations 

In 2015, the Internet Research Task Force (IRTF) chartered the Human Rights Protocol Considerations Research Group (HRPC RG). Its goal remains, in part, "to research whether standards and protocols can enable, strengthen or threaten human rights," as the rights-enabling qualities of Internet protocols "might be degraded if they are not properly defined, described and sufficiently taken into account in protocol development."[29] Mueller and Badiei identify problems with the group's charter and related publications. Their criticism might be summarized as follows: while the intended functions of the HRPC RG appear to rest on the assumption that context-independent values can be known and thus encoded in advance,[30] a close look at the language used by the group hedges, and often retreats to heavily contextual, multi-causal thinking: the kind we see in DeNardis's work.[31] In other words, when pressed on the causal relationship between technological artifacts and politics, the HRPC RG rightly rejects any certainty in our ability to encode human rights into Internet protocols.

Indeed, as the work of the research group progressed and discussions took place, it became apparent that encoding rights in the design of Internet protocols was, at the very least, challenging. ten Oever and Cath argued:

The research group’s position is that hard-coding human rights into protocols is complicated and changes with the context. At this point, it is difficult to say whether or not hard-coding human rights into protocols is wise or feasible. Additionally, there are many human rights, but not all are relevant for information and communications technologies (ICTs).[32]

When it comes time to actually theorize the relationship between technological artifacts and politics, the HRPC RG takes a sophisticated and contextual approach, which can be seen in its documentation of the back-and-forth between protocols and politics in RFC 8280.[33] Such a position allows us to continue in the long tradition of studying the intersection of politics and technology, building new tools and adjusting our political orders, each in response to the other. It does not permit us much more than that, and it does not give us the tools to encode context-independent political values into technical artifacts. Unfortunately, it has given a slight pretext for introducing a new regulatory regime, or simply cultural pressure, into standards-making, a matter we return to below.

However, the practical goals of the HRPC RG go beyond this measured analysis. RFC 8280 provides 34 technical concepts and maps them to “rights potentially impacted.” They continue:

It is, however, important to make conscious and explicit design decisions that take into account the human rights protocol considerations guidelines... In addition, it contributes to (1) the careful consideration of the impact that a specific protocol might have on human rights and (2) the dissemination of the practice of documenting protocol design decisions related to human rights.[34]

Their goal, then, is to subject protocol development to explicitly political considerations. As noted by Mueller and Badiei, the IETF has long required a discussion of security implications, and most observers conclude that the work of engineers, as a creative activity, reflects at least in part their values. In contrast, the HRPC RG is advocating for an explicit consideration of political values in protocol design, in the furtherance of specific political ends. Engineers would be expected to identify their own political views, and to explain and perhaps justify their design decisions on political grounds.[35]

This approach has the potential to create difficulties on several grounds. Mueller and Badiei identify four areas of difficulty or impossibility. One, human rights, especially as encoded in the UN's Universal Declaration of Human Rights (UDHR), are contradictory and can only be interpreted locally and interpretively (both in terms of their relevance to protocol considerations and their contemporary significance). Two, protocols that provide functionality that proves inconvenient to large organizations can simply be replaced, as has already happened with Transport Layer Security (TLS) 1.3. Three, using protocols to pursue political ends immediately raises the question of legitimacy: the IETF lacks the political legitimacy to set communication policies even for the United States, let alone the planet. Turning it into a political body for certain political values might raise its legitimacy among dispersed and highly cosmopolitan populations in a limited number of countries, but it would likely trigger the use of replacement organizations by large portions of the rest of the planet. Their final objection is related to the issues raised above in this paper, namely, that the political impact of protocols can only be known after (ex post) they are designed, but in order to successfully encode political force into a protocol in advance, these politics must be knowable before (ex ante) they are implemented.[36]

5.    Problems with Expanded Political Considerations

Mueller and Badiei’s work, then, can be understood to contain two strands of critique: institutional and geopolitical realities, on the one hand, and the temporal sequence of implementation and political consequence, on the other. Here we introduce a new set of fundamental challenges, largely in the second strand, that further warn us against the ViD program. 

A.    The Difference Between Design and Design Decisions

We can begin by delineating at least two meanings attached to early design: i) as a basic architecture or specification, and ii) as a set of decisions concerning the design. The additional moral and ideological introspection required by the ViD framework conflates these two, by assuming that a set of design decisions, made in the absence of countervailing forces, will be instantiated in the design (e.g., an architecture, specification, or similar). For what we can actually measure ex post is not the decision-making, intent, or moral character of the designer. All we can resolve is the design (as architecture, etc.). Linking the demonstrated outcome of a protocol to private, subjective states is not possible. The field of psychology has, since Freud, amassed an impressive body of evidence and analysis showing that we are not capable of understanding these connections even in ourselves. We are all pathologically bad at understanding the significance of our behavior and the genesis of our thoughts and feelings, and we are usually instinctively wary of people who claim to have this power.

Protocols are complex things, and their development necessarily involves weighing trade-offs produced by different technologies, institutions, and interests. RFC 8280 identifies "protocol design decisions related to human rights," which necessarily refers to some of the decisions made during the design of a protocol. But Internet protocols are usually complex, requiring whole classes of decisions, some made freely, others not, some unknown to the author, others taken for granted as facts of life. The act of delineating design decisions from a cluster of thought is not an objective operationalization. We cannot see, as Winner remarked in his critique of constructivism, larger design influences beyond our own limited spacetime horizon. There is no technique on the table (or imaginable to the authors) that would permit us to know in advance which parts of a design decision were political, which were not, and what their impact might be in the future and in other parts of the world. In Mueller and Badiei's argument, the subsequent political impact must be mapped to design after the fact, creating the illusion of a knowable causal chain. But we must also bridge the chasm between design and design decision. Ultimately we are concerned that it will be impossible "to make conscious and explicit design decisions that take into account the human rights protocol considerations guidelines."[37]

B.    Toward Performance Considerations

The problems we identify above are not merely logical and ontological contradictions. There are serious institutional risks at work as well. If the designer of a protocol is asked to self-analyze these human rights considerations, they can reflect on (but not measure or verify) their intent, before reporting it with the usual levels of process transparency that we witness in Internet standards documents. But intent is logically and morally different from consequence, the latter of which is the reason that the HRPC would like to induce engineers to participate in this exercise. The authors are unclear as to the grounds on which HRPC thinkers believe that inner motivational states will be self-reported reliably. In the absence of these things, what remains is individual, self-interested performance.[38]

6.    Case Studies

The following case studies serve to highlight the multiple problems identified above. We have chosen initial case studies to cover the areas of routing and naming.

A.    Exterior Gateway Protocol (EGP)

The Exterior Gateway Protocol (EGP)[39] and its successor the Border Gateway Protocol (BGP)[40] both serve as routing and reachability protocols for Autonomous Systems. Autonomous Systems are groups of networks (or a single network) under the policy control of a single entity.
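For readers unfamiliar with the abstraction, the following sketch, a deliberately simplified illustration rather than any actual router implementation, shows the core idea behind inter-AS routing: an Autonomous System learns multiple announcements for the same destination and applies its own local policy when choosing among them. The AS numbers, prefixes, and the shortest-path preference rule are illustrative assumptions only.

```python
# Minimal, hypothetical sketch of BGP-style path selection between
# Autonomous Systems. Real BGP speakers apply a much longer decision
# process (local preference, MED, etc.); this only illustrates the idea
# that each AS applies its own policy to externally learned routes.

from dataclasses import dataclass

@dataclass
class Route:
    prefix: str          # destination network, e.g. "192.0.2.0/24"
    as_path: list[int]   # sequence of AS numbers the announcement traversed
    next_hop: str        # border router to forward traffic to

def prefer(candidates: list[Route], blocked_ases: set[int]) -> Route | None:
    """Apply a local policy: ignore routes through blocked ASes,
    then prefer the shortest AS path (a crude stand-in for real policy)."""
    allowed = [r for r in candidates if not blocked_ases & set(r.as_path)]
    return min(allowed, key=lambda r: len(r.as_path), default=None)

# Two announcements for the same prefix, learned from different neighbors.
routes = [
    Route("192.0.2.0/24", as_path=[64500, 64496], next_hop="198.51.100.1"),
    Route("192.0.2.0/24", as_path=[64501, 64502, 64496], next_hop="203.0.113.9"),
]

best = prefer(routes, blocked_ases={64502})
print(best)  # the route via AS 64500; the other path traverses a blocked AS
```

The point of the sketch is only that routing policy is set per Autonomous System: this is what "the policy control of a single entity" amounts to in practice.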

The Defense Advanced Research Projects Agency (DARPA) directed its contractor Bolt Beranek and Newman to begin creating the framework for Autonomous Systems in the late 1970s, in keeping with architectural ideas sketched within its Internet Program no later than the summer of 1978. The original purpose of the Autonomous System was varied; like many other architectural features of the modern Internet, it has multiple "fathers" and just as many views as to its 'true' purpose. EGP and BGP both contributed to Internet scaling, and specifically to overcoming hurdles to effective routing on the public Internet.[41] Prior to autonomous systems (AS), Internet routers (then, gateways) were visible to one another, and a single misconfiguration could, theoretically, have brought down internetwork routing, just as similar errors had cratered ARPANET connectivity in a famous event some years before.[42] Autonomous Systems insulated routing errors within each autonomous system from the inter-AS routing architecture of the Internet. Furthermore, autonomous systems permitted organizations to configure arbitrary interior routing algorithms, for routing within and between their own networks, so long as they could participate in Internet routing.[43]

The EGP specification is explicit that it was one part in the gradual creation of the autonomous system architecture, then referred to as "domains." Under EGP, the Arpanet became the first core Autonomous System, and the expansion to an arbitrary number of Autonomous Systems would await a protocol like BGP.[44] As such, it increased the autonomy of individuals and organizations to set their own routing policies, and made it far easier to connect networks to the Internet.[45] Cisco, a company that owes its initial existence to the opportunities created by EGP, and which employed one of the original BGP authors, portrayed EGP as a technical matter of interconnection efficiency.[46] Others made economic arguments, for example in a 1988 IETF meeting: "The costs of the current interconnectivity approach are large. They result in either having very labor-intensive routing configurations or in less than adequate interconnectivity and the resulting long paths and lack of robustness."[47] Others focused on control: Cisco's newsletter stated that, with the Internet's diversification, network managers needed to assert some control over their resources by introducing the kinds of user policies for which EGP made no provisions. These policies were related to technical and economic issues (especially in the case of ISPs).[48] Note that the move to BGP was not equally welcomed by all actors: vendors and other implementers had concerns about deploying a new and complex protocol.[49]

EGP and BGP deployed into the "protocol wars" of the 1970s and 1980s, a conflict largely between the DARPA-led community, with its internetworking protocols centered on TCP and IP, and the International Organization for Standardization's (ISO) Open Systems Interconnection (OSI) networking stack.[50] In the early 1980s, the official line within the US DoD and NIST was that the DARPA protocols were an important innovation and would be used as a stop-gap until the OSI model was ready to deploy.[51] Within DARPA, this was seen by at least some engineers as the delay necessary to cement their dominance via global adoption.[52] As such, the rapid expansion BGP enabled also helped spread and institute the Internet's increasingly homogeneous system of Internet Protocol and Ethernet, in place of the heterogeneous mix of local networks that the TCP/IP designers envisioned in the 1970s. Each consequence of EGP and BGP listed here has implications for privacy. Which design decision was responsible for each of these political consequences? And who should be called to account for the political consequences of BGP, whatever they may be?

B.    The Domain Name System (DNS)

Researchers based out of the University of Southern California Information Sciences Institute (USC-ISI) began developing the Domain Name System (DNS) in the late 1970s. By 1985, they had an early DNS server running at USC-ISI. DNS was designed for multiple reasons: to create a distributed database that would replace the aging hosts file and give local sites greater flexibility in managing their own name (and mail) bindings,[53] to create autonomy for top-level domains,[54] and to create some kind of new political structure that could mediate Internet governance.[55] As is well known, in order to accomplish these goals the system required a "root": the highest point of authority, necessary to delegate authority to the top-level domains.[56]
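To illustrate why a root is architecturally unavoidable in a hierarchical naming system, the following toy model (not the DNS wire protocol, and with zone data invented for the example) walks a name down a chain of delegations that necessarily begins at a single highest authority.

```python
# Toy model of hierarchical DNS delegation. Real resolvers speak the DNS
# wire protocol to many servers; this sketch only shows that every lookup
# chain is rooted in one top-level delegation table.

ROOT = {"org": "org-servers"}                              # the root delegates TLDs
ZONES = {
    "org-servers": {"example.org": "example-servers"},     # TLD delegates domains
    "example-servers": {"www.example.org": "192.0.2.10"},  # domain holds records
}

def resolve(name: str) -> str:
    """Walk the delegation chain from the root down to an address."""
    labels = name.split(".")
    tld = labels[-1]
    authority = ROOT[tld]                 # step 1: ask the root who runs the TLD
    domain = ".".join(labels[-2:])
    authority = ZONES[authority][domain]  # step 2: ask the TLD who runs the domain
    return ZONES[authority][name]         # step 3: ask the domain's servers

print(resolve("www.example.org"))  # -> 192.0.2.10
```

In the terms of the early 1980s, replacing the hosts file with such a delegation chain is what gave local sites autonomy over their own bindings, while concentrating ultimate authority at the root.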

Despite the extensive historical records (listservs, interviews, and RFCs, for example), it is unclear who first specified the root and why. While the USC-ISI team solicited proposals from the broader Internet community, none of these offered a distributed system that would have avoided some central point of control. We do not know if Su, Postel, or Mockapetris (the main contributors to the DNS design) considered other possibilities, or how they understood the politics of their decisions.[57]

Today's IETF standards are no longer the domain of DARPA, but the same relationships hold: individuals working on behalf of large and often somewhat secretive organizations. If asked to identify the politics of a design, there are routine and powerful constraints that prohibit a DARPA researcher or Google contractor from speaking freely. Dominant accounts of technological innovation are biased in favor of individual inventors,[58] and the expectation that a single engineer or small group of engineers can speak for the complex ecology of thought behind design decisions falls within that mode of explanation. Nonetheless, what can we make of Su, Postel, and Mockapetris's motivations? Let us assume they had been instructed to "make conscious and explicit" design decisions and consider their politics. At the time, the Internet was a US-run infrastructure. The move toward Regional Internet Registries and the globalization of Internet governance was years away.[59] The term "Internet governance" did not come into use until at least the commercialization of the Internet in 1995.

We can consider three moments in the evolution of DNS, beginning with the first servers in 1985 and following twice at ten-year intervals, in 1995 and 2005. The three moments differ dramatically in their political character. The first (1985), an experimental system for a largely American ecosystem of government contractors and researchers, governed by the Department of Defense (DoD); the second, a global infrastructure governed by an emerging global Internet community that was strategizing on how best to sever its vestigial ties to that same DoD;[60] the third, a global multistakeholder organization preparing a stewardship transition with the Department of Commerce. In 1982, the goal of the then-"Domain Naming Convention" was, explicitly, to generalize the existing Arpanet naming convention (user@host) to an internetwork environment.[61] There was no multistakeholder governance, stable Internet connections were only years old, and the system was run, in both theory and practice, by defense agencies and their contractors. The naming universe would differ from the network topology, but was designed to conform to existing hierarchies. Beyond making naming work for the defense-run infrastructure, we will never have an unambiguous list of the DNS design choices. In the (recent) words of co-designer Paul Mockapetris, "Many people think that the sole objective of the DNS was to go from names to addresses, but it was designed to be a much more general purpose than that. There are about 60 or 70 different uses that people have come up with."[62] Thirteen years after Su and Postel's original design document, Postel attempted to take over control of the DNS root from the US Government. His true motivations are of course unknown, but they concerned a growing public Internet and the legitimacy of private companies as monopolists in the name space. Whatever Postel came to understand as the political function of DNS, or whatever he saw for it, it could not have been the same as in 1982.

C.     WHOIS

Protocol designs made early on had unpredictable effects once they operated on the far larger Internet that emerged in the 1990s and thereafter. When the potential harms of these protocols to people's rights came to light, the protocols were not modified or updated to moderate those effects. We can see this clearly in the evolution of the WHOIS protocol. WHOIS pre-dates the Domain Name System rollout. In 1982, WHOIS was the directory that included the contact information of ARPANET users, as well as those who accessed the Arpanet via connected networks (known initially as the "ARPA Internet").[63] WHOIS provided an easy way to retrieve contact information for other registered users, in a community of technologists who typically knew each other personally. It is not clear whether there were political considerations when developing WHOIS, but the engineers did not foresee the political implications it would carry after commercialization.
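The protocol's simplicity helps explain both its early convenience and its later privacy consequences: a WHOIS client simply opens a TCP connection to port 43, sends a query, and receives whatever text the operator chooses to publish, with no authentication or tiering built into the protocol itself. The minimal sketch below illustrates this exchange; the server name is chosen only for illustration.

```python
# Minimal WHOIS client sketch: connect to TCP port 43, send the query
# followed by CRLF, and read the plain-text reply until the server
# closes the connection. Nothing in the exchange restricts who may ask.

import socket

def whois(query: str, server: str = "whois.iana.org", port: int = 43) -> str:
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall((query + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

if __name__ == "__main__":
    print(whois("example.org"))
```

Because the protocol places no limits on access, what a query reveals is determined entirely by what the registry or registrar operator decides to publish.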

The political impact of the implementation of WHOIS began with policymaking institutions such as the Internet Corporation for Assigned Names and Numbers (ICANN), which, together with the Regional Internet Registries (RIRs), stewards the Internet's IANA functions. ICANN was formed in 1998 to ensure the stable and secure operation of the Internet's unique identifier systems. One part of ICANN's mission is to coordinate

the allocation and assignment of names in the root zone of the Domain Name System ("DNS") and [coordinate]… the development and implementation of policies concerning the registration of second-level domain names in generic top-level domains ("gTLDs")

ICANN also became responsible for making policies that affected the implementation of WHOIS among domain name registrars and registries. ICANN policymaking processes included various stakeholders, such as the intellectual property rights constituency, the business constituency, and the Security and Stability Advisory Committee (SSAC), with law enforcement engaged through ICANN's Governmental Advisory Committee (GAC).

Intellectual property rights owners, along with law enforcement and cybersecurity researchers, demanded that WHOIS be public and display every domain name registrant's personal information in a directory accessible worldwide, with no data protection considered for the registrants (some measures were considered later, but they are beyond the scope of this paper). The directory contained many data points valuable to businesses, security researchers, and others. Hence, these groups wanted to protect the status quo implementation of WHOIS, though each with evidently different incentives and political intentions.

The IETF had acknowledged the problems that an open, public WHOIS created since the early 2000s. It convened the CRISP (Cross Registry Information Service Protocol) working group to address some of the issues, and consultations in CRISP produced the "Cross Registry Internet Service Protocol (CRISP) Requirements," issued in 2004. In 2005, a protocol called IRIS (the Internet Registry Information Service Core Protocol) followed; it aimed to provide a tiered access system, so that domain name registrants' information, such as mailing addresses, email addresses, and other identifying details, would no longer all be public.
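The tiered-access idea can be illustrated with a small hypothetical sketch. This is not IRIS's actual data model or syntax, which we do not reproduce here; it only shows the policy concept that an unaccredited query would receive a redacted record while an accredited party could retrieve full contact details. The record fields and tier names are invented for the example.

```python
# Hypothetical illustration of tiered access to registration data.
# This is not the IRIS protocol; it only shows the policy idea that
# public queries see a redacted record while accredited parties see more.

RECORD = {
    "domain": "example.org",
    "registrant_name": "J. Doe",
    "registrant_email": "jdoe@example.org",
    "registrant_address": "1 Example Street",
    "nameservers": ["ns1.example.org", "ns2.example.org"],
}

PUBLIC_FIELDS = {"domain", "nameservers"}

def lookup(record: dict, requester_tier: str) -> dict:
    """Return a view of the record appropriate to the requester's tier."""
    if requester_tier == "accredited":        # e.g. a vetted party
        return dict(record)                   # full record
    return {k: v for k, v in record.items() if k in PUBLIC_FIELDS}

print(lookup(RECORD, "public"))      # redacted: domain and nameservers only
print(lookup(RECORD, "accredited"))  # full contact details
```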

We can thus see efforts at the IETF, from the early 2000s, to replace WHOIS and provide tiered access to personal information instead of displaying all of it (including sensitive private contact information) in public. In the eyes of human rights activists, those protocols might protect the privacy of domain name registrants to some extent, so there might be some political intention behind them. But it turned out that it was not enough to simply design "human rights protecting" protocols. IRIS, for example, was hard to implement. While its non-adoption may have been a purely technical failure, we can also attribute it to political pressure from other stakeholders not to implement it; in either case, IRIS became redundant.

Another problem we face on the Internet is that not all protocols have to be adopted by all actors. In the case of WHOIS, the operators of country code top-level domains (such as .DE, .US, .CA, and .IR) did not have to implement any specific WHOIS protocol (though some did so voluntarily). Some also reconfigured and changed WHOIS policies for their country code top-level domains in ways that made using privacy-respecting protocols impossible, by mandating that the personal information of domain registrants remain public.[64] Hence, even if the protocol developers had human rights in mind when developing those tiered-access protocols, without a contractually binding commitment the operators would not have implemented them. Moreover, it was only with the passage of time that the possible human rights implications of public WHOIS were revealed.

7.    Conclusion

We cannot say that the HRPC efforts will fail. We hope to see their goal of the generalization of human rights succeed. This study is limited by the fact that there are numerous possible case studies upon which to draw: it is possible that ours, chosen for their overall significance in Internet history, architecture, or politics, are not representative. Our arguments about why further human rights considerations may harm the IETF are hypothetical, and may not come to pass. But we find it alarming that the HRPC has not grappled with the intellectual or political and economic histories of these ideas. There are very real problems in the program's relationship to representation, psychology, and chronological time. Rather than claim that human rights have no place in protocol deliberations, we want to see ViD and HRPC ideas stress-tested against intellectual, political, and organizational histories.

Nonetheless, Internet protocols have human rights consequences, and they are political artifacts. This much is obvious, and probably uncontroversial. Studies that evaluate the specifications, the actors, and the processes involved, and that consider their impact, are valuable. The HRPC research group's program of "data analysis and visualization of (existing) protocols in the wild to research their concrete impact on human rights" is a useful idea insofar as it relies on sociotechnical context. However, we argue that "expanded human rights considerations" have the potential to harm both individual engineers and the IETF, without generating any advances in human rights.

We accept that there are already many socially consequential decisions being made in the private sphere, away from public deliberation. We also note that this is a trade-off inherent in liberal democratic societies, and that there exists a long history of justifications for that delineation of public and private spheres. Protocol standardization in Internet standards bodies, whose outputs are completely non-binding and merely offered to the world for voluntary use, is distinct from the manner in which technologies are imposed on consumers.

In addition to noting what the HRPC is not, and cannot do, we should point to things that it is like. As noted, the program is deeply technocratic, in that it eschews representational (and specifically, democratic) legitimacy in favor of policy-setting by political and technical expertise.[65] It also resembles the attempts by large social media firms to modify behavior and opinion through invisible technological means.[66] Rather than replacing one set of values with another, we see this program as seeking to replace a heterogeneous set of influences, mediated by a myriad of market, user, regulatory, and professional logics, with, as much as possible, an explicit political program. Our concern is not that power exists in standards bodies, or that standards are political. Rather, it is the HRPC's attempt to re-architect technological design in civil society into a directed program run by engineers and philosophers.

Works Cited

[1] DeNardis, Protocol politics: The globalization of Internet governance (Cambridge, MA: MIT Press, 2009); Lawrence Lessig, Code: And other laws of cyberspace, 2nd ed. (New York: Basic Books, 2009).
[2] Niels ten Oever and Corinne Cath, "Research into Human Rights Protocol Considerations," RFC 8280 (2017), https://doi.org/10.17487, https://www.rfc-editor.org/info/rfc8280.
[3] Mueller and Badiei, "Requiem for a dream: On advancing human rights via internet architecture," Policy & Internet 11, no. 1 (2019).
[4] Adam Smith, An inquiry into the nature and causes of the wealth of nations, vol. 1 (Allason, 1819); Karl Marx, Capital: A critique of political economy, trans. David Fernbach, ed. Ernest Mandel (Penguin Classics, 1867).
[5] Marx, The Communist Manifesto.
[6] Max Weber, Economy and society: An outline of interpretive sociology, vol. 1 (University of California Press, 1978).
[7] Eda Kranakis, Wiebe E Bijker, and Trevor Pinch, Constructing a bridge: An exploration of engineering culture, design, and research in nineteenth-century France and America (MIT Press, 1997).
[8] Robert Boyle, New experiments physico-mechanicall, touching the spring of the air, and its effect (Oxford: H. Hall for T. Robinson, 1662); Steven Shapin and Simon Schaffer, Leviathan and the air-pump: Hobbes, Boyle, and the experimental life (Princeton University Press, 2011).
[9] "'Technik' Comes to America: Changing Meanings of 'Technology' before 1930."
[10] Thomas Parke Hughes, Networks of power: Electrification in Western society, 1880-1930 (Johns Hopkins University Press, 1983); Abbate (1994); Bruno Latour and Steve Woolgar, Laboratory life: The construction of scientific facts (Princeton, NJ: Princeton University Press, 1986); Kranakis, Bijker, and Pinch, Constructing a bridge; Stewart Brand, How buildings learn: What happens after they're built (Penguin, 1995); Trevor J Pinch and Frank Trocco, Analog days (Harvard University Press, 2004).
[11] Ronald Rice, "Network analysis and computer-mediated communication systems," in Advances in social network analysis: Research in the social and behavioral sciences, ed. Stanley Wasserman and Joseph Galaskiewicz (United Kingdom: Sage Publications, 1994); Everett M Rogers, Communication technology (Simon and Schuster, 1986).
[12] Helen Nissenbaum, Privacy in context: Technology, policy, and the integrity of social life (Stanford University Press, 2009); Mark Ackerman, Trevor Darrell, and Daniel J Weitzner, "Privacy in context," Human–Computer Interaction 16, no. 2-4 (2001); Martijn Warnier, Francien Dechesne, and Frances Brazier, "Design for the Value of Privacy," in Handbook of ethics, values, and technological design: Sources, theory, values and application domains (Dordrecht: Springer, 2015).
[13] Geoffrey C Bowker and Susan Leigh Star, Sorting things out: Classification and its consequences (MIT Press, 2000).
[14] Bowker and Star, Sorting things out, 32.
[15] Bruno Latour and Steve Woolgar, Laboratory life: The construction of scientific facts (Princeton University Press, 1979).
[16] Ari-Veikko Anttiroiko, "Castells' network concept and its connections to social, economic and political network analyses," J. Soc. Struct. 16, no. 1 (2015).
[17] Working Party on the Protection of Individuals with regard to the Processing of Personal Data, First Annual Report, XV/5025/97-final-EN corr. WP3 (1997), 15.
[18] Bert-Jaap Koops and Ronald Leenes, "Privacy regulation cannot be hardcoded. A critical comment on the 'privacy by design' provision in data-protection law," International Review of Law, Computers & Technology 28, no. 2 (2014): 159-171.
[19] Koops and Leenes, "Privacy regulation cannot be hardcoded."
[20] Lewis Mumford, "Authoritarian and Democratic Technics" (1964).
[21] Langdon Winner, "Do Artifacts Have Politics?"
[22] Langdon Winner, "Upon Opening the Black Box and Finding It Empty" (1993).
[23] Russell, "Modularity: An Interdisciplinary History of an Ordering Concept" (2012).
[24] Mueller and Badiei, "Requiem for a dream."
[25] Reidenberg, "Lex informatica: The formulation of information policy rules through technology," Texas Law Review 76 (1997).
[26] Lessig, Code and Other Laws of Cyberspace (Basic Books, 1999).
[27] Thorstein Veblen, The Place of Science in Modern Civilisation and Other Essays (New York: B.W. Huebsch, 1919).
[28] Russell, Open Standards and the Digital Age; Internet Society, "A Brief History of the Internet," https://www.internetsociety.org/internet/history-internet/brief-history-internet/.
[29] "The Charter of the Human Rights Protocol Considerations Research Group," IRTF, 2015, https://datatracker.ietf.org/rg/hrpc/about/.
[30] Mueller and Badiei, "Requiem for a dream."
[31] DeNardis, "Hidden levers of Internet control: An infrastructure-based theory of Internet governance," Information, Communication & Society 15, no. 5 (2012).
[32] ten Oever and Cath, "Research into Human Rights Protocol Considerations."
[33] ten Oever and Cath, "Research into Human Rights Protocol Considerations."
[34] ten Oever and Cath, "Research into Human Rights Protocol Considerations."
[35] Mueller and Badiei, "Requiem for a dream."
[36] Mueller and Badiei, "Requiem for a dream."
[37] ten Oever and Cath, "Research into Human Rights Protocol Considerations," 11.
[38] Samer Faraj, Dowan Kwon, and Stephanie Watts, "Contested artifact: Technology sensemaking, actor networks, and the shaping of the Web browser," Information Technology & People 17, no. 2 (2004): 185, https://doi.org/10.1108/09593840410542501.
[39] "Exterior Gateway Protocol (EGP)," RFC 827, 1982, https://www.rfc-editor.org/info/rfc827.
[40] "Border Gateway Protocol (BGP)," RFC 1105, 1989, https://www.rfc-editor.org/info/rfc1105.
[41] Bradley Fidler, "The evolution of internet routing: Technical roots of the network society," Internet Histories 3, no. 3-4 (2019).
[42] John M McQuillan and David C Walden, "The ARPA network design decisions," Computer Networks (1976) 1, no. 5 (1977).
[43] Fidler, "The evolution of internet routing."
[44] Rosen, "Exterior Gateway Protocol (EGP)."
[45] Andrew L Russell and Valérie Schafer, "In the Shadow of ARPANET and Internet: Louis Pouzin and the Cyclades Network in the 1970s," Technology and Culture (2014); Fidler, "The evolution of internet routing."
[46] "Cisco Systems Customer Service Newsletter," Cisco, 1989, accessed 23 February 2021, https://weare.cisco.com/c/dam/r/weare/assets/files/packet-winter89.pdf.
[47] IETF Meeting, Internetworking Working Group, 1988, https://www.ietf.org/proceedings/11.pdf.
[48] Matthew Caesar et al., "Design and implementation of a routing control platform," in Proceedings of the 2nd Symposium on Networked Systems Design & Implementation, vol. 2 (2005).
[49] "Proceedings of the Twelfth Internet Engineering Task Force," January 18-20, 1989, https://www.ietf.org/proceedings/12.pdf.
[50] Andrew L Russell, Open standards and the digital age (Cambridge University Press, 2014).
[51] Mark Handley, "Why the Internet only just works," BT Technology Journal 24, no. 3 (2006): 120.
[52] Fidler, interview with Haverty.
[53] Janet Ellen Abbate, "From ARPANET to INTERNET: A history of ARPA-sponsored computer networks, 1966-1988" (Ph.D. diss., University of Pennsylvania, 1994), https://repository.upenn.edu/dissertations/AAI9503730.
[54] "The Domain Naming Convention for Internet User Applications," RFC 819, 1982, https://www.rfc-editor.org/info/rfc819.
[55] Craig Lyle Simon, "Launching the DNS war: Dot-com privatization and the rise of global Internet governance" (Ph.D. diss., University of Miami, 2006), http://scholarlyrepository.miami.edu/dissertations/2484; Bradley Fidler and Andrew L Russell, "Financial and administrative infrastructure for the early internet: Network maintenance at the Defense Information Systems Agency," Technology and Culture 59, no. 4 (2018).
[56] Milton L Mueller, Ruling the root: Internet governance and the taming of cyberspace (Cambridge, MA: MIT Press, 2002).
[57] Several hierarchical databases were popular, such as ANSI SQL.
[58] Thomas Haigh and Mark Priestley, "Innovators assemble: Ada Lovelace, Walter Isaacson, and the superheroines of computing," Communications of the ACM 58, no. 9 (2015).
[59] Vint Cerf, "IAB recommendations for the development of Internet network management standards," RFC 1052 (1988), https://www.rfc-editor.org/info/rfc1052.
[60] We can cite Mueller.
[61] Su and Postel, RFC 819.
[62] Interview with Paul Mockapetris, https://www.welcometothejungle.com/en/articles/btc-interview-paul-mockapetris.
[63] "NICNAMES/WHOIS," RFC 812, 1982, https://tools.ietf.org/html/rfc812.
[64] This law is mentioned on the .DK registry website: https://www.dk-hostmaster.dk/en/danish-act-internet-domains.
[65] William E Akin, Technocracy and the American Dream: The technocrat movement, 1900-1941 (University of California Press, 1977); Jathan Sadowski and Evan Selinger, "Creating a taxonomic tool for technocracy and applying it to Silicon Valley," Technology in Society 38 (2014).
[66] Shoshana Zuboff, "Big other: Surveillance capitalism and the prospects of an information civilization," Journal of Information Technology 30, no. 1 (2015).