This is the written reflection to The Stack for the course Contemporary Media Theory.
This reflection focuses mainly on P2P networks and how they manage to continue operating even if all six layers in Bratton’s model were controlled by some sovereignty.
I would first like to clarify what the term “P2P network” actually means. It usually refers to file-sharing programs such as eMule and BitTorrent. In this reflection, the term is used to refer to the algorithms on which these applications are built, and to the network layered upon clients communicating through those algorithms. For example, BitTorrent is built upon a DHT (Distributed Hash Table) and eMule is built upon Kad, an implementation of Kademlia. Other instances include I2P and Tor, both of which provide anonymous network access to everyone (or every user, in Bratton’s terms).
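As an aside, the core idea behind Kademlia-style DHTs can be sketched in a few lines of Python. Node and content identifiers live in the same 160-bit space, and “closeness” is measured by bitwise XOR, so any peer can work out which nodes are responsible for a key without consulting any central index. The names and parameters below are my own illustration, not code from any actual client:

```python
import hashlib

def make_id(name: str) -> int:
    # Derive a 160-bit identifier from a name, as Kademlia does with SHA-1.
    # Nodes and content keys share the same identifier space.
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    # Kademlia defines the "distance" between two identifiers as their XOR.
    return a ^ b

def closest_nodes(key: int, nodes: list[int], k: int = 3) -> list[int]:
    # A lookup returns the k node IDs nearest to the key under XOR distance;
    # those nodes are the ones responsible for storing the value.
    return sorted(nodes, key=lambda n: xor_distance(n, key))[:k]

# Hypothetical usage: which peers should hold a given file chunk?
peers = [make_id(f"peer-{i}") for i in range(20)]
key = make_id("some-file-chunk")
responsible = closest_nodes(key, peers)
```

A real implementation adds routing tables of k-buckets and iterative lookups, but the XOR metric above is the whole addressing principle.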
Now we can begin to look into the apparatuses of P2P networks and how, at every layer of Bratton’s model, they can circumvent the ubiquitous surveillance agents of the sovereignty, beginning with User. In his elaboration of this section, Bratton maintains that the identification of a user should not rely on the physical presence of an intelligent form (whose only known agent is the human), but on the ability to initiate columns in the Stack. P2P networks take a similar approach to hiding the actual identities of their users: they mimic the behaviour of automated bots. More specifically, they randomize their behaviour so that it does not feature the common characteristics of human involvement. This drastically increases the cost for the sovereignty of interfering with their operation on the basis of behavioural patterns, because the sovereignty itself relies heavily on automated surveillance agents and computational units, whose observable behaviour is indistinguishable from that of the P2P networks and whose mission is too important to abort. The same reasoning applies to Interface. By providing a minimalistic interface that accepts nothing but essential commands, P2P networks give no chance for additional information to be gathered during the interaction. Whether it is human A, human B, Wiki-bot C or even monkey D, there is no way to tell, not even for the P2P networks themselves.
Continuing to Address. This is where P2P networks bear the least resemblance to traditional networks. In the book, Bratton observes that whatever can be addressed can be ruled. By the same logic, we can say that whatever must not be ruled must not be addressable. Being able to address and access an entity, whether electronically or geographically, is a fundamental premise of a sovereignty’s authority over it. P2P networks, on the contrary, offer no such entity that could be addressed and approached. Put more plainly, there is no single, complete entity that corresponds to an end user. Each functional unit is sliced, duplicated and distributed across multiple, usually hundreds of, end users. This strategy takes advantage of the fact that addressing happens millions upon millions of times at a very low level of the network hierarchy; the sheer volume makes it impossible to surveil each and every one of these operations. A similar approach works at the City level: P2P networks respond to regimes of thorough and constant inspection by slicing, duplicating and attaching small pieces of themselves onto “clean” content, so that the inspection yields no fruit.
Now to Cloud and Earth. Imagine a world in which Cloud has essentially become Earth, that is, in which the various Cloud platforms have penetrated the entirety of the non-paper world. One must belong to one platform or another; there is no option of “none of the above”. These platforms have access to all traffic and interaction, and the ability to compute across time, because they archive everything that has ever happened. The response to this is to dive deeper down into Earth, where millions of physical elements now engage in computation as well. P2P networks again divide themselves into pieces and modules which, taken alone, are just arbitrary and meaningless data, but which, when properly combined, turn into functional units. The data and information themselves are no longer the target; the target is the order and structure in which they are positioned, which turns out to be incalculable given the total number of computing agents involved at the Earth level, let alone their geographical sparsity.
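One simple, well-known construction that behaves exactly this way is XOR-based secret splitting, sketched below as an illustration. Every individual share is uniformly random noise; an archive that captures any subset of the shares has captured nothing, and only the complete, properly combined set reconstructs the data.

```python
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(data: bytes, n: int) -> list[bytes]:
    # The first n-1 shares are uniform random bytes; the last share is the
    # data XOR-ed with all of them. Any fewer than n shares is statistically
    # indistinguishable from random noise.
    shares = [os.urandom(len(data)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, data))
    return shares

def combine(shares: list[bytes]) -> bytes:
    # XOR is commutative and associative: only the full set, combined in any
    # order, cancels the randomness and yields the original data.
    return reduce(xor_bytes, shares)
```

The “order and structure” point in the paragraph above maps onto the requirement that all shares be gathered and combined; the individual fragments scattered across Earth-level agents carry no information on their own.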
To some extent, all these solutions are the same, or at least similar in essence. They all exploit an inborn incapability of the sovereignty: even though it can exercise power wherever it wants and in whatever way it wishes, it cannot do so to all units on all scales simultaneously. In fact, no sovereignty with any sense left would attempt to. This gives P2P networks the chance to hide by ensuring that their units never aggregate, either geographically or temporally, leaving the sovereignty no chance to grasp the whole picture, let alone control it.