Much of the public policy discussion in Internet circles in recent months – including the Global INET conference in Geneva last month celebrating the 20th anniversary of the Internet Society – has been on the topic of Internet governance: how to steer the Internet to balance the diverse objectives of its numerous stakeholders.
The debates around control versus creativity, voluntary versus mandatory, centralized versus distributed, have counterparts in other dialogues on how society should share and sustain vital resources. Humans have been forming and reforming structures for balancing stakeholders’ interests in the world’s often limited resources for millennia.
But the Internet differs in two fundamental ways from nearly every other resource, natural or constructed, that humans have aspired to govern. First, it encompasses vast computational capabilities, even an emergent form of distributed intelligence. Second, those capabilities are not only renewable, but they are expanding at a dramatic pace.
In June 1993, a year after the formation of the Internet Society, the world’s fastest supercomputer, according to the TOP500 survey, was a Connection Machine at Los Alamos National Laboratory. This record-holder measured in at a computational rate of 60 gigaflops (billion floating-point operations per second).
Turn the clock ahead to November 2011, and the top spot belongs to Fujitsu, whose K computer has achieved the astounding speed of 10 petaflops, i.e., 10 million gigaflops.
Fujitsu’s K computer is thus nearly 200,000 times faster than its Connection Machine predecessor, a Moore’s Law-driven improvement of just under a factor of two for each year of the past two decades.
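The arithmetic behind this comparison is easy to check; here is a quick sketch using the dates and speeds cited above:

```python
# Speedup from the Connection Machine at Los Alamos (June 1993, ~60 gigaflops)
# to Fujitsu's K computer (November 2011, 10 petaflops).
cm_flops = 60e9   # 60 gigaflops
k_flops = 10e15   # 10 petaflops

# June 1993 to November 2011, roughly 18.4 years.
years = (2011 + 11 / 12) - (1993 + 6 / 12)

speedup = k_flops / cm_flops            # ≈ 167,000x, i.e., "nearly 200,000"
annual_factor = speedup ** (1 / years)  # ≈ 1.92, just under 2x per year

print(f"speedup ≈ {speedup:,.0f}x, annual factor ≈ {annual_factor:.2f}")
```

Running this confirms the figure in the text: roughly a 167,000-fold speedup, which works out to an annual growth factor just under two.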
Although there’s considerable uncertainty about how, and for how much longer, exponential growth will continue in terms of circuit speeds and component counts – especially as sizes approach fundamental limits at the atomic level – it’s still reasonable to anticipate some degree of further expansion of overall computational and data capabilities over the next decade or two.
Even a more modest pace of a factor of two every two years would imply a 1000-fold increase by 2032, and the conventional 18-month cycle would yield an increase of 10,000 over that same period.
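These projections are straightforward compound-growth arithmetic; a minimal sketch for the 20 years from 2012 to 2032:

```python
# Projected capability growth over 20 years, assuming a fixed doubling period.
def growth(years: float, doubling_period_years: float) -> float:
    """Capability multiplier after `years` at one doubling per period."""
    return 2 ** (years / doubling_period_years)

conservative = growth(20, 2.0)  # 2-year doubling: 1024, ~1,000-fold
conventional = growth(20, 1.5)  # 18-month doubling: ~10,300, ~10,000-fold

print(f"2-year doubling: {conservative:,.0f}x; "
      f"18-month doubling: {conventional:,.0f}x")
```

The two scenarios bracket the range discussed in the text: a 1,000-fold increase under the modest assumption, and roughly 10,000-fold under the conventional 18-month cycle.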
And these expansions in capability would apply, as they do today, not only to supercomputer speed records but also to everyday information technology: for example, the mobile device I’m using to type this post, the servers that deliver it to my colleagues for review, or the global network that makes it available to the reader.
At least some of that future capability ought to be used to augment today’s complex committee and consensus-based decision processes, and to ensure their effectiveness. Perhaps just one part of the next 10,000-fold increase?
Reflecting on these anticipated technological advances, I offered a prediction in my initial remarks at the Global INET’s closing panel (see the video below): By the year 2032, the Internet will be self-governing.
There are two aspects to this prediction.
First, from a technological perspective, within 20 years, even under conservative assumptions about resource growth, the Internet will have sufficient capabilities to understand the policy objectives of its stakeholders, reconcile them with one another, adapt its operations to fulfill the objectives, and verify that they have been achieved (or not). In other words, to govern. Information technology is a game of orders of magnitude, and at each new threshold, new applications become possible, unlocked by scale and the skill of their developers. Consider IBM Watson, which surpassed human competitors in the game of Jeopardy! with a computational capacity of a mere 80 teraflops – less than one percent of the power of today’s top supercomputer. What could a million machines of this sophistication do?
Second, by 2032, the Internet will have become so fundamental to the global economy – and will itself be so complex – that anything but a computationally assisted governance process will be inadequate to the task. Traditional human oversight just won’t keep up.
Self-governance of the Internet is really co-governance: vast computational capabilities assisting humans and their institutions in their objectives. Just as these resources have been applied to communicate, to transact, to search, and to analyze, they’ll be applied to the even more subtle task of ensuring that all is done according to stakeholders’ overall objectives – and conversely, to inform the selection of those objectives so as to sustain the stakeholders and the Internet for the long term. The various organizations involved in Internet governance over the past 20 years and more have carried out formidable responsibilities with computational resources that were originally quite limited. Now that those resources are much richer, what new opportunities will they create for these organizations to conduct their business?
The answer to the dilemma of how to govern the Internet, it seems to me, rests in the empowerment of the governed: the Internet itself. Bring the Internet to the table – the scale of its operations, the skill of its billions of users – and the game is changed. This unique human-created and -curated resource will assist humans by optimizing itself to fulfill their objectives. And of course, for the foreseeable future, a self-governing Internet won’t build itself. People and companies will continue to play a central role in designing, implementing, and operating Internet services, including the ones that make it more useful in governance.
Whether or not my prediction is on target for 2032 – indeed, even if the Internet as we know it is a very different thing by then – applying advances in technology toward positive human objectives will stand the test of time. It is good motivation for each of us as we take on the forward-looking work to be done today.
What are your predictions for how the Internet will look in 20 years and the next steps to get there?
Hear what all the panelists for the Global INET Closing Roundtable, “Game Changers: Where will they take us by 2032?”, had to say in the video recording of the session below.