8/3/2019 Transcript of the Dynamic Coalition on Core Internet Values
Transcript
Second IGF Meeting of the Dynamic Coalition on Core Internet Values
Sixth Annual Meeting of the Internet Governance Forum, 27-30 September 2011, United Nations Office in Nairobi, Nairobi, Kenya
September 28, 2011 - 14:30
Aims
The objective of the Dynamic Coalition on Core Internet Values is to debate and find answers to fundamental questions such as: What is the Internet? What makes it what it is? What are its architectural principles? What are the core principles and values? And what is happening to the core values in the process of its evolution? What is it that needs to be preserved, and what changes are inevitable? The coalition would seek answers and define the Core Internet Principles and Values.
The Internet model is open, transparent, and collaborative and relies on processes
and products that are local, bottom-up, and accessible to users around the world.
These principles and values are threatened when policy makers propose to regulate and control the Internet with an inadequate understanding of the core values.
What is it that must be preserved in the process of policy making by legislators who seek to regulate the Internet, and in the process of design changes by the business sector in pursuit of business-friendly models? What does the Internet community say cannot be changed? How could changes and improvements be brought about without compromising the core values? How would the different positions between stakeholders be reconciled to commit to the core Internet values?
About the Dynamic Coalition
The Dynamic Coalition continues its work along the lines of the discussions during
the IGF 2009 Workshop (319) on Fundamentals: Core Internet Values. The first
meeting of the Coalition was held at the IGF Vilnius, chaired by Alejandro Pisanty,
and the second IGF meeting at the IGF Nairobi with Dr. Vint Cerf moderating the
proceedings as Chair and Alejandro Pisanty as Co-Chair. The second IGF meeting
was held on September 28, 2011.
A fair 'list' of principles and values is emerging from the discussions. The coalition
would work to emphasize that these values need to be preserved. The Coalition
continues its work outside the IGF.
Weblog: http://coreinternetvalues.org
Transcript
Dynamic Coalition on Core Internet Values
Sixth Annual Meeting of the Internet Governance Forum, 27-30 September 2011
United Nations Office in Nairobi, Nairobi, Kenya
September 28, 2011 - 14:30
***
The following is the output of the real-time captioning taken during the Sixth Meeting of the IGF, in Nairobi, Kenya. Although it is largely accurate, in some
cases it may be incomplete or inaccurate due to inaudible passages or
transcription errors. It is posted as an aid to understanding the proceedings at the
session, but should not be treated as an authoritative record.
***
>> SIVASUBRAMAN MUTHUSAMY: Could everyone please come forward, occupy one of the front seats?
(Pause)
>> Test, test. I have a request to make a test. I hope that it's okay for
everybody. It seems that we have some echo. I don't know if we can solve
that, it would be great if we can. Decrease the echo. Thank you.
>> Testing, 1, 2, 3. Testing, 1, 2, 3.
>> That works for me. Thank you.
>> VINT CERF: That was a slow introduction.
>> By the way, we have a lot of people online. They are waiting to start as
soon as you are ready. They will be very happy. Thank you.
>> ALEJANDRO PISANTY: This is Alejandro Pisanty from the National Autonomous University
of Mexico and the Mexico chapter of the Internet Society.
I am very thankful to Siva, Sivasubraman Muthusamy, who is sitting at the right
from my side, from your left, at the end of the table, for the enormous work he put in creating this session and giving it continuity from last year's. The session will be
moderated and chaired by Vint Cerf.
I will be an acting chair until he picks up. This is the meeting on the Dynamic
Coalition on Internet core values and principles.
We were first set up a year ago in Vilnius. The objective, as you may
remember -- the Dynamic Coalitions were described by Nitin Desai, in his days as the chair of the Internet Governance Forum, as potential formations that
would emerge from the Forum, spontaneously formed by people with similar
interests.
We have come together around the idea that there are some design principles of
the Internet which extend well into the layers above, including some of the ways
that the Internet is adopted in society, whatever field you look at, education,
politics, health, and that it is important to keep an eye open within the Internet Governance Forum's framework on these very basic design principles,
interoperability, the end-to-end principle, and a few others.
One of the beauties of the thing is that there aren't that many, it is not a long
list, but it's a very powerful list; and that we have to work together with other
Dynamic Coalitions that are forming that are concerned with very important things
like Internet rights and principles at the more political or social level in higher
layers, or with freedom of expression, where again the technological support has
to be available and has to be kept open and interoperable.
On the other hand, there are principles that can or cannot be supported by the
technological architecture of the Internet -- identity, for example, or proper management that
allows for both -- and that can interfere with the architecture if there is suddenly a legal
order to design things in a specific way, and the Dynamic Coalition would be
working to keep an eye open and to eventually produce some statements of
warning or of support for things that could go either way.
Last year, we had a very lively meeting, and I'm sure this one will become lively
as people are free to enter the room, as well as to participate remotely. We
have some views which were quite different among the different participants.
Maybe the most striking differences were the expressions of a woman engineer
from Lebanon who was all for anonymity.
She said one of the first values she wants to see preserved on the Internet is a
core value is anonymity, and her argument which is on the record of that session
is anonymity is very important for us because that is the only condition under
which women in countries like mine, meaning Lebanon and I'm sure many of the
surrounding ones, this is the only way she said in which women and young
people in countries like mine can have access to sexual and reproductive health
and conduct information. If we have to be identified, then we will just not be free enough to access this information, because it would create different reactions that
we cannot predict.
On the other hand, we had a very young man from last year's host country,
Lithuania, coming in to that discussion, with the following statement: I believe he
said that we need for this anonymity thing to end. We need for everybody who
comes on to the Internet to use it to be fully identified, and the reason for that is
that Lithuania, on becoming part of the European Union and a more active member of Europe, is going to become a country of culture.
We want to have everybody become potentially a culture creator, and we want for
us to be able to make a living with that. We have to be able to protect our
intellectual property rights. And the only way we think, he thinks he could do this
is by making sure that everybody who makes a copy of anything on the Internet
is properly identified, so you can follow up on that.
So you can see that the views of the values on the Internet are as various and
diverse as the values people hold. Some of them can be enacted, some of the
opposite values can be made compatible with certain architectural principles, and
some of them may actually be incompatible among themselves and -- if a
Government, for example, errs in implementation -- may actually ruin
the way the Internet operates; at least it may become, old word, fractionalized.
Those are the issues we are trying to address.
With this introduction I will tell you people who are on the front panel. Myself,
Alejandro Pisanty, as I have mentioned. Dr. Vint Cerf, who will be chairing it; as
you know, Dr. Cerf was one of the key people at the start of the Internet,
together with Bob Kahn, who we are honored to have in the room. He separated
TCP from IP, and in taking that rib out of the original single being, he
created, they created the great opportunity that the Internet has become.
We have Sivasubraman Muthusamy, who is the chair of ISOC, the Internet Society in
Chennai, an active businessperson and civil society actor with great experience.
And I'm honored also to have on this panel, to sit on this panel with Scott
Bradner, who is now the chief independent genius for the Internet at Harvard -- is
that the correct job description? -- and long-standing creator and supporter of the
evolution of Internet standards in the IETF, the Internet Architecture Board, the steering
group, amongst everything else that can be done for the Internet, and a very, very esteemed friend.
So you can see we have the academic, the private, the civil society sector
already sitting at the panel, and we see there are Government officials. So we
have all four sectors present in the room, all four stakeholders. We hope to keep
this multistakeholder as we did for the work from last year. Vint, I will hand it
over to you.
>> VINT CERF: Thank you very much, Alex. And welcome, everyone. I'd like
to start out by observing that Scott Bradner's initials are SOB, and that might also
be an important indication of some kind.
Second, Alex didn't mention that he is Chairman of ISOC Mexico, so we have two
ISOC chairs here. It's really a pleasure to discover that these institutions which
were started so long ago continue to persist, grow and be more effective.
So, the topic is core values. I think we could span a fairly broad discussion
starting with the technological values or principles that have helped make the
Internet so persistent, and also able to evolve all the way up to and including the
social and economic values that the Internet has engendered, in part in
consequence of its origins, and the people who built it.
But I thought I would start out, first of all, with some very important specific core
values, and they are 32, 128, 16, 7 or 8, 13, and 42.
Now, it's an exercise for the reader to figure out what those numbers correspond
to, but indeed, every one of them is important to the Internet; although 42 is a
red herring, drawn from Douglas Adams' wonderful writing, So Long, and Thanks for All
the Fish.
To go back in history, however, to the earliest notions of open networking, Bob
Kahn started thinking about this while he was still at Bolt Beranek and Newman
before coming to DARPA in 1972, and although I'm paraphrasing, Bob, and if you
feel I've left something out, you should react, there were several things that I
noted. One of them is that his notion of open architecture, about networking,
started out with the assumption that each distinct network would have to stand on
its own, and no internal changes would be required or even permitted to connect
it to the Internet.
So this really was intended to be a network of networks.
The second notion was that communications would be on a best efforts basis. So
if a packet didn't make it to the final destination, it would be retransmitted from
the source.
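That "best efforts" idea can be sketched in a few lines of Python: the network may silently drop a packet, and recovery lives at the source, which simply retransmits. The function names, the loss rate, and the retry limit here are all invented for illustration; this is not a protocol implementation.

```python
import random

random.seed(7)  # fixed seed so the demo is repeatable

def lossy_send(packet, loss_rate=0.3):
    """A best-efforts network hop: it may silently drop the packet."""
    return packet if random.random() > loss_rate else None

def reliable_send(packet, max_tries=20):
    """Recovery lives at the source, not inside the network:
    keep retransmitting until the packet gets through (or give up)."""
    for attempt in range(1, max_tries + 1):
        received = lossy_send(packet)
        if received is not None:
            return received, attempt
    raise TimeoutError("packet never delivered")

data, tries = reliable_send(b"hello")
```

The point of the sketch is the division of labor: `lossy_send` makes no promises and keeps no state, so all the reliability machinery sits at the endpoint.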
Part of the reason for that is that some networking technologies didn't have any place to store the packets in between, Ethernet being a good example; although,
it hadn't been, Ethernet had not quite been invented at the point that Bob was
writing these ideas down. The third notion was that black boxes would be used
to connect the networks. Later, these black boxes would be called gateways, and
then later, routers.
There would be no information retained by the gateways about the individual flows
of packets passing through them, thereby keeping them simple and avoiding
complicated adaptation and recovery from various network failures. So a memory-
less environment was attractive because of its resilience and robustness.
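The "memory-less" gateway idea can be miniaturized as well: each packet is forwarded independently from a routing table, and nothing is remembered between packets. The table entries and interface names below are made up; real routers use far larger tables, but the longest-prefix-match rule is the genuine mechanism.

```python
import ipaddress

# A toy forwarding table: (prefix, outgoing interface). Entries are invented.
FORWARDING_TABLE = [
    (ipaddress.ip_network("10.0.0.0/8"), "if0"),
    (ipaddress.ip_network("10.1.0.0/16"), "if1"),
    (ipaddress.ip_network("0.0.0.0/0"), "if-default"),
]

def forward(dst_ip):
    """Stateless forwarding: each packet is routed on its own by
    longest-prefix match; no per-flow state is retained."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [(net, hop) for net, hop in FORWARDING_TABLE if dst in net]
    # The most specific (longest) matching prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]
```

Because `forward` depends only on the packet in hand, a gateway built this way has nothing to adapt or recover when flows fail, which is exactly the resilience being described.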
And finally, among other important notions, was the idea that there would be no
global control at the operational level, that the system would be fully distributed.
In the prehistory of the Internet, work was done on the ARPANET. And out of that
work came notions of layers of structure with the lowest layers bearing packets
and bearing bits, and the higher layers carrying more and more substantive
content.
Substantive information. Some people took layering to be a strict kind of thing,
and the term layer violation was often bandied about. The notion of keeping the
layers ignorant of what the other layers were doing had advantages. It meant you
could remove or change or reimplement a layer without having any effect on the upper or lower layers, because the interfaces were kept stable.
Similarly, the notion of end-to-end allowed the network to be ignorant of the
applications or the meaning of the bits that were flowing in the packets, and those
bits would be interpreted only by software at the end.
There had been debates about these two ideas, subsequently, and some
arguments had been made for permeability. It's pretty clear that in some cases, let's even say at the routing layer, it might be nice to know what is going on with
regard to the underlying transmission system, because you might decide that some
path on the net is not appropriate for use because it's failing. If you don't know
that, the routing system can't know it should switch to a different alternative path.
One can make similar arguments at higher layers where there's, for example, a
loss of capacity. If this is known to an application layer, the application might
respond by changing the coding scheme for, let's say, video or audio. This notion of layering and end-to-end treatment could be argued to be not necessarily
absolute, but it has turned out to be a very powerful notion, because we have
swept new transmission technologies into the Internet as they have come along,
without having to modify the architecture.
Frame Relay and X.25, and ATM and MPLS, became part of the tools for moving
packets around; the basic Internet Protocol layer didn't have to change except for
adapting to each by figuring out how to encapsulate an Internet packet into the lower-level transmission system.
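The encapsulation point can be sketched directly: the IP packet stays the same, and each link technology only supplies a wrapper. The frame formats below are invented stand-ins; real Frame Relay, ATM, or MPLS headers look nothing like these dictionaries.

```python
def ip_packet(src, dst, payload):
    """The IP layer is the same regardless of what carries it."""
    return {"src": src, "dst": dst, "payload": payload}

# Each transmission technology contributes only an encapsulation step.
def ethernet_encap(pkt):
    return {"link": "ethernet", "inner": pkt}

def mpls_encap(pkt, label=42):
    return {"link": "mpls", "label": label, "inner": pkt}

pkt = ip_packet("10.0.0.1", "10.0.0.2", b"data")
# The identical IP packet rides unchanged inside either frame.
eth_frame = ethernet_encap(pkt)
mpls_frame = mpls_encap(pkt)
```

Sweeping a new transmission technology into the Internet then means writing one more `*_encap` function, not touching IP itself.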
Interoperability was a key notion in the system. The whole idea behind the
Internet was that if you could build something that matched the Internet's
architecture and technical specifications, then you should be able to connect to the
rest of the Internet, if you could find someone who was willing to connect to you.
This notion of organic growth has been fundamental to the Internet's ability to
grow over time.
In the Internet Engineering Task Force, there are some other principles that have
emerged, one of them which Scott, you might care to comment on, is that if you
are going to do something, do it one way, not two ways or three ways or four
ways. If you can get away with that, it's helpful because you don't have to figure
out which way is the other party choosing to do this particular function.
The IETF also underscored some other important principles. There is no
membership in the IETF. You can't become a member of it. All you can do is
show up and contribute. It's a meritocracy. If your ideas attract others, you may
actually succeed in getting a standard out of the IETF process. If nobody
considers your ideas to be particularly attractive, then you may not succeed.
But the idea here is that it's the ideas that count. I think there is also a wonderful quote from Dave Clark, who served as the Chairman of the Internet
Architecture Board during its previous incarnation as the Internet Activities Board.
Scott, I'm not sure I can get this exactly right. But it was something like: we
don't believe in voting, something in --
>> SCOTT BRADNER: Kings or voting.
>> VINT CERF: We don't believe in kings or voting; we believe in rough consensus and running code.
I would say that that principle continues to guide much of what the IETF does.
I have other observations to make. I'm going to set them aside for the moment,
and turn to my fellow panellists and ask them to make a few remarks from their
point of view on what is important in Internet principles. Scott, can I ask you to
take the --
>> SCOTT BRADNER: I'd like to sort of pop up a level, based on what Vint is
talking about. The result of the, these principles that Vint just articulated was a
sort of a different, higher level principle, which was the ability to innovate without
permission, that you and I could agree on a new application, and deploy it without
having to get permission of the network to do so.
This was the initial driver and is still, in the corporate environment, corporation to
corporation, a very strong ability. It's less so within a corporation because of those
firewalls, and within ISPs, some of which filter to residences; but still, for one
business talking to another one, it's been key.
It is what allowed the World Wide Web to come about, because when Tim
Berners-Lee decided he was going to make things easier for physicists who didn't like to type, and allow pointing and clicking, he could put together a browser and
put together a server and distribute it to his friends, and they could start using it
without getting any permissions from anybody -- a very important thing.
Another piece that Vint alluded to is what is called the end-to-end argument or
end-to-end principle, from Dave Clark and others at MIT -- Saltzer, Reed, and Clark --
which can be paraphrased as: render unto the ends what can best be done
there. The network itself is agnostic to the traffic going over it. It doesn't try to do a better job for traffic that it thinks wants better service.
It doesn't look into the traffic to see that it's voice and therefore should be
accelerated or something. This is a principle which is constantly under attack, on the grounds
that it doesn't make sense -- it doesn't make sense from the point of view of
somebody who is focused on a particular application. Somebody from the
telephony world wanting to use the Internet for telephony -- and the Internet is now
the underlying connectivity for most of the world's communication, and by Internet here I'm drawing it broadly, as in the Internet Protocol and the way it's used; not all
of it is the public Internet -- they look at the Internet Protocol and say, but it doesn't
do a very good job with voice. It is not tuned for voice. It is not architected for
voice.
Bob Braden -- he was a person on the Internet Activities Board, the Architecture
Board that Vint mentioned -- said that optimization was not one of the goals of the Internet
Protocol, and wasn't one of the goals of Internet standardization. Flexibility
and the ability to create new things was.
So, we constantly in the IETF come under pressure from various folks who want
to make the Internet better for some particular application at the expense of other
applications, because their application is the most important one, at least to them.
The rough consensus and running code bit that Vint mentioned -- there were two
important parts of that. The IETF works on rough consensus, and this was
mentioned earlier in another session I wasn't at, but it was paraphrased for me,
that consensus in the original way that that term developed many centuries ago
was not that everybody agreed.
It was that everybody had a chance to discuss, and even if there were a few
people who disagreed, you could still move forward. Consensus in many
standards bodies has come to mean unanimity -- that everybody has to agree. And when you have it mean that, it means that the standard you develop has to
take into account all of the weirdnesses that any particular participant might want.
So standards tend to be complicated, difficult to maintain and difficult to
understand.
The IETF strictly believes in rough consensus, meaning that if some number of
people really don't like the result, but they can't convince the majority, the vast
majority of the badness of the idea, then it will go forward.
Running code: the original standards process was a three-step standards
process, where the middle step required that you actually had
interoperable implementations of code before you could move forward. That's been,
actually, in the last few weeks, dropped to a two-stage process, where the second
stage requires that. The running code was not to prove that somebody was
interested. It was to prove that the standard was clear.
So that if you implemented a standard, and I implemented a standard, both
reading the standard without resorting to looking at other materials, and we could
interoperate, that means the standard was clear enough. And requirement for
running code was to ensure clear standards.
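That test -- two independent implementations, written only from the spec, that interoperate -- can be miniaturized. Here the "standard" is a one-sentence wire format invented for this sketch: a message is a 4-byte big-endian length followed by that many bytes. The two encoders deliberately use different techniques, as two independent implementers would.

```python
import struct

# "Spec": a message is a 4-byte big-endian length, then that many bytes.

def encode_a(msg: bytes) -> bytes:
    """Implementation A, written only from the spec sentence above."""
    return len(msg).to_bytes(4, "big") + msg

def encode_b(msg: bytes) -> bytes:
    """Implementation B, written independently, using a different technique."""
    return struct.pack(">I", len(msg)) + msg

def decode(wire: bytes) -> bytes:
    """Either side's decoder."""
    length = int.from_bytes(wire[:4], "big")
    return wire[4 : 4 + length]
```

If A's output and B's output are byte-identical and each decodes the other's messages, the spec was unambiguous. That, not popularity, is what the running-code requirement verified.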
Vint also mentioned that the IETF has a tendency to do it one way. That's true
at one level and not true at another level. It is true when we are talking about
an approach to a problem where there is one architectural principle for that
problem. It is not so true where there are multiple architectural principles. Take
an example: the IETF developed an Internet voice protocol called SIP, the Session
Initiation Protocol, which is an end-to-end-principle protocol. At the same time, we
also developed a core-centric, carrier-centric voice-over-IP protocol called
Megaco, which is a fundamentally different architecture.
And different providers would use the different architectures in different
environments, and they are both being used today in very different environments.
They compete with each other, at the result level, but not at the architecture and
implementer level. So where we do tend to try for a single solution is where it's
just different variants of the same architecture, because then Vint is absolutely
right: if you have more than one way to do it, it dilutes the picture.
But having different architectures, different fundamental philosophies of how to approach something -- trying to argue them into one bin can be completely
counterproductive. You will get the entire Working Group spending most of its
time fighting that sort of thing.
One other thing I want to bring up: we have a constant pressure in the
community against the Internet -- here I mean the Internet Vint described. We
have folks who believe that it needs to be optimized for one application or
another. Or folks who, as Alejandro mentioned, believe that attribution is required for anybody who actually uses the net -- an Internet driver's license, for example -- or
people who believe that different applications should have their own Internets;
governments should have a private Internet, for example.
These are constant battles, and usually what is brought up as rationale is
protecting kids or fighting terrorism or something like that. But it's fitting into a
different architectural business model of control.
A few years ago, one of the big U.S. telephone companies tried to get the FCC
to require that Internet service providers architect their networks in such a way
that all the traffic went through a common central set of switches. Their rationale,
stated rationale, was that this was the only way that the phone company could
guarantee the quality of the connection, is to go through the central point. And
oh, by the way, this is basically wiretapping which the Government was interested
in. In reality, the fundamental reason they wanted to do it was because that was
a common taxing point where they could collect money.
The FCC didn't do that. In the U.S., the FCC Federal Communications
Commission has been pretty much hands off on the Internet. We have had
almost no regulation there, letting a thousand flowers bloom, letting anybody
innovate without having to get permission.
>> VINT CERF: Thank you very much, Scott. Alex, would you like to add
anything to this? Otherwise I have a bunch of other bullets to shoot.
>> ALEJANDRO PISANTY: I will just make a very brief comment. One thing
that impresses many of us who are latecomers to the use of the Internet --
although I must state that a couple of years ago, I discovered in some printouts
I have kept from my quantum chemistry workshop of 1979 that I was actually
using the ARPANET at that time to run things on computers in Berkeley and a couple of other laboratories in the U.S. from Bloomington, Indiana, so maybe not
such a newcomer -- but I started as a user and certainly not as an architect.
But one of the things that many of us find very impressive is how these
architectural and design principles, first, are so fundamental -- being very few, they
really enabled the intended growth of the net -- and second, how they map
well onto some social and even political principles which are pretty sound; I
will not say universal, but helpful universally.
And some people have pointed also to the fact that in the U.S., where much of
this work was done, as well as in Europe, in the years in which this work was
being done by you guys, it actually was so universal that two cultures that were
almost opposite were able to shape it: a sort of more collectivist
culture and a traditional, I will say, Yankee -- in all respectful terms of these
characteristics -- very strong individualistic, self-reliant culture, and both were able to
coexist.
And that is a witness for the robustness of these principles. More so, when we
see that now the Internet is implemented in so many different political and cultural
environments.
>> SCOTT BRADNER: Can I say something else?
>> VINT CERF: Certainly, Scott.
>> SCOTT BRADNER: I want to build on that a little. One of the powers of the
Internet is the ability for people to innovate end-to-end without getting permission, but
that is also one of the most basic threats of the net -- threats to society, in the
sense of the social order. We have seen in the Arab Spring that the level of
impact the Internet has had varied, but still it had an impact, allowing
individuals to communicate where the state wouldn't necessarily want them to be able to
communicate.
Many years ago, I was doing a series of tutorials for the Internet society, in
developing country workshops. And a representative of a particular country came
to me, as an instructor, and said that he really would like the Internet but didn't
like pornography. After a bunch of discussion, we concluded that no, he really
was using pornography as a symbol: what he didn't like was information that would confuse the citizens. And that is a direct quote. That is fundamental.
One of the by-products of not having that control point that Vint mentioned that
Bob had in his principles is you don't have a control point, you don't have a way
to filter what people can say to each other. Some countries try very hard with
different degrees of success.
But Larry Lessig -- once of Stanford, now of Harvard -- said that
code is law. He meant that the design of the Internet and the design of that kind of technology impinges on the ability of state control. You can't make laws to tell
the net to do things that it's not architected to do, which is why the
telephone company in the U.S. wanted a requirement to rearchitect the Internet:
because the Internet doesn't support the kind of controls that they believe they
needed.
>> VINT CERF: Those are all very good points, Scott. A couple of other things
might be useful. Back in the technical domain, I would call this notion design factorization. And to illustrate this notion, I would offer the observation that if you
read the protocol specification for the Internet Protocol, nowhere in that document
will you see the word "routing," or at least I don't think there will be any mention
of how that is done. The assumption is made that somehow, a packet with the
right format that is handed in to the Internet will find its way to the destination,
but the details of how that routing is done is distinct and separate from the basic
Internet Protocol.
The idea behind this is to allow, for example, the possibility of multiple alternative
routing algorithms, and indeed, we have a number of them. So the point here is
that by factoring things out, you offer significant flexibility.
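Because IP says nothing about routing, routing algorithms are swappable, which is essentially the strategy pattern. The sketch below invents a toy topology and a single breadth-first-search strategy; the delivery machinery accepts any function with the same shape, which is the factoring being described, not any real routing protocol.

```python
# Routing is factored out of the packet format: the delivery machinery
# below only assumes that *some* route function will pick a path.

def shortest_hop_route(topology, src, dst):
    """Toy strategy: breadth-first search for a fewest-hops path."""
    frontier, seen = [[src]], {src}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == dst:
            return path
        for nxt in topology.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # unreachable

def deliver(topology, packet, route=shortest_hop_route):
    """Accepts any routing strategy unchanged -- the factorization point."""
    return route(topology, packet["src"], packet["dst"])

net = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
path = deliver(net, {"src": "A", "dst": "D"})
```

Swapping in a different `route` function (latency-aware, policy-based, and so on) changes nothing about `deliver` or the packet format, which is the flexibility the factoring buys.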
Another interesting feature that was very deliberate in the Internet is that the
Internet addressing space is nonnational in character. We didn't start out with the
assumption that we should identify countries and then allocate some address
space to each of them.
Rather, we started out with the notion that every address in the network is
reflective of the topology of the network and the way in which or where you
connect to it. Interestingly enough, despite the fact that that was an important
principle and continues to be the case with regard to IP addresses, the actual use
of the net, especially with the advent of the World Wide Web, has led to people creating tables that associate IP addresses with national locations, and in some
cases even more refined identifiers, down to the city level.
It turns out that their rationale for this has, as far as I can tell, not much or
anything to do with control or identifying anything other than using this as a clue
for what kind of response should be offered to the party that is using the net.
So as an example, at Google, when you try to connect to WWW.Google.com and the domain name lookup is done, our name server asks: where did this question
come from? What is the IP address of the source? Do I have any idea what
country that might be in? And it makes a guess, or it looks up in the table and
hopes that it's correct, and then it vectors the party to whichever version of
Google is specific to that country.
If you are here, using Internet addresses that are believed to be allocated to
Kenya, the Web page that would come up is not Google.com but Google.co.ke.
So this is intended to be a friendly response -- to try to offer, for example,
language assistance. So in spite of the fact that the design principle is to be non-national, people who saw application utility in having some mapping had to
implement it themselves.
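The lookup Cerf describes -- guess a country from the source address, then steer the user to a local front end -- can be sketched with a hypothetical prefix table. Every prefix and hostname below is invented for illustration; real geolocation tables are huge and imperfect, which is exactly why the server only "hopes that it's correct."

```python
import ipaddress

# Hypothetical prefix-to-country table; real tables have millions of entries.
GEO_TABLE = [
    (ipaddress.ip_network("197.136.0.0/16"), "KE"),
    (ipaddress.ip_network("41.90.0.0/16"), "KE"),
]
LOCAL_FRONTEND = {"KE": "www.google.co.ke"}

def frontend_for(client_ip, default="www.google.com"):
    """Best-effort guess: map the source address to a country, then to a
    country-specific hostname; fall back to the global site."""
    addr = ipaddress.ip_address(client_ip)
    for net, country in GEO_TABLE:
        if addr in net:
            return LOCAL_FRONTEND.get(country, default)
    return default
```

Note that the mapping is layered entirely on top of the addressing system: IP addresses remain non-national, and the country guess is an application-level table maintained separately.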
I think it would be useful to move to some nontechnical kinds of
principles that have certainly been a powerful element of the Internet's evolution.
The openness of the specifications turned out to be quite important. No one
constrained access to the information about how to build the network. It was a
very deliberate decision not to restrict access to the design or the
specifications, and an effort was even made to produce reference implementations of
the protocols and make them available.
Probably one of the most important decisions made in the time that Bob was at
DARPA was to fund the implementation of the TCP/IP protocols for the UNIX
operating system.
Initially, that work was done at Bolt Beranek and Newman, and later reimplemented by Bill Joy at the University of California, Berkeley. The Berkeley 4.2 release was the first of the UNIX implementations with TCP/IP in
it. And during that time frame in the early 1980s, this was a period when -- what
did they call them? They weren't personal computers yet. They were workstations. The notion of a workstation with an Ethernet connection and a local
area net running TCP/IP was enormously attractive to the academic community.
The consequence of making this software implementation of TCP/IP, plus the operating system, freely available was that it certainly induced a rapid uptake in the academic community. This notion of freely available implementations, the notion of source code, the notion of freely available specifications, continues to permeate the Internet environment and continues, I think, to stimulate its further growth.
You can see the same sort of thing happening in my company. For example, the
release of the Android operating system as source code or the Chrome browser
or the Chrome operating system are all examples. And there are many others here in Africa. Ubuntu, which is one of the popular versions of UNIX, is
widely used partly because it's freely available. I think the notion of stimulating
the use of the network by making its tools and raw materials readily available has
been an important part of its history and it should remain that way.
The same thing can be said for application programming interfaces. When Scott
Bradner mentioned Tim Berners-Lee and the World Wide Web, it was the
standardization of the protocols and interfaces to them that has allowed so many
new applications to be built on top of the World Wide Web.
Another example of this exposure of source code is dramatic. When the World
Wide Web was first released, one of the features of the browsers was that, if you
wondered how did that Web page get built, you could ask the browser to show
you what the source of the Web page was. You could see the hypertext markup
language.
The side effect of showing people how this was actually accomplished is that they
learned how to make Web pages on their own. The notion of Web master
emerged out of the freedom to see and copy what other people have done, and
to experiment with new ways of implementing Web pages. Over time, programmes were developed to make it easier for people to create Web pages. But the important thing is that this openness notion permeated so many of the layers of the system.
I think we might want to move into some of the institutional consequences of, and principles in, the Internet world. Scott.
>> SCOTT BRADNER: I'd like to actually reflect on one of the points you made,
World Wide Web, and the way that worked. One of the things I mentioned earlier
is that the net is no longer quite as transparent as it used to be. We have situations where there are firewalls in corporations, and some ISPs and some countries are getting in the way, and things like that.
One of the things that's happened is that what used to be the IP layer of the Internet, the layer where everything could be innovated on, has moved up to the World Wide Web layer, port 80, or the secure port for that, so you can now have new applications running on top of that.
Where that is getting more and more important is with HTML5, the new version of HTML, which allows you to build things that look like applications within
Web browsers. And where this may have the most effect is actually on smart
phones. Many smart phones have some levels of control by the vendor as to
what applications they will support. And with HTML5, which for example is being pushed by Apple, one of the ones that has very strong controls on what you can put onto the phone, you can build applications in HTML5 that would never get
approved by the app store, and therefore have a whole new layer of this
innovation without having to worry about control.
>> VINT CERF: We are actually big fans of HTML5 at Google too. I
wanted to note something about the ability of the system to evolve. One of the
things that is interesting about the Internet is that it's not a fixed architecture
which is trapped in time.
So over the 30-some-odd years that it's been in operation, or nearly 30 years, it
has evolved. And for example, we have run out of IP version 4 address space,
and we have to implement a new version of the Internet protocols which were
standardized in 1996 with 128 bits of address space. What is important is that they can run together in the same network.
It is not like you have to throw a switch somewhere. In fact, even the World Wide Web, which is a very important platform for many applications, as Scott points out,
can also invoke non-http protocols. And so when you are talking Skype or when
you are talking Google Talk or you are doing some kind of video interaction or
some other application, you may very well be running multiple protocols at the
same time, some within the World Wide Web environment, and some below that
level, right above UDP or IP or RTP or some of the other very low-level bearer
protocols.
So this ability to invoke multiple protocols at the same time inside and outside the
World Wide Web environment means that the network continues to be a place
where innovation is possible, and I would not be surprised to find that there will
be new applications and new protocols arising sitting on top of IP or sitting on top
of UDP, or sitting on top of TCP, or possibly even others that come along.
So that is a very important part of the evolution.
Another example of that is the Domain Name System which was developed in the
early 1980s, 1984, 1985, and heavily invested in things encoded in ASCII. It's been apparent for a long time that not all languages in the world can be written using characters that are drawn from the Latin character set.
Now we have internationalized domain names, and even though they ultimately ended up being encoded in ASCII, in order to avoid having to change everything in the DNS to accommodate them, the point is that it's been possible to evolve to a much richer presentation of naming than would have been possible if we had stuck only with the ASCII coding.
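The ASCII encoding Vint refers to is Punycode, the `xn--` form used for internationalized domain name labels. Python's built-in `idna` codec shows the round trip on the classic example word:

```python
# Internationalized domain name labels travel through the DNS as plain
# ASCII, using Punycode labels with the "xn--" prefix.
label = "bücher"  # a non-ASCII label

ascii_form = label.encode("idna")  # what actually goes into DNS queries
print(ascii_form)                  # b'xn--bcher-kva'

# The receiving side decodes it back to the user-visible form.
print(ascii_form.decode("idna"))   # bücher
```

The DNS itself never had to change: resolvers and servers still see only ASCII, while applications present the decoded form to users.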
This notion of being able to continue to evolve and exploit new ideas, exploit new
kinds of transmission systems, is a very important part of the longevity of the
Internet and its ability to accommodate new ideas.
So maybe we could push a little further up now into the institutional layer,
because you have seen institutions emerge out of the Internet experience. The
most visible technical one is the Internet Engineering Task Force, which we have already talked about.
The Internet Society arose in part out of the belief that a society would emerge in consequence of people using the Internet. I think it's fair to say that we are seeing that. It is not one society. It is lots of societies. And that's okay. They
can run in parallel on the network and use whatever applications seem to be best
fitted to human interests.
The Internet Corporation for Assigned Names and Numbers emerged out of this whole process, and the Internet Governance Forum emerged out of WSIS, and they have one feature in common. They believe in multistakeholder processes that are open and accessible to all who have something to say.
I hope that we are able to preserve that principle. There was a very strong statement to that effect by Larry Strickling, the head of NTIA, in the ministerial sessions on Monday. The reason that is so important is that it is the vital interaction of all these interests that gives the Internet the opportunity to evolve new applications and new ways of serving the people who use it.
Since you have joined us here, would you like to offer some comments as well?
>> SIVASUBRAMAN MUTHUSAMY: My job in all this has been very easy. I
ask questions, and I've left the answers to come from you. So I'm comfortable.
>> VINT CERF: You want to ask some more questions?
>> SIVASUBRAMAN MUTHUSAMY: Some more. One question is, what can we
do to preserve core values? And are we doing enough to preserve the values?
Bits and pieces are happening in different parts of the world.
In one country, it's legislation about filters. In another country it's on surveillance.
In some other country, it's on some other problem. But all this happens in complete isolation from what is being discussed here. What can we do to
prevent these values from being altered? So that is my question to you.
>> VINT CERF: So, I'm not going to try to respond alone to that question.
Let me make a couple of observations. We have already seen the utility and value of some of these core notions. On the other hand, it is not, in my view, the purpose of the Internet to jam its principles down anybody's throat. The Internet is not required. It is a thing that's offered for people to use if they want to use it.
I think the freedom to use the Internet, however you want to, is a very important
one. So I'm not sure that I would try to force everyone to
behave the same way.
I think, though, that we have to recognize that when the system gets to the scale that it is today, it can be used to do bad things as well as good things. And I think that we have to accept that as a society, we should be interested in
protecting ourselves from bad actors. The question is, how can we do that?
What means do we have to do that? In what Forum do we even talk about
that? The Internet Governance Forum is one place where we can and should talk
about the harms that could potentially occur, because the Internet is so open and
freely usable.
I guess I'd like to ask my two other colleagues here whether they have a response to that important question.
>> SCOTT BRADNER: I mentioned that one of the things about the Internet at
least in the U.S. has been a lack of regulation.
This has been a puzzle. It just doesn't make any sense that something as important as the Internet has succeeded in existing for as long as it has without any significant level of regulation.
It's too important to the economic health and social health of the world to have
that continue, at least in some minds.
The net has succeeded because of the flexibilities and principles that Vint and others have articulated, and it succeeded in arenas where the people in those arenas never imagined it would succeed; particularly, telecommunications. The telephone companies did not believe that the net would ever work. The competitor to TCP/IP at the time it came up was X.25. And that made an
assumption that somebody did one thing at a time. You put up a connection
between you and one other thing.
Vint mentioned one of the things you get on the net is parallelism, multiple things
going on at the same time. This has enabled a telephone company to be using IP in their backbone for a decade and not admitting it, maybe because they really didn't think that they wanted to say that. IBM in 1972 said, in an IBM user group, quote, "You cannot build a corporate data network out of TCP/IP."
And the reason for that was definitional. By definition, a corporate data network
had all of these quality of service and managerial controls, and TCP/IP had none
of them.
The very organisations that fought the net because it wasn't optimal, because it
didn't have controls, because it didn't do what they thought they needed to do, it's
done. It's taken over. The net has taken over IBM, has taken over the
telephone companies.
And they are in an environment that doesn't make sense to them regulation-wise.
We are at a precipice. We have been at it for a while, where Governments believe that the Internet is far too important to leave to the people who know what they are doing in a technical sense, and that they need to impose some kind of controls. The President of France said that the Internet had no management, and it was a moral imperative to fix that.
We are going to see more and more of that, on the organizational level. Vint
has talked about some of the organisations that have done wonderful things there.
But we have to be continually vigilant in order to preserve these rights and these
principles, because to folks like the guy who came to me and said that he wanted to control information that would confuse the citizens, the Internet doesn't make sense at a societal level where the aim of some societies is to control the
society.
It just simply doesn't make sense. When something doesn't make sense, they
want to fight it.
>> VINT CERF: Alex, it looked like you had a question coming from the Twitter
feed.
>> ALEJANDRO PISANTY: The back channel is active on several platforms.
There is a question coming in from the Twitter feed from a colleague in Mexico. It asks me to ask Scott and Vint what they think of Microsoft's efforts to control the hardware used to access the Internet, and that refers to the UEFI stuff. We are reminded that UN rules do not allow ad hominem attacks, so we are advised to express opinions that can be grounded in fact.
>> SCOTT BRADNER: The current thing that is probably being referred to is the authoritative boot process. This actually comes from a patent that Dave Farber and a few other people had from a number of years ago. And a big organisation was put together to commercialize it, the trusted computing forum, or trusted computing environment.
There was a big play on that a few years ago where the aim was to say that
you could have a platform, a computing platform which content providers could
actually trust.
In theory, if I control the computer that is sitting in front of me, there is no theoretical way to have digital rights management that allows a content producer to ensure that I'm only using the content in a way that I've paid for. It's theoretically impossible to do, if I control the platform.
Trusted computing environment is hardware that allows the content owner to better
control that environment. The chips to support that have been in most PCs for
eight or nine years, maybe even longer. They were in Macs for a while, but Apple dropped them, not saying why; Apple's forte is to never say what they are doing.
And the latest thing is simply an incarnation of that. It is making use of the
functionality that has been there for a long time. The arguments in favor of it are
very strong. If you control that environment so that only software which has been
approved can be run, you get rid of all viruses, because the viruses aren't going
to be approved.
You get rid of all worms. You have an environment where the user doesn't have
to know how to protect themselves in order to protect themselves. And there is a
lot of power to that. But the other side of it is that the computer owner can
control what you can do. And the controversy that arose recently was whether the PCs that were built to support this secure boot functionality for Microsoft could refuse to run Linux and other operating systems.
Microsoft has assured the community that that is not their intent, and that they will
ensure that that's not blocked. But it is that level of control that has been
desired by the content community for a very long time. They have been fighting
very hard over an environment that they have no possible way of controlling, and
don't like it.
>> VINT CERF: Scott, I actually have a somewhat different interpretation of this, so this may be an interesting debate.
There has been focus of attention on protecting machines against the ingestion of
malware, and the most vulnerable moment in a machine's life is when it boots the operating system.
So the idea that the machine won't boot a piece of code that hasn't been
digitally signed is a pretty powerful protection. It feels to me as if your observations made a fairly big leap, from the ability to assure that the boot code hasn't been modified, to the assumption that somehow that would inhibit all forms of other operating systems or anything else.
I think that one has to be a little careful about under what circumstances a chunk
of boot code is signed and by whom and what the boot code is allowed to do.
>> SCOTT BRADNER: You are absolutely right, and you're wrong, in that the TCE was specifically designed for this; the Farber patent is specifically talking about sequential boot with signed blocks. That is exactly what it's for. But the hardware involves a set of functionality that includes, for example, remote attestation, so that a content owner can ask your PC whether it is running particular software, particular flavors or generations of the software, particular operating systems, and refuse to download content unless you are, for example.
It is specifically built into the TCE functionality. Nobody is currently implementing
that. Microsoft currently is talking about only the boot process. But the chip that supports that supports all the way up the stack, so that nothing can run on the machine that the machine manager doesn't approve. And I'm carefully saying machine manager, not machine owner, because there is a difference in concept: if you bought the machine, are you the owner, when somebody else is saying what can boot on it? There is a philosophical question of whether you own the machine under that condition. But certainly, Microsoft at the moment is only talking about the boot process.
>> VINT CERF: It's not just Microsoft. I mean, this proposal to use strong authentication and validation of the boot sequence is proposed for all machines and all chips. The chip makers have been asked specifically to implement that. Again, the intent being to avoid having a machine boot up into a piece of malware.
I think the correct formulation to get to your point is that whoever is able to sign
the boot code is the party that has control over whether the machine will run that
particular boot sequence.
By the way, there is another little nuance to this. If you are going to update the boot sequence, you also have to check to make sure that the proposed new boot sequence is also digitally signed. So the issue here is: who is the party that can sign that boot sequence? If it turns out to be a particular manufacturer, maybe
Microsoft in this case, that would be different than some other party that you
might or might not trust.
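The gatekeeping logic under discussion can be sketched in miniature. Real secure boot verifies public-key signatures in firmware; this simplified sketch substitutes a hash allow-list, purely to show how control follows whoever maintains the approved set:

```python
import hashlib

# Simplified stand-in for boot-code signature verification: a digest
# allow-list. Whoever controls this set plays the role of the party who
# can "sign" boot code -- the point Vint makes about where control lies.
APPROVED_BOOT_DIGESTS = {
    hashlib.sha256(b"trusted bootloader v1").hexdigest(),
}

def may_boot(image: bytes) -> bool:
    """Refuse to run a boot image whose digest is not approved."""
    return hashlib.sha256(image).hexdigest() in APPROVED_BOOT_DIGESTS

print(may_boot(b"trusted bootloader v1"))            # True
print(may_boot(b"trusted bootloader v1 + malware"))  # False
```

Updating the bootloader means adding a new entry to the approved set, which is exactly the nuance raised in the transcript: the party who can add entries decides what the machine will run.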
I have a concern about time. So I'm going to suggest that we try to open this
up to interaction with the people who have joined us for this session, if that's
okay with you, Alex.
If there are people who would like to raise questions, either from the floor, or
possibly online, Sebastien, have there been any online -- why don't we start with
you.
>> Thank you, Vint. We have a question online from Olivia. The Internet could
be anything from a free-for-all network, where everybody and anything is allowed,
including criminal behavior, to the other extreme of a content provider or Government
controlling it, filtering it, listening to it through deep packet inspection. How can
we solve the challenge of finding the right comfort zone in between those two
extremes? Are there methods to look out for? Are there any early warning signs
that we should watch out for, that will tell us we are going too far in one direction or the other? Thank you.
>> VINT CERF: Alex or Scott?
>> SCOTT BRADNER: You start.
>> VINT CERF: I start, okay. First of all, it is clear that we don't want the extremes. It's also clear, at least I would like to propose, that we don't want a network which is so open to abusive behavior that not only do we not feel safe, we are not safe; that our privacy, security and confidentiality are eroded or lost. And they could be eroded or lost in both directions; even a network which is completely and totally transparent to and controlled by the Government will cost us all of our privacy and confidentiality.
On the other side of the coin, if it's completely wide open, we already have working examples of people penetrating machines, creating zombies and so on.
There has to be some place in between. And it is my belief that there is no solution which is purely technical in nature. There are a variety of ways of increasing the safety of the network. We implied some of them in this talk, talking about secure boot, but that will get us only so far.
Then we have to deal with the fact that there are people who will use this facility
to exercise abusive behavior, and maybe even attempt to cause harm to others or
extract value from them.
The only way to deal with that is to detect the problem, and then come to fairly
broad common agreements, fairly widespread common agreements, that those
behaviors aren't acceptable; and if they are detected, that there will be
consequences. That still leads to a question of how you find the perpetrator.
This leads to questions of attribution. It leads to questions of reciprocity across
national boundaries. It leads to legal agreements about coping with these
unacceptable behaviors.
I think we are going to have to have discussions in the Internet Governance
Forum and possibly in other forums in order to establish norms that are
acceptable on a fairly wide scale. In the absence of any decisions along those
lines, I don't see how we will enact any protections that are worth anything at all.
Finally, we can't stop people from doing bad things. And because we can't stop them, the only other thing that we have to do is to tell them that they shouldn't,
because it's ethically wrong. And that's the kind of educational thing that we
ought to be teaching kids as they grow up, to value national values and family
values and other things. You wouldn't want other people to harm you. This is
the golden rule all over again. We have all those three possible ways of dealing
with the problem. Somehow we are going to have to work our way through to a
place that is largely, let's say roughly comfortable for everybody. Scott.
>> SCOTT BRADNER: I want to add a little bit of flavor to some of the things
Vint said.
Deep packet inspection won't stop the bad guys, because, if you remember, in World War II the U.S. employed Navajo code talkers, and the code they spoke was their native language. Even if you can intercept something, assuming that it's unencrypted, which is a bad assumption, people can talk in a code which allows them to actually communicate, and many dissidents in many countries have found this out.
So deep packet inspection is not quite the killer of communication that some Governments might like, or some businesses might want. But it is still definitely a risk. It is still definitely a threat to one's personal life and privacy.
The question of attribution that Vint brought up is actually a very powerful one.
There is a great paper from Dave Clark and Susan Landau on attribution and the
difficulty of it. In particular, attribution is being able to determine who sent you
something or who did something to you. With the kind of attacks that we are
seeing today, the attack almost never comes from the party that is controlling the
attack.
It almost always goes through one or two or four or seven or 25 middlemen.
Somebody hacks into a computer, a student computer at Harvard, and uses that
as a stepping off point to another student computer at Harvard, to a student
computer at MIT, to a student computer someplace else, to somebody's home
computer, and finally attacks the Pentagon. If the Pentagon says, we are under
attack, we are going to nuke who is attacking us, and they use the source IP
address for that, they are going to nuke some grandma. And that is probably not
what they have in mind.
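Scott's stepping-stone scenario is exactly why the source IP address identifies only the last hop, not the attacker. A toy model, with hypothetical addresses drawn from documentation ranges:

```python
# Toy model of a laundered attack path: each compromised host relays the
# traffic, so the victim's logs record only the final relay's address.
# All addresses are arbitrary examples from documentation ranges.
relay_chain = [
    "198.51.100.7",  # attacker's real machine (never seen by the victim)
    "192.0.2.14",    # hacked student computer
    "203.0.113.5",   # hacked home computer, the last hop
]

def observed_source(chain: list[str]) -> str:
    """What the victim's firewall records: the last relay, not the origin."""
    return chain[-1]

print(observed_source(relay_chain))  # 203.0.113.5, not the attacker
```

Retaliating against `203.0.113.5` here would hit the "grandma" in Scott's example; the real origin at the head of the chain never appears in the victim's logs.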
So there aren't any easy answers. As for attribution, or holding countries accountable for what happens from within them: I think it was the Potomac Institute that held a videoconference a couple of months ago about that, where one of the proposals was exactly that, to hold countries accountable for any cyber attacks that come from within the country.
But that doesn't stop somebody from the U.S. breaking into a computer in Bulgaria and then using that to attack China. Attribution is very, very difficult. We did this many years ago, with pirates, where you could actually have some control. There was a doctrine of accountability, that countries were held accountable for bandits that came out of their territory or pirates that came out of their territory.
That doesn't really work in the modern Internet.
>> ALEJANDRO PISANTY: Thank you, Vint. To add to these replies to Olivia's question: the signs that something is going to go wrong, in what I understood of the question, are very much embodied in what has been said by Vint and Scott. You know something is beginning to go weird when you see a mix of responses to behavior problems on the net that leans too heavily on technology and too little on the behavior it actually wants to regulate, and where the technological solution creates more problems than it is intended to solve, or is just unachievable. The attribution problem, as has been mentioned by Scott, is very important. As Vint has said, no law actually prevents crime. We have laws against terrible things: one can kill, and in many countries there is the possibility of being killed for killing, and people continue to kill.
So we have to go back to our basic social problems, and make sure that we do
more with the Internet than against the Internet to solve them.
>> SCOTT BRADNER: One other note, that one of the approaches that some
Governments have worked on and law enforcement proponents frequently talk
about is to require ISPs to record the activities of their users. So to keep track
of every Web site you go to, every E-mail message you send. This is something that is technically possible to do on the Internet, but imagine this in the physical world: a Government that requires every letter to be opened and copied and recorded. Would that ever survive in the physical world?
But that is something you can technically do quite easily in the Internet world, and
there are many Governments that want to do that.
>> SIVASUBRAMAN MUTHUSAMY: Yes. In line with what Scott said, if a Government wants data to be retained, it can only go to the ISPs, and if it wants something filtered, it can go to another business, a certain company. If these businesses increase their resistance, or team up better and try to convince Governments that this is not right, that this is against the values, then can this happen? If the ISPs say no, how could Governments have all the information?
>> VINT CERF: Thank you. Alex, I think that we have, according to my clock, just about five minutes left. So are there administrivia you believe we should be tackling?
>> ALEJANDRO PISANTY: Yes, thank you. Especially after seeing other Dynamic Coalitions not treat the organizational stuff in public and on the record, I think that we are particularly pressed to make sure that the path forward for the Dynamic Coalition is at least set out in a proposal in this session.
Given the pretty broad nature of the questions we have received, and recognizing visually many of the participants physically present in the session, I think that we can safely put forward the following proposal: I will put forward my volunteer effort, and I'm counting on Siva's continuing volunteer effort -- he is a man of incredible strength and initiative -- and whoever else wants to volunteer, to be a core group to move the Dynamic Coalition forward.
What we need to do is to produce a report, which we can easily do from this session. Siva and I can take that responsibility based on the transcript, put it forward, put it up on a blog which we will announce, and make sure that it gets proper comment, so that it's a faithful rendition of the session --
>> SCOTT BRADNER: It's accurate.
>> ALEJANDRO PISANTY: -- we have the accurate transcript, but we will have a
summary that people can use without reading the whole transcript, and without the
"umms" of the transcript. We will also try to get some continuity for this Dynamic Coalition. What the Dynamic Coalition can actually do now, as far as I can see with the people here, is basically set up a very lightweight observatory, in which we keep track of at least the most visible initiatives, and facilitate other people keeping track of initiatives that, either by being restrictive or by proposing things like filtering, blocking, and so forth, or by putting forward sets of principles, sets of policies, digital agendas and so forth, may have an impingement, an impact on, or impose new requirements on the evolution of the Internet's architecture; to make sure that we
continue this dialogue, continue to have a conversation with the private sector, with the technical community, with researchers in the academic community who are doing sociological and political science research about these things, with Government and civil society, and to promote the activity
around this. I think that for now we would not have an immediate pressing need to establish membership rules for the Dynamic Coalition, bylaws to regulate behavior in detail, and stuff like that, which has been found necessary in another Dynamic Coalition.
>> SCOTT BRADNER: This is the Internet.
>> ALEJANDRO PISANTY: It's the Internet. We do it the Internet way. When we have a problem to solve, we start by seeing who is there and solving it. For now what we want to do is to stress the dynamic side more than the coalition side of Dynamic Coalition, and make sure that we can make it useful and valuable over the coming year.
I would emphatically ask for comments on that.
>> SCOTT BRADNER: Sounds good to me.
>> VINT CERF: I'm certainly happy to help craft whatever draft documents you
have in mind. I hope we get a lot of feedback from others who are interested in
the same topic. I think we have run out of time. Sebastien.
>> As Sebastien, not as remote participant.
I think what I understood you to suggest is a very good way. I would like to add one point.
As we are the coalition on core Internet values, I think we can show that the Internet is a good tool for making a Dynamic Coalition work, and maybe we can also be the model of how a Dynamic Coalition could work. We certainly need some ideas, some tools, some people. But I am sure that we can pave the way for others, using the Internet with the right tools, for the future of the Dynamic Coalition.
I think it's important for the future of the IGF itself, because a Working Group is okay, but a Dynamic Coalition could be the way to go from one IGF to the other. So let's try to do it. And I'm sure that with the people who are around the table, in this room, and connected, it will be possible. Thank you.
>> VINT CERF: Thank you very much, Sebastien.
I think that at this point we have to call the session to a close, but thank you all
very much for participating. We look forward to hearing more from you in the
future, and of course seeing you the rest of the week. Thank you.
(Applause.)
(Session ends at 4:00.)