A Brief History of the Internet
An anecdotal history of the people and communities that brought about the Internet and the Web
A Brief History of the Internet by Walt Howe is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.
Based on a work at www.walthowe.com.
The Internet was the result of some visionary thinking by people in the early 1960s who saw great potential value in allowing computers to share information on research and development in scientific and military fields. J.C.R. Licklider of MIT first proposed a global network of computers in 1962, and moved over to the Defense Advanced Research Projects Agency (DARPA) in late 1962 to head the work to develop it. Leonard Kleinrock of MIT and later UCLA developed the theory of packet switching, which was to form the basis of Internet connections. Lawrence Roberts of MIT connected a Massachusetts computer with a California computer in 1965 over dial-up telephone lines. It showed the feasibility of wide area networking, but also showed that the telephone line’s circuit switching was inadequate. Kleinrock’s packet switching theory was confirmed. Roberts moved over to DARPA in 1966 and developed his plan for ARPANET. These visionaries and many more left unnamed here are the real founders of the Internet.
When the late Senator Ted Kennedy heard in 1968 that the pioneering Massachusetts company BBN had won the ARPA contract for an “interface message processor (IMP),” he sent a congratulatory telegram to BBN for their ecumenical spirit in winning the “interfaith message processor” contract.
The Internet, then known as ARPANET, was brought online in 1969 under a contract let by ARPA (as the agency was then named), which initially connected four major computers at universities in the western US (UCLA, Stanford Research Institute, UCSB, and the University of Utah). The contract was carried out by BBN of Cambridge, MA, under Bob Kahn, and the network went online in December 1969. By June 1970, MIT, Harvard, BBN, and System Development Corp. (SDC) in Santa Monica, California, were added. By January 1971, Stanford, MIT's Lincoln Labs, Carnegie Mellon, and Case Western Reserve University were added. In the months that followed, NASA/Ames, MITRE, Burroughs, RAND, and the University of Illinois plugged in. After that, there were far too many to keep listing here.
Who was the first to use the Internet?
Charley Kline at UCLA sent the first packets on ARPANET as he tried to connect to Stanford Research Institute on October 29, 1969. The system crashed as he reached the G in LOGIN!
The Internet was designed to provide a communications network that would work even if some of the major sites were down. If the most direct route was not available, routers would direct traffic around the network via alternate routes.
The early Internet was used by computer experts, engineers, scientists, and librarians. There was nothing friendly about it. There were no home or office personal computers in those days, and anyone who used it had to learn to use a very complex system.
Did Al Gore invent the Internet?
According to a CNN transcript of an interview with Wolf Blitzer, Al Gore said, "During my service in the United States Congress, I took the initiative in creating the Internet." Al Gore was not yet in Congress in 1969 when ARPANET started or in 1974 when the term Internet first came into use. Gore was elected to Congress in 1976. In fairness, Bob Kahn and Vint Cerf acknowledge in a paper titled Al Gore and the Internet that Gore has probably done more than any other elected official to support the growth and development of the Internet from the 1970s to the present.
E-mail was adapted for ARPANET by the late Ray Tomlinson of BBN in 1972. He picked the @ symbol from the available symbols on his teletype to link the username and address. The telnet protocol, enabling logging on to a remote computer, was published as a Request for Comments (RFC) in 1972. RFCs are a means of sharing developmental work throughout the community. The ftp protocol, enabling file transfers between Internet sites, was published as an RFC in 1973, and from then on RFCs were available electronically to anyone who had use of the ftp protocol.
Libraries began automating and networking their catalogs in the late 1960s, independent from ARPA. The visionary Frederick G. Kilgour of the Ohio College Library Center (now OCLC, Inc.) led networking of Ohio libraries during the ’60s and ’70s. In the mid-1970s, regional consortia from New England, the Southwest, the Middle Atlantic states, and elsewhere joined with Ohio to form a national, later international, network. Automated catalogs, not very user-friendly at first, became available to the world, first through telnet or the awkward IBM variant TN3270 and only many years later through the web. See The History of OCLC.
Ethernet, a protocol for many local networks, appeared in 1974, an outgrowth of Harvard student Bob Metcalfe’s dissertation on “Packet Networks.” The dissertation was initially rejected by the University for not being analytical enough. It later won acceptance when he added some more equations to it.
The Unix-to-Unix Copy Protocol (UUCP) was invented in 1978 at Bell Labs. Usenet was started in 1979 based on UUCP. Newsgroups, which are discussion groups focusing on a topic, followed, providing a means of exchanging information throughout the world. While Usenet is not considered part of the Internet, since it does not share the use of TCP/IP, it linked Unix systems around the world, and many Internet sites took advantage of the availability of newsgroups. It was a significant part of the community building that took place on the networks.
Similarly, BITNET (Because It’s Time Network) connected IBM mainframes around the educational community and the world to provide mail services beginning in 1981. Listserv software was developed for this network and later others. Gateways were developed to connect BITNET with the Internet and allowed exchange of e-mail, particularly for e-mail discussion lists. These listservs and other forms of e-mail discussion lists formed another major element in the community building that was taking place.
In times past, it was fascinating to watch a BITNET message we sent as it proceeded from one stop to the next along the way to its destination. We would see it arrive at a site and then see it transmitted along to the next site and the next site and the next. The pace of life was slower then!
In 1986, the National Science Foundation funded NSFNet as a cross-country 56 Kbps backbone for the Internet. The NSF maintained its sponsorship for nearly a decade, setting rules for its non-commercial government and research uses.
As the commands for e-mail, FTP, and telnet were standardized, it became a lot easier for non-technical people to learn to use the nets. It was not easy by today's standards by any means, but it did open up use of the Internet to many more people, in universities in particular. Departments beyond the libraries and the computer, physics, and engineering departments found ways to make good use of the nets: to communicate with colleagues around the world and to share files and resources.
While the number of sites on the Internet was small, it was fairly easy to keep track of the resources of interest that were available. But as more and more universities and organizations, and their libraries, connected, the Internet became harder and harder to track. There was more and more need for tools to index the resources that were available.
The first effort, other than library catalogs, to index the Internet came in 1989, when Alan Emtage and Peter Deutsch, students at McGill University in Montreal, created an archiver for ftp sites, which they named Archie. This software would periodically reach out to all known openly available ftp sites, list their files, and build a searchable index of the software available. The commands to search Archie were Unix commands, and it took some knowledge of Unix to use it to its full capability.
McGill University, which hosted the first Archie, found out one day that half the Internet traffic going into Canada from the United States was accessing Archie. Administrators were concerned that the University was subsidizing such a volume of traffic, and closed down Archie to outside access. Fortunately, by that time, there were many more Archies available.
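In modern terms, what Archie built amounts to a crawler plus a filename index. The Python sketch below is purely illustrative and is not Archie's code; the host name is a placeholder. It lists the files on one anonymous ftp site with the standard ftplib module and builds a simple index that can be searched by substring, much as Archie's index was searched by filename.

```python
# Illustrative sketch only -- not Archie's actual software.
from ftplib import FTP

def index_ftp_site(host: str, path: str = "/") -> dict[str, list[str]]:
    """Map each lowercase filename to the 'host:name' locations where it was seen."""
    index: dict[str, list[str]] = {}
    with FTP(host) as ftp:           # connect to the server
        ftp.login()                  # anonymous login, as Archie's target sites allowed
        for name in ftp.nlst(path):  # one flat directory listing (no recursion here)
            index.setdefault(name.lower(), []).append(f"{host}:{name}")
    return index

def search(index: dict[str, list[str]], term: str) -> list[str]:
    """Return every indexed location whose filename contains the search term."""
    term = term.lower()
    return [loc for name, locs in index.items() if term in name for loc in locs]

# Hypothetical usage (placeholder host):
#   idx = index_ftp_site("ftp.example.org")
#   print(search(idx, "readme"))
```

A real Archie server did this across hundreds of sites on a schedule and merged the listings into one database, but the search step was essentially this kind of filename lookup.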
At about the same time, Brewster Kahle, then at Thinking Machines Corp., developed his Wide Area Information Server (WAIS), which would index the full text of files in a database and allow searches of the files. Several versions with varying degrees of complexity and capability were developed, but the simplest of these were made available to everyone on the nets. At its peak, Thinking Machines maintained pointers to over 600 databases around the world which had been indexed by WAIS. They included such things as the full set of Usenet Frequently Asked Questions files, the full documentation of working papers such as RFCs by those developing the Internet's standards, and much more. Like Archie, its interface was far from intuitive, and it took some effort to learn to use it well.
Peter Scott of the University of Saskatchewan, recognizing the need to bring together information about all the telnet-accessible library catalogs on the Internet, as well as other telnet resources, brought out his Hytelnet catalog in 1990. It gave a single place to get information about library catalogs and other telnet resources and how to use them. He maintained it for years, and added HyWebCat in 1997 to provide information on web-based catalogs.
In 1991, the first really friendly interface to the Internet was developed at the University of Minnesota. The University wanted to develop a simple menu system to access files and information on campus through their local network. A debate followed between mainframe adherents and those who believed in smaller systems with client-server architecture. The mainframe adherents “won” the debate initially, but since the client-server advocates said they could put up a prototype very quickly, they were given the go-ahead to do a demonstration system. The demonstration system was called a gopher after the U of Minnesota mascot, the golden gopher. The gopher proved to be very prolific, and within a few years there were over 10,000 gophers around the world. It took no knowledge of Unix or computer architecture to use: in a gopher system, you typed or clicked on a number to select the menu item you wanted.
Gopher’s usability was enhanced much more when the University of Nevada at Reno developed the VERONICA searchable index of gopher menus. It was purported to be an acronym for Very Easy Rodent-Oriented Netwide Index to Computerized Archives. A spider crawled gopher menus around the world, collecting links for the index. It was so popular that it was very hard to connect to, even though a number of other VERONICA sites were developed to ease the load. Similar indexing software for single sites, called JUGHEAD (Jonzy's Universal Gopher Hierarchy Excavation And Display), was developed as well.
Peter Deutsch, a developer of Archie, always insisted that Archie was short for Archiver, and had nothing to do with the comic strip. He was disgusted when VERONICA and JUGHEAD appeared.
In 1989 another significant event took place in making the nets easier to use. Tim Berners-Lee and others at the European Laboratory for Particle Physics, more popularly known as CERN, proposed a new protocol for information distribution. This protocol, which became the World Wide Web in 1991, was based on hypertext, a system of embedding links in text that lead to other text, which you use every time you select a text link while reading these pages. Although started before gopher, it was slower to develop.
The development in 1993 of the graphical browser Mosaic by Marc Andreessen and his team at the National Center for Supercomputing Applications (NCSA) gave the protocol its big boost. Andreessen later moved on to become the brains behind Netscape Corp., which produced the most successful graphical browser and server until Microsoft declared war and developed its Microsoft Internet Explorer.
Soon after the graphical browser Mosaic was introduced, the Library of Congress made available some wonderful graphics of the colorful illustrated Vatican Scrolls. With the slow connections of those days, it would take 20 minutes for a single page to load. We would start the download, go on coffee break, and return and marvel at the picture that had filled our screen.
Since the Internet was initially funded by the government, it was originally limited to research, education, and government uses. Commercial uses were prohibited unless they directly served the goals of research and education. This policy continued until the early 1990s, when independent commercial networks began to grow. It then became possible to route traffic across the country from one commercial site to another without passing through the government-funded NSFNet Internet backbone.
Delphi was the first national commercial online service to offer Internet access to its subscribers. It opened up an email connection in July 1992 and full Internet service in November 1992. All pretenses of limitations on commercial use disappeared in May 1995 when the National Science Foundation ended its sponsorship of the Internet backbone, and all traffic relied on commercial networks. AOL, Prodigy, and CompuServe came online. Since commercial usage was so widespread by this time and educational institutions had been paying their own way for some time, the loss of NSF funding had no appreciable effect on costs.
The early days of the web were a confused period, as many developers tried to put their personal stamp on the ways the web should develop. The web was threatened with becoming a mass of unrelated protocols that would require different software for different applications. The visionary Michael Dertouzos of MIT's Laboratory for Computer Science persuaded Tim Berners-Lee and others to form the World Wide Web Consortium in 1994 to promote and develop standards for the Web. Proprietary plug-ins still abound for the web, but the Consortium has ensured that there are common standards present in every browser.
Today, NSF funding has moved beyond supporting the backbone and higher educational institutions to building out K-12 and local public library access on the one hand, and research on massive, high-volume connections on the other.
Microsoft’s full-scale entry into the browser, server, and Internet Service Provider market completed the major shift to a commercially based Internet. The release of Windows 98 in June 1998, with the Microsoft browser well integrated into the desktop, showed Bill Gates' determination to capitalize on the enormous growth of the Internet.
During this period of enormous growth, businesses entering the Internet arena scrambled to find economic models that worked. Free services supported by advertising shifted some of the direct costs away from the consumer, at least temporarily. Services such as Delphi offered free web pages, chat rooms, and message boards for community building. Online sales grew rapidly for products such as books, music CDs, and computers, but profit margins were slim when price comparisons were so easy, and public trust in online security was still shaky. Business models that worked well were portal sites, which tried to provide everything for everybody, and live auctions. AOL's acquisition of Time Warner was the largest merger in history when it took place and showed the enormous growth of Internet business! The stock market had a rocky ride, swooping up and down as the new technology companies, the dot-coms, encountered good news and bad. The decline in advertising income spelled doom for many dot-coms, and a major shakeout followed as the survivors searched for better business models.
A current trend with major implications for the future is the growth of high-speed connections. 56K modems and the providers who supported them spread widely for a while, but they are the low end now. 56K is not fast enough to carry multimedia such as sound and video except at the lowest quality. Newer technologies many times faster, such as 5G cellular for phones and Wi-Fi 6, are predominant now.
Wireless has grown rapidly in the past few years, and travelers search for the Wi-Fi “hot spots” where they can connect while they are away from the home or office. Many airports, coffee bars, hotels, and motels now routinely provide these services, some for a fee and some for free.
Another trend that is rapidly affecting web designers is the growth of smaller devices that connect to the Internet. Tablets, smartphones, e-readers, smart speakers, game machines, wristwatches, GPS devices, thermostats, and even light bulbs can now tap into the web on the go, and many web pages are not designed to work at that scale. Responsive web design, which adapts pages to devices of all sizes, is important now.
The Internet of Things is adding devices, too. Most modern televisions are now connected, as are the streaming devices that feed them. Add to this refrigerators, door locks, automobiles, garage doors, personal robots, exercise machines, ovens, cameras, taxi replacements, drones, virtual reality headsets, and more items and services every day.
As the Internet has become ubiquitous, faster, and increasingly accessible to non-technical communities, social networking and collaborative services have grown rapidly, enabling people to communicate and share interests in many more ways. Sites like Facebook/Meta, Twitter, LinkedIn, YouTube, Flickr, Second Life, Instagram, blogs, wikis, and many more let people of all ages rapidly share their interests of the moment with others everywhere. It has provided a huge boost to the genealogy industry. Protecting privacy and security is a major challenge in this environment, and scams and criminal fraud abound. Two-factor authentication is encouraged for all sensitive information, and newer public/private-key and biometric-based technologies called passkeys are on the way to make us less dependent on passwords.
The latest trend is the rapidly proliferating use of Artificial Intelligence, and it is being used in many ways. It has been around for over 50 years, but it is finally showing progress in becoming productive. It can collect and synthesize related information mined from the web and other sources to produce constructs that serve a goal, such as a research report, the minutes of a meeting, new graphics, games, or code for software. Used well, it can be very effective in simplifying repetitive tasks, but there are dangers in over-relying on it. The web is full of misinformation, and AI will often mine incorrect information as well as correct information in producing a report on a topic. It demands critical analysis of information and sources before it can be used well. Can AI be trained to do the critical evaluation part, too? We will see as time goes on.
As Heraclitus said around 500 BC, “Nothing is permanent, but change!”
May you live in interesting times! (ostensibly an ancient Chinese curse)
For more information on Internet history, visit these sites:
- Hobbes’ Internet Timeline. ©1993-2018 by Robert H Zakon. Significant dates in the history of the Internet.
- A Brief History of the Internet from the Internet Society. Written by some of those who made it happen.
Do you have a comment or question about Internet history?
Write to me.