January 9, 2013 / merrillm

How did we get here?

Ten rules of newsgathering from Live Apartment Fire, a blog by an Atlanta TV reporter

Podcasting 101, from the Know How show on TWIT

Garageband basic editing tutorial

Where to keep your podcasts


Some basic history

The Internet is a network of networks, and for that reason, you could say the telegraph was its precursor. Tom Standage wrote a good book about the telegraph and titled it “The Victorian Internet.”

Our modern-day Internet depends on computers, and they go back further than most people think. Charles Babbage began designing a mechanical computer in England in 1822, but he never managed to build a working machine.

John Vincent Atanasoff is considered the inventor of the electronic digital computer, although he has never gotten much publicity for it. He and Clifford Berry built the machine at Iowa State University in the late 1930s. Here’s Iowa State’s site with a lot of info about it.

What really got computers going was the use of long-range artillery in World War II. As artillery became more efficient and could hit targets farther and farther away, it became harder to calculate the trajectories of the shells. The military employed people, mostly women, to do these complex calculations. They were known as computers. But as time went on, it became clear that a machine was needed to do the job faster.

Also in World War II, British codebreakers at Bletchley Park used machines to crack the Nazis’ codes. These machines, which some people wouldn’t call computers, grew out of the work of Alan Turing, who is considered the founder of computer science.

The first general-purpose electronic computer was ENIAC, built by Eckert and Mauchly at the University of Pennsylvania and unveiled in 1946. From this came more powerful machines, and by the 1950s, International Business Machines was selling computers that took up whole rooms but could do all kinds of amazing things.

In the early days of computing and the Internet, the military was very important. ENIAC was used for calculations in developing the hydrogen bomb. The Internet was developed in 1969 mostly as a way for the U.S. to maintain communication in case of a nuclear attack. Here’s a good history of the Internet, and here’s the famous Hobbes’ Internet Timeline.

The people who came up with the Internet (which was called ARPAnet at the time, because of the federal agency that funded it) wanted it to be a decentralized system. In a centralized system, messages have to be routed through one node. If the state of Georgia were part of a centralized network, for instance, all messages might have to go through Atlanta. So if Atlanta went down, no messages could go through.

On the Internet, there are no central nodes. It’s a decentralized network. Each node, each computer, is equal and can distribute messages. That way, if one node on the network got taken out by a nuclear bomb, the other nodes would work fine and messages would go through. If Atlanta went down, the message could go through Macon, or Gainesville, or Rome, or wherever.
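The routing idea above can be sketched in a few lines of code. This is only an illustration, using the Georgia-cities analogy from this post; the city links here are made up, and real Internet routing protocols are far more sophisticated than this simple path search.

```python
from collections import deque

# A toy network echoing the Georgia analogy: each city is a node,
# and a message can travel along any available chain of links.
# The links below are invented for illustration.
network = {
    "Athens":      ["Atlanta", "Gainesville"],
    "Atlanta":     ["Athens", "Macon", "Rome"],
    "Gainesville": ["Athens", "Rome"],
    "Macon":       ["Atlanta", "Savannah"],
    "Rome":        ["Atlanta", "Gainesville", "Savannah"],
    "Savannah":    ["Macon", "Rome"],
}

def find_route(net, start, goal, down=()):
    """Breadth-first search for any path from start to goal,
    skipping nodes that are down (knocked out)."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in net.get(path[-1], []):
            if nxt not in seen and nxt not in down:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route survives

# With the whole network up, the message goes through Atlanta:
print(find_route(network, "Athens", "Savannah"))
# → ['Athens', 'Atlanta', 'Macon', 'Savannah']

# Knock out Atlanta, and the message simply routes around it:
print(find_route(network, "Athens", "Savannah", down={"Atlanta"}))
# → ['Athens', 'Gainesville', 'Rome', 'Savannah']
```

The point is exactly the one in the paragraph above: because no single node is special, losing one city just means the message takes a different path.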

In addition to wanting a way to communicate in a nuclear attack, computer scientists wanted to find ways for their computers to work together. So the first nodes on the Internet were universities: UCLA, University of Utah, Stanford, and the University of California at Santa Barbara.

Here’s what people thought computers would be like back in 1969:

The Internet worked so well that it became a huge hit — with computer scientists. And communicating was so much fun that email became the biggest use of the Internet by 1973, even though it wasn’t really what people thought the Internet would be used for. The Internet expanded to more and more universities, and in the meantime, other places were setting up their own computer networks, using some of the knowledge that led to the invention of the Internet, such as packet-switching.
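Packet-switching, mentioned above, is the idea that a message is chopped into small numbered pieces that travel across the network independently, possibly arriving out of order, and get reassembled at the other end. Here’s a toy sketch of the concept, not how any real protocol works:

```python
import random

def packetize(message, size=4):
    # Split a message into (sequence number, chunk) packets.
    return [(i, message[i:i + size])
            for i in range(0, len(message), size)]

def reassemble(packets):
    # The receiver sorts by sequence number and rejoins the chunks.
    return "".join(data for _, data in sorted(packets))

msg = "HELLO ARPANET"
packets = packetize(msg)
random.shuffle(packets)          # packets may arrive in any order
assert reassemble(packets) == msg
```

Because each packet carries its own sequence number, the network is free to send different pieces along different paths, which fits neatly with the decentralized design described earlier.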

All these networks started connecting, and the Internet began to be a network of networks.

In the meantime, personal computers were coming along. Computers had been getting smaller and faster, and by the mid-1970s, hobbyists were getting interested in building their own. The famous Homebrew Computer Club in California helped start the Silicon Valley computer industry that is there today. This is one of the ways in which the growth of the computer is similar to that of radio.

There’s a great history of computers called Triumph of the Nerds. I have it on VHS, but they haven’t released it on DVD yet. Of course, you can watch it on YouTube. Here’s the second part:

The first personal computer was the Alto, developed by Xerox in the early 1970s, but Xerox made a huge mistake and never put it on the market. It had a mouse, a graphical user interface and other features we now take for granted. Steve Jobs saw the Alto on a visit to Xerox, went back to Apple, and got his engineers working on the Macintosh, which debuted in 1984.

The Altair was the first commercially available PC. Early video games like Pong also got more people interested in computers. People didn’t have computers at home in large numbers until the early 1980s, when commercial computer networks like CompuServe, Prodigy and America Online came along and computers began to have modems to connect to them. You had to pay a monthly fee to get into these networks, and if your town didn’t have a local number to connect, you paid long-distance fees as well. In the early days, they were also not connected to the Internet, which throughout the 1980s was still the province of computer scientists and academics.

Computer hobbyists sometimes started local networks of their own, called bulletin board systems. These were primitive message systems. When the World Wide Web came along in the 1990s, most of them began to die off.

The Apple Macintosh was significant because it let ordinary people use a graphical user interface, as I mentioned above. Here’s the ad that introduced the Mac during the 1984 Super Bowl:

Microsoft’s text-based DOS operating system soon gave way to Microsoft Windows, which competed with Apple’s interface, and computers became more and more powerful, able to display graphics and color.

Meanwhile, the Internet was evolving, too. As more universities joined it, more professors and students began to see that it was a useful tool. More computers were becoming connected through various networks, too, and it made sense to open up the old ARPANET to everyone.

In 1991, then, the government opened the Internet to the general public, and more and more people began to find uses for it. Tim Berners-Lee, who worked at CERN, a physics laboratory in Switzerland, created HTML (Hypertext Markup Language) to display linked documents on the Internet. His creation became the World Wide Web. Here’s a good site about how we got from ARPANET to WWW.
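Part of why the Web spread so fast is that HTML is remarkably simple: plain text with tags marking up headings, paragraphs and links. Here is a minimal page of the sort an early browser could display; the title and wording are made up for illustration, though the link really does point at info.cern.ch, the address of the first web server.

```html
<html>
  <head>
    <title>My First Web Page</title>
  </head>
  <body>
    <h1>Hello, Web</h1>
    <p>This is a <b>hypertext</b> document.
       Follow <a href="http://info.cern.ch/">this link</a>
       to visit the site of the first web server.</p>
  </body>
</html>
```

The `<a href="...">` tag is the hypertext part: it turns any stretch of text into a link to another document anywhere on the Internet.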

Mosaic, created at the University of Illinois, was the first browser to become widely popular. Because it was free, it helped people get online and see the Web, and its lead developer went on to create Netscape. Here’s a site that tells how the WWW works.

The Web was incredibly popular. By 1995, Netscape had more than 50 million users and the commercial online services like AOL were going on the Internet, too. The Internet and Web boom had arrived. Many people worried about the Web becoming commercialized, but it was too late. The Web exploded.

But with the growth of apps, smartphones and devices like the iPad, will the web become less important? That’s what Wired editor Chris Anderson says in an issue from a few years ago. But people have been making predictions about what will happen to the Internet and the web ever since they emerged, and few of them have come true.

The Web 2.0 concept:

