Seeding the cloud: How Amazon is making billions storing your files

 

Last year, Amazon Web Services brought in more revenue than McDonald’s. The branch of the e-commerce company that deals in cloud computing has long been seen as a financial life-preserver, but for a while it seemed to be plateauing. Last year’s $25.7 billion figure – up 47% from 2017 – has to be a relief, especially at a time when all the tech giants are looking for ways to put our data out into the blue.

The cloud is also floating a reinvigorated Microsoft, which has been growing its Azure service at nearly 100% every quarter. Google is far behind both companies, but its cloud is the foundation for indispensable products like Gmail and Google Drive as well as the Chromebook, which challenges Apple’s pricier laptops. Apple is busy growing its own cloud revenue, as is Alibaba.

For consumers, the cloud powers the seamless data storage experience they have come to rely on. Gone are the days of organising hard-drive folders, maintaining external drives, and digging through awkwardly named files. For the industry, keeping the cloud afloat requires pushing the bounds of hardware, software, and physics, and consumes a staggering amount of energy.

So what does Amazon Web Services (AWS) do? It was a brilliantly simple idea: at its heart, AWS just rents computing power – processing capacity and server storage space – by the hour. In practice, it means companies can launch with tens of thousands of dollars in virtual computing costs rather than raising millions to buy their own hardware, or quickly add temporary capacity as needed. As a result, the service is reshaping the financial industry, among others, by lowering the barriers to entry for startups.

How the cloud was seeded

When did the cloud begin? Arguably, before the Internet. By 1955, the computer scientist John McCarthy, who gave us the term “artificial intelligence”, had theorised time-shared computers. In 1961, MIT researcher Fernando Corbató – the inventor of the computer password – demonstrated the first one. That became the heart of the legendary Project MAC (Mathematics and Computation), which ran for 10 years at a cost of $25 million in US government funding (about $184 million today) and a peak staff of 400.

By 1966-67, when the concept of the modern Internet was first described, Project MAC’s shared computer had 350 users storing an average of 35 files. Around the same time, commercial timesharing computers began spreading across the US, but long-distance telephone costs limited their utility. And then came the Internet. But its use as a robust cloud was kept in check by ever-smaller and more powerful computers, which put increasingly significant computing power right in the hands of the end user.

The birth of contemporary cloud computing arguably came at Oracle in the 1990s, when future NetSuite founder Evan Goldberg pitched the idea of “[a customer-relationship programme] but on the Internet” to CEO Larry Ellison, explaining that customers would need accounting and enterprise resource planning software as well. From that brainstorm, the cloud formed: NetSuite in 1998, and Salesforce – which founder Marc Benioff, who had also heard Goldberg’s pitch, called “the end of software” – in 1999. Google recently brought on an Oracle veteran, Thomas Kurian, to lead its lagging cloud efforts.

Something was in the air: simultaneous invention. In 1996, a tech entrepreneur named Sean O’Sullivan went to Compaq with a proposal for a software platform that would let ISPs host and bill for video conferencing, streaming video, and file storage. Nothing came of it, but in meetings over the idea someone coined the term “cloud computing” to differentiate it from the prevalent use of “cloud” as a telecom network. In 2006, when the practice was more familiar, Google CEO Eric Schmidt’s use of the term finally put it in the vernacular.

All together now

One of the most powerful strengths, and trickiest problems, in cloud computing is collaborative editing: who gets to do what, and when, in a document that has two, 10, or 100 people working on it? Programmers have been working on the issue since they started working on office software. One of the pioneers was also the first African-American to get a PhD in computer science: the late Clarence “Skip” Ellis. He was born in Chicago in 1943, and as a teen got a job as a night watchman at an insurance company that had a computer – which, at the time, was rare. Ellis read the manuals and learned the system, and was able to show the company how to reuse punch cards when they ran out.

He went on to get his doctorate at the University of Illinois in 1969, during the development of the ILLIAC IV, the most powerful computer in the world and the first true parallel supercomputer – 64 processors running in tandem to break up complex problems – and eventually landed at Xerox PARC, the Alphabet of its time.

There, Ellis co-developed OfficeTalk, “the first workflow system that provided a visual electronic desktop metaphor across an Ethernet network of end users’ personal computers.” Ellis remained interested in the problems of shared office software for the rest of his career. He co-originated the idea of operational transformation, a software approach to collaborative editing that creates and resolves different versions of the same document as different users work on it; today it is a critical piece of how Google Docs works.

The synthesis of Amazon’s business – and, in a way, the old and new economies – is its Snowmobile service: a literal moving service for data, consisting of a tractor-trailer truck that can carry up to 100 petabytes of storage. That’s half the size of CERN’s data archive.
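The core trick of operational transformation can be shown in a few lines. This is a toy sketch, not Google Docs’ actual implementation: it handles only concurrent insertions into a plain string, and the tie-breaking rule (lower site ID wins) is an assumption made for illustration.

```python
# Toy operational transformation for concurrent insertions into a string.
# Illustrative only; a real system also handles deletions, rich text,
# and server-mediated ordering of edits.

from dataclasses import dataclass

@dataclass
class Insert:
    pos: int    # index where the text goes
    text: str   # text to insert
    site: int   # ID of the user who made the edit (breaks ties)

def apply(doc: str, op: Insert) -> str:
    return doc[:op.pos] + op.text + doc[op.pos:]

def transform(op: Insert, against: Insert) -> Insert:
    """Shift `op` so it can be applied after `against` has already run."""
    if against.pos < op.pos or (against.pos == op.pos and against.site < op.site):
        return Insert(op.pos + len(against.text), op.text, op.site)
    return op

# Two users edit "cloud" simultaneously, from the same starting state:
a = Insert(0, "the ", site=1)   # user 1 prepends "the "
b = Insert(5, "!", site=2)      # user 2 appends "!"

# Each side applies its own edit, then the transformed remote edit;
# both converge on the same document.
doc_a = apply(apply("cloud", a), transform(b, a))
doc_b = apply(apply("cloud", b), transform(a, b))
assert doc_a == doc_b == "the cloud!"
```

The point of `transform` is exactly the “creates and resolves different versions” step: each user’s edit is re-indexed against the edits it didn’t know about, so every replica ends up with the same text.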

At that scale, it is actually faster to load the data onto the truck and drive it to AWS: moving 100 petabytes over the network would take 28 years on gigabit-per-second Google Fibre, or 120 days on the record-setting high-speed ESnet run by the US Department of Energy. Snowmobile, by contrast, takes just two or three weeks.
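Those figures are easy to check with back-of-the-envelope arithmetic. The sketch below assumes binary petabytes (2^50 bytes) and takes the ESnet record run as roughly 91 gigabits per second – both assumptions, not figures from the article.

```python
# Back-of-the-envelope check of the Snowmobile transfer-time claims.
# Assumptions: 1 PB = 2**50 bytes; ESnet's record rate ~91 Gbit/s.

PETABYTE_BITS = 2**50 * 8          # bits in one (binary) petabyte
payload = 100 * PETABYTE_BITS      # Snowmobile's 100 PB capacity, in bits

def transfer_days(bits: float, bits_per_second: float) -> float:
    """Days needed to move `bits` at a sustained `bits_per_second`."""
    return bits / bits_per_second / 86_400  # 86,400 seconds per day

fibre_years = transfer_days(payload, 1e9) / 365   # gigabit Google Fibre
esnet_days = transfer_days(payload, 91e9)         # assumed ESnet rate

print(f"Google Fibre: {fibre_years:.1f} years")   # ~28.6 years
print(f"ESnet:        {esnet_days:.0f} days")     # ~115 days
```

Both results land close to the article’s 28 years and 120 days, with the gap explained by rounding and the assumed ESnet rate.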

Where is the cloud?

It is everywhere, and… actually, a lot of it is in the Washington, DC suburbs. In a 2016 piece for the Atlantic, Ingrid Burrington explains why: “networks build atop networks.” The Internet grew out of military research, which settled in the suburbs because land was cheaper and the sites were less vulnerable to attack than in the city. Military contractors followed, and non-military telecom companies followed them. Now up to 70% of worldwide Internet traffic flows through the area.

Quartz
