Dealing with Information Overload

Image source: commons.wikimedia.org

"...and he now took the fancy that he would like to have the telelectroscope and divert his mind with it. He had his wish. The connection was made with the international telephone-station, and day by day, and night by night, he called up one corner of the globe after another, and looked upon its life, and studied its strange sights, and spoke with its people, and realized that by grace of this marvelous instrument he was almost as free as the birds of the air, although a prisoner under lock and bars. He seldom spoke to me, and I never interrupted him when he was absorbed in his amusement. I sat in his parlor and read and smoked, and the nights were very quiet and reposefully sociable, and I found them pleasant. Now and then I would hear him say, "Give me Yedo"; next, "Give me Hong Kong"; next, "Give me Melbourne." And I smoked on, and read in comfort, while he wandered about the remote underworld, where the sun was shining in the sky, and the people were at their daily work. Sometimes the talk that came from those far regions through the microphone attachment interested me, and I listened."

The above text is an extract from a somewhat prescient story entitled "From the 'London Times' of 1904", written by Mark Twain and published in 1898. In it, Twain predicted a system eerily similar to the Internet and to the networking and chat sites we use today. The device used to connect with others was called a telelectroscope, and it was enabled through an international telephone connection.

Some 66 years later, in a 1964 edition of the BBC's "Horizon" programme, Arthur C. Clarke spoke of virtual conferencing and communications systems that would remove the need for physical presence to do one's job:

"I am thinking of the incredible breakthrough which has been made possible by developments in communications, particularly the transistor and - above all - the communication satellite. These things will make possible a world in which we can be in instant contact with each other, wherever we may be; where we can contact our friends everywhere on earth even if we do not know their actual physical location. It will be possible, in that age, perhaps only 50 years from now, for a man to conduct his business from Tahiti or Bali just as well as he could from London. In fact, if it proved worthwhile, almost any executive skill, any administrative skill, even many physical skills could be made independent of distance. I am perfectly serious when I suggest that one day we may have brain surgeons in Edinburgh operating in patients in New Zealand. When that time comes, the whole world would have shrunk to a point and the traditional role of a city as the meeting place for man would have ceased to make any sense. In fact, men will no longer commute, they will communicate. They won't have to travel distance any more; they'd only travel for pleasure."

Now, of course, we know just how real these predictions of online networking and communications have become. With 800 million active users at present, Facebook is on track to reach 1 billion users during 2012, and over 250 million photos are currently uploaded to its service daily. On YouTube, 4 billion videos are watched every 24 hours. Around 1 billion tweets are posted to Twitter each week, and roughly half a million new Twitter accounts are created every day.

And there are plenty of other social media sites, blogs, microblogs, wikis, social bookmarking services, curated news sites, and so on. We are floating in a social ocean, but there are so many islands to visit, and too much content is being created to keep up with it all (see also "What is the Social Semantic Web and Why Do We Need It?").

Information overload is a pressing problem, and many of the pilot projects from the European Union's FET (Future and Emerging Technologies) programme, due to pitch for full status in mid-2012, are tackling this issue, both directly and indirectly:

  • IT Future of Medicine aims to bring together the masses of medical information created around a patient, using analytical and clinical data from the patient to create an individualised model.
  • FuturICT is creating an observatory for studying the way our living planet works in a social dimension.
  • The Human Brain Project is building computer models to simulate the actual workings of the brain.
  • RoboCom aims to improve our quality of life, creating robots with perceptual and emotive capability: we can only hope that they will also help with incoming flows of information, telling us what is important to know right now.
  • Guardian Angels are zero-power sensing devices to assist us with health care, the environment, and more: again, bringing context to the information that is all around us.

There is also a graphene-related pilot. Here at Technology Voice, we have previously covered graphene, a material that could replace silicon in circuits, not only making computing devices run faster (by processing information more quickly) but also enabling new applications.

Digital technologies have been woven into our daily lives to such an extent that they have become another essential service, just like electricity or clean water. These services have costs, and the wastage of resources matters for the digital universe too, but there is another aspect to keep in mind: the sheer volume of digital data being created every day.

More than a year ago, IDC published their "Digital Universe Study" in which they looked at the amount of digital information created and replicated in the world. They published some interesting observations:

  • 75% of our digital world is a copy (25% is unique).
  • In 2010, the amount of digital data was 1.2 zettabytes (1.2 trillion gigabytes). This is equivalent to a stack of DVDs stretching to the moon and back.
  • In 2020, this amount is predicted to grow to 35 zettabytes (35 trillion gigabytes). That's a stack of DVDs reaching halfway to Mars!

Thankfully, those brainy researcher-types are also creating systems to help us to find the info we need: building new search and discovery tools; devising ways to add structure to unstructured content (see our article on Linked Data and the Semantic Web), including images, audio and video content; making new information management tools that incorporate notions of prioritisation, classification and automatic deletion; and implementing better methods for trust, privacy and accountability.

A new science - termed "data science" - has emerged, and companies like Facebook now have large teams of data scientists working on their "big data". Finding meaning somewhere in these masses of data involves research into big data analytics, data mining, leveraging networked knowledge, the visualisation of results, etc.

Computing capability is also worth thinking about in relation to this growing amount of data, in terms of both memory storage and processing speed.

Current consumer-oriented storage drives can hold about 2 to 3 terabytes of data, and storage capacity roughly increases by a factor of 1,000 every 15 years. In a 2010 Scientific American piece, Paul Reber, a researcher at Northwestern University in the USA, estimated the storage capacity of a human brain to be around 2,500 terabytes (other estimates vary this up or down by a factor of 1,000). If that is true, we would need about a thousand 2.5 terabyte consumer drives to store the contents of a brain today. It is therefore not unreasonable to imagine that we could store a brain's capacity on a single "memory" drive by 2025 (if we could actually copy the data off a brain somehow).
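
As a rough, back-of-the-envelope check on that storage arithmetic, here is a minimal Python sketch. The starting capacity, growth factor and brain estimate are simply the figures quoted above, not independent measurements:

  # Storage projection using the figures quoted in the paragraph above.
  drive_capacity_tb = 2.5        # a typical consumer drive today, in terabytes
  brain_capacity_tb = 2500.0     # Reber's rough estimate for a human brain
  growth_per_15_years = 1000.0   # capacity grows ~1,000x every 15 years

  # How many of today's drives would one brain need?
  drives_needed_today = brain_capacity_tb / drive_capacity_tb
  print(f"Drives needed today: {drives_needed_today:.0f}")          # ~1,000

  # Project a single drive's capacity 15 years ahead (roughly 2010 to 2025).
  years_ahead = 15
  projected_tb = drive_capacity_tb * growth_per_15_years ** (years_ahead / 15)
  print(f"Projected drive capacity in {years_ahead} years: {projected_tb:.0f} TB")  # ~2,500 TB, about one brain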

In terms of processing capabilities, estimates for the brain are that it can carry out anywhere from 10^16 flops (floating point operations per second, a measure of computer microprocessor speed) to 10^19 flops. Current supercomputers operate at about 2.5 x 10^15 flops. Using Moore's Law (which states that the number of transistors that can be placed on an integrated circuit doubles every two years), the theory is that we could have supercomputers capable of human brain speeds (10^19 flops) by 2025. Extending this to 2040, the figure grows to 5 x 10^22 flops (equivalent to the aggregate processing speed of 5,000 brains).
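
Again as a quick sanity check, using only the figures quoted in the paragraph above, a short sketch can show how many doublings separate today's supercomputers from the upper brain estimate, and confirm that 5 x 10^22 flops corresponds to about 5,000 brain-equivalents:

  import math

  # Figures quoted in the paragraph above, not independent estimates.
  supercomputer_flops = 2.5e15   # roughly a current supercomputer
  brain_flops_high = 1e19        # upper estimate for a human brain
  flops_2040 = 5e22              # projected figure for 2040

  # Number of doublings needed to get from today's machines to one brain.
  doublings = math.log2(brain_flops_high / supercomputer_flops)
  print(f"Doublings needed to reach one brain: {doublings:.1f}")     # ~12

  # How many brain-equivalents does the 2040 figure represent?
  brains_2040 = flops_2040 / brain_flops_high
  print(f"Brain-equivalents at 5 x 10^22 flops: {brains_2040:.0f}")  # ~5,000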

However, there are opposing schools of thought on whether computers will ever be able to emulate a human brain. Many argue that the brain is much more than just storage and processing: consciousness is required. The Guardian recently reviewed a book that addresses exactly this issue: Bryan Appleyard's "The Brain is Wider than the Sky."

The futurist and co-founder of Singularity University, Ray Kurzweil, has said that in 2040, by his estimation, we will be able to upload the human brain to a computer, capturing "a person's entire personality, memory, skills and history" (see the full Kurzweil interview from 2009 in the Independent). Why should this be a one-way transfer? Arthur C. Clarke also predicted, in that same Horizon programme, that we could upload information to our brains, learning new skills and languages while we rest.

Whatever your opinion on the above, let us look forward to a future where the overload of information on today's web will feel like a messy second-hand bookshop when compared to the orderly library of our personalised digital universe.



Thanks to Josephine for her help with this article.
