Digital Age/Moore's Law

Technology is changing rapidly. Just how rapidly is fascinating - in many areas of technology we're seeing exponential growth, leading to change that is occurring faster than we may be able to follow. Moore's Law points to the extent of this change, so in this module we're going to start with Moore's Law and expand beyond, looking at what this level of change may mean for the future. This will include looking at the notion of the technological singularity, and we'll do a quick examination of some of the drivers and enablers of this change.

Moore's Law
The Intel Corporation, today known for its work in microprocessor technology, was founded in 1968 by Robert Noyce and Gordon Moore. Among the company's many achievements, Intel developed the first commercial microprocessor in 1971, and went on to develop the "x86" line of microprocessors - including the Pentium CPUs found in many modern microcomputers.

In 1965, three years before founding Intel, Gordon Moore proposed what is now known as "Moore's Law", often expressed as: "The number of transistors on an integrated circuit will double every two years". (Strictly speaking, he originally said every year, but revised this to every two years in 1975.) It is an interesting statement - what it means, in effect, is that roughly every two years the power of computers will double, although "number of transistors" doesn't necessarily equate to "computational power".

Moore proved to be, on the whole, correct. Approximately every two years the complexity of integrated circuits has doubled, while their price has continued to fall. This is an impressive achievement - if, for example, cars had followed the same trajectory, we would have gone from a top speed of 16 kph with the Benz Patent-Motorwagen in 1886 to a top speed of 32,768 kph by 1908. Similar laws exist for other technologies:


 * Kryder's Law: Magnetic disk storage density doubles every year.
 * Nielsen's Law: The bandwidth available to home users will increase by 50% each year.
 * Hendy's Law: The number of pixels per dollar in a digital camera doubles every two years.

Through your own experiences you'll have encountered many of these - try comparing the hard drive you can buy today with what the same money would have bought twelve months ago, or look at the pixel count on an entry-level digital camera. The different speeds of development are also interesting: combining Hendy's Law with Kryder's Law, we can be confident that we can buy sufficient storage to meet our photographic needs for the next few years, but our ability to share those pictures online might be hindered by the slower rate at which network speeds are increasing.
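The different growth rates above can be made concrete with a little arithmetic. The sketch below uses the rates quoted in the text (which are simplifications of the real-world trends); the `grow` helper is just an illustrative function, not a standard library routine.

```python
# Rough comparison of the growth "laws" discussed above.
# Rates are taken from the text; real-world figures vary.

def grow(initial, factor, period_years, years):
    """Value after `years`, multiplying by `factor` every `period_years`."""
    return initial * factor ** (years / period_years)

# Moore's Law applied to the car analogy: top speed doubling every
# two years from 16 kph (Benz Patent-Motorwagen, 1886).
print(f"Car top speed by 1908: {grow(16, 2, 2, 1908 - 1886):,.0f} kph")  # 32,768 kph

# Ten years of growth under each law, starting from 1 unit:
print("Kryder  (x2 / 1 yr):  ", grow(1, 2, 1, 10))    # 1024x storage density
print("Moore   (x2 / 2 yr):  ", grow(1, 2, 2, 10))    # 32x transistor count
print("Nielsen (x1.5 / 1 yr):", grow(1, 1.5, 1, 10))  # ~58x bandwidth
```

Note how quickly the gap compounds: after a decade, storage density (Kryder) has grown roughly eighteen times further than bandwidth (Nielsen) - which is exactly why collecting photographs is easier than sharing them.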

Law of Accelerating Returns
Although there are other laws similar to Moore's Law, Moore's Law itself only applies to a limited range of technology, and even then there appears to be an upper limit on how long it can continue. Eventually we'll hit physical limits on how many transistors can fit onto a microprocessor; the point at which this will occur is unclear, although some current theories suggest it will be reached in less than ten years. Similarly, taking the long view, it has been argued that there are limits to how much information can be contained in the universe, and it has been suggested that Moore's Law will hit that limit in about 600 years (Krauss and Starkman, 2004).
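To get a feel for the scale implied by that 600-year figure, a quick back-of-the-envelope calculation (using only the doubling period quoted above) shows why such a limit is plausible:

```python
# Scale implied by another 600 years of Moore's Law growth,
# assuming a doubling every two years as quoted in the text.
doublings = 600 // 2                 # 300 doublings
growth_factor = 2 ** doublings
# Order of magnitude of the total growth factor:
magnitude = len(str(growth_factor)) - 1
print(f"{doublings} doublings -> growth factor of roughly 10^{magnitude}")
```

Three hundred doublings is a growth factor of roughly 10^90 - comparable to estimates of the number of particles in the observable universe, which is the flavour of argument Krauss and Starkman make.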

On the plus side, at least some of these limitations can be overcome if we include new technologies in the picture. Ray Kurzweil is one of those who have argued along these lines, suggesting that once the limits of a particular type of technology are reached, a paradigm shift will see a new type of technology replace the old, continuing the growth. If this is true, Kurzweil argues, by 2045 the rate of change in technology will bring about a technological singularity, where change is so fast it will "rupture" human history.

Drivers and enablers of change
Whether or not you accept that this exponential growth will continue, one thing is clear: things are changing fast. Technology, and the potential impact that it can have, is moving forward rapidly, and that rate of change is likely to continue to increase. Thus, if you find it uncomfortable facing the way the world is changing now, things aren't going to get better. On the other hand, the times will certainly be interesting.

But be that as it may, change doesn't always happen for its own sake. In this case, I'm going to present some possible drivers for change, and, for the sake of discussion, I'm separating them into two categories: technological enablers that make the change possible and social drivers that create reasons for the change. Significantly, these don't always go together - just because there's a desire for something doesn't mean that the technology exists to make it happen, and just because technology can produce something, we shouldn't assume that there is a desire to see that produced.

Processors/Performance
Obviously, more powerful computers can do more things – or so you might think. The Church–Turing thesis effectively states that anything that is computable at all can be computed by the simplest computer yet proposed – the Turing machine. In short, setting aside certain limitations (memory and speed), any Turing-complete computer can perform any computing task that can be completed.

Mind you, those limitations are important: there's not much point in a computer that takes so long to complete a task that we won't see the results in our lifetime (unless the computer concerned is Deep Thought, and the task is calculating the ultimate question to life, the universe and everything). Similarly, if the computer doesn't have enough memory, it might run out before completing the task: the Turing machine's solution was to use an infinitely long tape, but those can be hard to come by.
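The Turing machine model is simple enough to sketch in a few lines of code. The simulator and the binary-increment program below are illustrative examples of my own, not drawn from the text; a real Turing machine would have an unbounded tape, which is approximated here with a dictionary that grows on demand.

```python
# A toy single-tape Turing machine, illustrating the model discussed above.

def run(program, tape, state="start", blank="_", max_steps=10_000):
    """Run `program` ({(state, symbol): (new_state, write, move)}) on `tape`.

    The "infinite" tape is a dict that grows as the head wanders; the
    step cap simply stops runaway machines in this sketch.
    """
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = program[(state, symbol)]
        cells[head] = write
        head += {"L": -1, "R": 1}[move]
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example program: increment a binary number. Scan right to the end
# of the input, then carry back leftwards.
increment = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt",  "1", "L"),
    ("carry", "_"): ("halt",  "1", "L"),
}

print(run(increment, "1011"))  # 1100  (11 + 1 = 12 in binary)
```

Even this trivial machine illustrates the point about limitations: the logic is universal in principle, but the memory (the tape) and the number of steps we are willing to wait both have to be finite in practice.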

Networks
Networks give us the ability to share data, and this is core to our ability to communicate, whether through personal interaction, telephones or computers. Increases in network speed improve the type of data that can be shared and the speed at which we can share it, while networks also help address the question of storage: we don't need to store everything on the computer, but can instead store it online and download it as required. If you are interested in this side of things, you may wish to look into cloud storage.

Size
Size is also a key enabler: large devices are cool, but smaller devices are more portable, and very small devices open up a whole new range of possibilities. Going right down to nanotechnology, devices on a microscopic scale offer a wealth of possibilities that we still don't fully comprehend.

Cost
Finally, there isn't much point having exciting new technology if no-one can afford it. Cost reductions take technology from specific, high end tasks, and open up the use of that technology in far more mundane areas. I've often been taken by the way we "waste" technology - if we took an iPad back to 1950 and showed them what it could do, the computer scientists of the day would be astounded. But they would be even more astounded when we show them how all that computational power, of a type that they may not have even dreamed of, is being used to play Angry Birds.

Communication
From the phone, through email, and on to video chat – with lots in between. We're using more advanced technologies not just to communicate in different ways, but also to increase the level and frequency of our communications. A few years back, who would have expected Twitter to take off - a system whereby you share every small aspect of your life with your "followers" in almost real time? In terms of enablers, the key one is probably the growth in networks, but cost, size and performance all play a role.



Desire for collaboration
Collaboration needs tools that facilitate it, and technology is helping to develop those tools. These range from improved methods of communication, through to shared spaces such as Wikipedia and, to use a self-referential example, Wikiversity. The big enablers, of course, are networks.

Need to collect and access information
It’s handy having the ability to collect and access information when we need it. This ranges from what we see as traditional "database-type" information, such as who is enrolled in a course or what the average age of Twilight fans is, to material that we traditionally gained from other sources - from how to get to a friend's house to information about medical problems. For some of these, especially the database-style questions, performance and storage are the key enablers. For others, networks, size and cost come into play.

User interfaces
How we interact with tools to get information or to communicate is another key issue. Different methods of interacting with technology open up different possibilities as to how it can be used. Indeed, as will be shown when we talk about TAM (the Technology Acceptance Model), the ability to use technology – and, most importantly, the perceived ease with which the technology can be employed – is a major factor in whether or not the technology will be accepted.

Visualization
The ability to visualize data (whether it be an object or just information) requires hardware and software that can potentially work with massive volumes of data.

Entertainment
I tend to think that we downplay the value of entertainment in pushing technology. If you have broadband at home, why did you purchase it? Was it to improve your ability to work from home, or was it to download music, amuse yourself online and play games? Entertainment pushes for and requires faster networks, faster processors and better storage, just as a start.

Research questions

 * Do you believe that Moore's Law will continue to hold?
 * Moore once stated that "Moore's law is a violation of Murphy's law. Everything gets better and better". Is there any truth in this?
 * To what extent do you think technology is developed for entertainment, as opposed to "real work"? What other pursuits are major drivers for technological change?