Pervasive computing, also called ubiquitous computing, is the growing trend of embedding computational capability (generally in the form of microprocessors) into everyday objects to make them effectively communicate and perform useful tasks in a way that minimizes the end user’s need to interact with computers as computers. Pervasive computing devices are network-connected and constantly available.
Unlike desktop computing, pervasive computing can occur with any device, at any time, in any place, and in any data format across any network and can hand tasks from one computer to another as, for example, a user moves from his car to his office. Pervasive computing devices have evolved to include:
- wearable devices; and
- sensors (for example, on fleet management and pipeline components, lighting systems, and appliances).
Often considered the successor to mobile computing, ubiquitous computing generally involves wireless communication and networking technologies, mobile devices, embedded systems, wearable computers, radio frequency identification (RFID) tags, middleware, and software agents. Internet capabilities, voice recognition, and artificial intelligence (AI) are often also included.
How ubiquitous computing is used
Pervasive computing applications have been designed for consumer use and to help people do their jobs.
An example of pervasive computing is an Apple Watch that alerts the user to a phone call and allows the call to be completed through the watch. Another example is when a registered user of Audible, Amazon's audiobook service, starts a book using the Audible app on a smartphone on the train and continues listening to it through Amazon Echo at home.
An environment in which devices, present everywhere, are capable of some form of computing can be considered a ubiquitous computing environment. Industries across many sectors are spending money on research and development (R&D) for ubiquitous computing.
Because pervasive computing systems are capable of collecting, processing, and communicating data, they can adapt to the data's context and activity. In essence, this creates a network that can understand its surroundings and improve the human experience and quality of life.
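Adapting to context usually means combining several sensed inputs into one decision. As a small illustration, here is a hypothetical smart-lighting rule that picks a brightness level from an ambient-light sensor and an occupancy sensor; the thresholds and function name are assumptions for the sketch, not any product's logic.

```python
def adjust_lighting(ambient_lux: float, occupied: bool) -> int:
    """Return a brightness level (0-100) derived from sensed context."""
    if not occupied:
        return 0     # nobody present: lights off
    if ambient_lux > 500:
        return 20    # bright daylight: dim to a low level
    return 80        # dark and occupied: turn lights up


print(adjust_lighting(600.0, True))   # 20
print(adjust_lighting(100.0, True))   # 80
print(adjust_lighting(100.0, False))  # 0
```

The point of the example is that no user pressed a switch: the system's behavior follows from its surroundings, which is the "invisible" quality Weiser describes below.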
Ubiquitous computing was first pioneered at the Olivetti Research Laboratory in Cambridge, England, where the Active Badge, a “clip-on computer” the size of an employee ID card, was created, enabling the company to track the location of people in a building, as well as the objects to which they were attached.
Mark Weiser, largely considered the father of ubiquitous computing, and his colleagues at Xerox PARC soon thereafter began building early incarnations of ubiquitous computing devices in the form of “tabs,” “pads” and “boards.”
Weiser described the concept of ubiquitous computing as follows:
Inspired by the social scientists, philosophers, and anthropologists at PARC, we have been trying to take a radical look at what computing and networking ought to be like. We believe that people live through their practices and tacit knowledge so that the most powerful things are those that are effectively invisible in use. This is a challenge that affects all of computer science. Our preliminary approach: Activate the world. Provide hundreds of wireless computing devices per person per office of all scales (from 1″ displays to wall-sized). This has required new work in operating systems, user interfaces, networks, wireless, displays, and many other areas. We call our work ‘ubiquitous computing.’ This is different from PDAs [personal digital assistants], Dynabooks, or information at your fingertips. It is invisible, everywhere computing that does not live on a personal device of any sort, but is in the woodwork everywhere.
He later wrote:
For 30 years, most interface design, and most computer design, have been headed down the path of the ‘dramatic’ machine. Its highest ideal is to make a computer so exciting, so wonderful, so interesting, that we never want to be without it. A less-traveled path I call the ‘invisible’: its highest ideal is to make a computer so imbedded, so fitting, so natural, that we use it without even thinking about it. (I have also called this notion ‘ubiquitous computing,’ and have placed its origins in postmodernism.) I believe that, in the next 20 years, the second path will come to dominate. But this will not be easy; very little of our current system’s infrastructure will survive. We have been building versions of the infrastructure-to-come at PARC for the past four years in the form of inch-, foot- and yard-sized computers we call tabs, pads, and boards. Our prototypes have sometimes succeeded, but more often failed to be invisible. From what we have learned, we are now exploring some new directions for ubicomp, including the famous ‘dangling string’ display.
The term pervasive computing followed in the late 1990s, largely popularized by the creation of IBM’s pervasive computing division. Though synonymous today, Professor Friedemann Mattern of the Swiss Federal Institute of Technology in Zurich noted in a 2004 paper that:
Weiser saw the term ‘ubiquitous computing’ in a more academic and idealistic sense as an unobtrusive, human-centric technology vision that will not be realized for many years, yet [the] industry has coined the term ‘pervasive computing’ with a slightly different slant. Though this also relates to pervasive and omnipresent information processing, its primary goal is to use this information processing in the near future in the fields of electronic commerce and web-based business processes. In this pragmatic variation — where wireless communication plays an important role alongside various mobile devices such as smartphones and PDAs — ubiquitous computing is already gaining a foothold in practice.
Pervasive computing and the internet of things
The internet of things (IoT) has largely evolved out of pervasive computing. Though some argue there is little or no difference, IoT is likely more in line with pervasive computing rather than Weiser’s original view of ubiquitous computing.
Like pervasive computing, IoT-connected devices communicate and provide notifications about usage. The vision of pervasive computing is computing power widely dispersed throughout daily life in everyday objects. IoT is on its way to providing this vision and turning common objects into connected devices, yet, as of now, requires a great deal of configuration and human-computer interaction — something Weiser’s ubiquitous computing does not.
IoT can employ wireless sensor networks, which collect data from devices' individual sensors before relaying it to an IoT server. In some applications, such as measuring how much water is leaking from a city's water mains, it is useful to aggregate the data within the wireless sensor network first. In other cases, such as wearable devices like an Apple Watch, the data is better sent directly to a server on the internet, where the computing power is centralized.
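The aggregate-first pattern can be sketched as an edge gateway that buffers raw readings and forwards one compact summary instead of every individual message. This is a minimal illustration of the idea only; the class names and the water-main example values are assumptions, not a real IoT platform's API.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class Reading:
    sensor_id: str
    value: float  # e.g., water-flow rate in liters per minute


class Gateway:
    """Edge node that aggregates readings before relaying to an IoT server."""

    def __init__(self) -> None:
        self.buffer: list[Reading] = []

    def collect(self, reading: Reading) -> None:
        self.buffer.append(reading)

    def flush(self) -> dict:
        """Summarize buffered readings into one compact payload."""
        payload = {
            "count": len(self.buffer),
            "mean_value": mean(r.value for r in self.buffer),
        }
        self.buffer.clear()
        return payload


gw = Gateway()
for sid, v in [("main-1", 3.2), ("main-2", 4.8), ("main-3", 4.0)]:
    gw.collect(Reading(sid, v))
summary = gw.flush()
print(summary)  # one small payload instead of three raw messages
```

Aggregating at the edge trades some detail for far less network traffic and battery drain, which is why it suits dense, low-power sensor deployments like the water-main example, while a single high-value wearable can afford to stream directly to the cloud.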
Advantages of pervasive computing
As described above, pervasive computing requires less human interaction than a ubiquitous computing environment, where there may be more connected devices but where extracting and processing data requires more intervention.
Examples of pervasive computing include electronic toll systems on highways; tracking applications, such as Life360, which can report a user's location, driving speed, and remaining smartphone battery; the Apple Watch; Amazon Echo; smart traffic lights; and Fitbit.
This post was originally published by TechTarget