Upcoming technology transitions one cannot afford to miss

Shirish Bahirat Ph.D.
12 min read · Apr 28, 2018

Emerging technologies will stretch the limits of your imagination. These transitions are coming whether you are ready or not, and they are a bigger deal than the internet or mobile. What is so significant about these technologies that we have not seen before? Why can't we afford to miss these transitions?

We have always lived in a transformative age: biological evolution, political metamorphoses, the industrial revolution, the internet. The speed of change is increasing exponentially, and at the same time there are many unsolved problems in the world — pollution, global warming, deforestation, overpopulation, water shortages, fossil fuels, waste disposal, space junk, extinction of species, ocean acidification, plastic pollution, ozone layer depletion, water pollution, health care, poverty, safety, government accountability, religious conflicts, wars, human rights, women's rights, aging populations, privacy, fake news, lack of education, mental stress, financial stability ….

Future transformative technologies will empower individuals to make a huge impact on the world, because these technologies will decentralize economics, information, resources, computing, communications, transportation, government, manufacturing … almost everything we can think of. Empowering individuals to make an extraordinary impact is the reason this is so big.

Adapting to these technology transformations is more important than product innovation itself, because product innovation trajectories will be completely different with these technologies. The organizations that can adapt and transform will disrupt markets and thrive.

Web 3.0 — distributed web applications based on peer-to-peer data exchanges

The internet is a system of interconnected computer networks that interchange data packets using standardized protocol suites such as TCP/IP. The web provides an architecture for executing applications using standardized data-exchange formats (for example HTTP, HTML and URLs).

Centralized cloud infrastructures minimize the capital expense of buying hardware and software. The speed, productivity and performance of cloud computing frameworks have advanced the global scaling of infrastructure as a service (IaaS), platform as a service (PaaS), software as a service (SaaS) and business process as a service (BPaaS) applications with improved reliability. However, hardware costs are rapidly coming down. A computer with a 1 GHz processor and 512 MB of SDRAM costs $5 and has even been given away free with the purchase of a magazine. It is still far from a cutting-edge computer, but soon enough it won't be costly to build a local web infrastructure for a small company or for personal use.

Web 3.0 permits the development of distributed web applications instead of running them on a central server. One of its key features is the semantic web, the web of data. The semantic web is an extension of today's web architecture, not its replacement. Web 3.0 provides a framework for expressing data so that it can be exchanged between distributed web applications. This standardized data format is called the Resource Description Framework (RDF), and each resource or data unit is identified using a uniform resource identifier (URI), essentially a namespace within the URL context. Web applications that standardize data formats and access protocols have massive potential to scale by building peer-to-peer networks that eliminate central infrastructure. Hence we can build peer-to-peer web browsers, social networks, storage applications, distributed operating systems, messaging communities and financial applications, eliminating middlemen and hosting our own applications locally.
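To make RDF concrete, here is a minimal sketch using the Python rdflib library (my choice of toolkit, not something prescribed by Web 3.0 standards); the example.org URIs and the friend-of-a-friend (FOAF) properties are purely illustrative. It shows data expressed as subject-predicate-object triples identified by URIs, the form in which distributed applications could exchange it.

```python
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import FOAF

# Build a tiny graph of RDF triples; every resource is identified by a URI.
g = Graph()
alice = URIRef("https://example.org/people/alice")
bob = URIRef("https://example.org/people/bob")

g.add((alice, FOAF.name, Literal("Alice")))   # subject, predicate, object
g.add((alice, FOAF.knows, bob))
g.add((bob, FOAF.name, Literal("Bob")))

# Serialize to Turtle, a standard text format any peer could consume.
print(g.serialize(format="turtle"))
```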

Value creation in the centralized framework is concentrated within large companies like Facebook, Uber, Google and Amazon. These companies create value by using your information and the data you have created. Web 3.0 distributes this value creation by giving you better control over your information. Perhaps in the near future, information creators will be able to charge these companies to use their personal data, or eliminate the centralized data and services provided by middlemen altogether.

Apart from replacing the implementations of today's web-based applications, Web 3.0 can empower new applications by connecting devices that generate and consume data. Data exchanges between sensor frameworks, internet-of-things devices and artificial intelligence applications can be performed seamlessly in this framework. Generated and consumed data can be shared using URIs over the web without any centralized point of control, which is why Web 3.0 is also called the machine-to-machine web. With trillions of devices and sensors communicating and exchanging data over this framework, many challenges arise for today's computing, communication and memory architectures that will need to be solved.

Quantum Computing — universal computing power

It's been said that the quantum world is not only stranger than you think but stranger than you can think. Everything we know about physics breaks down at subatomic levels, not by a small margin but enormously. Quantum physics was hard to digest even for the brightest minds of all time. Einstein was wrong about what we are going to discuss next, and what we know today may be proved wrong later. Quantum reality is hard to explain, but we can mathematically compute and measure the underlying facts that are exploited within quantum computing. This is a simplified explanation of quantum computing intended to offer approximate analogies for a highly complex and mathematical domain.

Imagine a coin tossed into the air: it is in the head OR tail state until we catch it, and it will land heads or tails with equal probability. The state of a subatomic particle, however, can be head AND tail at the same time. This phenomenon of a quantum particle existing in both states at once is called superposition. Classical computers use bits in the 0 or 1 state to perform computations, whereas a quantum computer uses qubits that can exist in the states |0> and |1> simultaneously. This notation is read as "ket 0" and "ket 1". Superposition is at the heart of quantum computing and is critical to understand before we jump into the world of quantum computers.
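As a rough illustration, the state of a single qubit can be written as a two-component complex vector, and an equal superposition assigns probability 0.5 to each measurement outcome. The numpy sketch below is only classical linear algebra, not how a physical quantum computer is programmed, but it captures the bookkeeping:

```python
import numpy as np

# Computational basis states |0> and |1> as column vectors.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# The Hadamard gate turns a basis state into an equal superposition.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ ket0                     # (|0> + |1>) / sqrt(2)
probabilities = np.abs(psi) ** 2   # Born rule: probability of each outcome
print(psi, probabilities)          # 50/50 chance of measuring 0 or 1
```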

To understand superposition, we need to review the famous double-slit experiment. When quantum entities such as photons or electrons pass through a double-slit opening, they exhibit wave-like behaviour and create diffraction interference patterns on the screen, similar to water flowing through multiple openings. However, there are many counter-intuitive twists to this experiment. Firing a single photon at a time also produces the same interference pattern. This is due to superposition, implying that the wave of a single photon passes through both slits. If we place a detector close to the slits to track which slit the photon or electron passes through, the interference pattern disappears and only two lines appear on the screen, one for each slit. The act of observation collapses the wave function, and the photon or electron acts like a particle. Turning the detector on and off makes the interference pattern disappear and reappear. Stranger still, the interference pattern disappears even if we delay the observation until the particle is past the slits and close to the screen. It is as if the wave function collapses even when we merely plan to observe it, as though nature knew in advance that the particle would be observed and made its choice of slit back in time.

It gets even stranger when we bring quantum entanglement into the picture. Two entangled subatomic particles exhibit state interdependence even after being placed a very large distance apart. Einstein called this "spooky action at a distance." He argued that quantum entanglement was like a predetermined state: if we open a box and find a left-hand glove, the glove in the other box must be the right-hand one, no matter how far apart we take them. Per Einstein, this was analogous to the state of one particle being accurately predicted by observing the state of its entangled partner. However, it was later proved through careful experiments that Einstein was wrong. It was shown that properties of entangled particles, such as spin, do not exist before we measure them but come into existence when we measure. Bell's theorem challenged all preconceived notions about the quantum world, showing that there are no local hidden variables in quantum mechanics that can explain entanglement. Entanglement has since been viewed as a superposition spanning two separate quantum entities, and we will review its role in the following paragraphs.

So what is going on, and how is this related to quantum computing?

Unlike classical computers, quantum computers work in the domain of wave functions. Computations are performed on waves in a state of superposition, and the act of measurement collapses the wave function, providing the results of the quantum gate operations. Quantum gates operate on entangled qubits, where one qubit is used to perform quantum operations and its entangled partner can be used to measure the outcome of those operations. Measured results can be stored in classical bits. A single wave function with a specific phase and frequency can produce entangled qubits, and both generated photons or electrons remain entangled in a coherent state. Quantum states are very sensitive to external disturbance, which produces decoherence and erroneous computations. That is why quantum computers operate at temperatures near absolute zero, where qubits can be relatively stable. State transitions between |0> and |1> can be achieved by applying precise external microwave energies. The state of a qubit is represented by a complex vector that can be visualized as a point on a sphere called the Bloch sphere, and the state vectors of all the qubits in a quantum system live in a Hilbert space, which can be defined over real or complex numbers. The classical AND gate has no direct counterpart in quantum computing because quantum gate operations must be reversible, but it is possible to create NOT, CNOT (controlled-NOT) and a number of other gates, and the Hadamard gate puts a qubit into a state of superposition. A quantum computer can execute Shor's and Grover's algorithms, providing solutions for integer factorization and for searching unstructured databases.
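The gate operations described above can also be simulated with a few lines of linear algebra. In this sketch, again only classical simulation, a Hadamard gate followed by a CNOT takes two qubits starting in |00> into an entangled Bell state whose measurement outcomes are perfectly correlated:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)

# Hadamard (single qubit) and CNOT (two qubits) as unitary matrices.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start in |00>, put the first qubit into superposition, then entangle.
state = np.kron(ket0, ket0)
state = np.kron(H, np.eye(2)) @ state
state = CNOT @ state

print(state)               # (|00> + |11>) / sqrt(2): a Bell state
print(np.abs(state) ** 2)  # measurement gives 00 or 11, each with p = 0.5
```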

The information processed by a quantum computer grows exponentially with the number of qubits. Quantum computers are not a replacement for classical computers, but they can solve optimization, encryption and search problems that classical computers cannot handle. Even in their infancy, quantum computers hold huge potential for enabling distributed quantum systems and a quantum internet. Quantum computing systems can potentially advance financial modelling, the invention of new materials, studies of quantum consciousness, cryptography, weather simulation, particle-physics analysis and many more complex unsolved problems.

Blockchain — distributed-ledger technology that enables transactions without a central authority

Bitcoin and blockchains are often discussed interchangeably, but they are not the same thing. Money is not just a currency; it is a form of trust. The higher the trust, the higher the value of the money. Anything that can be exchanged using money can be exchanged using trust. The blockchain is a distributed ledger that maintains trust without any central governing authority. Blockchains decentralize trust and value transfer, and they can become new foundations for economies and social systems. The value exchange enabled by blockchains is giving new meaning to the internet, with terms such as the Internet of Value (IoV) or Money over IP.

Blocks contain encrypted or unencrypted digital records. The integrity of the data records is maintained, or rather validated, using the signature of the digital record. The signature is produced from a cryptographic hash of the data along with a timestamp of when the record was made. If anyone tries to tamper with the records contained in a block, the original hash and the newly generated hash will not match, invalidating the records. The blockchain is created by producing a cryptographic hash that includes the signature of the previous block along with the data and timestamp of the current block. Thus the blockchain can check the legitimacy not only of the current block but also of all previous blocks. A blockchain is a distributed database, and all participating nodes maintain a valid copy of the blocks within the network. Any tampered record is immediately rejected by the system, because the cryptographic hash signatures of those blocks become inconsistent with the other copies.
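The hash-linking idea fits in a few lines of Python. This toy sketch (the function name and records are made up, and mining, consensus and networking are omitted) shows how each block's signature covers its data, its timestamp and the previous block's hash, so tampering with an earlier block breaks every block after it:

```python
import hashlib
import json
import time

def make_block(data, previous_hash):
    """Create a block whose hash covers its data, timestamp and the previous block's hash."""
    block = {
        "data": data,
        "timestamp": time.time(),
        "previous_hash": previous_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# A tiny two-block chain: altering the genesis record would change its hash
# and no longer match the link stored in the second block.
genesis = make_block("genesis record", previous_hash="0" * 64)
second = make_block("Alice pays Bob 5 units", previous_hash=genesis["hash"])
print(second["previous_hash"] == genesis["hash"])  # True while untampered
```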

Even though blockchains hold significant promise, a number of challenges remain to be fully solved. Some of them include the ability to upgrade cryptographic technologies, making blockchains parallel instead of linear, the computing power needed to mine or create new blocks, the number of transactions per second, distributed storage applications for blockchains, hardware and memory accelerators for high-speed transactions, node management and so on. Solving these challenges will require some effort, but none of them is impossible, especially when blockchains are paired with other upcoming technologies.

Blockchains have virtually unlimited applications: smart contracts that can be used to build virtual communities or organizations, global financial transactions, value exchange within a peer-to-peer network and so on. Efficiencies gained using blockchains can potentially improve the global economic outlook, especially by reducing the gap between developing and developed economies through more equal wealth creation and distribution.

5G — multi-device wireless connectivity

5G is a combination of multiple technologies that promises roughly a 1000x increase in device-connectivity capacity, 10 to 100x improvements in data rates even for mobile devices, reduced latency, energy and power savings per MB of transferred data, and 100x improvements in system capacity, using a heterogeneous set of integrated interfaces.

5G with millimetre waves extends broadcasting frequencies beyond the current sub-6 GHz range: it will deploy frequencies from 30 to 300 GHz, with wavelengths ranging from 10 mm down to 1 mm. So far only satellites and radar systems have used millimetre wavelengths. Millimetre waves open up space beyond the crowded real estate of today's frequency bands, but they have a major limitation: they cannot easily travel through obstacles such as buildings. To overcome this limitation, 5G networks will augment traditional cellular towers with another new technology called small cells.
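Where the "millimetre" label comes from is a one-line calculation, wavelength = speed of light / frequency:

```python
# Wavelength in millimetres for a few 5G-relevant frequencies.
c = 3.0e8  # speed of light, m/s
for freq_ghz in (6, 30, 300):
    wavelength_mm = c / (freq_ghz * 1e9) * 1000
    print(f"{freq_ghz} GHz -> {wavelength_mm:.1f} mm")  # 50 mm, 10 mm, 1 mm
```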

Small cells are mini base stations that require little power compared to large, powerful transmission antennas. They can be placed less than a mile apart, on top of buildings, to prevent signal drops. 5G will require much denser infrastructure than 4G, but because the transmitters and receivers are much smaller than traditional antennas, building out that infrastructure is far more practical. This is a radically different network architecture that allows much more targeted and efficient use of the available frequency spectrum, and it can take advantage of another technology — massive MIMO.

Massive MIMO (multiple-input, multiple-output) brings massive parallelism to base stations so they can handle higher traffic. Prevailing 4G base stations include dozens of antenna ports to handle all wireless traffic, whereas 5G base stations can support hundreds of ports, so many more antennas fit into a single array, serving far more users and wireless devices at once and increasing the capacity of mobile networks. Massive MIMO looks very promising for the future of 5G. However, installing many more antennas to handle significantly larger traffic also produces more wireless interference and noise. That is why 5G stations incorporate another technology called beamforming.

Beamforming is a signalling technique that forms focused delivery routes to targeted users or devices. Because the beams are focused, signal interference for nearby devices is reduced, and beamforming lets massive MIMO arrays make efficient use of their spectrum. For millimetre waves, beamforming addresses several problems: high-frequency transmissions are obstructed by objects in the line of sight and weaken over long distances. Beamforming solves this by focusing the signal into a concentrated beam that channels energy toward a specific user or device instead of broadcasting the signal in many directions at once.
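The core trick of beamforming, applying a phase shift to each antenna element so the signals add up in one chosen direction and cancel elsewhere, can be sketched with a simple uniform linear array model. The element count, spacing and steering angle below are arbitrary illustrative values, not 5G parameters:

```python
import numpy as np

n_elements = 8
spacing = 0.5      # element spacing in wavelengths
steer_deg = 30     # direction we want to focus the beam toward

angles = np.radians(np.linspace(-90, 90, 361))
steer = np.radians(steer_deg)

# Per-element phase weights so contributions add in phase at steer_deg.
weights = np.exp(-2j * np.pi * spacing * np.arange(n_elements) * np.sin(steer))

# Array factor: the combined response of all elements versus direction.
phase = 2j * np.pi * spacing * np.outer(np.arange(n_elements), np.sin(angles))
array_factor = np.abs(weights @ np.exp(phase)) / n_elements

print("gain toward 30 deg:", array_factor[np.argmin(np.abs(angles - steer))])  # ~1.0
print("gain toward  0 deg:", array_factor[np.argmin(np.abs(angles - 0))])      # ~0.0
```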

Full duplex is another key feature of 5G networks. In 4G, transceivers must take turns transmitting and receiving information over the same frequency, or must operate on different frequencies to transmit and receive at the same time. A 5G transceiver transmits and receives data at the same time, on the same frequency. This technology is known as full duplex, and it could double the capacity of wireless networks at their most fundamental physical layer. Imagine two people talking at the same time yet still understanding one another: their conversation would take only half the time, and the next discussion could start sooner.

Federated Learning — decentralizing artificial intelligence (AI) learning

Federated learning is a new form of distributed AI that was initially proposed by Google researchers and is gaining popularity in the deep learning research community. It implements a distributed learning mechanism that trains a central server model while the training data remains local on a large number of clients.

The traditional process of building an AI solution encompasses several building blocks and steps, including data acquisition, training, regularization, optimization and so on. Data sharing and utilization require trust between the different content creators and users, and often the data is confidential and cannot be shared across organizations. Learning algorithms need multiple rounds of regularization and optimization to tune the hyperparameters of the model. There is no historical accountability for a deep learning model without trusting a centralized authority; on the other hand, the more data available to train the model, the higher the quality of the results. With thousands of handheld devices generating data, training the model weights locally, encrypting them and transferring only the weights to the central model can increase efficiency and quality while maintaining data and model privacy.
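The sketch below shows the federated-averaging idea in miniature: each client runs a few training steps on its own private data, and only the resulting weights are combined on the server. The linear model, function names and synthetic data are illustrative assumptions, not Google's actual implementation:

```python
import numpy as np

def local_update(weights, features, labels, lr=0.1, epochs=5):
    """One client: a few gradient steps of linear regression on data that never leaves the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = features.T @ (features @ w - labels) / len(labels)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server: average client models, weighted by local dataset size (FedAvg-style)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical clients holding private samples of the same underlying relationship.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):   # each round: local training, then server-side averaging
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
print(global_w)       # approaches [2, -1] without pooling any raw data centrally
```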

Mobile and internet-of-things (IoT) applications suffer from the limitations of a centralized training model, in which the quality of a model depends on information processed across thousands or millions of devices. In such scenarios, each endpoint can contribute to training the AI model in its own autonomous way. The most interesting thing about federated learning is that many of the other emerging technologies can help enable distributed AI networks.

AI will continue to benefit from a number of new hardware-related developments, including graphene, silicon nanophotonics, carbon nanotubes, biocomputing and 3D printing. Further emerging technologies will be combinations of the fields above, for example augmented reality, virtual reality and immersive computing. We will see more of them as these technology transformations are combined in innovative ways to solve many new problems.

Shirish Bahirat Ph.D.

Engineer with a passion for learning and sharing knowledge; I have worked with world-leading organizations like Google, Intel, and Nvidia. Opinions are my own.