It appears that many of the “deep tech” algorithms the world is excited about will run into physical barriers before they reach their true promise. Take Bitcoin, a cryptocurrency based on blockchain technology. It is built on a sophisticated algorithm that keeps growing in complexity, and only a small number of new Bitcoin are minted through a digital process called “mining”. For a simple description of Bitcoin and blockchain, you could refer to an earlier Mint column of mine.
Bitcoin’s assurance of validity comes from its “proof-of-work” algorithm, which is designed to keep increasing in mathematical complexity, and hence in the computing power needed to process it, as more Bitcoin are mined. Individual miners continually work to check that each Bitcoin transaction is valid and adheres to the cryptocurrency’s rules, earning small amounts of new Bitcoin for their efforts. Getting many miners to agree on the same history of transactions (and thereby validate it) is handled by those same miners racing one another to create a valid “block”.
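To make this concrete, here is a minimal sketch, in Python, of the kind of hash-guessing loop that proof-of-work mining involves. It is not Bitcoin’s actual code; the block contents and the difficulty level are simplified assumptions for illustration, but it shows why each extra unit of difficulty multiplies the computing work, and therefore the energy, required.

    import hashlib

    def mine(block_data: str, difficulty: int) -> tuple[int, str]:
        """Search for a nonce whose SHA-256 hash of (block_data + nonce)
        begins with `difficulty` hexadecimal zeros. Each extra zero makes
        the search roughly 16 times more work on average."""
        target_prefix = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith(target_prefix):
                return nonce, digest  # a "valid block" has been found
            nonce += 1

    # Toy example: even a low difficulty of 4 already needs tens of thousands of hashes.
    nonce, digest = mine("simplified transaction history", difficulty=4)
    print(nonce, digest)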
The machines that perform this work consume huge amounts of energy. According to Digiconomist.net, each transaction uses almost 544 kWh of electricity, enough to power the average US household for almost three weeks. The total energy consumption of the Bitcoin network alone is about 64 TWh, enough to meet all the energy needs of Switzerland. The website also tracks the carbon footprint and electronic waste left behind by Bitcoin, both of which are startlingly high. This exploitation of resources is unsustainable in the long run and contributes directly to global warming. At a more mundane level, the costs of mining Bitcoin can outstrip the rewards.
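A quick back-of-the-envelope check of that household comparison, assuming an average US household uses roughly 10,600 kWh of electricity a year (a rounded figure, not taken from the column itself):

    # Rough arithmetic behind the "almost three weeks" comparison.
    energy_per_transaction_kwh = 544
    household_kwh_per_year = 10_600                          # assumed approximate US average
    household_kwh_per_day = household_kwh_per_year / 365     # about 29 kWh a day
    days_powered = energy_per_transaction_kwh / household_kwh_per_day
    print(round(days_powered, 1), "days")                    # roughly 18-19 days, close to three weeks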
But cryptocurrencies are not the world’s only hogs of computing power. Many Artificial Intelligence (AI) “deep learning” neural-network algorithms also place crushing demands on the planet’s digital processing capacity.
A “neural network” attempts to mimic the functioning of the human brain and nervous system in AI learning models. There are many kinds. The two most widely used are recurrent neural networks, which develop a memory of what they have seen, and convolutional neural networks, which develop spatial reasoning. The first is used for tasks such as language translation, the second for image processing. Both consume enormous computing power, as do the other neural-network models used in “deep learning”.
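As a toy illustration of the difference, and not a real deep-learning model, the sketch below (in Python with NumPy, using made-up sizes and random weights) shows the two ideas side by side: a recurrent step carries a hidden “memory” state forward through a sequence, while a convolution reuses the same small filter across every position of an input such as an image or signal.

    import numpy as np

    # Recurrent step: the hidden state h "remembers" earlier inputs in the sequence.
    def rnn_step(x_t, h_prev, W_x, W_h):
        return np.tanh(W_x @ x_t + W_h @ h_prev)

    # 1-D convolution: the same small kernel is slid across the whole input.
    def conv1d(signal, kernel):
        k = len(kernel)
        return np.array([signal[i:i + k] @ kernel
                         for i in range(len(signal) - k + 1)])

    rng = np.random.default_rng(0)
    W_x, W_h = rng.normal(size=(4, 3)), rng.normal(size=(4, 4))
    h = np.zeros(4)
    for x_t in rng.normal(size=(5, 3)):       # a sequence of five inputs
        h = rnn_step(x_t, h, W_x, W_h)        # memory builds up over time

    print(h)                                  # final "memory" of the sequence
    print(conv1d(np.arange(10.0), np.array([1.0, -1.0])))  # local/spatial pattern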
Frenetic research is going into new chip architectures that can handle the ever-increasing complexity of AI models more efficiently. Today’s computers are “binary”: they depend on the two simple states of a transistor bit, which can be either on or off, and thus represent either a 0 or a 1 in binary notation. Newer chips try to achieve efficiency through other architectures, which will ostensibly help binary computers execute algorithms more efficiently…