Abstract:
Artificial Intelligence (AI) aims to simulate human intelligence in machines so that they can think and perform cognitive tasks in a human-like manner [1]. Although present-day supercomputers can perform extraordinary calculations, they are slow and inefficient compared to the human brain [2,3]. For example, a supercomputer takes around 500 s to emulate 5 s of human brain activity while consuming power on the order of megawatts [4]. This is largely due to the conventional von Neumann computer architecture, in which the memory and processing units are physically separate and connected by limited interconnects known as buses [5,6]. During processing, data must shuttle back and forth between these units, which makes computation slow and inefficient. In addition, the limited transistor density of a processor chip constrains its computational ability. Although advancing lithography techniques strive to miniaturize transistors to the smallest possible dimensions [7] to achieve high on-chip density, computational performance remains unsatisfactory. As transistor scaling approaches its theoretical limit, packing density in a chip may no longer follow the famous Moore's law [8].