"Perfection has always been expensive." An area where this is certainly true, and rapidly becoming more so, is the design of electronic systems whose goal is to perform useful, stable computation. The continuous drive for faster designs that consume less power demands ever-shrinking semiconductor device sizes in order to achieve lower voltages and smaller currents. However, as transistor sizes reach atomistic scales, intrinsic noise and device mismatch become so large that designing reliable systems grows prohibitively difficult. Hence, this project tackles major challenges facing the semiconductor industry by exploring unconventional computing paradigms, as suggested in the More-than-Moore White Paper of the International Technology Roadmap for Semiconductors (ITRS).
The proposed research is based on the concept of Liquid State Machines (LSMs). LSMs are computational constructs that share similarities with neural networks in their structure and in how they process information. They are realised as a complex network of non-linear components capable of processing time-varying input data, which makes them suitable for tasks such as classification, stochastic inference, speech recognition and computer vision. A novel aspect of this research is the introduction of reconfiguration options into LSMs so that their internal dynamics can be optimised on-line. Realising them in CMOS technology will make them compatible with traditional processor architectures. Imagine creating a "probabilistic co-processor or ALU" that is inherently noise- and mismatch-tolerant (in fact, these variations will be exploited to perform computation) and can solve complex tasks with minimal power consumption.
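To make the LSM concept concrete, the following is a minimal software sketch of the underlying reservoir-computing idea: a fixed random recurrent network (the "liquid") projects a time-varying input into a high-dimensional state, and only a simple linear readout is trained. This is an illustrative discrete-time abstraction, not the proposed CMOS implementation; all sizes, constants, and the toy delayed-copy task are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 1 input channel, 50 reservoir units.
N_IN, N_RES = 1, 50
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))   # random input weights (untrained)
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))     # random recurrent weights (untrained)
# Scale the recurrent matrix so its spectral radius is below 1,
# keeping the reservoir dynamics stable ("echo state" property).
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u_seq):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(N_RES)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce the input delayed by 2 steps, using only a
# linear readout trained by least squares on the reservoir states.
u = rng.uniform(-1, 1, 300)
X = run_reservoir(u)
target = np.roll(u, 2)                 # target[t] = u[t-2]
X_tr, y_tr = X[50:], target[50:]       # discard the initial transient
W_out, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
pred = X_tr @ W_out
mse = float(np.mean((pred - y_tr) ** 2))
print("readout mean squared error:", mse)
```

The key point the sketch illustrates is that the recurrent network itself is never trained: its random, imperfect dynamics do the temporal processing, and only the cheap linear readout is fitted. This is what makes the paradigm attractive for noisy, mismatched hardware, where the device variations effectively become part of the fixed random reservoir.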