Tuesday, 27 December 2016

Deep learning as new electronics


It is hard to imagine modern life without electronics: radios, TVs, microwaves, mobile phones and many more gadgets. Dumb or smart, they are all based on the principles of semiconductors and electromagnetism. We now take these devices for granted without worrying about the underlying laws of physics. Most people do not care about the circuits that run in chips and carry out most functions of the devices.

For the past five years, a new breed of human-like functionality has emerged through advances in a new field called deep learning: self-driving cars, voice commands on mobile phones, translation across hundreds of language pairs and a new kind of art. In 2016, ten years after its revival, deep learning took over the Internet. People use deep learning-powered products in daily life without worrying about how the underlying neural nets work.

These two fields free us from many physical and psychological constraints:

  • Electronic devices give us freedom of communication over distance, new kinds of experience with augmented reality and much more.
  • Deep learning frees us from having to make tedious and error-prone decisions (e.g., driving a car), and brings freedom of information access (personalization), of the hands (e.g., voice commands), of finance (automated trading), of feature extraction (through representation learning), and much more.
It is worth noting that electronics and deep learning differ in principle.
  • Electronic devices are designed with great precision for specific functions in mind. Imprecision comes from the quantum uncertainty principle and thermal fluctuations.
  • Neural nets, on the other hand, are designed to learn to perform functions on their own, with data (and sometimes model) uncertainty built in.
However, it is also striking that they are quite similar in many ways.

Super-city of interconnected simple parts

Modern electronic devices are truly super-cities built out of just a few kinds of primitive building blocks. The same holds for deep neural nets:
  • Electronic primitives: resistor, capacitor, transistor, coil, diode, logic gate and switch.
  • Neural net primitives: the integrate-and-fire neuron, multiplicative gating, differentiable logic gates, switches and attention modules. Interestingly, one of the most recent ideas is the "Highway network", which borrows the notion that highway traffic is free of traffic lights.
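Two of these primitives can be sketched in a few lines. Below is a minimal NumPy illustration (hypothetical weights and sizes, not any particular library's API) of an integrate-and-fire style unit and a highway unit, whose gate decides how much of the input passes through unchanged:

```python
# Sketch of two neural-net primitives, assuming made-up weights.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unit(x, W, b):
    # Integrate (weighted sum), then "fire" through a nonlinearity.
    return np.tanh(W @ x + b)

def highway(x, W_h, b_h, W_t, b_t):
    # Transform gate t in (0, 1): t ~ 1 transforms the signal,
    # t ~ 0 carries x through untouched, like a lane with no traffic lights.
    h = np.tanh(W_h @ x + b_h)
    t = sigmoid(W_t @ x + b_t)
    return t * h + (1.0 - t) * x

rng = np.random.default_rng(0)
d = 4
x = rng.standard_normal(d)
W_h = rng.standard_normal((d, d))
W_t = rng.standard_normal((d, d))
b_h = np.zeros(d)
b_t = -2.0 * np.ones(d)  # negative gate bias: start by mostly carrying x through
y = highway(x, W_h, b_h, W_t, b_t)
print(y.shape)
```

The negative gate bias is the usual trick for making a deep stack of such layers behave like the identity at the start of training, so the signal is not blocked before learning begins.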
These primitives are connected in graphs:
  • Electronic devices work by moving electrons in the right order and quantity. The force that makes them move is a potential difference. A circuit diagram captures all the necessary information.
  • In neural nets, activation plays the role of electric current. The main difference is that the magnitude of the "current" in neural nets can be learnt. A computational graph is all that is needed for model execution.
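The contrast above can be made concrete with a toy computational graph (NumPy, made-up data and learning rate): activations flow forward like current, and the "magnitude of current" (the weights) is adjusted by gradient descent rather than fixed at design time:

```python
# Toy two-weight computational graph: x -> w1*x -> tanh -> w2*h -> y_hat.
import numpy as np

x, y = 0.5, 1.0          # one made-up training pair
w1, w2 = 0.3, 0.8        # learnable "current magnitudes"
lr = 0.1                 # assumed learning rate

for step in range(100):
    # Forward pass: push the signal through the graph.
    a = w1 * x
    h = np.tanh(a)
    y_hat = w2 * h
    loss = (y_hat - y) ** 2
    # Backward pass: propagate gradients along the same graph, reversed.
    d_y_hat = 2.0 * (y_hat - y)
    d_w2 = d_y_hat * h
    d_h = d_y_hat * w2
    d_a = d_h * (1.0 - np.tanh(a) ** 2)
    d_w1 = d_a * x
    # Adjust the "current magnitudes" instead of fixing them by design.
    w1 -= lr * d_w1
    w2 -= lr * d_w2

print(round(float(loss), 4))
```

In a real framework the backward pass is derived automatically from the graph, which is exactly why the computational graph is all that is needed for execution.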
Not just an analogy: a two-way relationship
  • Electronics → deep learning: at present, advances in electronics have given a huge boost to the efficiency of deep learning through GPUs, TPUs and other initiatives. It would be interesting to see whether we can also learn from electronics in designing deep nets. For example, will something analogous to integrated circuits emerge in deep architectures?
  • Deep learning → electronics: I predict that the reverse will soon hold true: deep learning will play a great role in improving the efficiency and functionality of electronic devices. Stay tuned.
