Ego-motion in Self-Aware Deep Learning



We are now in the middle of 2018 and Deep Learning research is advancing at exponential rates. At the beginning of this year, I made 10 predictions on what to expect for the year. Making predictions and comparing them in hindsight is one way to determine whether one's expectations are overshooting reality. It turns out mine were undershooting reality in one respect: I had not expected to see this much new research in "self-awareness".

It's a consensus understanding that self-awareness in machines can lead to more autonomous machines and ultimately to machines with consciousness. Prior to this year, the very idea of creating automation with even a minute trace of self-awareness remained a completely abstract notion. Today, however, the idea has morphed into an active research area!

Late last year, I wrote that embodied learning was essential to general intelligence. At that time, there wasn't much published research exploring this idea with respect to deep learning. In February of this year, Stanford released a paper on arXiv, "Emergence of Structured Behaviors from Curiosity-Based Intrinsic Motivation", that began to explore the emergence of what is known as ego motion. Ego motion is the self-awareness of an entity that knows its location and direction within a space. The architecture of the Stanford paper is depicted as follows:


(Figure: architecture of the curiosity-driven agent, from Nick Haber, Damian Mrowca, Li Fei-Fei, Daniel L. K. Yamins)

We shall see that maintaining a 'world model', together with a Siamese network that compares the world model against the perceived environment, is a recurring design pattern in self-aware architectures. The amazing result of the Stanford paper is the progression of capabilities that were learned through a mechanism of curiosity. This chart:


(Chart from Nick Haber, Damian Mrowca, Li Fei-Fei, Daniel L. K. Yamins)

demonstrates the progression of capabilities from ego motion to object attention and finally object interaction. This is an impressive development that hasn't actually been widely disseminated. The key revelation here is that you can begin with ego motion and then learn more advanced cognitive capabilities such as object attention and interaction learning. Ego motion has been empirically verified to be a good base that leads to more advanced cognitive capability.
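To make the curiosity mechanism a bit more concrete, here is a minimal sketch of one common formulation: the intrinsic reward is the prediction error of a learned world model, so the agent is drawn toward interactions it cannot yet predict. This is my own simplification, not the Stanford architecture itself, and the dimensions and layer sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    """Predicts the embedding of the next observation from the current one plus an action."""
    def __init__(self, obs_dim=128, act_dim=8, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, obs_dim),
        )

    def forward(self, obs_emb, action):
        return self.net(torch.cat([obs_emb, action], dim=-1))

def intrinsic_reward(world_model, obs_emb, action, next_obs_emb):
    """Curiosity signal: how badly the world model predicted what actually happened."""
    with torch.no_grad():
        predicted = world_model(obs_emb, action)
    return ((predicted - next_obs_emb) ** 2).mean(dim=-1)  # per-sample 'surprise'
```

An agent that maximizes this kind of signal keeps seeking situations its world model has not yet mastered, which is the intuition behind the ego motion to object interaction progression.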

Not to be outdone, DeepMind submitted a paper to ICLR 2018 in late February titled "Learning Awareness Models", where a system is trained to grasp blocks and predict its interactions with them. Another related paper (also revealed in February) was Google and DeepMind's "Machine Theory of Mind". This paper explores the ability of an automation to predict the "mental states of others, including their desires, beliefs, and intentions."

The following month (i.e. March 2018), Judea Pearl published a paper titled "Theoretical Impediments to Machine Learning With Seven Sparks from the Causal Revolution". Pearl writes:

Our general conclusion is that human-level AI cannot emerge solely from model-blind learning machines; it requires the symbiotic collaboration of data and models.

Here he astutely recognizes the limitation of previous deep learning models: a deep learning system must be aware of the models that it is learning. Cognitive systems that have zero awareness of their own latent models of reality will always be extremely limited. The architectural element of maintaining a 'world model' is absolutely key to more intelligent systems.

Let's fast forward to this week. DeepMind published a fascinating paper in Science with the simple title "Neural scene representation and rendering". The groundbreaking capability of DeepMind's system is the ability to "imagine" 3D scenes from just a few snapshots of the original scene:


(Figure: scene 'imagination' examples, from S. M. Ali Eslami et al.)

Honestly, it's quite incomprehensible how it's able to create such a high-fidelity 3D world model. This will require a lot of digging, since it's just unclear where this quantum leap in development originates. What is the prior research work that enables this kind of capability?

The architecture described in the paper doesn't reveal much other than the presence of a Siamese structure and a generative model:


(Figure: architecture overview, from S. M. Ali Eslami et al.)

One way to gain some intuition as to how this might work is to understand what the two networks are actually doing.

The first network, as we have seen earlier, is a recurring pattern used in ego motion networks. In 2015, UC Berkeley published "Learning to See by Moving":


(Figure: architecture from "Learning to See by Moving", Pulkit Agrawal, Joao Carreira, Jitendra Malik)
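The pattern in that paper is a Siamese encoder: two views of a scene pass through the same weights, and a small head predicts the camera motion between them. Below is a rough sketch of that idea, not the paper's exact architecture (the original classifies discretized motion bins, and every layer size here is an assumption of mine):

```python
import torch
import torch.nn as nn

class EgoMotionSiamese(nn.Module):
    """Two views share one encoder; their features are fused to predict camera motion."""
    def __init__(self, feat_dim=256, motion_dim=6):  # motion_dim: e.g. 3 translation + 3 rotation
        super().__init__()
        self.encoder = nn.Sequential(                 # shared ("Siamese") convolutional trunk
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, feat_dim),
        )
        self.head = nn.Sequential(                    # fuses both views, predicts ego motion
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, motion_dim),
        )

    def forward(self, img_t, img_t1):
        f_t, f_t1 = self.encoder(img_t), self.encoder(img_t1)
        return self.head(torch.cat([f_t, f_t1], dim=-1))
```

The point of the pretext task is that the shared encoder picks up visual features that remain useful well beyond ego motion itself.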

In 2016, Stanford published "Generic 3D Representation via Pose Estimation and Matching", which forms the basis of this Siamese network architecture. The following architecture:


(Figure: architecture from http://cvgl.stanford.edu/papers/zamir_eccv16.pdf, Amir R. Zamir, Tilman Wekel, Pulkit Agrawal, Colin Weil, Jitendra Malik, Silvio Savarese)

was designed to learn a 3D representation given two images. The objective was that, by learning this internal representation, the network could generalize to other tasks such as scene layout, object pose estimation and identifying surface normals. Notice that on the right side there is a 'query' component that appears identical to the one in DeepMind's paper. DeepMind's paper differs in its use of an additional generative network.
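Putting the two pieces together, the overall pattern looks roughly like this: encode each (image, viewpoint) observation, aggregate the results into a single scene representation, and hand that representation plus a query viewpoint to a generator that renders the unseen view. The sketch below is my own heavily simplified reading of that pattern, not DeepMind's actual model (which uses a recurrent, latent-variable generator), and all dimensions are assumptions:

```python
import torch
import torch.nn as nn

class SceneEncoder(nn.Module):
    """Encodes (image, viewpoint) pairs and aggregates them into one scene representation."""
    def __init__(self, repr_dim=256, view_dim=7):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fuse = nn.Linear(64 + view_dim, repr_dim)

    def forward(self, images, viewpoints):
        # images: (num_views, 3, H, W); viewpoints: (num_views, view_dim)
        feats = self.cnn(images)
        per_view = self.fuse(torch.cat([feats, viewpoints], dim=-1))
        return per_view.sum(dim=0)  # order-invariant aggregation over observed views

class QueryRenderer(nn.Module):
    """Generates an image for an unseen query viewpoint, conditioned on the scene representation."""
    def __init__(self, repr_dim=256, view_dim=7, out_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(repr_dim + view_dim, 512), nn.ReLU(),
            nn.Linear(512, out_pixels), nn.Sigmoid(),
        )

    def forward(self, scene_repr, query_view):
        flat = self.net(torch.cat([scene_repr, query_view], dim=-1))
        return flat.view(3, 64, 64)
```

The design choice worth noting is that the aggregation is a plain sum, so the scene representation does not depend on how many observations were provided or in what order.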

The motivation for this generative model appears to come from an earlier DeepMind paper from 2016: "Towards Conceptual Compression" (note: DeepMind has a habit of favoring unimpressive titles that can be very easily overlooked in one's own research). One of the big research problems with generative models is how to create them while preserving the underlying semantics. Generative models are very good at rendering realistic images (see: The Uncanny Valley for Deep Learning); however, they also generate unrealistic ones. This reveals that the underlying representation is unable to capture the semantic relationships between the components that it generates. How is DeepMind's network able to capture semantics in an unsupervised manner?

The "Towards Conceptual Compression" (NIPS 2016) paper provides a bit of a hint of how this may be done:


(Figure: architecture from https://deepmind.com/research/publications/towards-conceptual-compression/, Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka, Daan Wierstra)

In the paper, they describe what they mean by "conceptual compression" as follows: "giving priority to higher levels of representation and generating the remainder". If you look at the diagram above, this architecture is building a kind of representation at each layer. It's described as follows:

Assume that the network has learned a hierarchy of progressively more abstract representations. Then, to get different levels of compression, we can store only the corresponding number of topmost layers and generate the rest. By solving unsupervised deep learning, the network would order information according to its importance and store it with that priority.
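A minimal way to picture "store the top, generate the rest" is a hierarchical latent model where the abstract top-level code is kept and the detail-level code is sampled from a prior conditioned on it. The real model is a recurrent, DRAW-style network with many latent layers; this two-level sketch of mine only illustrates the split, and every dimension is an assumption:

```python
import torch
import torch.nn as nn

class TwoLevelLatentDecoder(nn.Module):
    """Sketch of 'store the top, generate the rest': a lossy decode keeps only the
    abstract top-level latent and samples the detail-level latent from a learned prior."""
    def __init__(self, top_dim=32, detail_dim=64, out_pixels=28 * 28):
        super().__init__()
        self.detail_prior = nn.Linear(top_dim, 2 * detail_dim)   # predicts mean and log-variance
        self.decoder = nn.Sequential(
            nn.Linear(top_dim + detail_dim, 256), nn.ReLU(),
            nn.Linear(256, out_pixels), nn.Sigmoid(),
        )

    def forward(self, z_top, z_detail=None):
        if z_detail is None:                       # lossy mode: only the top code was stored
            mu, logvar = self.detail_prior(z_top).chunk(2, dim=-1)
            z_detail = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.decoder(torch.cat([z_top, z_detail], dim=-1))
```

The more latent levels you store before switching to sampling, the higher the fidelity of the reconstruction, which is exactly the ordering-by-importance the quote describes.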

What do they usually say about "standing on the shoulders of giants"? It turns out that two key components of this recent DeepMind development come from way back in 2016.

Here's the key, though: to get to 'self-awareness', you need a way to create latent models that capture the semantics of the underlying training set. Another approach to this is DeepMind's β-VAE, which is a way of disentangling representations. This was revealed in April of this year.
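For reference, the β-VAE idea fits in a few lines: it is the ordinary variational autoencoder objective with the KL term scaled up by a factor β > 1, which pressures each latent dimension toward an independent, interpretable factor. A minimal sketch (the β value here is just a commonly used setting, not anything prescribed by the paper):

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """beta-VAE objective: reconstruction error plus a beta-weighted KL divergence.

    x, x_recon : original and reconstructed inputs (values in [0, 1])
    mu, logvar : parameters of the approximate posterior q(z|x)
    """
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl  # beta = 1 recovers the standard VAE
```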

So what design patterns have we just learned here? (1) The use of curiosity for learning new capabilities, (2) the importance of explicit models that can be introspected, (3) a Siamese network for generating 3D models, and (4) a method for conceptual compression. There are other ingredients that still need to be experimentally identified, but we are rapidly getting there!

There are many prognosticators who are predicting an AI winter. These are likely researchers who either have little exposure to the field or don't have a good conceptual model to identify key milestones. It is extremely critical to have a good conceptual model of cognition to be able to wade through the thousands of papers that are published every year in Deep Learning. Just this year, 4,900 papers were submitted to NIPS.

To give you a visceral idea of what that means: if you read 10 papers per day, it would take you 16 months to read all 4,900 papers. To weed through all the noise, you must know what research to look for. To do this, you need a good conceptual model of human cognition.


Explore Deep Learning: Artificial Intuition: The Improbable Deep Learning Revolution



Exploit Deep Learning: The Deep Learning AI Playbook
