The road to general AI just got a lot easier!
Why this is interesting: Transfer learning (also called inductive transfer) is the technique of storing knowledge gained while solving one problem and applying it to a different but related problem. For example, knowledge gained while learning to recognize cars could apply when trying to recognize trucks. Or suppose you had trained a neural network to recognize cats using a million pictures: instead of training a new network to recognize horses by feeding it a million pictures of horses, you could take the cat-trained network and feed it only, say, a thousand different pictures of horses before it became a master at recognizing them. It is somewhat more technically elaborate than that, but you get the idea.
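The cats-to-horses idea can be sketched in a few lines of numpy. This is a toy illustration, not the method from the video or the DeepMind paper: a "pretrained" feature extractor (standing in for a network trained on a big source task) is kept frozen, and only a small new output layer is fitted on a handful of target-task examples. All variable names and the toy label rule are assumptions made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen feature extractor: stands in for weights learned on a large
# source task (the "million cat pictures"). Not updated during transfer.
W_frozen = rng.normal(size=(16, 4))  # maps 16-dim inputs to 4 features

def extract_features(x):
    """Frozen layer: tanh(x @ W_frozen). Reused as-is on the new task."""
    return np.tanh(x @ W_frozen)

# Tiny target-task dataset: far fewer examples than training from scratch
# would need (the "thousand horse pictures").
X_target = rng.normal(size=(20, 16))
F = extract_features(X_target)

# Toy label rule, chosen so the frozen features are actually useful
# for the target task (purely illustrative).
w_true = rng.normal(size=4)
y_target = (F @ w_true > 0).astype(float)

# Transfer step: fit ONLY the new output layer (linear least squares
# on the frozen features); the extractor itself is never retrained.
head, *_ = np.linalg.lstsq(F, y_target, rcond=None)

# Predict with frozen extractor + newly fitted head.
preds = (F @ head) > 0.5
accuracy = (preds == y_target.astype(bool)).mean()
```

Because the heavy lifting (feature extraction) is reused, only the small `head` vector has to be learned from the small target dataset — that is the whole trick, stripped of the deep-learning machinery.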
Why this is important: Well, the heading says most of it – this is certainly a step towards general AI. On a more practical, short-term scale, it means that an AI-enabled application trained on a fairly narrow task could be relatively quickly and straightforwardly re-trained for another, similar task, without having to train it from scratch on massive data (data that might not even be available in sufficient quantity and quality).
In this YouTube video, Yann LeCun explains how it works (jump to the 24-minute mark).
This research paper, produced by Google DeepMind, is for the REALLY hardcore.