(14th-April-2021)
Suppose we want to separate two categories of data by drawing a line between them in a scatterplot. How easy this is depends on how the data is represented: two classes arranged as concentric rings cannot be split by a straight line in Cartesian coordinates, yet a single threshold on the radius separates them perfectly in polar coordinates. One solution to this problem of finding a good representation is to use machine learning to discover not only the mapping from representation to output but also the representation itself.
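To make the point about representation concrete, here is a minimal NumPy sketch; the synthetic ring data, the radius feature, and the 1.5 threshold are illustrative assumptions rather than anything from the text. No straight line in (x, y) separates the two rings, but the hand-designed polar feature r makes a one-threshold classifier perfect:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two classes arranged as concentric rings: not linearly separable in
# Cartesian coordinates, but trivially separable by a radius threshold.
n = 200
theta = rng.uniform(0, 2 * np.pi, size=n)
radius = np.concatenate([rng.uniform(0.0, 1.0, n // 2),   # class 0: inner ring
                         rng.uniform(2.0, 3.0, n // 2)])  # class 1: outer ring
labels = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

# Cartesian representation of the same data.
x = radius * np.cos(theta)
y = radius * np.sin(theta)

# Hand-designed representation: convert (x, y) to the polar radius.
r = np.sqrt(x**2 + y**2)

# In the polar representation a single threshold (1.5, chosen by hand) suffices.
predictions = (r > 1.5).astype(float)
print("accuracy with polar representation:", (predictions == labels).mean())
```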
This approach is known as representation learning. Learned representations often result in much better performance than can be obtained with hand-designed representations.
They also allow AI systems to rapidly adapt to new tasks, with minimal human intervention. A representation learning algorithm can discover a good set of features for a simple task in minutes, or for a complex task in hours to months. Manually designing features for a complex task requires a great deal of human time and effort; it can take decades for an entire community of researchers. The quintessential example of a representation learning algorithm is the autoencoder.
An autoencoder is the combination of an encoder function that converts the input data into a different representation, and a decoder function that converts the new representation back into the original format. Autoencoders are trained to preserve as much information as possible when an input is run through the encoder and then the decoder, but are also trained to make the new representation have various nice properties. Different kinds of autoencoders aim to achieve different kinds of properties.
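As a concrete illustration, here is a minimal sketch of such an encoder/decoder pair in PyTorch; the library choice, layer sizes, loss, and random stand-in data are all my own assumptions, not something specified above. The model is trained to reconstruct its input, so the code layer is pushed to preserve as much information as possible:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        # Encoder: converts the input into a different (lower-dimensional) representation.
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        # Decoder: converts the new representation back into the original format.
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim))

    def forward(self, x):
        code = self.encoder(x)            # the new representation
        reconstruction = self.decoder(code)
        return reconstruction

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # reconstruction loss: preserve information through encode/decode

# One training step on a random batch (stand-in for real data).
x = torch.rand(64, 784)
loss = loss_fn(model(x), x)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In this sketch the only "nice property" imposed on the new representation is its lower dimensionality; other autoencoder variants impose different properties, such as sparsity of the code or robustness to corrupted inputs.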