People learn from experience. The richer our experiences, the more we can learn. In the artificial intelligence (AI) discipline known as deep learning, the same can be said for machines powered by AI hardware and software. The experiences through which machines can learn are defined by the data they acquire, and the quantity and quality of data determine how much they can learn.
Deep learning is a branch of machine learning. Unlike traditional machine learning algorithms, many of which have a finite capacity to learn no matter how much data they acquire, deep learning systems can improve their performance with access to more data: the machine version of more experience. After machines have gained enough experience through deep learning, they can be put to work for specific tasks such as driving a car, detecting weeds in a field of crops, detecting diseases, inspecting machinery to identify faults, and so on.
Deep learning networks learn by discovering intricate structures in the data they experience. By building computational models that are composed of multiple processing layers, the networks can create multiple levels of abstraction to represent the data.
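To make the idea of stacked processing layers concrete, the following is a minimal sketch of a multilayer network; it uses PyTorch and illustrative layer sizes, neither of which is specified in the text above.

```python
import torch
import torch.nn as nn

# A minimal multilayer network: each Linear + ReLU stage is one
# "processing layer" that re-represents its input at a higher level
# of abstraction. All sizes here are purely illustrative.
model = nn.Sequential(
    nn.Linear(784, 256),  # raw input (e.g., a flattened 28x28 image) -> first representation
    nn.ReLU(),
    nn.Linear(256, 64),   # first representation -> more abstract features
    nn.ReLU(),
    nn.Linear(64, 10),    # abstract features -> scores for 10 classes
)

x = torch.randn(32, 784)  # a batch of 32 stand-in inputs
scores = model(x)         # one forward pass through all layers
print(scores.shape)       # torch.Size([32, 10])
```

Each successive layer transforms the output of the layer below it, which is what allows the network as a whole to build multiple levels of abstraction.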
For example, a deep learning model known as a convolutional neural network can be trained using large numbers (as in millions) of images, such as those containing cats. This type of neural network typically learns from the pixels contained in the images it acquires. It can classify groups of pixels that are representative of a cat’s features, with groups of features such as claws, ears, and eyes indicating the presence of a cat in an image.
Deep learning is fundamentally different from conventional machine learning. In this example, a domain expert would need to spend considerable time engineering a conventional machine learning system to detect the features that represent a cat. With deep learning, all that is needed is to supply the system with a very large number of cat images, and the system can autonomously learn the features that represent a cat.
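A convolutional classifier for the cat example above could be sketched as follows; the 64x64 image size, the two-class cat/not-cat output, the layer shapes, and the choice of PyTorch are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

# A minimal convolutional classifier for the cat example.
# Input: a batch of 64x64 RGB images; output: cat / not-cat scores.
# Early convolutional layers respond to low-level pixel patterns
# (edges, textures); later layers combine them into part-like
# features (ears, eyes, claws); the final layer maps those features
# to class scores. In practice the filters are learned from data,
# not hand-engineered.
cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two classes: cat, not-cat
)

images = torch.randn(8, 3, 64, 64)  # 8 random stand-ins for photographs
logits = cnn(images)
print(logits.shape)                 # torch.Size([8, 2])
```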
For many tasks, such as computer vision, speech recognition, natural language processing, machine translation, and robotics, the performance of deep learning systems far exceeds that of conventional machine learning systems. That is not to say that deep learning systems are easy to build compared to conventional machine learning systems. Although feature recognition is autonomous in deep learning, many hyperparameters (knobs) must be tuned before a deep learning model becomes effective.
We are living in a time of unprecedented opportunity, and deep learning technology can help us achieve new breakthroughs. Deep learning has been instrumental in the discovery of exoplanets and novel drugs and the detection of diseases and subatomic particles. It is fundamentally augmenting our understanding of biology, including genomics, proteomics, metabolomics, the immunome, and more.
We are also living in a time in which we are faced with unrelenting challenges. Climate change threatens food production and could one day lead to wars over limited resources. The challenge of environmental change will be exacerbated by an ever-increasing human population, which is expected to reach nine billion by 2050. The scope and scale of these challenges require a new level of intelligence made possible by deep learning.
During the Cambrian explosion some 540 million years ago, vision emerged as a competitive advantage in animals and soon became a principal driver of evolution. Combined with the evolution of biological neural networks to process visual information, vision provided animals with a map of their surroundings and heightened their awareness of the external world.
Today, the combination of cameras as artificial eyes and neural networks that can process the visual information captured by those eyes is leading to an explosion in data-driven AI applications. Just as vision played a crucial role in the evolution of life on earth, deep learning and neural networks will enhance the capabilities of robots. Increasingly, they will be able to understand their environment, make autonomous decisions, collaborate with us, and augment our own capabilities.
Robotics
Many of the recent developments in robotics have been driven by advances in AI and deep learning. For example, AI enables robots to sense and respond to their environment. This capability increases the range of functions they can perform, from navigating warehouse floors to sorting and handling objects that are uneven, fragile, or jumbled together. Something as simple as picking up a strawberry is an easy task for humans, but it has been remarkably difficult for robots to perform. As AI progresses, so will the capabilities of robots.
Developments in AI mean we can expect the robots of the future to increasingly be used as human assistants. They will not only understand and answer questions, as some do today, but also act on voice commands and gestures and even anticipate a worker’s next move. Today, collaborative robots already work alongside humans, with humans and robots each performing the separate tasks best suited to their strengths.
Agriculture
AI has the potential to revolutionize farming. Today, deep learning enables farmers to deploy equipment that can see and differentiate between crop plants and weeds. This capability allows weeding machines to selectively spray herbicides on weeds and leave other plants untouched. Farming machines that use deep learning–enabled computer vision can even treat individual plants in a field, selectively applying herbicides, fertilizers, fungicides, insecticides, and biologicals. In addition to reducing herbicide use and improving farm output, deep learning can be extended to other farming operations such as irrigation and harvesting.
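As a purely hypothetical sketch of the see-and-spray decision described above, a trained crop/weed classifier could gate the sprayer as follows; the model, image size, confidence threshold, and choice of PyTorch are all stand-ins for illustration.

```python
import torch
import torch.nn as nn

classes = ["crop", "weed"]
classifier = nn.Sequential(        # placeholder for a trained crop/weed classifier
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)

def should_spray(patch: torch.Tensor, threshold: float = 0.9) -> bool:
    # Spray only when the model is confident the patch shows a weed,
    # leaving crop plants untouched.
    probs = torch.softmax(classifier(patch.unsqueeze(0)), dim=1)[0]
    return probs[classes.index("weed")].item() > threshold

patch = torch.randn(3, 64, 64)     # stand-in for a camera image patch
print(should_spray(patch))
```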
Medical imaging and healthcare
Deep learning has been particularly effective in medical imaging, due to the availability of high-quality data and the ability of convolutional neural networks to classify images. For example, deep learning can be as effective as a dermatologist in classifying skin cancers, if not more so. Several vendors have already received FDA approval for deep learning algorithms for diagnostic purposes, including image analysis for oncology and retinal diseases. Deep learning is also making significant inroads into improving healthcare quality by predicting medical events from electronic health record data.
The future of deep learning
Today, there are various neural network architectures optimized for certain types of inputs and tasks. Convolutional neural networks are very good at classifying images. Another form of deep learning architecture uses recurrent neural networks to process sequential data. Both convolutional and recurrent neural network models perform what is known as supervised learning, which means they must be supplied with large amounts of labeled data to learn. In the future, more sophisticated types of AI will use unsupervised learning. A significant amount of research is being devoted to unsupervised and semi-supervised learning technology.
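The supervised-learning point can be made concrete with a small sketch: a recurrent network only improves because every training sequence arrives with a label. The shapes, the GRU layer, and the three-class task are illustrative assumptions, and PyTorch is our choice of framework.

```python
import torch
import torch.nn as nn

rnn = nn.GRU(input_size=8, hidden_size=16, batch_first=True)  # recurrent layer for sequential data
head = nn.Linear(16, 3)                                       # maps the final hidden state to 3 classes
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()), lr=1e-3)

sequences = torch.randn(4, 20, 8)      # 4 sequences of 20 steps with 8 features each
labels = torch.tensor([0, 2, 1, 1])    # the supervision signal: one label per sequence

optimizer.zero_grad()
_, hidden = rnn(sequences)             # hidden: (1, 4, 16), the last hidden state per sequence
logits = head(hidden.squeeze(0))       # (4, 3) class scores
loss = loss_fn(logits, labels)         # compare predictions against the labels
loss.backward()                        # without labels there is nothing to learn from
optimizer.step()
```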
Reinforcement learning is a different learning paradigm, in which an agent learns by trial and error in a simulated environment, guided solely by rewards and punishments. Deep learning extensions into this domain are referred to as deep reinforcement learning (DRL). There has been considerable progress in this field, as demonstrated by DRL programs beating humans at the ancient game of Go.
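Stripped to its essentials, the trial-and-error loop looks like the following tabular Q-learning sketch on a made-up five-state corridor; deep reinforcement learning replaces the value table with a neural network, but the reward-driven update is the same idea.

```python
import random

n_states, n_actions = 5, 2                 # actions: 0 = step left, 1 = step right
q = [[0.0, 0.0] for _ in range(n_states)]  # value estimates, one per (state, action)
alpha, gamma, epsilon = 0.1, 0.9, 0.2      # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != n_states - 1:                     # episode ends at the rewarding goal state
        if random.random() < epsilon:                # explore occasionally...
            action = random.randrange(n_actions)
        else:                                        # ...otherwise exploit current estimates
            action = max(range(n_actions), key=lambda a: q[state][a])
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Nudge the estimate toward reward + discounted future value.
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

print(q)  # right-moving actions end up with the highest values
```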
Designing neural network architectures to solve problems is incredibly hard, and the task is made even more complex by the many hyperparameters to tune and the many loss functions to choose from. A great deal of research activity is devoted to learning good neural network architectures autonomously. Learning to learn, also known as metalearning or AutoML, is making steady progress.
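As a toy illustration of the idea, an automated search can sample hyperparameter configurations, evaluate each one, and keep the best; the scoring function below is a stand-in for actually training a model, and real AutoML systems search over architectures as well.

```python
import random

def evaluate(learning_rate, hidden_units):
    # Stand-in for training a model and measuring validation accuracy.
    return 1.0 - abs(learning_rate - 0.01) * 10 - abs(hidden_units - 128) / 1000

best_score, best_config = float("-inf"), None
for trial in range(50):
    config = {
        "learning_rate": 10 ** random.uniform(-4, -1),      # sample on a log scale
        "hidden_units": random.choice([32, 64, 128, 256]),  # sample a layer width
    }
    score = evaluate(config["learning_rate"], config["hidden_units"])
    if score > best_score:
        best_score, best_config = score, config

print(best_config, best_score)
```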
Current artificial neural networks are based on a 1950s-era understanding of how human brains process information. Neuroscience has made considerable progress since then, and deep learning architectures have become sophisticated enough that they seem to exhibit structures analogous to grid cells, which biological brains use for navigation. Neuroscience and deep learning can benefit from cross-pollination of ideas, and it’s highly likely that these fields will begin to merge at some point.
We don’t use mechanical computers anymore, and at some point, we won’t be using digital computers, either. Rather, we will be using a new generation of quantum computers. There have been several breakthroughs in quantum computing in recent years, and learning algorithms can certainly benefit from the enormous amount of compute that quantum computers promise to provide. It might also be possible to use learning algorithms to understand the output of probabilistic quantum computers. Quantum machine learning is a very active branch of machine learning, and with the first International Conference in Quantum Machine Learning scheduled to take place in 2018, it’s off to a good start.