[IMAGE: ROSS PATTON/WIRED]
Machine Learning is exactly what it sounds like: a technique that teaches computers to do what comes naturally to humans and animals, namely learning from experience.
ML algorithms use computational methods to learn information directly from data without relying on a predetermined equation as a model. The algorithms adaptively improve their performance as the number of samples available for learning increases.
Why Machine Learning Matters
With the rise of Big Data, Machine Learning has become a key technique for solving problems in areas such as:
– Natural language processing, for voice recognition applications
– Image processing and computer vision, for face recognition, motion detection, and object detection
– Computational finance, for credit scoring and algorithmic trading
– Energy production, for price and load forecasting
– Automotive, aerospace, and manufacturing, for predictive maintenance
How Machine Learning Works
Machine Learning uses two types of techniques: supervised learning, which trains a model on known input and output data so that it can predict future outputs, and unsupervised learning, which finds hidden patterns or intrinsic structures in input data.
Supervised Machine Learning builds a model that makes predictions based on evidence in the presence of uncertainty. A supervised learning algorithm takes a known set of input data and known responses to the data (output) and trains a model to generate reasonable predictions for the response to new data. Use supervised learning if you have known data for the output you are trying to predict. Supervised learning uses classification and regression techniques to develop predictive models.
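To make the idea concrete, here is a minimal sketch of supervised classification using a toy nearest-neighbour rule in plain Python. The animal measurements and labels are made-up illustrative data, not from any real dataset: the model is trained on known inputs (size measurements) paired with known outputs (labels), then asked to predict the label for new, unseen inputs.

```python
# Toy supervised learning: a 1-nearest-neighbour classifier.
# Training data pairs known inputs with known outputs ("responses").

def nearest_neighbour_predict(training_data, query):
    """Return the label of the training point closest to `query`."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest_point, closest_label = min(
        training_data, key=lambda item: distance(item[0], query)
    )
    return closest_label

# Known inputs (height cm, weight kg) with known outputs ("cat" / "dog").
# These numbers are invented purely for illustration.
training_data = [
    ((25, 4), "cat"), ((30, 5), "cat"),
    ((55, 20), "dog"), ((60, 25), "dog"),
]

print(nearest_neighbour_predict(training_data, (28, 5)))   # -> cat
print(nearest_neighbour_predict(training_data, (58, 22)))  # -> dog
```

Real systems use more sophisticated classifiers and far more data, but the workflow is the same: fit a model to labelled examples, then predict labels for new inputs.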
Unsupervised learning finds hidden patterns or intrinsic structures in data. It is used to draw inferences from datasets consisting of input data without labeled responses. Clustering is the most common unsupervised learning technique. It is used for exploratory data analysis to find hidden patterns or groupings in data. Applications for cluster analysis include gene sequence analysis, market research, and object recognition.
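The clustering idea can be sketched with a bare-bones k-means loop in plain Python. The 2-D points below are made-up illustrative data; note that, unlike the supervised case, no labels are provided and the algorithm discovers the two groupings on its own.

```python
# Toy unsupervised learning: k-means clustering with two clusters.

def k_means(points, centroids, iterations=10):
    """Alternate between assigning each point to its nearest centroid
    and moving each centroid to the mean of its assigned points."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            idx = min(
                range(len(centroids)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            clusters[idx].append(p)
        centroids = [
            tuple(sum(coord) / len(c) for coord in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Unlabelled data: two obvious groups, but the algorithm is never told that.
points = [(1, 1), (1, 2), (2, 1), (9, 9), (9, 10), (10, 9)]
centroids, clusters = k_means(points, centroids=[(0, 0), (5, 5)])
print(centroids)  # two centres, one near each group of points
```

In practice the starting centroids are chosen more carefully (k-means is sensitive to initialisation), but this captures the essence of finding hidden groupings without labelled responses.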
There are many different kinds of Machine Learning. But the one grabbing headlines at the moment is called “Deep Learning”. It uses artificial neural networks (simplified computer simulations of how biological neurons behave) to extract rules and patterns from sets of data. Show a neural network enough pictures of cats, for instance, or have it listen to enough German speech, and it will be able to tell you whether a picture it has never seen before is a cat, or whether a sound recording is in German. The general approach is not new. But the ever-increasing power of computers has allowed deep-learning machines to simulate billions of neurons. At the same time, the huge quantity of information available on the internet has provided the algorithms with an unprecedented quantity of data to chew on.

The results can be impressive. Facebook’s DeepFace algorithm, for instance, is about as good as a human being at recognising specific faces, even when they are poorly lit or seen from a strange angle. E-mail spam is much less of a problem than it used to be, because the vast quantities of it circulating online have allowed computers to learn what a spam e-mail looks like and divert it before it ever reaches your inbox.
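The learning-from-examples idea behind neural networks can be sketched with a single artificial neuron (a perceptron) in plain Python. The logical-OR training pairs below stand in for the cat pictures or spam e-mails of the text: the neuron is never told the rule, only shown input/output examples, and it adjusts its weights whenever it predicts wrongly. Real deep learning stacks millions of such units; this shows only the core mechanism.

```python
# Toy artificial neuron: a perceptron trained on labelled examples.

def train_perceptron(examples, epochs=20, learning_rate=0.1):
    """Nudge the weights towards correct answers whenever a prediction is wrong."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation > 0 else 0
            error = target - prediction  # zero when the prediction is right
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Learn the logical OR function purely from input/output pairs.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights, bias = train_perceptron(examples)
print([predict(weights, bias, x) for x, _ in examples])  # [0, 1, 1, 1]
```

A single neuron can only learn simple patterns; stacking layers of them, and training on vast datasets, is what turns this mechanism into the face and speech recognisers described above.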
Big firms like Google, Baidu and Microsoft are pouring resources into AI development, aiming to improve search results, build computers you can talk to, and more. A wave of startups wants to use the techniques for everything from looking for tumours in medical images to automating back-office work like the preparation of sales reports. This rapid progress has spawned prophets of doom, who worry that computers could become cleverer than their human masters and perhaps even displace them. Such worries are not entirely without foundation. Even now, scientists do not really understand how the brain works. But there is nothing supernatural about it, and that implies that building something similar inside a machine should be possible in principle. Some conceptual breakthrough, or the steady rise in computing power, might one day give rise to hyper-intelligent, self-aware computers. But for now, and for the foreseeable future, deep-learning machines will remain pattern-recognition engines. They are not going to take over the world. But they will shake up the world of work.