Google's artificial intelligence could prove to be a real revolution: its researchers have already made advances in the field, having developed an artificial neural network capable of something like imagination.
You read that right: the search giant's researchers have demonstrated a creative side to the image recognition process. After being "fed" photographs of the same object, this artificial intelligence learns to recognize it in noisy images where apparently nothing can be seen.
To achieve this, the Google team devised a new exercise: asking a trained neural network to find objects in photographs where no recognizable shape is visible. Given a photo containing nothing but noise, the system adjusts the pixels until it finds features that match the object. The network can have between 10 and 30 layers, and the higher the layer, the more specific its output. First the object is detected, then it is enhanced, and finally, in the last layer, the system decides what it is interpreting.
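The exercise described above amounts to what researchers call activation maximization: start from noise and nudge the pixels so a trained feature detector responds more strongly. A minimal sketch in NumPy follows; the single linear "neuron" is a hypothetical toy stand-in, not Google's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

feature = rng.normal(size=(8, 8))      # weights of one toy "neuron"
image = rng.normal(size=(8, 8)) * 0.1  # start from pure noise

def activation(img):
    # How strongly this neuron responds to the image.
    return float(np.sum(img * feature))

before = activation(image)

# For a linear activation, the gradient w.r.t. the image is simply
# `feature`, so gradient ascent repeatedly adds a scaled copy of the
# weights: the noise slowly turns into what the neuron "wants to see".
for _ in range(100):
    image += 0.1 * feature

after = activation(image)
print(after > before)  # True: the neuron now responds far more strongly
```

A real network stacks many nonlinear layers, so the gradient must be computed by backpropagation rather than read off directly, but the loop is the same idea.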
The image-recognition capabilities behind Google's artificial intelligence are increasingly surprising for their successes. Its confusions, however, also give the experts plenty to be astonished about…
How does Google's artificial intelligence work?
At its core it is a system that emulates the behavior of neurons: artificial neurons arranged in layers. It uses mathematical methods to classify the objects that appear in images, and as training the researchers have it analyze hundreds of photos so that it learns to do this well. When the system receives an image, each layer identifies different features, such as color and shape, so that the last layer can determine what the object is based on the processed data and the millions of similar photos the system has already seen.
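The layered flow just described, features extracted layer by layer until a final layer names the object, can be sketched with a tiny two-layer network in NumPy. The weights and the label set here are random placeholders for illustration; a real system learns its weights from millions of photos.

```python
import numpy as np

rng = np.random.default_rng(1)

# Layer 1 maps raw pixels to low-level features; layer 2 maps those
# features to a score per candidate object. Weights are random here,
# purely to show the shape of the computation.
W1 = rng.normal(size=(16, 64))   # pixels -> features
W2 = rng.normal(size=(3, 16))    # features -> class scores
classes = ["cat", "dog", "car"]  # hypothetical label set

def classify(image_pixels):
    h = np.maximum(0, W1 @ image_pixels)  # ReLU keeps detected features
    scores = W2 @ h                       # last layer votes per class
    return classes[int(np.argmax(scores))]

label = classify(rng.normal(size=64))  # one of "cat", "dog", "car"
```

With untrained weights the answer is meaningless; training adjusts W1 and W2 so the highest score lands on the correct object.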
"Neurons activate in response to very complex things, such as buildings or trees. One of the challenges of neural networks, then, is to understand exactly what happens at each layer. For example, the first layer may look for the edges or corners of the elements in the photo. The intermediate layers interpret those basic features to find shapes or components, such as a door or a leaf. The last layers assemble these into a complete interpretation; these neurons activate in response to complex objects such as entire buildings or trees," explained Google's engineers.
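The first-layer behavior the engineers describe, looking for edges, can be made concrete with a classic Sobel-style filter: it responds strongly where pixel intensity changes and not at all on flat regions. That early layers learn edge-like filters is a well-established finding, though this hand-written kernel is only an illustration, not a weight taken from Google's network.

```python
import numpy as np

# Sobel-style kernel for vertical edges: negative weights on the left,
# positive on the right, so uniform patches cancel out to zero.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def edge_response(patch3x3):
    # Activation of one "edge neuron" on a 3x3 image patch.
    return float(np.sum(patch3x3 * sobel_x))

flat = np.ones((3, 3))                         # uniform patch: no edge
edge = np.array([[0, 0, 1]] * 3, dtype=float)  # bright vertical stripe

print(edge_response(flat))  # 0.0 — silent on flat regions
print(edge_response(edge))  # 4.0 — fires strongly on the edge
```

An intermediate layer would then combine the outputs of many such detectors to find larger shapes, exactly the hierarchy the quote outlines.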
In short, what this Google network does is work out the distinguishing characteristics of each object starting from nothing, a level of abstraction that experts say may correspond to imagination.
These advances help explain how neural networks are able to perform complex classifications of images, and they also raise the question of whether such systems could one day become a tool for artists: according to Google's engineers, a new way to mix visual concepts, or even a means of shedding some light on the roots of the creative process in general.