Google may be able to use Project Magenta to help computers identify the "emotion" of works of art

During Google I/O 2017, we had a brief interview with Douglas Eck, who leads Project Magenta, the effort to apply machine learning to reconstructing paintings and music, to understand what Google has tried through the project and what it hopes to achieve.


Project Magenta is an experimental project from the Google Brain team. Simply put, it lets a computer system use machine learning to analyze paintings and music created by humans, and then try to reconstruct the related elements. Similar applications include the previously launched Quick, Draw!, and the later AutoDraw, which works the other way around.

Earlier, Google also collaborated with the non-profit Gray Area Foundation for the Arts to turn images into computer-vision renderings via a neural network, constructing new works of art.

In a similar vein, Sony's Computer Science Laboratories (CSL) used an artificial intelligence system to compose two songs, "Daddy's Car" and "Mister Shadow", written in the styles of The Beatles and American jazz respectively, while IBM's supercomputer Watson has been used to help design fashionable dresses, and even to create new recipes from existing cooking methods and ingredient data.

The point of the project is to let the computer imitate how the human brain thinks. For example, when a person draws a cat's face, they first draw two pointed ears, then the whiskers and the other obvious features that make up a "cat".

This lets the computer system learn which features matter when drawing a cat, and what kind of image represents a "cat".

After a long period of training, if a user draws a cat with eight legs, the computer knows from its training that a "cat" does not have eight legs, so it corrects the final drawing into a cat with four legs.
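The correction idea can be illustrated with a toy sketch. This is not Magenta's actual model (which works on sequences of pen strokes with a recurrent neural network); the feature names and counts here are invented for illustration: a "learned" prototype of cat features is used to snap an implausible input drawing back to something cat-like.

```python
# Toy illustration only, not Magenta's real Sketch-RNN model.
# CAT_PROTOTYPE stands in for what the system would learn from many drawings.
CAT_PROTOTYPE = {"ears": 2, "legs": 4, "whiskers": 6}

def correct_drawing(drawing: dict) -> dict:
    """Snap each known feature count back to the learned prototype value."""
    return {feature: CAT_PROTOTYPE.get(feature, count)
            for feature, count in drawing.items()}

# A user draws a cat with eight legs; the trained system "knows" cats have four.
user_input = {"ears": 2, "legs": 8, "whiskers": 6}
print(correct_drawing(user_input))  # {'ears': 2, 'legs': 4, 'whiskers': 6}
```

The real system learns such regularities implicitly from millions of human sketches rather than from an explicit lookup table.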

Therefore, when applied to actual creation, the computer system learns to analyze the original painting and music content, and then reconstructs a new work after "understanding" the content, making artificial intelligence a new application tool for content creators.

Google's current uses of machine learning for content creation include comparing the color and line features of two images to generate a new image. For music, the system likewise compares two different sounds to create a new timbre and rhythm.
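As a rough sketch of the blending idea, the toy below linearly interpolates two sine tones sample by sample. Magenta's actual audio work (NSynth) interpolates in a learned latent space rather than mixing raw waveforms, and the sample rate and tone lengths here are arbitrary assumptions:

```python
import math

SAMPLE_RATE = 8000  # Hz, assumed for this toy example

def tone(freq: float, n: int = 64) -> list:
    """Generate n samples of a sine tone at the given frequency."""
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def blend(a: list, b: list, t: float = 0.5) -> list:
    """Linearly interpolate two sounds, sample by sample (a naive stand-in
    for interpolating in a learned latent space)."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

# "Compare" a low and a high tone and create something in between.
low, high = tone(220.0), tone(440.0)
hybrid = blend(low, high)
```

Interpolating in a learned representation, rather than in raw samples, is what lets the system produce a genuinely new timbre instead of two tones simply played at once.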

▲ Douglas Eck, who leads Project Magenta

However, Douglas Eck does not think paintings and music reconstructed by artificial intelligence should be called "art"; he prefers to call them "results", because they are merely produced by analyzing original works and reconstructing something similar.

Therefore, for Google, this application is more like providing new creative tools rather than trying to replace existing creators.

As for whether Project Magenta could be used to analyze works of art so that a computer system understands the emotions and thoughts a creator put into them, Google considers it a very interesting idea, but one still far beyond current technical capabilities.

Even so, it may be a direction worth exploring.