Friday, 23 February 2018

Artificial intelligence manages to create 3D scenarios just by looking at photos

If you fear a future in which machines will dominate humans and put them into three-dimensional simulations of reality like a video game, as in the movie “The Matrix,” this news may not be very comforting. An artificial intelligence has been able to give 3D shape – that is, depth and texture – to low-resolution 2D shots.

The system was created by researcher Qifeng Chen of Stanford University, USA, in partnership with Intel. He and his team built a neural network capable of synthesizing street photos. First, the AI was fed 5,000 low-resolution images of public streets in Germany so it could “learn” their patterns.

Then the scientists created a 3D model for each image. The AI's task was to match the two-dimensional image to the three-dimensional model. In other words, the challenge was to make the machine understand the shape, depth, distance, and texture of each object in an image.
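To make the idea concrete, here is a minimal, purely illustrative sketch in PyTorch of this kind of paired training: a small network is shown a scene layout and asked to produce the matching photo. The dataset, architecture, and loss below are assumptions for illustration only; they are not Chen's actual system, which is far larger and more sophisticated.

```python
# Illustrative sketch only (not the authors' code): train a tiny network to
# "dress" a per-pixel scene layout with realistic color and texture.
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

class PairedStreetScenes(Dataset):
    """Hypothetical dataset of (scene layout, real photo) pairs."""
    def __init__(self, n_pairs=5000, size=32):
        # Random tensors stand in for the ~5,000 German street images
        # and their scene annotations mentioned in the article.
        self.layouts = torch.randint(0, 20, (n_pairs, 1, size, size)).float()
        self.photos = torch.rand(n_pairs, 3, size, size)

    def __len__(self):
        return len(self.photos)

    def __getitem__(self, idx):
        return self.layouts[idx], self.photos[idx]

# A toy convolutional model; the real system uses a much larger network.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)

loader = DataLoader(PairedStreetScenes(), batch_size=16, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # simple pixel-wise loss, a stand-in for the real objective

for epoch in range(2):
    for layout, photo in loader:
        pred = model(layout)          # synthesize a photo from the layout
        loss = loss_fn(pred, photo)   # compare with the real street photo
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

After training on enough paired examples, a network of this kind can be given a new layout it has never seen and asked to render a plausible photo for it, which is the basic capability the article describes.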

It is as if the machine had the model of a human face in one hand and a photo of that face in the other: the goal was to make it “dress” the face with the photo as if it were a mask, understanding where the nose and the other features sit, along with the texture, volume, and depth of each.

In this case, however, the idea was to dress a street model with a photo, identifying the positions of traffic lights, cars, trees, pedestrians, and everything else. The result was that the AI was not only able to reproduce all these photos in 3D, but could also create scenes: sequences of images almost as realistic as the work of a human designer.

Here’s a sample video of what the machine was capable of:

The scientists' goal, however, is not to create the Matrix. The system is still not perfect and has difficulty building photorealistic scenes. But the technique can and should be used by game developers to build 3D scenarios for games, special effects in movies, animations, or virtual reality, as explained by Engadget.

The benefit of using a neural network for this is that a developer will no longer need to assign dozens or hundreds of designers to spend months building movie or game scenarios, but can leave it to a smart computer to do at least part of the work.

According to Chen, this technology is not yet ready to completely replace the teams of graphic artists and high-performance machines used by studios worldwide, but it can at least help sketch how the final product will look.
