
Google Created AI That Just Needs A Few Snapshots To Make 3D Models Of Its Surroundings


Google’s new artificial intelligence algorithm can figure out what things look like from all angles, even ones it has never seen.

After viewing a scene from just a few different perspectives, the Generative Query Network was able to piece together an object’s appearance, including how it would look from angles the algorithm had never analyzed, according to research published today in Science. And it did so without any human supervision or hand-labeled training data. That could save a lot of time as engineers prepare increasingly advanced algorithms, but it could also extend the abilities of machine learning to give robots (military or otherwise) greater awareness of their surroundings.

The Google researchers intend for their new type of artificial intelligence system to take away one of the major time sucks of AI research: manually tagging and annotating the images and other media used to teach an algorithm what’s what. If the computer can figure all that out on its own, scientists would no longer need to spend so much time gathering and sorting data to feed into their algorithms.

According to the research, the AI system could create a full render of a 3D setting based on just five virtual snapshots. It learned about objects’ shape, size, and color independently of one another, then combined all of those findings into an accurate 3D model. Once the algorithm had that model, researchers could use it to generate entirely new scenes without having to explicitly lay out which objects should go where.
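The published paper describes this as two pieces: a representation network that digests the snapshots, and a generation network that renders new viewpoints. As a rough illustration only (not DeepMind's actual model, which uses a recurrent latent-variable generator), here is a minimal PyTorch sketch of that data flow: encode each (image, camera pose) observation, sum the codes into an order-invariant scene representation, then decode that representation plus a query pose into a rendered view. The layer sizes, 64x64 images, and 7-number pose format are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class ToyGQN(nn.Module):
    """Hypothetical, simplified GQN-style model: observations in, novel view out."""

    def __init__(self, pose_dim=7, r_dim=64):
        super().__init__()
        # Representation network: image (3x64x64) + broadcast camera pose -> code vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + pose_dim, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, r_dim),
        )
        # Generation network: scene representation + query pose -> rendered 64x64 image
        self.decoder = nn.Sequential(
            nn.Linear(r_dim + pose_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, images, poses, query_pose):
        # images: (B, K, 3, 64, 64); poses: (B, K, pose_dim); query_pose: (B, pose_dim)
        B, K, C, H, W = images.shape
        pose_maps = poses.view(B * K, -1, 1, 1).expand(-1, -1, H, W)
        codes = self.encoder(torch.cat([images.view(B * K, C, H, W), pose_maps], dim=1))
        # Summing makes the scene representation independent of observation order
        r = codes.view(B, K, -1).sum(dim=1)
        return self.decoder(torch.cat([r, query_pose], dim=1))

# e.g. five snapshots of one scene, rendered from a sixth, unseen viewpoint:
model = ToyGQN()
imgs, poses = torch.rand(1, 5, 3, 64, 64), torch.rand(1, 5, 7)
prediction = model(imgs, poses, torch.rand(1, 7))  # shape (1, 3, 64, 64)
```

Summing the per-snapshot codes is what lets the network accept any number of observations; the real system additionally trains the generator to predict held-out viewpoints, which is where the "no hand-labeling" property comes from.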

While the tests were conducted in a virtual room, the Google scientists suspect that their work will give rise to machines that can autonomously learn about their surroundings without any researchers poring through an expansive set of data to make it happen.

It’s easy to imagine a world where this sort of artificial intelligence is used to enhance surveillance programs (good thing Google updated its code of ethics to allow work with the military). But the Generative Query Network isn’t quite that sophisticated yet — the algorithm can’t guess what your face looks like after seeing the back of your head or anything like that. So far, this technology has only faced simple tests with basic objects, not anything as complex as a person.

Instead, this research is likely to boost existing applications of machine learning, like enhancing the precision of assembly-line robots by giving them a better understanding of their surroundings.

No matter what practical applications emerge from this early, proof-of-concept research, it does show that we’re getting closer to truly autonomous machines that are able to perceive and understand their surroundings, just the way humans do.
