2D Photos Transformed By AI Into Real Objects.

By Brainy Days Ahead

2D images are a lot easier for AI to “see” than 3D shapes. So how can scientists help? By creating an algorithm that “unfolds” 3D objects and teaches AI to perceive them first as 2D surfaces. For example, humans can imagine flattening a mountainous globe into a rectangular map, with each point on the surface recording latitude, longitude, and altitude. After a lot of training, the new machine-learning algorithm, named SurfNet, was able to translate photos of 3D objects into 2D surfaces and then reassemble them into 3D forms. Beyond simple maps, researchers trained SurfNet to reconstruct cars, airplanes, and hands in almost any posture. The new technique is expected to play a creative role in designing objects for virtual and augmented reality, creating 3D maps of rooms for robot navigation, and interpreting hand movements and gestures.
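To make the globe-to-map analogy concrete, here is a minimal, hypothetical Python sketch of the underlying idea: a 3D surface is sampled onto a flat 2D grid in which each cell stores the 3D coordinates of one surface point, and that flat grid can later be folded back into 3D. The function names and the sphere example are illustrative assumptions, not SurfNet's actual code.

```python
import numpy as np

def sphere_to_geometry_image(height=64, width=128, radius=1.0):
    """Flatten a sphere's surface onto a 2D grid: each pixel stores
    the (x, y, z) coordinates of one point on the surface, much like
    a world map where every point carries latitude and longitude."""
    lat = np.linspace(-np.pi / 2, np.pi / 2, height)  # latitude rows
    lon = np.linspace(-np.pi, np.pi, width)           # longitude columns
    lon_grid, lat_grid = np.meshgrid(lon, lat)

    x = radius * np.cos(lat_grid) * np.cos(lon_grid)
    y = radius * np.cos(lat_grid) * np.sin(lon_grid)
    z = radius * np.sin(lat_grid)
    return np.stack([x, y, z], axis=-1)               # shape (H, W, 3)

def geometry_image_to_points(geo_img):
    """Fold the flat 2D grid back into a 3D point cloud."""
    return geo_img.reshape(-1, 3)

geo = sphere_to_geometry_image()
points = geometry_image_to_points(geo)
print(geo.shape, points.shape)  # (64, 128, 3) (8192, 3)
```

Because the flattened representation is just a regular 2D grid of numbers, the same image-style machine learning that works well on photos can be applied to it, which is the advantage the paragraph above describes.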