Apple’s research paper tackles a problem common to all AI training: producing realistic fake images for teaching systems such as facial recognition. Training a machine takes a huge amount of data, and that becomes a ton of data where pictures of faces or body language are concerned. Apple’s paper focuses on two examples: recognizing hand gestures and determining where people are looking. The method starts with established datasets of synthetic images, then uses a neural network trained on real images to refine them so they look more realistic. A second network compares each refined image against a real one and decides which is real; the refiner then updates itself based on which image the system judged to be the fake. The process avoids having to label any real data. Read on.
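The refine-and-judge loop described above can be sketched in miniature. Everything below is a hypothetical toy, not Apple’s actual model: the "images" are short feature vectors, the refiner is a single learned shift, and the judging network is a simple logistic classifier. The point is only to show the adversarial structure, where one network improves the synthetic data while the other tries to tell it apart from real data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for images: 4-dimensional feature vectors.
# "Real" samples cluster around 1.0, raw "synthetic" samples around 0.0.
real = rng.normal(1.0, 0.1, size=(256, 4))
synthetic = rng.normal(0.0, 0.1, size=(256, 4))

shift = np.zeros(4)            # refiner parameter: a learned additive correction
w = rng.normal(0.0, 0.1, 4)    # discriminator weights (logistic classifier)
b = 0.0                        # discriminator bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr_d, lr_r = 0.2, 0.05
for _ in range(300):
    refined = synthetic + shift
    p_real = sigmoid(real @ w + b)      # discriminator's P(real | real sample)
    p_fake = sigmoid(refined @ w + b)   # discriminator's P(real | refined sample)

    # Discriminator step: gradient ascent on the log-likelihood,
    # labeling real samples 1 and refined samples 0.
    w += lr_d * (real.T @ (1.0 - p_real) - refined.T @ p_fake) / len(real)
    b += lr_d * ((1.0 - p_real).mean() - p_fake.mean())

    # Refiner step: adjust the shift so refined samples get judged "real".
    shift += lr_r * (1.0 - p_fake).mean() * w

# The learned shift moves the synthetic cluster toward the real one,
# making the two hard for the discriminator to separate.
print(shift.round(2))
```

No real image was ever labeled here: the only supervision is the real-versus-refined distinction, which the training loop supplies for free, mirroring the label-free property the article describes.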