Penetrating neural networks’ mysterious decisions.

By Brainy Days Ahead

MIT researchers have developed a method to determine why neural networks make the predictions they do. Neural networks, such as the one behind Google's AlphaGo program, are loosely modeled on the human brain and learn through a process known as "deep learning." A longstanding problem with neural networks is that they are "black boxes": they are good at classifying data, but even their creators often have no idea how they arrive at a given classification.

In the new paper, the researchers divide a neural net into two modules. The first module extracts segments of text from the input data, scored for brevity and coherence; the second performs the prediction or classification task using only those extracted segments. The two modules are trained jointly, so the extracted segments come to serve as justifications for the predictions, as the sketch below illustrates.
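To make the two-module idea concrete, here is a minimal PyTorch sketch, not the authors' implementation: the class and parameter names (RationaleModel, sparsity, coherence) are invented for illustration, and where the paper samples hard selections and trains them with a sampling-based gradient estimator, this sketch substitutes a simpler straight-through approximation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RationaleModel(nn.Module):
    """Two-module sketch: a generator selects a subset of input tokens
    (the 'rationale'), and an encoder predicts the label from that
    subset alone. Both modules are trained jointly."""
    def __init__(self, vocab_size, emb_dim=64, hidden=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Generator: scores each token's probability of being selected.
        self.gen_rnn = nn.GRU(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.gen_out = nn.Linear(2 * hidden, 1)
        # Encoder: classifies using only the selected tokens.
        self.enc_rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.enc_out = nn.Linear(hidden, num_classes)

    def forward(self, tokens):
        emb = self.embed(tokens)                     # (B, T, E)
        states, _ = self.gen_rnn(emb)
        probs = torch.sigmoid(self.gen_out(states))  # (B, T, 1)
        # Straight-through selection: a hard binary mask in the forward
        # pass, with gradients flowing through the soft probabilities.
        hard = (probs > 0.5).float()
        mask = hard + probs - probs.detach()
        masked = emb * mask                          # zero unselected tokens
        _, h = self.enc_rnn(masked)
        return self.enc_out(h[-1]), mask.squeeze(-1)

def loss_fn(logits, labels, mask, sparsity=0.01, coherence=0.01):
    task = F.cross_entropy(logits, labels)
    # Regularizers reward rationales that are short (few tokens selected)
    # and coherent (selections form contiguous runs, not scattered words).
    short = mask.sum(dim=1).mean()
    contiguous = (mask[:, 1:] - mask[:, :-1]).abs().sum(dim=1).mean()
    return task + sparsity * short + coherence * contiguous
```

At test time, the returned mask shows which tokens the generator selected, so the selected text doubles as a human-readable justification for the encoder's prediction.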