Aside from the lucky few know-it-alls, most of us are regularly confronted with arguments that we have no basis for evaluating. We could diligently take notes and later do our own research to make an informed decision, but that is not realistic in every case. So what are we to do to stay informed while sidestepping the misinformation that permeates our modern world?
A group from the MIT Media Lab has created a prototype device called the Wearable Reasoner that they believe can help. Wearable Reasoner is a pair of glasses that can listen for arguments and inform the wearer whether or not they are supported by evidence.
The wearable portion of the device is a pair of Bose Frames glasses, which have speakers and a microphone and connect to a smartphone via Bluetooth. When activated, the linked smartphone continually listens for utterances with the help of the iOS Speech framework, which converts each detected utterance into text. The text is sent to a cloud API that restores its punctuation, and that result is then passed to a second API running the researchers' reasoning algorithm. Based on the algorithm's verdict, an audible response is played through the Bose Frames speakers to indicate whether the detected utterance is supported by evidence.
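In spirit, the end-to-end flow is a short chain of transformations. The sketch below is a minimal stand-in for that pipeline; every function here is hypothetical — the real system performs speech-to-text on-device with the iOS Speech framework and calls the researchers' cloud APIs for punctuation restoration and reasoning.

```python
# Hypothetical sketch of the listen -> transcribe -> punctuate ->
# classify -> respond pipeline. All functions are illustrative stubs.

def transcribe(audio: bytes) -> str:
    """Stand-in for on-device speech-to-text (iOS Speech framework)."""
    return "smallpox has been eradicated"  # pretend transcription


def restore_punctuation(text: str) -> str:
    """Stand-in for the cloud punctuation-restoration API."""
    return text[0].upper() + text[1:] + "."


def is_supported(claim: str) -> bool:
    """Stand-in for the reasoning API's supported/unsupported verdict."""
    known_supported = {"Smallpox has been eradicated."}
    return claim in known_supported


def audible_response(audio: bytes) -> str:
    """Run the full pipeline and return the message played on the glasses."""
    claim = restore_punctuation(transcribe(audio))
    verdict = "supported" if is_supported(claim) else "unsupported"
    return f"This claim is {verdict}."
```

The key design point is that only lightweight capture happens on the glasses themselves; everything compute-heavy is offloaded to the phone and the cloud.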
A random forest classifier was selected for the reasoning algorithm and was trained on the "IBM Debater - Claims and Evidence" dataset, which contains labeled claims and evidence for 58 socially divisive topics. After training, the classifier achieved an average accuracy of 90.5%. The algorithm's results can be delivered to the user in one of two modes: non-explainable or explainable. In non-explainable mode, the user is told only whether the claim is supported or unsupported. In explainable mode, the device also gives a sample of the supporting evidence when a claim is supported.
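The difference between the two modes can be sketched as follows. The `classify()` stub here is a trivial lookup standing in for the trained random forest, and the evidence snippet is invented for illustration rather than drawn from the IBM Debater data.

```python
# Sketch of the non-explainable vs. explainable feedback modes.
from typing import Optional, Tuple


def classify(claim: str) -> Tuple[bool, Optional[str]]:
    """Return (supported, evidence_sample) for a claim. Stub for the
    trained random forest plus evidence lookup."""
    evidence_store = {
        "smallpox has been eradicated":
            "The WHO declared smallpox eradicated in 1980.",
    }
    evidence = evidence_store.get(claim.lower().rstrip("."))
    return (evidence is not None, evidence)


def feedback(claim: str, explainable: bool) -> str:
    """Format the spoken response according to the selected mode."""
    supported, evidence = classify(claim)
    if not supported:
        return "This claim is unsupported."
    if explainable:
        # Explainable mode adds a sample of the supporting evidence.
        return f"This claim is supported. For example: {evidence}"
    return "This claim is supported."
```

In either mode the verdict is the same; explainable mode simply attaches the evidence that justified it, which the researchers found makes the feedback more persuasive to the wearer.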
On the one hand, the Wearable Reasoner is a useful tool that can help to solve some of the issues we face in the information age. On the other hand, one can hardly avoid noticing the Orwellian overtones associated with an ever-present voice telling us what we should and should not believe. An artificial intelligence is only as good as its training, and that training can be subject to inadvertent biases as well as outright manipulation. With some refinement, Wearable Reasoner could become a very useful tool, but it should also come with a very large asterisk on the box.