Feb 27, 2017 | Atlanta, GA
Two assistant professors in the School of Interactive Computing received awards for their respective research in the field of Visual Question Answering (VQA) last week.
Assistant Professor Dhruv Batra earned a Young Investigator Award from the Office of Naval Research, and Assistant Professor Devi Parikh earned a Google Research Faculty Award.
The grants will provide $510,000 over three years for Batra's project, Explainable and Trustworthy Intelligent Systems, and $85,681 over one year for Parikh's, Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering.
In VQA, given an image and a free-form natural language question about the image, the machine's task is to automatically produce a concise, accurate, free-form, natural language answer.
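That input/output contract can be sketched in a few lines. The example below is a hypothetical toy, not either professor's system: it stands in for a real model with a hand-labeled lookup table, purely to show that both inputs are free-form and the answer is a concise string.

```python
# Toy illustration of the VQA input/output contract (hypothetical;
# a real system would consume pixel data and run a trained model).
def answer_question(image: str, question: str) -> str:
    """Return a short natural-language answer about the image.

    `image` is just an identifier for a hand-labeled toy scene here.
    """
    toy_annotations = {
        ("kitchen.jpg", "what color is the kettle?"): "red",
        ("kitchen.jpg", "how many mugs are on the table?"): "two",
    }
    return toy_annotations.get((image, question.lower()), "unknown")

print(answer_question("kitchen.jpg", "What color is the kettle?"))  # red
```

A real VQA model replaces the lookup with learned image and language representations, but the interface (image, question) → short answer is the same.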
Batra's research aims to 1) develop theory, algorithms, and implementations for transparent deep neural networks that can explain their predictions, and 2) study how these transparent networks and their explanations affect user trust and perceived trustworthiness, with VQA as the AI testbed.
Similarly, Parikh's research aims to build a more balanced VQA dataset that reduces language biases and supports evaluation protocols that more accurately reflect progress in image understanding. It also aims to train a VQA model that leverages the balanced dataset to promote more detailed image understanding, and to develop a counter-example-based explanation modality in which the VQA model justifies its answer by providing examples of images it believes are similar to the image at hand.
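One way to picture "reducing language biases" is to pair each question with two similar images that have different answers, so a model cannot succeed from the question text alone. The sketch below is a hypothetical illustration of that pairing idea with invented data; it is not the proposal's actual dataset format.

```python
# Hypothetical sketch of a "balanced pairs" structure for VQA data:
# each question appears with two images yielding different answers,
# so answering from language priors alone cannot work.
balanced_pairs = [
    # (question, image_a, answer_a, image_b, answer_b) -- toy data
    ("is the umbrella open?", "img_101", "yes", "img_102", "no"),
    ("what sport is this?", "img_201", "tennis", "img_202", "baseball"),
]

def answers_per_question(pairs):
    """Count distinct answers per question; balance requires >= 2."""
    counts = {}
    for question, _, ans_a, _, ans_b in pairs:
        counts[question] = len({ans_a, ans_b})
    return counts

print(answers_per_question(balanced_pairs))
```

Under such a pairing, a model that ignores the image and guesses the statistically likely answer (e.g. "yes" for every "is the…?" question) is wrong half the time by construction, which is what makes the benchmark a sharper test of image understanding.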
The result, according to the proposal, will be that users can better trust the VQA model and anticipate its failures.
"I think this line of research addresses a fundamental problem for the future of AI -- how do we make AI trustworthy?" Batra said. "How do we build intelligent systems that explain why they are making the predictions they are making?"