Description

Question answering over knowledge graphs and other RDF data has advanced greatly, with a number of good systems providing crisp answers for natural language questions or telegraphic queries. Some of these systems incorporate textual sources as additional evidence for the answering process, but cannot compute answers that are present in text alone. Conversely, systems from the IR and NLP communities have addressed QA over text, but such systems barely utilize semantic data and knowledge. This paper presents the first QA system that can seamlessly operate over RDF datasets and text corpora, or both together, in a unified framework. Our method, called UNIQORN, builds a context graph on the fly, retrieving question-relevant triples from the RDF data and/or snippets from the text corpus using a fine-tuned BERT model. The resulting graph is typically rich but highly noisy. UNIQORN copes with this noise using advanced graph algorithms for Group Steiner Trees, which identify the best answer candidates in the context graph. Experimental results on several benchmarks of complex questions with multiple entities and relations show that UNIQORN produces results comparable to the state of the art on KGs, text corpora, and heterogeneous sources. The graph-based methodology provides user-interpretable evidence for the complete answering process.

A running example in this paper is:

Question: director of the western for which Leo won an Oscar? [Answer: Alejandro Iñárritu]

Context graphs XG(q) built by UNIQORN for the question q = director of the western for which Leo won an Oscar?, shown for three input settings: (i) KG as input, (ii) TEXT as input, and (iii) KG and TEXT together as input. Anchors are nodes with (partly) underlined labels; answers are in bold. Orange subgraphs are Group Steiner Trees.
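The Group Steiner Tree step can be illustrated with a small sketch. This is not UNIQORN's actual implementation: it approximates a Group Steiner Tree by trying one terminal per group and keeping the cheapest Steiner tree found, using networkx's Steiner-tree approximation. The toy graph, edge weights, and anchor groups below are made up for illustration, loosely following the running example.

```python
# Sketch of a Group Steiner Tree search over a noisy context graph.
# Assumption: a GST connects at least one anchor node from each group;
# here we enumerate one anchor per group (fine for toy-sized inputs).
from itertools import product

import networkx as nx
from networkx.algorithms.approximation import steiner_tree


def group_steiner_tree(graph, groups, weight="weight"):
    """Return the cheapest Steiner tree spanning one node per group."""
    best_tree, best_cost = None, float("inf")
    for terminals in product(*groups):  # pick one anchor from each group
        tree = steiner_tree(graph, set(terminals), weight=weight)
        cost = tree.size(weight=weight)  # total edge weight of the tree
        if cost < best_cost:
            best_tree, best_cost = tree, cost
    return best_tree


# Toy context graph: nodes stand for entities/predicates from retrieved
# evidence; edge weights model (inverse) evidence confidence.
G = nx.Graph()
G.add_weighted_edges_from([
    ("LeoDiCaprio", "wonOscar", 1),
    ("wonOscar", "TheRevenant", 1),
    ("TheRevenant", "western", 1),
    ("TheRevenant", "director", 1),
    ("director", "AlejandroInarritu", 1),
    ("LeoDiCaprio", "AlejandroInarritu", 5),  # noisy shortcut edge
])

# Each group holds candidate anchors matching one cue in the question.
groups = [{"LeoDiCaprio"}, {"western"}, {"director"}]
tree = group_steiner_tree(G, groups)
print(sorted(tree.nodes()))
```

In UNIQORN, answer candidates are then read off the nodes of the resulting trees; in this sketch, the cheap path through "TheRevenant" is preferred over the noisy direct edge.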

Please refer to our paper for further details.

Contact

For more information, please contact: Soumajit Pramanik, Jesujoba Alabi, Rishiraj Saha Roy, or Gerhard Weikum.

To learn more about our group, please visit https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/question-answering/.

Results on questions from 6 benchmarks

LC-QuAD 2.0 (Dubey et al. 2019) [All 4,921 test questions]
LC-QuAD 1.0 (Trivedi et al. 2017) [1,459 complex questions]
ComQA (Abujabal et al. 2019) [202 complex questions]
QALD (Usbeck et al. 2018) [70 complex questions]
CQ-W (Abujabal et al. 2017) [150 complex questions]
CQ-T (Lu et al. 2019) [150 complex questions]
Text corpora collected for these 6 benchmarks to enable heterogeneous QA
LC-QuAD 2.0 [All 30,000 questions]
LC-QuAD 1.0 [1,459 complex questions]
ComQA [202 complex questions]
QALD [70 complex questions]
CQ-W [150 complex questions]
CQ-T [150 complex questions]

Code on GitHub

UNIQORN code

Paper

"UNIQORN: Unified Question Answering over RDF Knowledge Graphs and Natural Language Text", Soumajit Pramanik, Jesujoba Alabi, Rishiraj Saha Roy, and Gerhard Weikum, arXiv preprint, 2021.