Unified Question Answering over RDF Knowledge Graphs and Natural Language Text
Results for the question:
Top-k KG facts and/or text snippets with BERT scoring (blue phrases indicate overlap with question tokens, red text indicates source)
Answer graph construction
From the top-scoring evidences according to BERT (shown above), which are either KG facts or text snippets, UNIQORN constructs a context graph (XG) that contains question-specific entities, predicates, types (added separately), and candidate answers.
Subsequently, it applies a graph algorithm for computing Group Steiner Trees (GSTs), which identifies the best answer candidates in the context graph.
In the following, we visualize one of the top-k GSTs. In this graph, each edge label gives the rank of its question-relevance score: the higher the score, the lower the edge cost in the GST. Edge thickness is proportional to relevance rank (thicker means higher rank and lower edge cost).
Note that, in some cases, the GST may contain entities or relations that are absent from the top-k evidences retrieved by BERT. These are introduced via connectivity constraints among the top-k evidences (see the paper for more details).
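The tree-finding step above can be sketched in a few lines. Note the hedges: UNIQORN computes Group Steiner Trees with its own algorithm, whereas networkx only ships an approximation for the plain Steiner tree problem, which we use here with one representative anchor node per question term; all node names and relevance scores below are invented for this toy example.

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Toy context graph: nodes are entities/predicates, edges carry
# hypothetical relevance scores for the running example question.
scores = {
    ("Leo_DiCaprio", "won"): 0.9,
    ("won", "Oscar"): 0.8,
    ("Leo_DiCaprio", "cast_in"): 0.7,
    ("cast_in", "The_Revenant"): 0.85,
    ("The_Revenant", "directed_by"): 0.75,
    ("directed_by", "A._Inarritu"): 0.8,
}
G = nx.Graph()
for (u, v), s in scores.items():
    # Higher relevance score -> lower edge cost, as in the GST ranking above.
    G.add_edge(u, v, weight=1.0 / s)

# Anchor nodes matched to question terms (one representative per term).
terminals = ["Leo_DiCaprio", "Oscar", "directed_by"]
T = steiner_tree(G, terminals, weight="weight")
print(sorted(T.nodes()))
```

The resulting tree connects all anchors through low-cost intermediate nodes, which is where candidate answers surface in the real system.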
Top GST
Top answer(s)
Description
Question answering over knowledge graphs and other RDF data has been greatly advanced,
with a number of good systems providing crisp answers for natural language questions or
telegraphic queries. Some of these systems incorporate textual sources as additional evidence
for the answering process, but cannot compute answers that are present in text alone.
Conversely, systems from the IR and NLP communities have addressed QA over text,
but such systems barely utilize semantic data and knowledge.
This paper presents the first QA system that can seamlessly operate over RDF datasets,
text corpora, or both together, in a unified framework.
Our method, called UNIQORN, builds a context graph on-the-fly,
by retrieving question-relevant triples from the RDF data and/or snippets
from the text corpus using a fine-tuned BERT model.
The resulting graph is typically rich but highly noisy.
UNIQORN copes with this input using advanced graph algorithms for Group Steiner Trees,
which identify the best answer candidates in the context graph.
Experimental results on several benchmarks of complex questions with multiple
entities and relations show that UNIQORN produces results comparable to
the state of the art on KGs, text corpora, and heterogeneous sources.
The graph-based methodology provides user-interpretable evidence for the
complete answering process. A running example in this paper is:
Question: director of the western for which Leo won an Oscar? [Answer: Alejandro Iñárritu]
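The retrieval step can be mimicked with a much simpler stand-in. UNIQORN's actual scorer is a fine-tuned BERT model; the sketch below merely ranks evidences by token overlap with the question (the same signal that the blue highlighting in the demo visualizes). The evidence strings are invented for illustration.

```python
# Simplified evidence scorer: a plain token-overlap score stands in
# for UNIQORN's fine-tuned BERT model (illustration only).
def overlap_score(question: str, evidence: str) -> float:
    q = set(question.lower().split())
    e = set(evidence.lower().split())
    return len(q & e) / len(q) if q else 0.0

question = "director of the western for which Leo won an Oscar"
evidences = [
    "Leo DiCaprio won an Oscar for The Revenant",
    "The Revenant was directed by Alejandro Inarritu",
    "Paris is the capital of France",
]
ranked = sorted(evidences, key=lambda e: overlap_score(question, e), reverse=True)
print(ranked[0])
```

In the full system, the top-k evidences ranked this way (by BERT, not by overlap) become the raw material for the context graph.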
Context graphs (XG) built by UNIQORN for the question 𝑞 = director of the western for which Leo won an Oscar? Anchors are nodes with (partly) underlined labels;
answers are in bold. Orange subgraphs are Group Steiner Trees.