Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/139714
Full metadata record
DC Field: Value
dc.contributor.author: Zaib, M.
dc.contributor.author: Sheng, Q.Z.
dc.contributor.author: Zhang, W.E.
dc.contributor.author: Mahmood, A.
dc.date.issued: 2023
dc.identifier.citation: Proceedings of International Joint Conference on Neural Networks, 2023, vol.2023-June, pp.1-7
dc.identifier.isbn: 9781665488679
dc.identifier.issn: 2161-4393
dc.identifier.issn: 2161-4407
dc.identifier.uri: https://hdl.handle.net/2440/139714
dc.description.abstract: Having an intelligent dialogue agent that can engage in conversational question answering (ConvQA) is no longer limited to Sci-Fi movies and has, in fact, become a reality. These intelligent agents are required to understand and correctly interpret the sequential turns provided as the context of the given question. However, these sequential questions are sometimes left implicit and thus require the resolution of natural language phenomena such as anaphora and ellipsis. The task of question rewriting can address the challenge of resolving dependencies among the contextual turns by transforming them into intent-explicit questions. Nonetheless, rewriting implicit questions comes with its own challenges, such as producing verbose questions and removing the conversational aspect of the scenario by generating self-contained questions. In this paper, we propose a novel framework, CONVSR (CONVQA using Structured Representations), for capturing and generating intermediate representations as conversational cues to enhance the ability of the QA model to interpret incomplete questions. We also discuss how the strengths of this task could be leveraged to design more engaging and more eloquent conversational agents. We test our model on the QuAC and CANARD datasets and show through experimental results that our proposed framework achieves a better F1 score than the standard question rewriting model.
dc.description.statementofresponsibility: Munazza Zaib, Quan Z. Sheng, Wei Emma Zhang, and Adnan Mahmood
dc.language.iso: en
dc.publisher: IEEE
dc.relation.ispartofseries: IEEE International Joint Conference on Neural Networks (IJCNN)
dc.rights: ©2023 IEEE
dc.source.uri: https://ieeexplore.ieee.org/xpl/conhome/10190990/proceeding
dc.subject: Conversational question answering; information retrieval; question reformulation; deep learning; conversational information seeking
dc.title: Keeping the Questions Conversational: Using Structured Representations to Resolve Dependency in Conversational Question Answering
dc.type: Conference paper
dc.contributor.conference: International Joint Conference on Neural Networks (IJCNN) (18 Jun 2023 - 23 Jun 2023 : Gold Coast, Australia)
dc.identifier.doi: 10.1109/IJCNN54540.2023.10191510
dc.publisher.place: Online
dc.relation.grant: http://purl.org/au-research/grants/arc/DP200102298
pubs.publication-status: Published
dc.identifier.orcid: Zhang, W.E. [0000-0002-0406-5974]
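
As a brief illustration of the problem the abstract above describes (resolving anaphora and ellipsis in implicit follow-up questions), the sketch below contrasts a fully rewritten, self-contained question with a question kept conversational but paired with structured cues. This is not the CONVSR implementation from the paper; all data structures, names, and the toy rewrite rule are illustrative assumptions.

```python
# Illustrative sketch only: NOT the CONVSR framework from the paper.
# It shows, with hypothetical data structures, two ways to make an implicit
# follow-up question usable by a ConvQA model: (A) rewrite it into an
# intent-explicit, self-contained question, or (B) keep it conversational
# and attach structured cues (entity/predicate slots) carried over from
# earlier turns.

from dataclasses import dataclass


@dataclass
class Turn:
    question: str
    answer: str


@dataclass
class StructuredCues:
    # Hypothetical intermediate representation: the salient entity and
    # predicate inferred from the conversation history.
    entity: str
    predicate: str


def rewrite_question(history: list[Turn], implicit_q: str) -> str:
    """Toy rule-of-thumb rewrite: splice the last salient entity into the
    follow-up question. Real systems train a seq2seq rewriter for this."""
    last_entity = history[-1].answer  # crude stand-in for coreference resolution
    return implicit_q.replace("its", f"{last_entity}'s").replace(" it ", f" {last_entity} ")


if __name__ == "__main__":
    history = [Turn("Which city hosts IJCNN 2023?", "Gold Coast")]
    implicit_q = "How large is its population?"

    # Option A: intent-explicit, self-contained rewrite (can become verbose).
    print(rewrite_question(history, implicit_q))
    # -> "How large is Gold Coast's population?"

    # Option B: keep the question conversational and pass structured cues
    # alongside it for the QA model to consume.
    cues = StructuredCues(entity="Gold Coast", predicate="population")
    print(implicit_q, cues)
```

In the paper itself the intermediate structured representations are generated by the model rather than hand-written; the sketch only contrasts the two general strategies the abstract refers to.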
Appears in Collections: Computer Science publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.