<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>DSpace Collection:</title>
  <link rel="alternate" href="http://hdl.handle.net/2440/115881" />
  <subtitle />
  <id>http://hdl.handle.net/2440/115881</id>
  <updated>2021-02-25T17:37:30Z</updated>
  <dc:date>2021-02-25T17:37:30Z</dc:date>
  <entry>
    <title>Medical data inquiry using a question answering model</title>
    <link rel="alternate" href="http://hdl.handle.net/2440/129731" />
    <author>
      <name>Liao, Z.</name>
    </author>
    <author>
      <name>Liu, L.</name>
    </author>
    <author>
      <name>Wu, Q.</name>
    </author>
    <author>
      <name>Teney, D.</name>
    </author>
    <author>
      <name>Shen, C.</name>
    </author>
    <author>
      <name>Van Den Hengel, A.</name>
    </author>
    <author>
      <name>Verjans, J.</name>
    </author>
    <id>http://hdl.handle.net/2440/129731</id>
    <updated>2021-02-07T22:34:38Z</updated>
    <published>2020-01-01T00:00:00Z</published>
    <summary type="text">Title: Medical data inquiry using a question answering model
Author: Liao, Z.; Liu, L.; Wu, Q.; Teney, D.; Shen, C.; Van Den Hengel, A.; Verjans, J.
Abstract: Access to hospital data is commonly a difficult, costly and time-consuming process requiring extensive interaction with network administrators. This can delay the extraction of insights from the data, such as diagnoses or other clinical outcomes. Healthcare administrators, medical practitioners, researchers and patients could benefit from a system that extracts relevant information from healthcare data in real time. In this paper, we present a question answering system that allows health professionals to interact with a large-scale database by asking questions in natural language. The system is built upon the BERT and SQLOVA models, which translate a user's request into an SQL query that is then passed to the data server to retrieve the relevant information. We also propose a deep bilinear similarity model that improves the generated SQL queries by better matching terms in the user's query with the database schema and contents. The system was trained on only 75 real questions and 455 back-translated questions, and was evaluated on 75 additional real questions about a real health information database, achieving a retrieval accuracy of 78%.</summary>
    <dc:date>2020-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>How might autonomous vehicles impact the city? The case of commuting to central Adelaide</title>
    <link rel="alternate" href="http://hdl.handle.net/2440/129730" />
    <author>
      <name>Kellett, J.</name>
    </author>
    <author>
      <name>Barreto, R.</name>
    </author>
    <author>
      <name>Van Den Hengel, A.</name>
    </author>
    <author>
      <name>Vogiatzis, N.</name>
    </author>
    <id>http://hdl.handle.net/2440/129730</id>
    <updated>2021-02-07T22:33:21Z</updated>
    <published>2019-01-01T00:00:00Z</published>
    <summary type="text">Title: How might autonomous vehicles impact the city? The case of commuting to central Adelaide
Author: Kellett, J.; Barreto, R.; Van Den Hengel, A.; Vogiatzis, N.
Abstract: Autonomous Vehicles (AVs) are likely to have profound effects on cities. Using a survey of regular commuters into the Adelaide CBD, we investigate views on AV ownership, use, vehicle sharing and attachment to conventional vehicles. We then explore potential vehicle flow and land use change in the Adelaide CBD under two scenarios. Whilst the overall vehicle fleet shrinks under both scenarios, total trips may increase, and some of the predicted benefits of AVs may not eventuate until a lengthy transition period is complete. These findings have policy implications for how the transition to AVs is managed.</summary>
    <dc:date>2019-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>On the general value of evidence, and bilingual scene-text visual question answering</title>
    <link rel="alternate" href="http://hdl.handle.net/2440/129208" />
    <author>
      <name>Wang, X.</name>
    </author>
    <author>
      <name>Liu, Y.</name>
    </author>
    <author>
      <name>Shen, C.</name>
    </author>
    <author>
      <name>Ng, C.C.</name>
    </author>
    <author>
      <name>Luo, C.</name>
    </author>
    <author>
      <name>Jin, L.</name>
    </author>
    <author>
      <name>Chan, C.S.</name>
    </author>
    <author>
      <name>Van Den Hengel, A.</name>
    </author>
    <author>
      <name>Wang, L.</name>
    </author>
    <id>http://hdl.handle.net/2440/129208</id>
    <updated>2020-12-03T22:00:44Z</updated>
    <published>2020-01-01T00:00:00Z</published>
    <summary type="text">Title: On the general value of evidence, and bilingual scene-text visual question answering
Author: Wang, X.; Liu, Y.; Shen, C.; Ng, C.C.; Luo, C.; Jin, L.; Chan, C.S.; Van Den Hengel, A.; Wang, L.
Abstract: Visual Question Answering (VQA) methods have made incredible progress, but suffer from a failure to generalize. This is visible in their tendency to learn coincidental correlations in the data rather than deeper relations between image content and ideas expressed in language. We present a dataset that takes a step towards addressing this problem in that it contains questions expressed in two languages, and an evaluation process that co-opts a well-understood image-based metric to reflect the method's ability to reason. Measuring reasoning directly encourages generalization by penalizing answers that are coincidentally correct. The dataset reflects the scene-text version of the VQA problem, and the reasoning evaluation can be seen as a text-based version of a referring expression challenge. Experiments and analyses are provided that show the value of the dataset. The dataset is available at www.est-vqa.org</summary>
    <dc:date>2020-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Self-trained deep ordinal regression for end-to-end video anomaly detection</title>
    <link rel="alternate" href="http://hdl.handle.net/2440/129207" />
    <author>
      <name>Pang, G.</name>
    </author>
    <author>
      <name>Yan, C.</name>
    </author>
    <author>
      <name>Shen, C.</name>
    </author>
    <author>
      <name>Van Den Hengel, A.</name>
    </author>
    <author>
      <name>Bai, X.</name>
    </author>
    <id>http://hdl.handle.net/2440/129207</id>
    <updated>2020-12-03T22:00:06Z</updated>
    <published>2020-01-01T00:00:00Z</published>
    <summary type="text">Title: Self-trained deep ordinal regression for end-to-end video anomaly detection
Author: Pang, G.; Yan, C.; Shen, C.; Van Den Hengel, A.; Bai, X.
Abstract: Video anomaly detection is of critical practical importance to a variety of real applications because it allows human attention to be focused on events that are likely to be of interest, in spite of an otherwise overwhelming volume of video. We show that applying self-trained deep ordinal regression to video anomaly detection overcomes two key limitations of existing methods, namely, 1) being highly dependent on manually labeled normal training data; and 2) sub-optimal feature learning. By formulating a surrogate two-class ordinal regression task, we devise an end-to-end trainable video anomaly detection approach that enables joint representation learning and anomaly scoring without manually labeled normal/abnormal data. Experiments on eight real-world video scenes show that our proposed method outperforms state-of-the-art methods that require no labeled training data by a substantial margin, and enables easy and accurate localization of the identified anomalies. Furthermore, we demonstrate that our method offers effective human-in-the-loop anomaly detection, which can be critical in applications where anomalies are rare and the false-negative cost is high.</summary>
    <dc:date>2020-01-01T00:00:00Z</dc:date>
  </entry>
</feed>