Abstract
The published medical literature and online medical resources are important sources to help physicians make patient treatment decisions. Traditional sources used for information retrieval (e.g., PubMed) often return a list of documents in response to a user's query. The number of documents returned from large knowledge repositories is frequently so large that information seeking is practical only "after hours" and not in the clinical setting. This study developed novel algorithms and designed, implemented, and evaluated a medical definitional question answering system (MedQA). MedQA automatically analyzed a large number of electronic documents to generate short and coherent answers in response to definitional questions (i.e., questions of the form "What is X?"). Our preliminary cognitive evaluation shows that MedQA outperformed three other online information systems (Google, OneLook, and PubMed) on two important efficiency criteria, namely the time spent and the number of actions taken for a physician to identify a definition. It is our contention that question answering systems that aggregate pertinent information scattered across different documents have the potential to address clinical information needs within a timeframe necessary to meet the demands of clinicians.
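The question-analysis step described above can be sketched with a simple pattern match. The following is a minimal illustration, not the paper's actual classifier; it assumes definitional questions follow the "What is/are X?" template, and the function name is chosen here for clarity:

```python
import re

# Hypothetical sketch (not MedQA's implementation): detect a definitional
# question of the form "What is X?" / "What are X?" and extract the term X.
DEFINITIONAL_PATTERN = re.compile(
    r"^\s*what\s+(?:is|are)\s+(?:an?\s+|the\s+)?(?P<term>.+?)\s*\?\s*$",
    re.IGNORECASE,
)

def extract_definitional_term(question: str):
    """Return the term X if the question matches 'What is X?', else None."""
    match = DEFINITIONAL_PATTERN.match(question)
    return match.group("term") if match else None
```

For example, `extract_definitional_term("What is erythropoietin?")` yields `"erythropoietin"`, while a non-definitional question such as "How do I treat hypertension?" yields `None`, signaling that the query should be routed elsewhere.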
Original language | English (US) |
---|---|
Pages (from-to) | 236-251 |
Number of pages | 16 |
Journal | Journal of Biomedical Informatics |
Volume | 40 |
Issue number | 3 |
DOIs | |
State | Published - Jun 2007 |
Externally published | Yes |
Keywords
- Evaluation
- Information retrieval
- Machine-learning
- Question analysis
- Question answering
- Text summarization
ASJC Scopus subject areas
- Computer Science Applications
- Health Informatics