Select Academic Publications

Below are some of my academic publications. 

My interests are mainly in Natural Language Processing, Biomedicine, Decision Making via Informal Logic, Computational Argumentation, and Ethics & Philanthropy.


A Search Engine for Discovery of Scientific Challenges and Directions

The Association for the Advancement of Artificial Intelligence (AAAI), 2022
Also featured in the SciNLP workshop at AKBC, the AI for Science workshop at NeurIPS, and TAU's Medicine Research Fair
Dan Lahav, Jon Saad-Falcon, Bailey Kuehl, Sophie Johnson, Sravanthi Parasa, Noam Shomron, Duen Horng Chau, Diyi Yang, Eric Horvitz, Daniel S. Weld, Tom Hope

Keeping track of scientific challenges, advances and emerging directions is a fundamental part of research. However, researchers face a flood of papers that hinders discovery of important knowledge. In biomedicine, this directly impacts human lives. To address this problem, we present a novel task of extraction and search of scientific challenges and directions, to facilitate rapid knowledge discovery. We construct and release an expert-annotated corpus of texts sampled from full-length papers, labeled with novel semantic categories that generalize across many types of challenges and directions. We focus on a large corpus of interdisciplinary work relating to the COVID-19 pandemic, ranging from biomedicine to areas such as AI and economics. We apply a model trained on our data to identify challenges and directions across the corpus and build a dedicated search engine. In experiments with 19 researchers and clinicians using our system, we outperform a popular scientific search engine in assisting knowledge discovery. Finally, we show that models trained on our resource generalize to the wider biomedical domain and to AI papers, highlighting its broad utility. We make our data, model and search engine publicly available.


An autonomous debating system

Nature, 2021
Featured on the cover of Volume 591, Issue 7850
Noam Slonim, ..., Dan Lahav, ..., Ranit Aharonov

Artificial intelligence (AI) is defined as the ability of machines to perform tasks that are usually associated with intelligent beings. Argument and debate are fundamental capabilities of human intelligence, essential for a wide range of human activities, and common to all human societies. The development of computational argumentation technologies is therefore an important emerging discipline in AI research. Here we present Project Debater, an autonomous debating system that can engage in a competitive debate with humans. We provide a complete description of the system’s architecture, a thorough and systematic evaluation of its operation across a wide range of debate topics, and a detailed account of the system’s performance in its public debut against three expert human debaters. We also highlight the fundamental differences between debating with humans as opposed to challenging humans in game competitions, the latter being the focus of classical ‘grand challenges’ pursued by the AI research community over the past few decades. We suggest that such challenges lie in the ‘comfort zone’ of AI, whereas debating with humans lies in a different territory, in which humans still prevail, and for which novel paradigms are required to make substantial progress.


MultiModalQA: Complex question answering over text, tables and images

The International Conference on Learning Representations (ICLR), 2021
Alon Talmor*, Ori Yoran*, Amnon Catav*, Dan Lahav*, Yizhong Wang, Akari Asai, Gabriel Ilharco, Hannaneh Hajishirzi, Jonathan Berant
∗ The authors contributed equally

When answering complex questions, people can seamlessly combine information from visual, textual and tabular sources. While interest in models that reason over multiple pieces of evidence has surged in recent years, there has been relatively little work on question answering models that reason across multiple modalities. In this paper, we present MultiModalQA (MMQA): a challenging question answering dataset that requires joint reasoning over text, tables and images. We create MMQA using a new framework for generating complex multi-modal questions at scale, harvesting tables from Wikipedia, and attaching images and text paragraphs using entities that appear in each table. We then define a formal language that allows us to take questions that can be answered from a single modality, and combine them to generate cross-modal questions. Last, crowdsourcing workers take these automatically-generated questions and rephrase them into more fluent language. We create 29,918 questions through this procedure, and empirically demonstrate the necessity of a multi-modal multi-hop approach to solve our task: our multi-hop model, ImplicitDecomp, achieves an average F1 of 51.7 over cross-modal questions, substantially outperforming a strong baseline that achieves 38.2 F1, but still lags significantly behind human performance, which is at 90.1 F1.


Predicting bloodstream infection outcome using machine learning

Nature Scientific Reports, Article number: 20101, 2021
Yazeed Zoabi, Orli Kehat, Dan Lahav, Ahuva Weiss-Meilik, Amos Adler, Noam Shomron

Bloodstream infections (BSI) are a major cause of infectious disease morbidity and mortality worldwide. Early prediction of BSI patients at high risk of poor outcomes is important for earlier decision making and effective patient stratification. We developed electronic medical record-based machine learning models that predict patient outcomes of BSI. The area under the receiver-operating characteristic curve was 0.82 for a full-featured inclusive model, and 0.81 for a compact model using only 25 features. Our models were trained using electronic medical records that include demographics, blood tests, and the medical and diagnosis history of 7889 hospitalized patients diagnosed with BSI. Among the implications of this work is implementation of the models as a basis for selective rapid microbiological identification, toward earlier administration of appropriate antibiotic therapy. Additionally, our models may help reduce the development of BSI and its associated adverse health outcomes and complications.


From Arguments to Key Points: Towards Automatic Argument Summarization

The Association for Computational Linguistics (ACL), 2020
Roy Bar-Haim*, Lilach Eden*, Roni Friedman*, Yoav Kantor*, Dan Lahav*, Noam Slonim*
∗ The authors contributed equally

Generating a concise summary from a large collection of arguments on a given topic is an intriguing yet understudied problem. We propose to represent such summaries as a small set of talking points, termed "key points", each scored according to its salience. We show, by analyzing a large dataset of crowd-contributed arguments, that a small number of key points per topic is typically sufficient for covering the vast majority of the arguments. Furthermore, we found that a domain expert can often predict these key points in advance. We study the task of argument-to-key point mapping, and introduce a novel large-scale dataset for this task. We report empirical results for an extensive set of experiments with this dataset, showing promising performance.