Dr. Jad Kabbara

Research Scientist

Primary DLC

MIT Media Lab

MIT Room: E14-526B

Research Summary

Dr. Jad Kabbara is a research scientist at the MIT Center for Constructive Communication (CCC). He received his Ph.D. in Computer Science in May 2022 from McGill University and the Montreal Institute for Learning Algorithms (Mila). Before that, he received his Master's from McGill University in 2014 and his Bachelor's from the American University of Beirut in 2011. His Ph.D. research was in the broad area of Natural Language Processing, specifically at the intersection of computational pragmatics, natural language generation, and natural language understanding.

In his Ph.D., Kabbara worked on the computational modeling of presuppositions in natural language. Presuppositions are shared assumptions and facts that are taken for granted but not explicitly stated in the context (whether in texts or conversations). For example, if we say in a conversation “Roger Federer won the match,” we presuppose that he played a match (which he won), even though that fact is never explicitly stated. Kabbara presented various neural models for learning presupposition effects in language (e.g., definite descriptions, adverbial presuppositions) and showed how such models can be used to improve the quality of extractive summaries. He also investigated large transformer-based models (e.g., BERT, RoBERTa) in the context of natural language inference (NLI) to understand how well they perform on hard cases of presupposition, and presented learning frameworks to improve their performance on such hard cases. His work was recognized with the ACL 2018 Best Paper Award and the COLING 2022 Best Short Paper Award.
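
As a rough illustration of the kind of NLI probe described above, the sketch below feeds a presupposition-triggering premise and its presupposed (but unstated) content into an off-the-shelf MNLI model. The checkpoint name "roberta-large-mnli" and the exact sentence pair are illustrative assumptions, not the models or data from Kabbara's experiments.

```python
# A minimal sketch, assuming the public "roberta-large-mnli" checkpoint from the
# Hugging Face hub (an illustrative choice, not necessarily the model or data
# used in Kabbara's experiments). It probes whether an MNLI-trained model treats
# presupposed content as entailed by the sentence that triggers it.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "Roger Federer won the match."       # triggers the presupposition
hypothesis = "Roger Federer played a match."   # the presupposed, unstated fact

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class back to its label (contradiction/neutral/entailment).
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # "ENTAILMENT" here would suggest the model
                                    # recognizes the presupposed content
```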

Recent Work