Large language models, such as transformer-based models, have set state-of-the-art benchmarks on a broad range of natural language processing (NLP) tasks, including question answering (QA), document classification, machine translation, and text summarization. Recently, the release of OpenAI's free tool ChatGPT demonstrated the ability of large language models to generate content, prompting anticipation of its possible uses as well as potential controversies. The ethical and acceptable boundaries of ChatGPT's use in scientific writing remain unclear. I will talk about our research exploring large language models, e.g., long-sequence transformers and GPT-style models, in the clinical and biomedical domains. Our work examines the adaptability of these models to a series of clinical NLP tasks, including clinical inference, biomedical named entity recognition, EHR-based question answering, and interoperability.

Zoom link: https://weillcornell.zoom.us/j/98931485823
