João Paulo Souza of the Fundação de Apoio ao Ensino, Pesquisa e Assistência in Brazil will determine whether Large Language Models (LLMs) can serve as accurate information sources to guide healthcare provider decision-making. Frontline health workers must make real-life care decisions by distinguishing relevant from irrelevant information and contextualizing it to their setting. This is particularly challenging in remote areas with few healthcare specialists. To support these workers, an information program, the Formative Second Opinion (FSO), was developed to produce curated evidence summaries based on a large repertoire of real-life clinical queries. An updated version of this program is now being developed that combines a mobile messaging platform with LLMs for regions with limited internet connectivity and computer access. Using a mixed-methods study design, the team will compare the accuracy of evidence summaries generated by ChatGPT-4 with those created by humans in response to 450 randomly selected clinical questions.
More information about Catalyzing Equitable Artificial Intelligence (AI) Use