Grand Challenges is a family of initiatives fostering innovation to solve key global health and development problems. Each initiative is an experiment in the use of challenges to focus innovation on making an impact. Individual challenges address some of the same problems, but from differing perspectives.
Praveen Devarsetty of the George Institute for Global Health in India will integrate an LLM into the SMARThealth Pregnancy application to enable two-way communication support for frontline health workers, improving healthcare services for pregnant and postpartum women in India. Reducing maternal and newborn mortality and morbidity is a global priority, particularly in low- and middle-income countries, where information about medical conditions and pregnancy symptoms is difficult to access in simple terms and in local languages. Together with experts, they will create an "encyclopedia" of pregnancy advice based on Indian and WHO guidelines, integrate GPT-4 into the SMARThealth Pregnancy application, and evaluate the application's ability to provide high-quality, contextually relevant healthcare information and services in response to prompts from healthcare workers.
Imad Elhajj of the Humanitarian Engineering Initiative of the American University of Beirut in Lebanon will use Large Language Models (LLMs) to develop an interactive community health promotion platform with a chatbot that provides vulnerable populations in Lebanon and Jordan with accurate health messages and real-time responses to queries on platforms like WhatsApp. They will process texts from trusted websites, documents, and other text repositories, such as those of UNICEF and the WHO, into smaller text segments. These segments will then be converted into fixed-length vectors that capture their semantic meaning and contextual relationships. To generate answers, the system will retrieve the vectors most relevant to the user's query and pass them, together with context drawn from the conversation history, to a GPT-3.5/4 model. They will first evaluate the platform internally to ensure the relevance, coherence, and accuracy of the generated messages, and then conduct a pilot study with a small representative group from the target communities.
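The retrieval step described above can be sketched in miniature. The word-count vectors and sample segments below are illustrative stand-ins for the learned semantic embeddings and UNICEF/WHO source texts the project would actually use:

```python
import math
import re

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

# Text segments processed from trusted sources (illustrative content only).
segments = [
    "Wash hands with soap and water for at least twenty seconds.",
    "Oral rehydration solution treats dehydration caused by diarrhoea.",
    "Measles vaccination is recommended for all children.",
]

# Fixed-length vectors: one dimension per vocabulary term. A real system
# would use a learned embedding model that captures semantic similarity.
vocab = {t: i for i, t in enumerate(sorted({t for s in segments for t in tokenize(s)}))}

def embed(text):
    """Map text to a unit-normalized, fixed-length term-count vector."""
    vec = [0.0] * len(vocab)
    for tok in tokenize(text):
        if tok in vocab:
            vec[vocab[tok]] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

index = [(seg, embed(seg)) for seg in segments]

def retrieve(query, history="", k=1):
    """Rank segments by cosine similarity to the query plus conversation
    history; the top segments would be passed to the LLM as context."""
    q = embed(query + " " + history)
    scored = sorted(index,
                    key=lambda item: sum(a * b for a, b in zip(q, item[1])),
                    reverse=True)
    return [seg for seg, _ in scored[:k]]

print(retrieve("What treats dehydration from diarrhoea?")[0])
# → Oral rehydration solution treats dehydration caused by diarrhoea.
```

In the real pipeline the retrieved segments would be placed into the model's prompt rather than printed; the sketch only shows why segment-level embedding makes the answers traceable to trusted source text.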
Maryam Mustafa of the Lahore University of Management Sciences in Pakistan will build a voice-enabled, mobile phone-based, conversational AI assistant, Awaaz-e-Sehat, for maternal healthcare workers in Pakistan to create and manage detailed electronic medical records. Pakistan has among the poorest pregnancy outcomes worldwide. The lack of documented medical records for pregnant women seeking care makes it challenging for doctors to provide accurate diagnoses and contextualized care based on socio-economic and lifestyle factors, which also play a vital role in maternal health outcomes. They will develop a proof-of-concept system comprising an intuitive user interface, a speech recognition module, and a text recognition module to record audio responses in different languages following specific prompts. The system will then convert responses into text and populate a template electronic medical record in Urdu. Awaaz-e-Sehat will be evaluated by maternal healthcare workers at Shalamar Hospital for its ability to collect records from 500 patients.
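The prompt-transcribe-populate flow might look like the following sketch. The field names, sample answers, and the `transcribe` stub are all hypothetical; the real system would process recorded audio in multiple languages and render the finished record in Urdu:

```python
import re

# Fields the assistant prompts for, in order; a deployed system would play
# each prompt as audio in the worker's language.
PROMPTS = ["patient_name", "age_years", "blood_pressure"]

def transcribe(audio_response):
    """Stand-in for the speech recognition module: the real system would
    convert a recorded audio answer into text in the spoken language."""
    return audio_response.strip()

def parse_bp(text):
    """Normalize a spoken blood-pressure reading such as '110 over 70'."""
    m = re.search(r"(\d{2,3})\s*(?:/|over)\s*(\d{2,3})", text)
    return {"systolic": int(m.group(1)), "diastolic": int(m.group(2))} if m else None

def collect_record(audio_answers):
    """Populate a template electronic medical record from prompted answers
    (field labels would be rendered in Urdu in the real application)."""
    record = {field: transcribe(audio_answers[field]) for field in PROMPTS}
    record["blood_pressure"] = parse_bp(record["blood_pressure"])
    return record

record = collect_record({
    "patient_name": "Ayesha",        # hypothetical example data
    "age_years": "27",
    "blood_pressure": "110 over 70",
})
print(record["blood_pressure"])  # {'systolic': 110, 'diastolic': 70}
```

Normalizing spoken values (like "110 over 70") into structured fields is the step that turns free-form voice answers into a record doctors can query later.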
Nirmal Ravi of EHA Clinics Ltd. in Nigeria will develop and test scalable and cost-effective ways to use large language models (LLMs) such as GPT-4 to provide “second opinions” for community health extension workers (CHEWs) in low- and middle-income countries (LMICs). These second opinions would mirror what a reviewing physician might advise the provider in question after seeing or hearing their initial report. If LLMs can enhance the capabilities of CHEWs in this way, it could improve patient outcomes, free high-skill providers for other tasks, and mitigate the serious shortage of qualified health personnel in many LMICs. The specific outcomes of this project will be: a proof of concept that LLMs can be integrated within LMIC healthcare systems to improve quality of care; a proof of concept of a system architecture for LLMs that can be scaled up and deployed progressively in LMIC healthcare systems; and an initial understanding of the capacity of current LLMs to interact with CHEWs in LMIC settings.
Henrique Araujo Lima of the Universidade Federal de Minas Gerais in Brazil will develop a tool to systematically assess the accuracy and clarity of responses generated by Large Language Models (LLMs) to common questions on maternal health, to increase their value in settings with limited healthcare access. To improve LLMs, it is essential to ensure the information they provide is both reliable and understandable, and in domains such as health, LLMs will only succeed if both healthcare providers and users are confident of their benefits. They will collect the most common types of questions about maternal health in English, Portuguese, and Urdu, and submit them to the LLM. The quality of the answers will then be evaluated by medical experts from the U.S., Brazil, and Pakistan, and the readability of the answers will be evaluated both by individuals and by readability software.
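The project does not specify which readability model it will use; a common software measure for English text is the Flesch Reading Ease score, sketched here with a crude vowel-group syllable counter (real readability tools use dictionaries or trained syllabifiers, and different formulas exist for Portuguese and Urdu):

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels, minimum one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences)
    - 84.6*(syllables/words). Higher scores mean easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Two hypothetical LLM answers to the same maternal-health question.
simple = "Drink water. Rest well. Eat good food."
complex_answer = ("Appropriate hydration and nutritional supplementation "
                  "are recommended throughout the gestational period.")

print(flesch_reading_ease(simple) > flesch_reading_ease(complex_answer))  # True
```

Scoring every generated answer this way lets the tool flag responses that are accurate but too dense for readers with limited health literacy.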
Shashi Jain of the Indian Institute of Science in India, in collaboration with Uma Urs from Oxford Brookes University in the United Kingdom and colleagues from Akaike and Kotak Mahindra Bank, also in India, will build a GPT-enabled AI bot called SATHI, which stands for Scheme, Access, Training, Help, and Inclusion, to deliver information on the latest government financial schemes that support sectors like micro-enterprises and farms to potential customers and providers in rural and suburban India. Together with several partners, they will capture data and provide context to SATHI to enable it to answer queries related to financial schemes. They will also use a translation module so it can understand voice queries and respond with an audio answer in the local language. They will perform a field test at a bank branch to compare the use of SATHI alone with a human financial expert and with semi-experts supported by SATHI. They will collect data on customer satisfaction and their follow-up actions using standard field research methodology, including oral interviews and survey questionnaires.
Theofrida Maginga of the Sokoine University of Agriculture in Tanzania will develop a ChatGPT-powered Swahili chatbot for smallholder farmers with limited literacy and scarce resources in Tanzania to detect crop diseases quickly and easily. Maize is one of the most important crops in Tanzania and generates up to 50% of rural cash income. Several diseases that afflict maize are hard to detect visually, leading to substantial losses in crop productivity and income. They will integrate AI with Internet of Things (IoT) technologies that use non-invasive sensors to monitor non-visual early indicators of disease, including volatile organic compounds, ultrasound movements, and soil nutrient uptake. They will also develop and integrate a Swahili chatbot to interact with farmers in their local language in a culturally sensitive manner, and will perform model validation and field testing.
Sophie Pascoe of Wits Health Consortium (Pty) Ltd. in South Africa, with support from AUDERE in the U.S. and the Centre for HIV and AIDS Prevention Studies (CHAPS) in South Africa, will develop a Large Language Model (LLM)-based application, Your Choice, that interacts with individuals in a human-like way to respectfully obtain their sexual history and improve the accuracy of HIV risk assessments to control the epidemic in South Africa. Gathering an accurate sexual history is essential for assessing HIV risk and prescribing preventative drugs but is challenging due to concerns about stigma and discrimination. Your Choice, which stands for Your Own Unique Risk Calculation for HIV-related Outcomes and Infections using a Chat Engine, leverages an LLM to ensure privacy and confidentiality, improve the accuracy of risk assessments, and increase awareness of preventative treatments. This solution would provide 24/7 access to an unbiased and non-judgmental counselor, specifically for marginalized and vulnerable populations, greatly reducing the barriers and concerns around seeking advice. They will co-design the app with at-risk populations and evaluate a prototype with 550 public-sector healthcare providers and clients.
Tamlyn Roman of Quantium Health in South Africa will use generative AI and Large Language Models (LLMs) to develop an automated analyst that integrates disparate health datasets and automates data analytics to support evidence-based decision-making in public health. Although there is a relative abundance of health-related data in South Africa, it is difficult to use effectively because the datasets are not standardized and analytics capacity to support policy- and decision-making is limited. They will source datasets for the analyst and assess the LLM's ability to automate checks and link multiple datasets. Improving interoperability between datasets will enable unique correlations to be identified between separate social indicators, which are currently recorded in distinct databases. They will also develop a user-friendly platform for output generation and visualization.
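Linking district-keyed datasets to surface correlations between indicators held in separate databases could look like the following sketch; the datasets, keys, and values are invented for illustration, and in the project itself the LLM would be asked to automate the checks and joins across many heterogeneous sources:

```python
# Two illustrative indicators recorded in separate systems, keyed by district.
clinic_visits = {"D1": 120, "D2": 340, "D3": 80, "D4": 210}
water_access = {"D1": 0.45, "D2": 0.90, "D3": 0.30, "D4": 0.70}

def link(a, b):
    """Join two district-keyed datasets on their shared identifier,
    keeping only districts present in both sources."""
    shared = sorted(set(a) & set(b))
    return [(k, a[k], b[k]) for k in shared]

def pearson(xs, ys):
    """Pearson correlation between two linked indicator columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rows = link(clinic_visits, water_access)
r = pearson([v for _, v, _ in rows], [w for _, _, w in rows])
print(round(r, 2))  # 0.98
```

Once datasets share a common key and schema, correlations like this one can be computed across any pair of indicators, which is the interoperability gain the paragraph above describes.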
João Paulo Souza of the Fundação de Apoio ao Ensino, Pesquisa e Assistência in Brazil will determine whether Large Language Models (LLMs) can be utilized as accurate information sources to guide healthcare provider decision-making. Frontline health workers must make real-life care decisions by distinguishing between relevant and irrelevant information and contextualizing it to their setting. This is particularly challenging in remote areas with limited healthcare specialists. To support them, an information program, the Formative Second Opinion (FSO), was developed to produce curated evidence summaries based on a large repertoire of real-life clinical queries. An updated version of this program is now being developed to combine a mobile messaging platform with LLMs for regions with limited internet connectivity and computer access. Using a mixed-methods study approach, they will compare the accuracy of evidence summaries generated by GPT-4 with those created by humans in response to 450 randomly selected clinical questions.