LMMs in Healthcare

Recently, the World Health Organization (WHO) released guidance on the ethical use and governance of Large Multi-Modal Models (LMMs) in healthcare, acknowledging the transformative impact of Generative Artificial Intelligence (AI) technologies like ChatGPT, Bard, and BERT.

Large Multi-Modal Models (LMM)

  • LMMs are models that use multiple senses to mimic human-like perception. This allows AI to respond to a wider range of human communication, making interactions more natural and intuitive.
  • LMMs integrate multiple data types, such as images, text, language, audio, and other heterogeneous data. This allows the models to understand images, videos, and audio, and to converse with users.
  • Some examples of LMMs include GPT-4V, Med-PaLM M, DALL-E, Stable Diffusion, and Midjourney.

WHO’s Guidelines Regarding the Use of LMMs in Healthcare

  • The new WHO guidance outlines five broad applications of LMMs in healthcare:
    • Diagnosis and clinical care, such as responding to patients’ written queries;
    • Patient-guided use, such as investigating symptoms and treatment;
    • Clerical and administrative tasks, such as documenting and summarizing patient visits within electronic health records;
    • Medical and nursing education, including providing trainees with simulated patient encounters; and
    • Scientific research and drug development, including identifying new compounds.

The Indian Council of Medical Research (ICMR) issued ethical guidelines for AI in biomedical research and healthcare in June 2023.

Concerns Raised by WHO about LMMs in Healthcare

  • Rapid Adoption and Need for Caution:
    • LMMs have experienced unprecedented adoption, surpassing the pace of any previous consumer technology.
      • LMMs are known for their ability to mimic human communication and perform tasks without explicit programming.
    • However, this rapid uptake underscores the critical importance of carefully weighing their benefits against potential risks.
  • Risks and Challenges:
    • Despite their promising applications, LMMs pose risks, including the generation of false, inaccurate, or biased statements that could misguide health decisions.
    • The data used to train these models can suffer from quality or bias issues, potentially perpetuating disparities based on race, ethnicity, sex, gender identity, or age.
  • Accessibility and Affordability of LMMs:
    • There are broader concerns as well, such as the accessibility and affordability of LMMs, and the risk of Automation Bias (the tendency to rely too heavily on automated systems) in healthcare, which can lead professionals and patients to overlook errors.
  • Cybersecurity:
    • Cybersecurity is another critical issue, given the sensitivity of patient information and the reliance on the trustworthiness of these algorithms.

Key Recommendations of WHO Regarding LMMs

  • Called for a collaborative approach involving governments, technology companies, healthcare providers, patients, and civil society in all stages of LMM development and deployment.
  • Stressed the need for global cooperative leadership to regulate AI technologies: governments of all countries must cooperatively lead efforts to effectively regulate the development and use of AI technologies such as LMMs.
  • The new guidance offers a roadmap for harnessing the power of LMMs in healthcare while navigating their complexities and ethical considerations.
    • In May 2023, the WHO had highlighted the importance of applying ethical principles and appropriate governance, as enumerated in the WHO guidance on the ethics and governance of AI for health, when designing, developing, and deploying AI for health.
  • The six core principles identified by WHO are:
    • Protect autonomy
    • Promote human well-being, human safety, and the public interest
    • Ensure transparency, explainability, and intelligibility
    • Foster responsibility and accountability
    • Ensure inclusiveness and equity
    • Promote AI that is responsive and sustainable.

Global AI Governance

  • India:
    • NITI Aayog has issued guiding documents on AI issues, such as the National Strategy for Artificial Intelligence and the Responsible AI for All report.
    • These emphasise social and economic inclusion, innovation, and trustworthiness.
  • United Kingdom:
    • Outlined a light-touch approach, asking regulators in different sectors to apply existing regulations to AI.
    • Published a white paper outlining five principles companies should follow: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
  • US:
    • The US released a Blueprint for an AI Bill of Rights (AIBoR), outlining the harms of AI to economic and civil rights and laying down five principles for mitigating these harms.
    • Instead of a horizontal approach like the EU’s, the Blueprint endorses a sectorally specific approach to AI governance, with policy interventions for individual sectors such as health, labour, and education, leaving it to sectoral federal agencies to come up with their own plans.
  • China:
    • In 2022, China came out with some of the world’s first nationally binding regulations targeting specific types of algorithms and AI.
    • It enacted a law to regulate recommendation algorithms, with a focus on how they disseminate information.


POSTED ON 27-01-2024 BY ADMIN