Scientists at the Swiss Federal Institute of Technology in Lausanne (EPFL) have recently unveiled Meditron, a groundbreaking, free-to-use large language model fine-tuned for the healthcare sector. The system is designed to support healthcare professionals and improve the accuracy of their medical decisions.

MEDITRON 7B and MEDITRON 70B

Large Language Models (LLMs) are artificial intelligence algorithms trained on vast datasets to learn the many relationships within language; the learned weights that encode these relationships are called 'parameters.' Such algorithms power conversational agents like OpenAI's ChatGPT and the language models behind Google's Bard. The largest models in existence today contain several hundred billion parameters, with training costs to match.
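To make the notion of a 'parameter' concrete, here is a minimal sketch (in PyTorch, an assumption since the article names no framework) that builds a toy language model and counts its learnable weights; production LLMs apply the same idea at a vastly larger scale.

```python
# Toy sketch: every learnable weight in the network counts toward the
# model's "parameters". This miniature LM has tens of millions of them;
# the largest LLMs have hundreds of billions.
import torch.nn as nn

model = nn.Sequential(
    nn.Embedding(num_embeddings=32000, embedding_dim=512),   # token embeddings
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
    nn.Linear(512, 32000),                                   # next-token prediction head
)

total = sum(p.numel() for p in model.parameters())
print(f"{total:,} parameters")
```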

General-purpose LLMs like ChatGPT are extremely versatile, helping with tasks from drafting emails to writing poetry. Focusing on a single knowledge domain, such as medicine, allows for smaller, more efficient, and more accessible models. When trained on medical literature, LLMs can help democratize access to scientific, evidence-based medical information and support impactful clinical decision-making.

Significant strides have been made toward strengthening these models' medical knowledge and reasoning abilities. Yet the most sophisticated medical AI systems to date (such as Med-PaLM and GPT-4) remain proprietary, while openly available alternatives have been limited to roughly 13 billion parameters, which constrains their usefulness and reach.

To make medical AI more widely accessible, EPFL's team within the School of Computer and Communication Sciences has developed two versions of Meditron: MEDITRON 7B and MEDITRON 70B, with 7 billion and 70 billion parameters respectively, both tuned specifically for the medical domain. The models were introduced in a research paper published on the preprint platform arXiv under the title "MEDITRON-70B: Scaling Medical Pretraining for Large Language Models."
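Because the models are free to use, they can in principle be loaded like any other open checkpoint. The sketch below is illustrative only: it assumes the 7B weights are published on Hugging Face under the repository name epfl-llm/meditron-7b and uses the standard transformers API.

```python
# Illustrative sketch: load the open 7B model and generate a completion.
# The repo name "epfl-llm/meditron-7b" is an assumption, as is having enough
# GPU memory (~14 GB in fp16) for a 7B-parameter model. Output is for
# research exploration, not medical advice.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "epfl-llm/meditron-7b"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

prompt = "Question: What are common causes of iron-deficiency anemia?\nAnswer:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```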

Starting from Meta's foundational Llama-2 model, and incorporating continual feedback from physicians and biomedical researchers, the team trained Meditron on a carefully curated corpus of high-quality medical texts. This corpus included peer-reviewed medical literature from open-access repositories such as PubMed, along with a diverse set of clinical guidelines spanning multiple regions, healthcare institutions, and international organizations.
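As a rough illustration of this kind of domain-adaptive pretraining, the sketch below continues training a Llama-2 base model on a plain-text medical corpus with the standard causal-language-modeling objective. The file name medical_corpus.txt and all hyperparameters are placeholders, not the authors' actual recipe.

```python
# Sketch of continued pretraining on domain text (hyperparameters illustrative).
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

base = "meta-llama/Llama-2-7b-hf"  # gated on Hugging Face; requires access approval
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token      # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Stand-in corpus: in practice, PubMed articles and clinical guidelines.
data = load_dataset("text", data_files={"train": "medical_corpus.txt"})
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=2048),
                batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="meditron-sketch",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           num_train_epochs=1),
    train_dataset=data["train"],
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # causal LM objective
)
trainer.train()
```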

Zeming Chen, the lead researcher and a doctoral student in EPFL's Natural Language Processing Lab under the direction of Professor Antoine Bosselut, reported that in a rigorous evaluation across four major medical benchmarks, Meditron outperformed all other openly available models and approached the performance of proprietary systems such as GPT-3.5 and Med-PaLM. The MEDITRON 70B model came within striking distance of GPT-4 and Med-PaLM 2, the best-performing proprietary models currently tailored to medical knowledge.
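Medical benchmarks of this kind are typically multiple-choice exams. One common way to score a causal LM on them, shown in the sketch below, is to pick the answer option to which the model assigns the highest likelihood; this reuses the hypothetical model and tok from the loading example above and is a generic illustration, not the paper's evaluation harness.

```python
# Generic multiple-choice scoring: choose the option with the highest
# summed log-probability when appended to the question prompt.
import torch

@torch.no_grad()
def score(prompt: str, option: str) -> float:
    """Sum of log-probs the model assigns to `option` following `prompt`."""
    full = tok(prompt + option, return_tensors="pt").to(model.device)
    plen = len(tok(prompt)["input_ids"])          # number of prompt tokens
    logps = torch.log_softmax(model(**full).logits[0], dim=-1)
    ids = full["input_ids"][0]
    # The token at position i is predicted by the logits at position i - 1.
    return sum(logps[i - 1, ids[i]].item() for i in range(plen, len(ids)))

def predict(question: str, options: list[str]) -> int:
    prompt = f"Question: {question}\nAnswer: "
    return max(range(len(options)), key=lambda i: score(prompt, options[i]))
```

Accuracy on a benchmark is then just the fraction of questions where predict returns the index of the correct option.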

In an era when the rapid development of AI is met with skepticism and even trepidation, Professor Martin Jaggi, who leads EPFL's Machine Learning and Optimization Laboratory, highlighted the significance of Meditron's fully open-source release, which covers both the training process and the training data. This openness invites researchers worldwide to scrutinize and improve the model, making the technology safer and more robust – opportunities not available with the proprietary systems built by large tech corporations.

Guiding the medical aspects of the project, Professor Mary-Anne Hartley, a physician and director of the Laboratory for Intelligent Global Health Technologies, jointly hosted by the MLO and the Yale School of Medicine, emphasized that Meditron was built with safety in mind from the outset. Its distinction lies in encoding medical knowledge drawn from reliable, transparent sources of evidence. The pivotal next step is to ensure that the model can deploy this knowledge safely and effectively.

One such source of trusted evidence is the International Committee of the Red Cross (ICRC), known for its medical practice guidelines. Dr. Javier Elkin, who heads the ICRC's Digital Health Program, noted that new healthcare technologies are rarely designed with humanitarian needs in mind. The EPFL collaboration, which incorporates the organization's guidelines into the AI, has generated enthusiasm at the ICRC for its adherence to humanitarian principles.

A collaborative workshop scheduled for early December in Geneva will explore the opportunities, limitations, and potential risks of this kind of AI, including a presentation of Meditron by its developers.

Professor Bosselut expressed the fundamental belief that underpinned Meditron's development: that access to medical knowledge should be a universal right. EPFL hopes Meditron will serve as a starting point for researchers to safely adapt and validate the technology in their own clinical settings.
