Google Introduces Med-Gemini Family of Multimodal Medical AI Models, Claimed to Outperform GPT-4
Google introduced a new family of artificial intelligence (AI) models focused on the medical domain on Tuesday. Dubbed Med-Gemini, the models are not available for public use, but the tech giant has published a pre-print version of its research paper highlighting their capabilities and methodology. The company claims the models surpass GPT-4 in benchmark testing. One notable feature is their long-context capability, which allows them to process and analyse lengthy health records and research papers.
The research paper is currently in the pre-print stage and is published on arXiv, an open-access online repository of scholarly papers. Jeff Dean, Chief Scientist, Google DeepMind and Google Research, said in a post on X (formerly known as Twitter), “I’m very excited about the possibilities of these models to help clinicians deliver better care, as well as to help patients better understand their medical conditions. AI for healthcare is going to be one of the most impactful application domains for AI, in my opinion.”
Med-Gemini is built on Gemini-1.0/1.5 & can be easily adapted to new medical modalities with custom encoders. We showcase Med-Gemini’s promise in accurate multimodal dialogue🗣️🖼️ with examples of high-quality conversations about radiology & dermatology images, noting that…
— Alan Karthikesalingam (@alan_karthi) April 30, 2024
The Med-Gemini AI models are built on top of the Gemini 1.0 and Gemini 1.5 large language models. There are four models in total: Med-Gemini-S 1.0, Med-Gemini-M 1.0, Med-Gemini-L 1.0, and Med-Gemini-M 1.5. All of the models are multimodal, working across text, images, and video. The models are also integrated with web search, which the company claims has been improved through self-training to make them “more factually accurate, reliable, and nuanced” when producing results for complex clinical reasoning tasks.
Further, the company claims the models are fine-tuned for improved performance on long-context processing. In practice, higher-quality long-context processing means the model can return accurate, targeted answers even when a question is imprecisely phrased or when it has to reason over a long document of medical records.
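To illustrate why a large context window matters for this use case, the sketch below contrasts a small and a large context window when deciding whether an entire patient record fits in a single prompt. This is a minimal, hypothetical illustration: `LongContextModel`, `fits`, and `build_prompt` are stand-in names for this sketch, not a real Med-Gemini or Google API, and the 4-characters-per-token heuristic is only a rough assumption.

```python
# Hypothetical sketch: with a long-context model, an entire patient
# record can be passed in one prompt instead of being chunked.
# `LongContextModel` is a stand-in, NOT a real Med-Gemini API.

from dataclasses import dataclass


@dataclass
class LongContextModel:
    max_tokens: int  # context window size, in tokens

    def fits(self, document: str) -> bool:
        # Rough heuristic: assume ~4 characters per token.
        return len(document) / 4 <= self.max_tokens


def build_prompt(question: str, record: str) -> str:
    """Combine a clinical question and the full record into one prompt."""
    return f"Question: {question}\n\nPatient record:\n{record}"


# A record of ~300K characters (~75K tokens) overflows a small 8K-token
# window but fits comfortably in a 1M-token window.
record = "Progress note. " * 20_000
short_ctx = LongContextModel(max_tokens=8_000)
long_ctx = LongContextModel(max_tokens=1_000_000)

print(short_ctx.fits(record))  # False: would need chunking or summarising
print(long_ctx.fits(record))   # True: one pass over the whole record
```

With the smaller window, the record would have to be split and summarised across multiple calls, losing cross-references between entries; the long-context approach lets the model see the full history at once.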
As per data shared by Google, the Med-Gemini AI models outperformed OpenAI’s GPT-4 models on text-based reasoning tasks in the GeneTuring dataset. Med-Gemini-L 1.0 also scored 91.1 percent accuracy on MedQA (USMLE), outperforming Google’s own earlier Med-PaLM 2 model by 4.5 percent. Notably, the models are not publicly available, even in beta testing; the company will likely refine them further before any public release.