Vision Executives Blog

GoogleAI Chatbots Pass Ophthalmology Board Exams

Written by Vision Executives | Jun 17, 2024 12:22:58 PM
𝐆𝐨𝐨𝐠π₯πžπ€πˆ π‚π‘πšπ­π›π¨π­π¬ 𝐏𝐚𝐬𝐬 𝐎𝐩𝐑𝐭𝐑𝐚π₯𝐦𝐨π₯𝐨𝐠𝐲 𝐁𝐨𝐚𝐫𝐝 π„π±πšπ¦π¬

πŸ’» In a groundbreaking study, researchers explored the capabilities of artificial intelligence (AI) chatbots by assessing their performance on an ophthalmology board certification practice exam. The eye care industry has been increasingly focused on AI chatbots, especially after previous studies showed that ChatGPT scored 46% on a similar exam, which was considered insufficient for board certification preparation.

πŸ‘¨β€πŸ’» For this study, researchers utilised 150 multiple-choice questions from Eye Quiz, a platform designed for ophthalmology board exam practise. The investigation included testing Google’s Bard and Gemini chatbots from various countries using a virtual private network (VPN) to compare their performance with their U.S. counterparts.

In the U.S., Bard and Gemini achieved a 71% accuracy rate across the 150 questions. The VPN analysis revealed interesting variations:

πŸ‘ In Vietnam, Bard scored 67% accuracy, with 32 questions (21%) answered differently from the U.S. version.
πŸ‘ Gemini performed slightly better in Vietnam, with a 74% accuracy and 23 questions (15%) answered differently than in the U.S.
πŸ‘ In Brazil and the Netherlands, Gemini’s accuracy dropped to 68% and 65%, respectively.

These findings highlight the potential of AI chatbots in medical education and the variability in performance across different regions.