Google AI Chatbots Pass Ophthalmology Board Exams
💻 In a groundbreaking study, researchers explored the capabilities of artificial intelligence (AI) chatbots by assessing their performance on an ophthalmology board certification practice exam. The eye care industry has been increasingly focused on AI chatbots, especially after previous studies showed that ChatGPT scored 46% on a similar exam, which was considered insufficient for board certification preparation.
👨‍💻 For this study, researchers utilised 150 multiple-choice questions from Eye Quiz, a platform designed for ophthalmology board exam practice. The investigation tested Google's Bard and Gemini chatbots from various countries through a virtual private network (VPN) to compare their performance with that of their U.S. counterparts.
In the U.S., Bard and Gemini achieved a 71% accuracy rate across the 150 questions. The VPN analysis revealed notable regional variations (a sketch of how these percentages can be computed follows the list):
📍 In Vietnam, Bard's accuracy was 67%, with 32 questions (21%) answered differently from the U.S. version.
📍 Gemini performed slightly better in Vietnam, with 74% accuracy and 23 questions (15%) answered differently than in the U.S.
📍 In Brazil and the Netherlands, Gemini's accuracy dropped to 68% and 65%, respectively.
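For readers who want to see where figures like these come from, here is a minimal sketch (in Python, using made-up placeholder answers rather than the study's data) of computing an accuracy rate and a cross-region disagreement rate over 150 questions; for instance, 32 differing answers out of 150 is roughly 21%.

```python
import random

def accuracy(responses, answer_key):
    """Fraction of questions answered correctly."""
    return sum(r == k for r, k in zip(responses, answer_key)) / len(answer_key)

def disagreement(responses_a, responses_b):
    """Fraction of questions where two regional versions gave different answers."""
    return sum(a != b for a, b in zip(responses_a, responses_b)) / len(responses_a)

# Placeholder data: 150 single-letter answers chosen at random (not the study's data).
random.seed(0)
key = [random.choice("ABCD") for _ in range(150)]
us  = [random.choice("ABCD") for _ in range(150)]
vn  = [random.choice("ABCD") for _ in range(150)]

print(f"US accuracy:          {accuracy(us, key):.0%}")
print(f"Vietnam accuracy:     {accuracy(vn, key):.0%}")
print(f"Answered differently: {disagreement(us, vn):.0%}")  # e.g. 32/150 is about 21%
```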
These findings highlight the potential of AI chatbots in medical education and the variability in performance across different regions.