Google AI Chatbots Pass Ophthalmology Board Exams
💻 In a groundbreaking study, researchers explored the capabilities of artificial intelligence (AI) chatbots by assessing their performance on an ophthalmology board certification practice exam. The eye care industry has been increasingly focused on AI chatbots, especially after previous studies showed that ChatGPT scored 46% on a similar exam, which was considered insufficient for board certification preparation.
👨‍💻 For this study, researchers utilised 150 multiple-choice questions from Eye Quiz, a platform designed for ophthalmology board exam practice. The investigation included testing Google's Bard and Gemini chatbots from various countries using a virtual private network (VPN) to compare their performance with their U.S. counterparts.
In the U.S., Bard and Gemini achieved 71% accuracy across the 150 questions. The VPN analysis revealed interesting variations:
🌍 In Vietnam, Bard scored 67% accuracy, with 32 questions (21%) answered differently from the U.S. version.
🌍 Gemini performed slightly better in Vietnam, with 74% accuracy and 23 questions (15%) answered differently than in the U.S.
🌍 In Brazil and the Netherlands, Gemini's accuracy dropped to 68% and 65%, respectively.
These findings highlight the potential of AI chatbots in medical education and the variability in performance across different regions.
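For readers curious how figures like the accuracy and "answered differently" percentages above can be derived, here is a minimal Python sketch. The question IDs, answer letters, and function names are hypothetical illustrations, not data or code from the study; it simply scores one region's answers against an answer key and compares two regions' answer sets.

```python
# Hypothetical sketch: computing accuracy and cross-region answer divergence
# from per-question answer logs. All data below is illustrative toy data.

def accuracy(answers: dict[str, str], key: dict[str, str]) -> float:
    """Fraction of questions answered correctly against the answer key."""
    correct = sum(1 for q, a in answers.items() if key.get(q) == a)
    return correct / len(key)

def divergence(answers_a: dict[str, str], answers_b: dict[str, str]) -> float:
    """Fraction of questions answered differently between two regions."""
    differing = sum(1 for q in answers_a if answers_a[q] != answers_b.get(q))
    return differing / len(answers_a)

# Toy data for three of the 150 questions (illustrative only).
answer_key = {"q1": "A", "q2": "C", "q3": "B"}
us_answers = {"q1": "A", "q2": "C", "q3": "D"}
vn_answers = {"q1": "A", "q2": "B", "q3": "D"}

print(f"US accuracy:       {accuracy(us_answers, answer_key):.0%}")
print(f"Vietnam accuracy:  {accuracy(vn_answers, answer_key):.0%}")
print(f"US-VN divergence:  {divergence(us_answers, vn_answers):.0%}")
```

Applied to the full 150-question set, the same comparison yields the region-by-region accuracy and divergence percentages reported above.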