Researchers at University College London (UCL) and the Africa Health Research Institute (AHRI) have developed an AI-powered app that can interpret lateral flow tests for HIV. A user takes an image of the test with a smartphone camera, and the app determines whether the result is positive or negative simply by analyzing the image. Because these tests can be difficult to interpret, the technology should help to improve their accuracy when deployed in low-resource regions.

Roughly 100 million HIV tests are performed every year. Given the importance of early treatment, and the sheer number of people being tested, testing accuracy is critical. Lateral flow technology is increasingly being adopted for HIV testing, particularly in poorer parts of the world. It has obvious advantages in this context, including rapid results, ease of use, the avoidance of expensive and cumbersome lab tests, and even the potential for self-testing.

Lateral flow tests typically provide a visual readout, such as a color change, which in theory should make them easy to interpret. In practice, however, lay users, particularly those with visual impairment or color blindness, may struggle to read the result correctly. This latest technology aims to take the guesswork out of interpretation: someone simply takes a picture of their test using a smartphone, and the AI-powered app rapidly provides a result.

The technology is based on a machine-learning algorithm trained on 11,000 images of lateral flow tests taken in the field. In a recent evaluation, the researchers compared the accuracy of their app with tests read by eye. Strikingly, the app beat the human test users, demonstrating 98.9% accuracy compared with 92.1% for human assessments. Excitingly, the technology is also applicable to other diseases for which lateral flow tests are used, including syphilis, tuberculosis, malaria, and influenza.

“This study is a really strong partnership with AHRI that demonstrates the power of using deep learning to successfully classify ‘real-world’ field-acquired rapid test images, and reduce the number of errors that may happen when reading test results by eye,” said Rachel McKendry, a researcher involved in the study, in a UCL announcement. “This research shows the positive impact that mobile health tools can have in low- and middle-income countries, and paves the way for a larger study in the future.”
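The announcement does not detail the model architecture or training pipeline beyond noting that a deep-learning classifier was trained on field-acquired test images. Purely as an illustrative sketch of how such a classifier might be built, the Python code below fine-tunes an ImageNet-pretrained convolutional network to label lateral-flow-test photos as positive or negative; the backbone choice (MobileNetV2), the dataset path `lft_images`, and all hyperparameters are assumptions for illustration, not details from the study.

```python
# Illustrative sketch only: the study's actual architecture and training
# details are not described in the article. Assumes a hypothetical dataset
# laid out as lft_images/negative/*.jpg and lft_images/positive/*.jpg.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                    # standard CNN input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])
dataset = ImageFolder("lft_images", transform=preprocess)  # hypothetical path
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a small ImageNet-pretrained backbone and swap the final layer
# for a two-class head (classes sort alphabetically: 0 = negative, 1 = positive).
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.last_channel, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                                # token epoch count for a sketch
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# At inference time, a single photo is preprocessed the same way and the
# class with the highest probability is reported to the user.
model.eval()
with torch.no_grad():
    example = dataset[0][0].unsqueeze(0)              # one image, batch of 1
    prob_positive = model(example).softmax(dim=1)[0, 1].item()
```

A production app would also need to detect and crop the test strip within the photo, flag invalid or unreadable tests, and run efficiently on-device, steps this sketch deliberately omits.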