Source: Extreme Tech
Remember how that Google neural net learned to tell the difference between dogs and cats? It’s helping catch skin cancer now, thanks to some scientists at Stanford who trained it up and then loosed it on a huge set of high-quality diagnostic images. During recent tests, the algorithm performed just as well as almost two dozen veteran dermatologists in deciding whether a lesion needed further medical attention.
This is exactly what I meant when I said that AI will be the next major sea-change in how we practice medicine: humans are extending their intelligence by underwriting it with the processing power of supercomputers.
“We made a very powerful machine learning algorithm that learns from data,” said Andre Esteva, co-lead author of the paper and a graduate student at Stanford. “Instead of writing into computer code exactly what to look for, you let the algorithm figure it out.”
The algorithm is called a deep convolutional neural net. It was originally developed at Google, which used its prodigious computing capacity to power the algorithm's decision-making capabilities. When the Stanford collaboration began, the neural net had already been trained on 1.28 million images spanning about a thousand different categories. But the researchers needed it to know a malignant carcinoma from a benign seborrheic keratosis.
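The approach described here is transfer learning: reuse a network pretrained on general images, freeze its learned features, and train only a new classification layer for the new task. A minimal toy sketch of that idea, using NumPy with a frozen random "feature extractor" standing in for the pretrained net and synthetic data standing in for lesion images (nothing here is the paper's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" feature extractor: in real transfer learning these
# weights come from a net trained on ImageNet; here they are just random.
W_frozen = 0.1 * rng.normal(size=(16, 32))

def features(x):
    # Frozen layer: never updated while fine-tuning the new head.
    return np.tanh(x @ W_frozen)

# Synthetic stand-in data: 16-dim "images", binary "malignant" labels.
X = rng.normal(size=(200, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Fine-tune only a new logistic-regression head on the frozen features.
w = np.zeros(32)
b = 0.0
lr = 0.5
F = features(X)  # extracted once; the frozen layer does not change
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # predicted probabilities
    grad = (p - y) / len(y)
    w -= lr * (F.T @ grad)
    b -= lr * grad.sum()

acc = (((F @ w + b) > 0).astype(float) == y).mean()
```

Only `w` and `b` are trained; the point is that a small labeled dataset can suffice when the heavy feature-learning has already been done elsewhere.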
Telling a pug from a Persian is one thing. How do you tell one particular kind of irregular skin-colored blotch from another, reliably enough to potentially bet someone’s life on?
Seriously, the skin-colored blotches are a problem: many lesion types look nearly identical to the untrained eye, and images of them were all the algorithm had to work with.
“There’s no huge dataset of skin cancer that we can just train our algorithms on, so we had to make our own,” said grad student Brett Kuprel, co-lead author of the report. And they had a translation task, too, before they ever got to do any real image processing. “We gathered images from the internet and worked with the medical school to create a nice taxonomy out of data that was very messy – the labels alone were in several languages, including German, Arabic and Latin.”
Dermatologists often use an instrument called a dermoscope to closely examine a patient’s skin. This provides a roughly consistent level of magnification and a pretty uniform perspective in images taken by medical professionals. Many of the images the researchers gathered from the Internet weren’t taken in such a controlled setting, so they varied in terms of angle, zoom, and lighting. But in the end, the researchers amassed about 130,000 images of skin lesions representing over 2,000 different diseases. They used that dataset to create a library of images, which they fed to the algorithm as raw pixels, each image labeled with additional data about the disease depicted. Then they asked the algorithm to suss out the patterns: to find the rules that define the appearance of the disease as it spreads through tissue.
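The dataset construction described above, raw pixel arrays each paired with metadata about the disease depicted, can be sketched as a simple list of (pixels, metadata) pairs. A hypothetical sketch (class names, image sizes, and the preprocessing are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training example: raw pixels plus disease metadata.
def make_example(disease, malignant):
    pixels = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)  # raw RGB
    return pixels, {"disease": disease, "malignant": malignant}

dataset = [
    make_example("malignant melanoma", True),
    make_example("benign seborrheic keratosis", False),
]

# Before training, raw pixel values are typically rescaled to floats in [0, 1].
def to_input(pixels):
    return pixels.astype(np.float32) / 255.0
```

The metadata dict is where the cleaned-up taxonomy labels would live; the network itself sees only the pixel array and the target class.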
The researchers tested the algorithm’s performance against the diagnoses of 21 dermatologists from the Stanford medical school, on three critical diagnostic tasks: keratinocyte carcinoma classification, melanoma classification, and melanoma classification using dermoscopy images. In their final tests, the team used only high-quality, biopsy-confirmed images of malignant melanomas and malignant carcinomas. When presented with the same image of a lesion and asked whether they would “proceed with biopsy or treatment, or reassure the patient,” the algorithm scored 91% as well as the doctors, in terms of sensitivity (catching all the cancerous lesions) and specificity (not raising false positives).
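Sensitivity and specificity, the two measures used in these tests, are straightforward to compute from counts of true and false positives and negatives. A minimal sketch (not the paper's evaluation code), where 1 means malignant and 0 means benign:

```python
def sensitivity_specificity(y_true, y_pred):
    """y_true, y_pred: sequences of 1 (malignant) / 0 (benign) labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)          # caught
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)      # missed
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)  # cleared
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)      # false alarm
    sensitivity = tp / (tp + fn)  # fraction of cancerous lesions caught
    specificity = tn / (tn + fp)  # fraction of benign lesions correctly cleared
    return sensitivity, specificity
```

The two trade off against each other: a model that biopsies everything has perfect sensitivity and zero specificity, which is why both are reported.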
While it’s not available as an app just yet, that’s definitely on the team’s whiteboard. They’re intent on bringing better healthcare access to the masses. “My main eureka moment was when I realized just how ubiquitous smartphones will be,” said Esteva. “Everyone will have a supercomputer in their pockets with a number of sensors in it, including a camera. What if we could use it to visually screen for skin cancer? Or other ailments?”
Either way, before it’s ready to go commercial, the next step is more testing and refinement of the algorithm. It’s also important to understand how the AI arrives at the classifications it makes. “Advances in computer-aided classification of benign versus malignant skin lesions could greatly assist dermatologists in improved diagnosis for challenging lesions and provide better management options for patients,” said coauthor Susan Swetter, professor of dermatology at Stanford. “However, rigorous prospective validation of the algorithm is necessary before it can be implemented in clinical practice, by practitioners and patients alike.”
The paper will run in the January 25 issue of Nature.