In today’s AI Daily, we feature two use cases of advanced image analysis: Adobe’s tool for identifying Photoshopped images and protecting the authenticity of originals, and new research on skin cancer detection using convolutional neural networks. We also picked a story discussing the potential use of AI to predict violent behavior and school shootings.
Adobe is using AI to catch Photoshopped images (engadget.com)
- Using AI to find fake images is a way for Adobe to help “increase trust and authenticity in digital media,” the company says.
- The team trained the AI to figure out the type of manipulation used on an image and to flag the area of a photo that someone changed.
- The research team says it might harness the AI to examine other types of artifacts, like those caused by compression when a file is saved repeatedly.
Man against machine. Can AI outperform dermatologists when diagnosing skin cancer? (medium.com)
- A study conducted by an international team of researchers pitted experienced dermatologists against a machine learning system, known as a deep learning convolutional neural network, or CNN, to find out who is better at detecting malignant melanomas.
- To train it, the researchers showed the CNN more than 100,000 images of malignant and benign skin cancers and moles and indicated the diagnosis for each image.
- The doctors were shown 100 images of skin lesions and asked to diagnose each one, judging whether it was a malignant melanoma or a benign mole.
- The CNN missed fewer melanomas, meaning it had a higher sensitivity than the dermatologists, and it misdiagnosed fewer benign moles as malignant melanoma, meaning it had a higher specificity; this would result in fewer unnecessary surgeries.
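The sensitivity and specificity comparison above boils down to two ratios computed from a confusion matrix. A minimal sketch, using invented counts purely for illustration (these are not the study’s figures):

```python
# Sensitivity and specificity from confusion-matrix counts.
# All counts below are hypothetical, for illustration only.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of actual melanomas correctly flagged (true positive rate)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of benign moles correctly cleared (true negative rate)."""
    return tn / (tn + fp)

# Hypothetical results on 100 lesions (50 malignant, 50 benign):
cnn  = {"tp": 47, "fn": 3, "tn": 41, "fp": 9}
derm = {"tp": 43, "fn": 7, "tn": 36, "fp": 14}

for name, r in [("CNN", cnn), ("Dermatologists", derm)]:
    print(f"{name}: sensitivity={sensitivity(r['tp'], r['fn']):.2f}, "
          f"specificity={specificity(r['tn'], r['fp']):.2f}")
```

Higher sensitivity means fewer missed melanomas; higher specificity means fewer benign moles sent to surgery unnecessarily, which is the trade-off the study highlights.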
Can Artificial Intelligence Help Stop School Shootings? (smithsonianmag.com)
- Nallapati, who has been studying artificial intelligence in high school, reached out to other young women she knows through a program called Aspirations in Computing, run by the National Center for Women & Information Technology.
- Specifically, the scientists found that AI was as accurate as a team of child and adolescent psychiatrists when it came to assessing the risk of violent behavior, based on interviews with 119 kids between the ages of 12 and 18.
- More than 91 percent of the time, the algorithm, using only the transcripts, aligned with the more extensive assessments of a team of child and adolescent psychiatrists, who also had access to information from parents and schools.
- “We are now seeing a trend of AI being applied to very sensitive domains at alarming speeds, and people making these algorithms don’t necessarily understand all the social, and even political, aspects of the data they’re using,” says Rashida Richardson, director of policy research at the AI Now Institute, a program at New York University that studies the social implications of artificial intelligence.
- “Trying to understand very complicated issues that have a myriad of input and applying a tech solution that reflects a sliver of it is a problem because it can either reiterate the same problems we see in society or create a solution for a problem that’s not there.”
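The 91 percent figure in the story is a simple agreement rate: the fraction of cases where the algorithm’s risk label matched the psychiatrists’ assessment. A minimal sketch with invented labels (not data from the study):

```python
# Percent agreement between two raters' risk labels.
# The labels below are invented for illustration only.

def agreement_rate(labels_a: list[str], labels_b: list[str]) -> float:
    """Fraction of cases on which two raters assign the same label."""
    assert len(labels_a) == len(labels_b), "raters must score the same cases"
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

# Hypothetical risk labels for 10 interviews:
algorithm  = ["high", "low", "low", "high", "low", "low",  "high", "low", "low", "low"]
clinicians = ["high", "low", "low", "high", "low", "high", "high", "low", "low", "low"]

print(f"Agreement: {agreement_rate(algorithm, clinicians):.0%}")  # → Agreement: 90%
```

Note that raw agreement says nothing about which labels were correct, only how often the two assessments coincided; that limitation is part of what the critics quoted above are pointing at.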