Computer scientists have developed a new tool to link digital media to their source camera


In a project to develop smart tools to combat child exploitation, computer scientists from the University of Groningen have developed a system that analyzes the noise produced by individual cameras. This information can be used to link a video or an image to a particular camera. The results were published in the journals SN Computer Science on 4 June 2022 and Expert Systems with Applications on 10 June 2022.

The Netherlands is the main distributor of digital content showing child sexual abuse, as reported by the Internet Watch Foundation in 2019. To combat this type of abuse, forensic tools are needed that analyze digital content to identify which images or videos are likely to contain abusive material. One largely untapped source of information is the noise present in images and video frames. As part of an EU project, computer scientists from the University of Groningen, together with colleagues from the University of León (Spain), have found a way to extract and classify the noise in an image or video, revealing the “fingerprint” of the camera with which it was made.

Bullet

“You can compare it to the specific grooves on a fired bullet,” says George Azzopardi, assistant professor in the Information Systems research group at the Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, University of Groningen. Each gun produces a specific pattern on the bullet, so forensic experts can match a bullet found at one crime scene to a specific gun, or link two bullets found at different crime scenes to the same weapon.

“Each camera has imperfections in its built-in sensors, which manifest as image noise in all frames but are invisible to the naked eye,” Azzopardi explains. These imperfections produce camera-specific noise. Guru Bennabhaktula, a PhD student at both the University of Groningen and the University of León, has developed a system to extract and analyze this noise. “In image recognition, classifiers are used to extract information about the shapes and textures of objects in an image in order to identify a scene,” says Bennabhaktula. “We used these classifiers to extract camera-specific noise.”
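
The article does not spell out the extraction step, but a common way to isolate this kind of sensor noise is to subtract a denoised copy of a frame from the original, leaving a residual that is dominated by the sensor pattern rather than by the visible scene. Below is a minimal Python sketch of that idea; the Gaussian filter is only a simple stand-in for whatever denoiser the published pipeline actually uses.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(frame: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Return the high-frequency residual of a grayscale frame.

    Subtracting a denoised (low-pass) copy from the frame removes most of
    the scene content; what remains is dominated by sensor noise, which is
    where the camera-specific "fingerprint" lives.
    """
    frame = frame.astype(np.float32)
    denoised = gaussian_filter(frame, sigma=sigma)  # crude denoiser stand-in
    return frame - denoised

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(480, 640)).astype(np.float32)
    residual = noise_residual(frame)
    print(residual.shape, residual.mean())  # residual is roughly zero-mean
```

Residuals extracted this way from the same camera tend to share a common pattern, which is what a classifier can then learn to recognize.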

Law enforcement

He created a computer model to extract camera noise from video frames taken with 28 different cameras from the publicly available VISION dataset, and used it to train a convolutional neural network. He then tested whether the trained system could recognize images taken by the same cameras. “It turned out that we could do this with 72% accuracy,” says Bennabhaktula. He also notes that noise can be unique to a camera brand, to a specific model, and to an individual device. In another series of experiments, he achieved 99% accuracy in classifying 18 camera models using images from the publicly available Dresden dataset.
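
To give a concrete feel for this setup, here is a hedged PyTorch sketch of a small convolutional classifier that maps noise-residual patches to camera labels. The actual architecture, patch size, and training procedure of the published system are not described in this article; the 28 output classes simply mirror the number of cameras mentioned for the VISION experiment.

```python
import torch
import torch.nn as nn

class CameraNoiseNet(nn.Module):
    """Illustrative CNN that classifies noise-residual patches by source camera."""

    def __init__(self, num_cameras: int = 28):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global average pooling
        )
        self.classifier = nn.Linear(128, num_cameras)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) noise-residual patches
        feats = self.features(x).flatten(1)
        return self.classifier(feats)  # one score per candidate camera

if __name__ == "__main__":
    model = CameraNoiseNet(num_cameras=28)
    patches = torch.randn(4, 1, 128, 128)   # dummy residual patches
    logits = model(patches)                  # shape (4, 28)
    print(logits.argmax(dim=1))              # predicted camera index per patch
```

Trained with a standard cross-entropy loss on residuals from known cameras, such a network learns which noise characteristics belong to which device.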

His work was part of a European project, 4NSEEK, in which scientists and law enforcement agencies worked together to develop smart tools to help combat child exploitation. Azzopardi: “Each group was responsible for developing a specific forensic tool.” The model created by Bennabhaktula could have such a practical use. “If police find a camera on a child abuse suspect, they can link it to images or videos found on storage devices.”

Challenges

The model is scalable, adds Bennabhaktula. “Using only five random frames from a video, it is possible to classify five videos per second. The classifier used in the model has been used by others to distinguish over 10,000 different classes for other computer vision applications. This means that the classifier could compare noise from tens of thousands of cameras.” The 4NSEEK project is now complete, but Azzopardi is still in contact with forensic and law enforcement specialists to continue this line of research. “And we’re also working on identifying source similarity between a pair of images, which presents different challenges. This will form our next article on this subject.”
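
One plausible reading of that throughput claim is that frame-level predictions are aggregated into a single video-level decision. The sketch below, which builds on the hypothetical CameraNoiseNet above, averages the class scores of five randomly sampled frames; the article does not state the exact aggregation rule that the real system uses.

```python
import torch

def classify_video(model: torch.nn.Module, residuals: torch.Tensor, k: int = 5) -> int:
    """Predict the source camera of one video from k randomly sampled frames.

    residuals: (num_frames, 1, H, W) noise residuals, one per video frame.
    Averaging the per-frame probabilities is an assumed aggregation rule,
    chosen here only to illustrate why a handful of frames can suffice.
    """
    idx = torch.randperm(residuals.shape[0])[:k]      # pick k random frames
    with torch.no_grad():
        logits = model(residuals[idx])                # (k, num_cameras)
        probs = torch.softmax(logits, dim=1).mean(0)  # average over frames
    return int(probs.argmax())

# Usage (with the CameraNoiseNet sketch above):
#   camera_id = classify_video(model, video_residuals, k=5)
```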

Source of the story:

Materials provided by the University of Groningen. Note: Content may be edited for style and length.
