Google has pledged an extra $2m (the Daily Mail quotes $4.5m) to combat the spread of child abuse images online.
They have gone into little detail about the hashing system used to assist police in removing these images, which does sound confusing; I'll address it later. Both the Daily Mail and the BBC imply that this hashing technology has only just come into play, when in fact Google has been tagging these offensive images since 2008, and passing on data that can identify them to other authorities for quite a while.
There is no way this technology has only just become available. Ten years ago I developed a system for a photo agency that injected an encrypted signature into each image at the point of download (recording the username, IP address and time of download) and allowed us to search, using Google, for uses of the image on sites that were not authorised to use it. And we weren't the only ones doing this; larger companies had far more exotic and powerful systems in place before we even started development. For the last few years Google has offered a drag-and-drop image identification utility, which gives some indication of just how developed these systems are, and how much more advanced they could be today.
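To give a flavour of how a download-time signature like the one I describe might work, here is a minimal sketch: a payload (username, IP, timestamp) is obfuscated and hidden in the least significant bit of each byte of the image data, then recovered later. Everything here is illustrative: the XOR "cipher" is a toy stand-in for real encryption, and the payload values are made up; the agency system was considerably more robust than this.

```python
def embed_signature(image_bytes: bytearray, payload: bytes, key: int) -> bytearray:
    """Hide an XOR-obfuscated payload in the least significant bit of
    each image byte (toy steganography; a real system would use proper
    encryption and survive re-encoding)."""
    enc = bytes(b ^ key for b in payload)
    # Spread the payload out one bit at a time, least significant bit first.
    bits = [(byte >> i) & 1 for byte in enc for i in range(8)]
    out = bytearray(image_bytes)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out

def extract_signature(image_bytes: bytearray, length: int, key: int) -> bytes:
    """Recover `length` payload bytes hidden by embed_signature."""
    bits = [image_bytes[i] & 1 for i in range(length * 8)]
    enc = bytearray(
        sum(bit << j for j, bit in enumerate(bits[i:i + 8]))
        for i in range(0, len(bits), 8)
    )
    return bytes(b ^ key for b in enc)

# Hypothetical download event: username | IP address | Unix timestamp.
payload = b"alice|203.0.113.7|1370000000"
image = bytearray(range(256)) * 4          # stand-in for raw pixel data
tagged = embed_signature(image, payload, key=0x5A)
print(extract_signature(tagged, len(payload), key=0x5A))
# b'alice|203.0.113.7|1370000000'
```

The tagged copy looks identical to the eye, but any copy found in the wild can be traced back to the specific download that produced it.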
Technically, it can't just be a matter of recording a simple checksum of an image: that would change on merely opening and resaving the file in Photoshop, or on adding a single letter to the IPTC metadata, and either would hide any new instance of a recognised abusive image. I am guessing this technology is closer to the biometric patterns recorded in an iris or fingerprint scan; if it were based on facial features, it could flag other images of the same victim for review, possible identification and rescue. That doesn't sound too far-fetched having seen a beta of Facebook's facial recognition system last year.
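The distinction between a brittle checksum and a robust image fingerprint can be shown in a few lines. This sketch is only an assumption about the general approach, not Google's actual system: it uses an "average hash" (one bit per pixel, set if the pixel is brighter than the image mean), a simple perceptual hash that survives uniform changes such as a slight brightening on resave, whereas an MD5 checksum of the bytes does not.

```python
import hashlib

def average_hash(pixels):
    """Perceptual 'average hash': one bit per pixel, set if that pixel
    is brighter than the image mean. Unlike a file checksum, the result
    is unchanged by uniform brightness shifts or metadata edits."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = ''.join('1' if p > mean else '0' for p in flat)
    return int(bits, 2)

# A tiny 4x4 grayscale "image" (a real system would first downscale
# the full image to a small fixed grid, e.g. 8x8).
image = [[10, 20, 200, 210],
         [15, 25, 205, 215],
         [12, 22, 202, 212],
         [11, 21, 201, 211]]

# Simulate a lossy resave: every pixel becomes slightly brighter.
resaved = [[p + 5 for p in row] for row in image]

pixel_bytes = bytes(p for row in image for p in row)
resaved_bytes = bytes(p for row in resaved for p in row)

# The checksum breaks; the perceptual fingerprint survives.
print(hashlib.md5(pixel_bytes).hexdigest() ==
      hashlib.md5(resaved_bytes).hexdigest())        # False
print(average_hash(image) == average_hash(resaved))  # True
```

Real systems are far more sophisticated (robust to cropping, scaling and recompression), but the principle is the same: fingerprint the visual content, not the file bytes.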
While it's a nice idea, the Mail has wrongly stated that Google is to remove the images, which of course it can't. All Google can do is compute the hash, pass it on to the authorities, and perhaps block the image from search results, although that may have the effect of tipping off the abusive host or webmaster that they have been identified. And even then, there are plenty of other search engines for the abuser to choose from.
So why the delay, and why the sudden surprise at progress on what would seem to be a no-brainer? We know it's not the technology, and I can't see that it's the funding: while $2m is an awful lot for me, it's not much in the larger picture. Has it been the political will? Is "big brother is watching you" a message no government ever wanted to send to internet consumers?
I would suggest to Google a guaranteed anonymous reporting system. Having spoken to people who have unintentionally found these abusive images, they are worried to do anything other than leave the page or delete the images; and since I believe there is no honour among criminals, we should give them an opportunity to turn each other in.
While I am in principle against censorship on the internet, I don't believe this even slightly falls under that label, which I have seen used as an excuse to block this sort of progress in the past. This appears to have been well thought through and well targeted, and it addresses David Cameron's simple worry that his children will stumble across these images, and quite a bit more.