CSI-style unpixelating system uses neural net and learned conditioning to clear things up.
So the CSI 'computer enhance' meme is starting to come true? That's something I never expected. Wonder if this will help with crimes - you couldn't use it as evidence, but it could help with suspect wanted mug shots...
No. It creates very "Hollywood" faces because it's creating believable composites from overlapping images in a database that match the angle. If the person looked like the image, it would be by chance (and a relatively low chance, given that your average Joe/Jo is not photogenic).
In the same way, look at the images which involve windows. It's smart enough to track subtle gradation of light from right to left across an image, pick up a frame shape, and decide that this thing on the right is: a light source; a gentle enough light source to be a window; a size and shape consistent with a window. But in one of Google's shots, the window is a skylight cut into the slanting roof of an attic room. To replace the window, the software has to insert a composite window-ish element, and it has vertical frames like the ones in its database.
No, the information in the fuzzed out images is gone and cannot ever be restored. This tech could make for a cracking TV upscaler, but not evidence.
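That "gone and cannot ever be restored" point is easy to demonstrate: pixelating is many-to-one, so distinct originals collapse onto the same fuzzed image and no algorithm can tell them apart afterwards. A minimal numpy sketch (the 4x4 average-pooling here is just an assumed stand-in for whatever blur the real footage went through):

```python
import numpy as np

def pixelate(img, block=4):
    """Average-pool an image into coarse blocks (the 'fuzzing' step)."""
    h, w = img.shape
    return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

rng = np.random.default_rng(0)

# Two different 32x32 images: a random one, and a copy with the pixels
# inside every 4x4 block reversed. Reversal changes the image but
# preserves each block's mean, so both pixelate identically.
a = rng.random((32, 32))
b = a.reshape(8, 4, 8, 4)[:, ::-1, :, ::-1].reshape(32, 32)

assert not np.array_equal(a, b)               # genuinely different images...
assert np.allclose(pixelate(a), pixelate(b))  # ...indistinguishable once fuzzed
```

Given only the 8x8 result, nothing can say which of the two (or of the astronomically many other preimages) was the real scene; any "enhancement" is a guess among them.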
The network was trained with images of celebs. Given a fuzzed input, it hallucinates a celeb based on the input. Given a picture of me, it would hallucinate the closest celeb like face it could.
I look forward to TVs having a de-pixellation mode to make tamed TV footage rude again; it would work very well for that.
Um - that's why I stated not for evidence but for mug shots. Sure, it might just be a very generic face, but it's better than the often useless CCTV footage police show now when trying to find a wanted person.
Edit: Also don't forget this is version 1 of this tech. I'm sure it will improve to support more generic faces.
The starting image was a bowl of fruit.
The information is gone, all the neural net can do is fill in the gaps with generic features best guessed from colour matching the blobs. Give it a heavily pixellated bowl of fruit, and it would get out the best face it could match.
Don't think of this as image processing, think of it as an interpretive painting. The net sees the blobs and decides which brush strokes to use to make something that looks like a face and hair. It is clever, but it can only ever be an educated guess.
Is that Gavin Rossdale from Bush?
I understand what you are saying. I'm just trying to point out that even if it is a closest-match face, it will be more helpful than a heavily pixelated picture for narrowing down suspects. After all, show someone the 8x8 pixel picture and they won't have a clue who it is. This might just help a human think 'that looks a little bit like John Smith, and he has clothes like that' - I'm not saying it's a miracle, just another tool similar to photofits.
This is really interesting tech. I agree with the various comments about it creating very generic 'Hollywood' faces, which is of limited use. However, I can see benefits:
1. The examples given are starting from an extreme (8x8) and so it's not surprising they're extremely generic. If you're talking about the CSI-type use case, it would be interesting to see how well it performs on a 16x16 image, or a 32x32 one... If it can take a slightly blurry picture and make it much less blurry that could be much more reliable.
2. Even in the extreme examples given, you could still use it in a CSI-style use case. Obviously not for a conviction, but for producing a photofit-type image that could help you narrow the search. With enough development, you could even quantify something like 'there is a 90% certainty that this is a reasonable likeness' - which could definitely have value. Even DNA evidence only gives you a certain % of certainty.
Look at the top row - the two photos look like very different people. Courts are terrible at evaluating probability, this will be taken to be infallible. Any tool using this approach will be biased to output the training data, if you use this in court it's guaranteeing a miscarriage of justice.