Read more.
Quote:
Tool developed with ASI Data Science detected 94 per cent of such media content in tests.
Alexa - tell me a recipe for a chocolate bomb?
Alexa - this is the UK national security AI speaking; your speech patterns raised a flag, and the police have been despatched to question you.
Alexsha *hic* how do you make an Irish Car Bomb?
I saw a live demo of this on the BBC Breakfast programme earlier and honestly it looks VERY advanced, doing real-time video evaluation, not just text posts etc. It's serious tech, and I totally agree that it should be made mandatory for social networks / platforms operating with a UK presence. It's a minimal investment to tackle a serious evil in the world. I look forward to legislation to enforce use of this or similarly effective software.
and yes that seems a bit big brother but we're talking fundamentalist terrorists here not the likes of FSF.
So basically they found an "official" use for Tempora:
https://en.wikipedia.org/wiki/Tempora
After all, you do need to intercept all communications to be able to determine what is propaganda, right??
Soooooo many holes in this.
"automatically detect 94% of Daesh propaganda with 99.995% accuracy"
Sample size? Compared to what? Rate of false positives?
Was it shown 1000 propaganda videos and only correctly flagged 940 of them?
Was it shown 1000000 videos, only caught 940 of the propaganda and incorrectly flagged 24000 videos?
Now I'm saying this because Google is using AI on YouTube and they are still having all sorts of issues with content being incorrectly flagged.
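To make the base-rate point concrete, here's a minimal sketch. All the numbers are hypothetical, including the assumed 0.1% propaganda prevalence, and "99.995% accuracy" is read here as a 0.005% false-positive rate, which the reporting doesn't actually confirm:

```python
# Hypothetical worked example: why a headline "accuracy" figure hides the
# base rate. recall = fraction of propaganda caught; false_positive_rate =
# fraction of legitimate uploads wrongly flagged.

def triage(total_uploads, propaganda_fraction, recall, false_positive_rate):
    propaganda = total_uploads * propaganda_fraction
    legitimate = total_uploads - propaganda
    true_positives = propaganda * recall
    false_positives = legitimate * false_positive_rate
    return true_positives, false_positives

# 1,000,000 uploads, 0.1% of them propaganda (assumed), 94% recall,
# 0.005% false-positive rate.
tp, fp = triage(1_000_000, 0.001, 0.94, 0.00005)
print(f"caught: {tp:.0f}, wrongly flagged: {fp:.0f}")
```

Even with a tiny false-positive rate, on real upload volumes the wrongly flagged count rivals the correctly caught count, which is exactly why the missing denominators matter.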
Nope. If, as suggested, this is used by social media platforms to detect and remove ISIS propaganda, then it's analysing uploads, not 'intercepting', Tempora-style.
And if it works as advertised, it'll auto-detect and auto-delete a very large proportion of unacceptable uploads, and flag remaining borderline cases for human intervention. That, hopefully, turns the task from unmanageable numbers, to manageable.
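That triage idea can be sketched with two score thresholds; the threshold values below are made up for illustration, not anything the vendor has published:

```python
# Hypothetical two-threshold triage: high-confidence scores are removed
# automatically, a grey zone goes to a human review queue, the rest pass.
DELETE_THRESHOLD = 0.99
REVIEW_THRESHOLD = 0.70

def route(score):
    """Map a classifier confidence score to a moderation action."""
    if score >= DELETE_THRESHOLD:
        return "auto-delete"
    if score >= REVIEW_THRESHOLD:
        return "human review"
    return "allow"

for s in (0.995, 0.85, 0.10):
    print(s, route(s))
```

The point of the grey zone is precisely the one above: humans only see the borderline cases, so the queue shrinks from unmanageable to manageable.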
This is analogous to the spam-detecting routines we use here (which we WILL NOT discuss in any detail), beyond saying that we catch a high percentage automatically, and occasionally have to manually correct false positives.
About 15 years ago, I started using (not here) voice dictation software for bulk dictation, including using a mini-dictating machine then uploading files for subsequent analysis. It was, for me, about 95% accurate out of the box, with the minimum training exercise (about 10 minutes). Within a week, it was hitting 99.x% accuracy, and prompted me for the remaining 0.1%.
And that was 15 years, maybe more, ago. Since then, processing power has increased hugely, and so has image and video recognition. For a start, face recognition has obvious security uses .... and some intrusive ones, like in-store. Even consumer software can "look at" photos and identify flowers, mountain landscapes, animal portraits, etc. And, of course, given a database of reference works, copyright-infringement software is pretty good at analysing even parts of protected images, and anti-cheat software detects plagiarised written work.
It's not much of a stretch to imagine purpose-built AI analysing pre-existing videos and audio files and building a "fingerprint". If, as has been asserted, certain websites can analyse a dozen or so "likes", or a dozen or so purchase transactions, and identify with pretty high certainty how an individual is likely to vote, then it's not hard to see AI looking at common characteristics: perhaps not just words used, but word weighting and, if analysing audio files, even emotional loading, differentiating between, say, a legit TV discussion programme talking about "Jihad" and a fanatic on a training video calling for it.
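As a toy illustration of that "fingerprint" idea, here's a bag-of-words comparison with cosine similarity, using only word counts; a real system would use learned features over audio and video, but the matching principle is similar:

```python
# Toy content "fingerprint": word-count vectors compared by cosine
# similarity. Illustration only; all example strings are invented.
import math
from collections import Counter

def fingerprint(text):
    """Crude fingerprint: a bag of lowercase word counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

reference = fingerprint("call to arms join the fight brothers")
upload_a = fingerprint("join the fight brothers a call to arms")
upload_b = fingerprint("panel discussion on counter extremism policy")

print(cosine(reference, upload_a))  # high score: same wording, reordered
print(cosine(reference, upload_b))  # 0.0: no shared vocabulary
```

Note this is exactly where word *weighting* comes in: raw counts can't tell the TV panel discussing "Jihad" from the training video calling for it, which is why anything real would need far richer features than this sketch.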
If they can hit the claimed rates now, then in a year or two, such AI analysis will be much better, and a year or two after that, much MUCH better.
Personally, with such AI software, I wonder which is the chicken, and which the egg. Either this anti-ISIS detection capability is developed from commercially-oriented AI analysing, well, us. Or commercial use will, as night follows day, follow from the governmental use, if it was developed for governments.
Now, donning my tinfoil hat for a minute, if data analytics companies can predict voting preferences from a few purchases or 'likes', what could they tell from bulk audio, or worse yet, video recordings???
But consumers would never allow mass surveillance, mass recordings of voice data, would they? They'd never allow always-on microphones on a device they carry everywhere, or actually buy such devices to sprinkle around the inside of their homes, would they?
So, companies like Google, Amazon, Microsoft, etc, couldn't conceivably get their hands on data to analyse. So we're all okay.
Oh, wait .....
Whether it's intercepted via normal internet use or not, they still need to data mine what is passing through the system, and that is a ton of data to sift through on social media. That includes everything from PCs to mobile phones. So serious computing power on the back end.
Tempora and its related systems would need to use algorithms to identify what is "problematic", and so would this "AI". It would be nigh on impossible to store everything that passes through the system on a daily basis, so you need to cut down what needs to be stored or analysed. Tempora, like this "AI", would use a form of deep learning. So in both cases the systems would need to be trained to cut out false positives, and only intercept/store/block stuff which is deemed to fall within certain parameters, and the parameters will be refined as they pass more and more data through it (training the network).
So this is for me just an extension of the same tech. In fact I would like to know where the back end of all of this is hosted. Is it GCHQ?? ISPs? Google? Facebook?? It has to be something UK specific as I can't see countries wanting a UK based system monitoring their citizens when they would rather do it themselves!! :p
I don’t think Google or Facebook are particularly strapped for computing power. I think the idea is that the platforms themselves apply the tool.
I am all for security, but things like this always start out (pretending to be) well-intentioned and slowly get corrupted.
It is the start of monetising the entire internet. One day I can see the internet being almost as limited as TV and radio, except all the subscriptions to your favourite sites will end up costing more in total, just to add insult to injury. The good times are nearly over, boys and girls.
In the UK, the Tories have killed tens of thousands due to welfare cuts. Terrorists have killed how many?
Perhaps we are looking to block the wrong kind of terrorist.
Now we have an AI that knows how to behave like a terrorist...I for one welcome our AI Overlords
The thing is, that will probably just mean they'd use other platforms. Banning something rarely addresses an issue and often leads to casting the net ever wider; the blocking of websites in the UK is a perfect example of how the scope of what's banned has increased over the years.
Don't get me wrong, anyone who advocates violence as a means to an end is a thoroughly reprehensible person. However, historically, banning things has only really resulted in brushing the problem under the carpet, and that's before we even get into how various governments throughout the years have labelled some very unexpected people and groups as terrorists and/or extremists.
Not particularly, but when we're talking about the kind of upload volume YouTube gets, for example (i.e., 300 hours' worth every minute, practically all of it at least 1080p, a whole whack of it 4K), you're talking a serious amount of video to sift through, and any A.I. of this nature must be seriously computationally expensive. And then there's the whole issue of the happy-go-lucky attitude of the government when it comes to censorship, while they gleefully tap this wedge deeper into place.
Running this program will be a lot cheaper than manually moderating that volume, and for the next YouTube it'll be a hell of a lot cheaper than developing their own version.
There will be other platforms, which is why the gov hasn't just got Facebook or YouTube to make their own algorithm and called it a day. By putting this out there, they're making it easier for the other, smaller sites to also catch this content before they get infested.