Quote:
Tool developed with ASI Data Science detected 94 per cent of such media content in tests.
Alexa - tell me a recipe for a chocolate bomb?
Alexa - this is the UK national security AI speaking, your speech patterns raised a flag and the police have been despatched to question you.
Alexsha *hic* how do you make an Irish Car Bomb?
I saw a live demo of this on the BBC Breakfast programme earlier and honestly it looks VERY advanced, doing real-time video evaluation, not just text posts etc. It's serious tech and I totally agree that it should be made mandatory for social networks / platforms operating with a UK presence. It's a minimal investment to tackle a serious evil in the world. I look forward to legislation to enforce use of this or similarly effective software.
And yes, that seems a bit Big Brother, but we're talking fundamentalist terrorists here, not the likes of the FSF.
So basically they found an "official" use for Tempora:
https://en.wikipedia.org/wiki/Tempora
After all, you do need to intercept all communications to be able to determine what is propaganda, right?
Soooooo many holes in this.
"automatically detect 94% of Daesh propaganda with 99.995% accuracy"
Sample size? Compared to what? Rate of false positives?
Was it shown 1,000 propaganda videos and only correctly flagged 940 of them?
Or was it shown 1,000,000 videos, caught only 940 of the propaganda, and incorrectly flagged 24,000 innocent ones?
I'm saying this because Google is using AI on YouTube and they are still having all sorts of issues with content being incorrectly flagged.
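To make the ambiguity concrete, here's a rough back-of-the-envelope sketch; every number in it is a made-up assumption, not anything the Home Office has published:

```python
# Why "94% detection, 99.995% accuracy" is ambiguous without a base rate.
# All numbers below are invented for illustration.

total_uploads = 1_000_000      # hypothetical videos scanned
propaganda = 1_000             # hypothetical number that really are propaganda
detection_rate = 0.94          # claimed fraction of propaganda caught
false_positive_rate = 0.00005  # one possible reading of "99.995% accuracy"

true_positives = propaganda * detection_rate
false_positives = (total_uploads - propaganda) * false_positive_rate
precision = true_positives / (true_positives + false_positives)

print(f"Caught {true_positives:.0f} of {propaganda} propaganda videos")
print(f"Falsely flagged {false_positives:.0f} innocent videos")
print(f"Chance a flagged video really is propaganda: {precision:.1%}")
```

Under those assumptions the tool looks decent (about 50 false flags per million uploads), but the same 0.005% error rate applied to a billion uploads means roughly 50,000 false flags, which is exactly why the base rate matters.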
Nope. If, as suggested, this is used by social media platforms to detect and remove ISIS propaganda, then it's analysing uploads, not 'intercepting', Tempora-style.
And if it works as advertised, it'll auto-detect and auto-delete a very large proportion of unacceptable uploads, and flag remaining borderline cases for human intervention. That, hopefully, turns the task from unmanageable numbers to manageable ones.
This is analogous to the spam-detecting routines we use here (which we WILL NOT discuss in any detail), beyond saying that we catch a high percentage automatically, and occasionally have to manually correct false positives.
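If it does work like spam filtering, the triage step presumably looks something like this minimal sketch; the thresholds and the scoring function are my own invention, not anything ASI has described:

```python
# Minimal sketch of threshold-based triage, analogous to spam filtering.
# Thresholds and the score function are invented for illustration.

AUTO_REMOVE = 0.99   # near-certain matches are removed automatically
HUMAN_REVIEW = 0.50  # borderline scores go to a moderator queue

def triage(video, score_fn):
    """Route an upload based on the classifier's confidence score."""
    score = score_fn(video)  # 0.0 (clean) .. 1.0 (propaganda)
    if score >= AUTO_REMOVE:
        return "remove"
    if score >= HUMAN_REVIEW:
        return "review"  # flagged for human intervention
    return "publish"
```

The point of the two thresholds is exactly what's described above: the bulk is handled automatically, and only the borderline slice lands on a human's desk.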
About 15 years ago, I started using (not here) voice dictation software for bulk dictation, including using a mini-dictating machine then uploading files for subsequent analysis. It was, for me, about 95% accurate out of the box, with a minimal training exercise (about 10 minutes). Within a week, it was hitting 99.x% accuracy, and queried me on the remaining fraction of a percent.
And that was 15 years, maybe more, ago. Since then, processing power has increased hugely, and so has image and video recognition. For a start, face recognition has obvious security uses .... and some intrusive ones, like in-store. Even consumer software can "look at" photos and identify flowers, mountain landscapes, animal portraits, etc. And, of course, given a database of reference works, copyright-infringement software is pretty good at analysing even parts of protected images, and anti-cheat software detects plagiarised written work.
It's not much of a stretch to imagine purpose-built AI analysing pre-existing videos and audio files and building a "fingerprint". If, as has been asserted, certain websites can analyse a dozen or so "likes", or a dozen or so purchase transactions, and identify with pretty high certainty how an individual is likely to vote, then it's not hard to see AI looking at common characteristics: not just the words used, but word weighting and, when analysing audio files, even emotional loading, differentiating between, say, a legit TV discussion programme talking about "Jihad" and a fanatic on a training video calling for it.
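As a toy illustration of that word-weighting idea (the training data below is invented and absurdly small, purely to show the mechanism, not to suggest this is how the Home Office tool works):

```python
# Toy sketch: the same word ("jihad") scores differently depending on the
# weight of the words around it. Training data is invented and tiny.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "panel discussion on the history and meaning of jihad",
    "academic lecture examining extremist recruitment online",
    "join the fight brothers take up arms now",
    "instructions for attack glory awaits the faithful",
]
labels = [0, 0, 1, 1]  # 0 = legitimate discussion, 1 = extremist content

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict_proba(["news report discussing jihad and radicalisation"]))
```

A real system would need vastly more data and would work on audio and video features, not just transcripts, but the weighting principle is the same.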
If they can hit the claimed rates now, then in a year or two, such AI analysis will be much better, and a year or two after that, much MUCH better.
Personally, with such AI software, I wonder which is the chicken and which the egg. Either this anti-ISIS detection capability was developed from commercially-oriented AI analysing, well, us, or commercial use will, as night follows day, follow from the governmental use, if it was developed for governments.
Now, donning my tinfoil hat for a minute, if data analytics companies can predict voting preferences from a few purchases or 'likes', what could they tell from bulk audio, or worse yet, video recordings???
But consumers would never allow mass surveillance, mass recordings of voice data, would they? They'd never allow always-on microphones on a device they carry everywhere, or actually buy such devices to sprinkle around the inside of their homes, would they?
So, companies like Google, Amazon, Microsoft, etc, couldn't conceivably get their hands on data to analyse. So we're all okay.
Oh, wait .....
Whether it's intercepted via normal internet use or not, they still need to data-mine what is passing through the system, and that is a ton of data to sift through on social media. That includes everything from PCs to mobile phones. So serious computing power on the back end.
Tempora and its related systems would need to use algorithms to identify what is "problematic", and so would this "AI". It would be nigh on impossible to store everything that passes through the system on a daily basis, so you need to cut down what needs to be stored or analysed. Tempora, like this "AI", would use a form of deep learning. So in both cases, the systems would need to be trained to cut out false positives, and only intercept/store/block stuff which is deemed to fall within certain parameters, and the parameters would be refined as more and more data passes through (training the network).
So for me this is just an extension of the same tech. In fact, I would like to know where the back end of all of this is hosted. Is it GCHQ?? ISPs? Google? Facebook?? It has to be something UK-specific, as I can't see other countries wanting a UK-based system monitoring their citizens when they would rather do it themselves!! :p
I don’t think Google or Facebook are particularly strapped for computing power. I think the idea is that the platforms themselves apply the tool.
I am all for security, but things like this always start out (pretending to be) well-intentioned and slowly get corrupted (or rather, completed).
It is the start of monetising the entire internet. One day I can see the internet being almost as limited as TV and radio, except all the subscriptions to your favourite sites will end up costing more in total, just to add insult to injury. The good times are nearly over, boys and girls.
In the UK, the Tories have killed tens of thousands due to welfare cuts. Terrorists have killed how many?
Perhaps we are looking to block the wrong kind of terrorist.
Now we have an AI that knows how to behave like a terrorist...I for one welcome our AI Overlords
The thing is, that will probably just mean they'd use other platforms. Banning something rarely addresses an issue and often leads to casting the net ever wider; the blocking of websites in the UK is a perfect example of how the scope of what's banned has increased over the years.
Don't get me wrong, anyone who advocates violence as a means to an end is a thoroughly reprehensible person. Historically, though, banning things has only really resulted in brushing the problem under the carpet, and that's before we even get into how various governments throughout the years have labelled some very unexpected people and groups as terrorists and/or extremists.
Not particularly, but when we're talking about the kind of upload volume YouTube gets (300 hours' worth every minute, practically all of it at least 1080p, a whole whack of it 4K), you're talking about a serious amount of video to sift through, and any A.I. of this nature must be seriously computationally expensive. And then there's the whole issue of the happy-go-lucky attitude of the government when it comes to censorship, while they gleefully tap this wedge deeper into place.
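For a sense of scale, here's a back-of-the-envelope estimate using that 300-hours-per-minute figure; the per-hour analysis cost is a pure guess on my part:

```python
# Rough scanning-workload estimate. The upload figure is the oft-quoted
# YouTube rate; the GPU cost per video-hour is an invented assumption.

upload_hours_per_minute = 300
hours_per_day = upload_hours_per_minute * 60 * 24  # 432,000 hours/day

gpu_seconds_per_video_hour = 60  # assumed: one GPU-minute per hour of video
gpu_hours_per_day = hours_per_day * gpu_seconds_per_video_hour / 3600

print(f"{hours_per_day:,} hours of video uploaded per day")
print(f"~{gpu_hours_per_day:,.0f} GPU-hours/day, "
      f"i.e. ~{gpu_hours_per_day / 24:,.0f} GPUs running flat out")
```

Even with that generous assumption, it's hundreds of GPUs running continuously just to keep pace with YouTube alone, before peak loads or model retraining.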
Running this program will be a lot cheaper than manually moderating that volume, and for any new YouTube-like site it'll be a hell of a lot cheaper than developing their own version.
There will be other platforms, which is why the government hasn't just got Facebook or YouTube to make their own algorithm and called it a day. By putting this out there, they're making it easier for the other, smaller sites to also catch this content before they get infested.
The headline for this story is misleading; it should be:
“UK Home Office wants social media companies to use AI to block terrorist propaganda”
Indeed, and there was an interesting interview with one of the developers on the BBC 10 o’clock news, where he stated that if, as a result of the technology, the material can only be accessed by using TOR networks and multiple passwords to find a hidden site, they regard that as a win, because it prevents easy access and the easy sharing of links.
If this works the way it's being implied, it has nothing to do with Tempora or system-wide intercepts. It's simply a tool made available for social media companies to install on their servers, and uploads to that platform are then 'scanned' before going live. It is analogous to installing AV software on your PC that analyses anything you download before saving it, and quarantines it if you get a hit.
How much of a back-end system is needed depends on the volume, and perhaps size, of the files being uploaded to social media sites.
It's hard to see exactly what's implied because nobody is saying quite how it works, but the statement that the AI is "trained" with reference to thousands of hours of existing extremist material suggests known files will be fingerprinted and compared AV-style, while unknown vids have to be analysed according to whatever criteria the AI learned, and presumably that will be an ongoing exercise.
Also, for now, usage will be voluntary, but ultimately it could become mandatory, presumably for sites hosting uploaded video. As ever, it's a balance between the negatives of enforcing such a regime versus the negatives of an unlimited ability of extremists to upload terrorist recruiting or training materials.
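Reading between the lines, a plausible (and entirely speculative) two-stage design would be a fast fingerprint lookup for known material with a classifier fallback for novel uploads; everything in this sketch, including the 0.9 cut-off, is assumed:

```python
# Speculative sketch of the two-stage approach suggested above: known files
# matched by fingerprint (AV-style), unknown files passed to a learned model.
import hashlib

known_fingerprints = set()  # would be populated from the training corpus

def fingerprint(data: bytes) -> str:
    # A real system would use a perceptual hash robust to re-encoding;
    # a plain SHA-256 only catches byte-identical copies.
    return hashlib.sha256(data).hexdigest()

def scan(data: bytes, classify) -> str:
    if fingerprint(data) in known_fingerprints:
        return "block"  # exact match against known extremist material
    # Fall back to whatever criteria the model learned in training
    return "flag" if classify(data) > 0.9 else "allow"
```

The fingerprint stage is cheap, which matters at upload scale; the expensive learned analysis only runs on material the system hasn't seen before.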
Will it, though? The existing moderation system works by checking out user-flagged content; it's an implicitly permissive system, and only a tiny fraction of a percent of video will be checked out. Whereas the idea behind the A.I. is that all content should be checked as it's uploaded, and deferred to a moderator when it flags something.
I wasn't talking about the expense of the software, I was talking about how much hardware it'd need to keep up with YouTube's growth alone. Even if this software is optimised with tensor hardware in mind, which I doubt, it'd need a serious amount of hardware.
I expect the developers of the application understand the internet very well! :)
True, but a ‘private’ web server on the internet is easy to track and take down. It isn’t easy to take down FB and the like. The point of this software is to automate screening for content providers like FB and YouTube, and potentially other server hosting services.
You're assuming smaller sites will want to catch this content, and even care what UK laws or governments want them to do, something even recent history has shown us doesn't necessarily hold true: people set up an alternative to Reddit because they couldn't be misogynistic dicks.
Besides, most things I've read on studies into online radicalisation show that's not where it starts. By the time someone is searching for and consuming extremist material, they've already been radicalised, and this sort of material is just feeding into their own confirmation bias. At best, a solution like this will reduce accidental views and make research into terrorist and extremist groups harder.
It seems like the new, but old, buzzword is AI.
And the new thing to blame for what ails society, and thus needs to be banned, is stuff on the internet; previously it's been books, music, art, films, television, radio, and video games.
My money is on this AI being trigger-happy with shutting down stuff. I commend the idea of preventing terror and radicalisation, but I predict the AI would be tyrannical in practice.
The AI can't really be trigger-happy, from what I understand, as it's just putting a probability number on a video. What would be trigger-happy, and very likely to decline over time, is where they draw the line on what should and shouldn't be allowed: initially the bar will be set at videos flagged with a 90% probability of being terrorist material, and when that lets some through they'll lower it to 80%, 70%, 60% and so on.
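You can put rough numbers on that scope-creep worry; the score distributions here are invented purely to show the shape of the trade-off:

```python
# Invented classifier scores illustrating what happens as the bar is lowered:
# a little more bad content is caught, but legitimate material gets swept in
# disproportionately fast.
extremist_scores = [0.95, 0.91, 0.88, 0.72, 0.65]
legitimate_scores = [0.85, 0.70, 0.62, 0.55, 0.40, 0.30, 0.20, 0.10]

for threshold in (0.9, 0.8, 0.7, 0.6):
    caught = sum(s >= threshold for s in extremist_scores)
    collateral = sum(s >= threshold for s in legitimate_scores)
    print(f"threshold {threshold}: {caught}/5 extremist, "
          f"{collateral}/8 legitimate videos flagged")
```

Where the line gets drawn is a policy choice, not a property of the model, which is rather the point.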
Can the Home Office please block Home Secretary Amber Rudd's scaremongering and bigoted propaganda
Indeed, this is pretty much guaranteed to be used for political censorship. For our own good, of course, because politicians are never corrupt, or even wrong.
I am convinced that the primary purpose of this technology will be political censorship in Europe and the USA, with prevention of terrorist attacks coming in a distant second. Ideologues on this very comment thread are already suggesting it! All for our own good, of course.
User flagging doesn't work at keeping this sort of thing off social media. The cost of this will scale linearly with the upload rate, just like manual moderation, and given YouTube already scans videos for copyrighted music/video, they can manage it.
If anyone sets up a jihad-tube website, that'll be an easy target to block (just like they blocked Pirate Bay). The difficulty is legitimate social media apps like Telegram, where a blanket ban is disproportionate, and now they can easily implement this tool to filter extremist videos.
The actual appeal process and false-detection rate are down to how the social media site deals with flagged videos.
That is a bold statement, care to back up that assertion?
The blocking of piracy-related websites is easy to circumvent, and any blocking of a jihad-tube website would be equally easy to circumvent. As I said in another post, about the only thing this technology will prevent is accidental viewing and legitimate research. There's also a risk that the bar will slowly be lowered over time so it catches more than just terrorist and extremist material.
Besides, studies show people are not radicalised online; if they're seeking out terrorist and extremist material online, they've already been radicalised.
Hmmm. 49 people killed by terrorists since 2010; in the same time, 120,000 deaths are linked to Tory policies.
http://www.independent.co.uk/news/he...-a8057306.html
Constant erosion of privacy and civil liberties in the name of security and safety. Personally, I know where I would want this money spent. What will happen is that the communication methods used will be further buried and will become more difficult to deal with.
I guess so, but I wonder how many people, or to what extent, the things terrorists do actually induce fear. Speaking personally, I'm more fearful that I'll be involved in a train or car crash than a terrorist incident. They're upsetting, saddening, and I wish they didn't happen, but fear is not what they instil in me.
The most effective way to destroy terrorism is to ban media and press from giving anything beyond brief, bare-bones coverage: no going on about each outrage non-stop, ad nauseam, repeatedly showing the same sensationalising footage, and interviewing any poor fool daft enough to stand in front of a camera instead of telling the idiot reporter with the usually moronic questions precisely where to shove it.
Instead the news should be "Bomb/shooting/whatever in x-location today at aa:bb o'clock. Contact xxx-yyyy if you are concerned about relatives. ... And now for the sports news".
Terrorism doesn't cause much terror unless the media do their job for them.