Virtualization-Based Security is going to be a standard feature on new Windows 11 PCs.
What frustrates me is that there doesn't seem to be an easy way to exempt an application from VBS, so it's either on or off globally. That's doubly frustrating because VBS is an exceptional way to isolate and protect applications and the kernel and to stop malicious software hopping around between internal resources on the system.
I would love to enable this globally, but without an easy (or any) way to say "this software/executable path does not need VBS" it's a complete non-starter for me. About the only visibility you get today is the global state, as sketched below.
Again, an excellent feature and security push by MS, but poor execution that will ultimately harm the user.
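For anyone wanting to see where they currently stand, here's a rough sketch of reading the global state - the only granularity that's exposed. The Win32_DeviceGuard WMI class and its namespace are documented by Microsoft; the Python wrapper around PowerShell is just my own illustration, not anything official:

    # Rough sketch: read the global VBS state via the documented Win32_DeviceGuard
    # WMI class (there is no per-application toggle to query). Windows only.
    # VirtualizationBasedSecurityStatus: 0 = disabled, 1 = enabled, 2 = enabled and running.
    import json
    import subprocess

    def vbs_status():
        cmd = ("Get-CimInstance -ClassName Win32_DeviceGuard "
               "-Namespace root\\Microsoft\\Windows\\DeviceGuard | "
               "Select-Object VirtualizationBasedSecurityStatus, SecurityServicesRunning | "
               "ConvertTo-Json")
        result = subprocess.run(["powershell", "-NoProfile", "-Command", cmd],
                                capture_output=True, text=True, check=True)
        return json.loads(result.stdout)

    if __name__ == "__main__":
        print(vbs_status())  # e.g. {'VirtualizationBasedSecurityStatus': 2, ...}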
Microsoft with a badly executed good idea, well there's a first...
Oh no, wait a minute....
Hehe
The problem is that as soon as you allow exceptions, you're opening a vector for something malicious. You know that as soon as there's a dialog to allow exceptions, someone will either script it or trick a user into ticking it. I may just be getting old, but I've given up on the idea that users can be educated. Most just don't want to know and will click anything.
It's the tough balancing act between security and usability, and there will always be ways to circumvent it, whether through automation or by tricking a user.
However, a line does have to be drawn, and on something like this the usability impact comes with no recourse except wholly disabling the security feature. That is unacceptable.
There's a question of how much usability is actually impacted, though. If the degradation is smaller than the performance improvement GPUs gain each generation, you could make the case for simply waiting a generation, should that difference cross the threshold of usability.
I'd be interested to know what hardware mitigations could be put in place in the future to mostly eliminate the losses, though - it sounds like there's a bottleneck somewhere that some acceleration on either the CPU or GPU side might be able to help with.
As Tabbykatze said, not all software/processes require or benefit from VBS, so the 'all or nothing' approach doesn't make a whole lot of sense.
Time to let the early adopters suss things out on real-world hardware... though the gaming angle could yet turn out to be a storm in a teacup, as the latest preview clean-installs in VMware with the VBS / Memory Integrity feature disabled by default.
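If anyone wants to check what those preview installs actually left them with, here's a read-only sketch using the documented Memory Integrity (HVCI) registry location - the helper function is just my own illustration:

    # Read-only sketch: report whether Memory Integrity (HVCI) is enabled, based on
    # the documented DeviceGuard registry location. Windows only; reading needs no
    # elevation.
    import winreg

    HVCI_KEY = (r"SYSTEM\CurrentControlSet\Control\DeviceGuard"
                r"\Scenarios\HypervisorEnforcedCodeIntegrity")

    def memory_integrity_enabled():
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, HVCI_KEY) as key:
                value, _ = winreg.QueryValueEx(key, "Enabled")
                return value == 1
        except FileNotFoundError:
            return False  # key absent -> feature never configured

    print("Memory Integrity enabled:", memory_integrity_enabled())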
In our org we do a lot of deep learning split between Linux and Windows: a lot of interim testing of training scripts is done on the Windows systems and then ported over to Linux (ported as in transferred; we use architecture-agnostic methods). If VBS also detrimentally impacts performance for those users, am I going to be in the bad position where VBS severely hampers our ML engineers running Windows, meaning I'll have to run part of the business in a heightened security mode and the rest in a poorer security posture?
I haven't looked into any benchmarking of ML methods and the impacts yet, but it does depend on how far the impacts extend. Performance becomes a usability problem once it passes a certain acceptability threshold: something like 1-10% would be "ah well, work with it", at 10-20% serious questions start getting asked, and over 20% it becomes a quandary.
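Just to put numbers on those bands, a toy illustration (entirely my own thresholds from above, nothing official) that turns a VBS-off vs VBS-on measurement into a verdict:

    # Toy illustration: bin the relative slowdown from a VBS-off vs VBS-on
    # measurement into the acceptability bands described above.
    def vbs_verdict(score_off, score_on):
        loss = (score_off - score_on) / score_off * 100
        if loss <= 10:
            return f"{loss:.1f}% loss - ah well, work with it"
        if loss <= 20:
            return f"{loss:.1f}% loss - serious questions get asked"
        return f"{loss:.1f}% loss - quandary territory"

    print(vbs_verdict(100, 72))  # roughly the 28% worst case from the article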
I would be interested in seeing whether outfits like Phoronix, Lambda, Puget and SiSoft see a quantifiable difference between VBS on and off in use cases beyond just gaming. I may have a look myself when I have longer than the two minutes I spend popping onto Hexus while a spinning loader finishes.
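Something like this quick-and-dirty timing loop is the kind of check I have in mind - run the same script with VBS on and then off and compare the medians. The NumPy matmul is just a stand-in for a real training step, so swap in whatever workload is actually representative:

    # Quick-and-dirty sketch: time a repeatable compute workload so the same run can
    # be compared with VBS enabled and disabled. NumPy matmul stands in for a real
    # ML training step.
    import statistics
    import time
    import numpy as np

    def time_matmul(n=2048, repeats=10):
        a = np.random.rand(n, n).astype(np.float32)
        b = np.random.rand(n, n).astype(np.float32)
        samples = []
        for _ in range(repeats):
            start = time.perf_counter()
            np.matmul(a, b)
            samples.append(time.perf_counter() - start)
        return statistics.median(samples)

    print(f"median matmul time: {time_matmul():.4f}s")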
From the article it is not clear if the performance impact of up to 28% in FPS is present on all CPUs or just the ones that do not have the required hardware.
My understanding is that only older hardware will be affected, so not a problem for more modern CPUs.
I agree about the article's clarity, but having followed the links you can find that PC Gamer tested on an Intel 10700K, so not very old hardware.
And do you need that performance to keep scaling in future years? The slightly dumb question I'm asking is whether a 20% cut of 2022/23's (presumably a realistic Win11 adoption timeframe) GPU performance would still be an issue, and whether, as GPU performance increases, the load you're using it for is likely to increase as well.
My guess is that, internally, the justification is similar to the justification for UAC. The difference is that the fix for this one might actually be done in hardware (if the impact comes from the GPU rather than the CPU).
UAC was very unpopular when it was first introduced and almost everyone hated it. However, it was necessary, because software was consistently and completely unjustifiably requiring administrator privileges to run. The way that was fixed was to annoy users into annoying the companies developing the rubbish software that demanded admin privileges.
This is slightly different - it's not that GPU hardware is poorly made. But by forcing the feature on by default (in certain circumstances), it will get people talking about the impact and how to reduce it.
If it were configurable on a per-application basis, all the software vendors would do is instruct users on how to disable the feature.
"In a perfect world... spammers would get caught, go to jail, and share a cell with many men who have enlarged their penises, taken Viagra and are looking for a new relationship."
If you follow the links in the article to PC Gamer, it says they used a 10700K, which is on Microsoft's list of supported processors.
The ComputerBase link apparently translates to a Threadripper 3970X, again on the supported-processors list.
Ninja edit: beaten to it by kalniel.
My only response to that is that it isn't 2022/2023 yet, so talking about a potential future performance increase offsetting the current-generation decrease is a moot point and pure conjecture at best. Right now it "could" (because there haven't been any major benchmarks done) affect our day-to-day performance in the business. And if I follow your reasoning, I may have to spend up to £30,000 buying equivalent hardware in newer generations just to get the performance back, which is untenable. I do get the root of what you're saying - that in the years to come the immediate deficit will be offset - but that is years away, and I have to deal with the problems of the now first and plan for the future second.
I do agree with your final point that that is where they will go. It's similar to software constantly saying "you must disable your AV to allow it to install properly", which I read as "my software has been written badly, could be intercepted by your AV or disrupted by on-access scanning, and I have not written proper post-install checks and validation".
However, I would still generally require the ability to do piecemeal virtualisation, because as invisible as it may be to the software, something will inevitably break because of it - by its very nature it is another hurdle between software and OS.