It is guiding motherboard makers to remove support from their UEFI firmware.
In a way, I suppose this is a good thing. Most of my booting issues (I run a few Linux distros & Windows 10) are related to accidentally installing in UEFI mode and then trying to boot in BIOS mode, so this'll certainly stop that (as well as force me to finally learn how to GRUB for UEFI).
On the other hand, I don't understand the rationale behind "kill it to death, even though it's disabled by default". If it's disabled by default, the only people who are going to be using it are those who know what they're doing and are enabling it because they need to.
Also, and finally, this is going to make the entry into hobby OSDev'ing much, much harder on real hardware. Sure, you can do it in a VM, but nothing beats that feeling of seeing it run on real hardware.
They could even remove AMT while they're at it and secure the BIOS even more!
I do love the bit about the CSM apparently "exposing security risks". Because UEFI support has been 100% flawless. Or how about that complete access to network functions in the UEFI network stack. Surely, that's no risk at all. We also shouldn't forget that one of the main reasons for pushing UEFI (which I actually like, btw) is that "old school" BIOS development is slowly becoming a lost art.
I don't entirely agree. The problem with learning OSDev on ARM-based hardware is that there is no real standard for peripheral hardware access via software (I'm talking about anything outside of the CPU). That said, learning to write a kernel for ARM-based SOCs *is* a useful thing to put on your CV. But how many people doing OSDev are doing it purely for their CV?
My view is that x86 (and x86-64) is likely the easiest architecture to get into. There's so much documentation available, the peripheral hardware is standardised across hardware implementations (you know you'll have PCI, PCI-E or ISA buses to deal with and where to go looking for them), and there's so much knowledge of x86 out there (you definitely want to consult the people who were x86 OSDev'ing in the late-90s to early 2000s - they've been through some crazy stuff).
I *highly* recommend you try it on hardware as soon as you can. You come across behaviour you'd never encounter in a VM, but it is vital that you take account of it. I found that out the hard way with my OS's AHCI driver: it worked wonderfully in QEMU and VMware, but as soon as I ran it on my laptop it would fall flat on its face. I found out 3 years later (a few months ago) that I wasn't flushing the CPU's cache before telling the AHCI controller to perform a DMA read. So on real hardware it was reading stale junk from the cache, but in a VM it was fine.
I know where you are coming from with that. I cut my OS teeth on 8- and 16-bit platforms which had excellent documentation (like the Dragon 64 and Atari ST), though on the downside you needed more assembler skills in those days than you need now.
However there are a bunch of pins on the Pi that you can write to, you can get a cheap Banana Pi that has a SATA port if you want to play with that. If you enjoy it there are probably a dozen or so jobs around here doing bare metal ARM programming, I have had just one on x86 and tbh it wasn't fun.
I envy you - I would have loved to have started OSDev'ing back then. Unfortunately that was a long time before I even existed.
That's true. If you want to do more hardware hacking with your OS then a Pi would be a good place to start. Haha I'd love a bare metal job, but they all seem to be already taken. I'll have to settle for GUI programming at this point.
We don't need OS developers. By the time this happens, driverless cars will be all over the roads, and if you need an OS, you will simply Ask Alexa, specifying the hardware base, and provided SkyNet authorises it, Intel's OS-DevAI will write it for you in 7 nanoseconds, for the modest cost of a decade or two reduction in your euthanasia clock.
Welcome to the future.
And yes, I'm in a funny mood today.
Why wait? Get rid of it SOONER.
I guess there's going to be no Intel in my future then. It will be a cold day in hell when I hand over control of my computer to Intel's Management Engine and Exploit-Ridden Firmware.
True, but it's a lot easier (afaik) to neuter on a BIOS-based (read: older) system.