Instead, chip manufacturers will turn to other means of boosting density.
So, is it finally time to adopt BTX cases with better cooling so we can cram more chips in? Because if the chips stop shrinking, then people's hunger for more power will mean computers need to get bigger. I look forward to playing games on a PC the size of a Cray X-MP.
3D technologies, including layers of computation embedded in memory, will be great. The only problem is that they're also expensive to manufacture. I guess this becomes viable once the cost of further feature-size shrinks goes hockey-stick.
Fabrication efforts aside, the other problem is power - we struggle with the thermal envelope of a single layer of transistors, but when we have multiple layers, there's potentially a lot more power to dissipate, and some of it is stuck in the middle of the chip.
We might push more compute to remote services. Phones and devices just become thin clients.
What does that solve? You still want to achieve performance improvements on the compute, wherever it is. In a DC you get the benefit of things like high ambient temperatures and liquid cooling, but that only gets you so far. If you're not able to double your compute capability in the same physical space every couple of years, then you're in a sticky situation.
And MULTICS is born; welcome to the 1960s.
Part of me thinks I have heard all this before, too many times, only to see scaling get rescued by things like immersion lithography. It does sound more like the end of the lithography road this time, though; there just aren't enough atoms left.
There must still be some tricks to pull, though; 5nm on SOI may be better than bulk 5nm, for example, if it can be done.
OTOH, phones etc. should now be fast enough with current technology. Perhaps people should just write better software; you can get orders of magnitude better performance if you go back to writing in C and tiling your code to work in L1 cache, rather than using networked Java interfaces and wasting most of your cycles converting data to XML and back.
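For anyone who hasn't met the term, here's a minimal sketch of what "tiling your code to work in L1 cache" means, using a hypothetical blocked matrix multiply in C (the function name, the TILE value, and the zeroed output are all assumptions for illustration, not anyone's actual code). The point is just that the inner loops work on small blocks that stay cache-resident instead of streaming the whole matrices through the cache on every pass:

```c
#include <stddef.h>
#include <string.h>

/* Assumed tuning parameter: chosen so a few TILE x TILE blocks of
 * doubles fit in a typical ~32 KiB L1 data cache. */
#define TILE 64

/* C = A * B for n x n row-major matrices, blocked into tiles. */
void matmul_tiled(size_t n, const double *a, const double *b, double *c)
{
    memset(c, 0, n * n * sizeof *c);  /* output is accumulated into, so clear it */

    for (size_t ii = 0; ii < n; ii += TILE)
        for (size_t kk = 0; kk < n; kk += TILE)
            for (size_t jj = 0; jj < n; jj += TILE)
                /* These inner loops touch only three small blocks,
                 * which stay hot in L1 and get reused many times. */
                for (size_t i = ii; i < ii + TILE && i < n; i++)
                    for (size_t k = kk; k < kk + TILE && k < n; k++) {
                        const double aik = a[i * n + k];
                        for (size_t j = jj; j < jj + TILE && j < n; j++)
                            c[i * n + j] += aik * b[k * n + j];
                    }
}
```

On matrices too big for cache, this kind of blocking alone is commonly worth a several-fold speedup over the naive triple loop, which is the sort of headroom I mean.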
That'll never work thanks to Wirth's/May's law:
Software efficiency halves every 18 months, compensating for Moore's Law.
One of these days I will dig out a 1990 PC OS and see how fast it boots... if it boots.
Boot time seems remarkably consistent over the years, though what we're booting into is ever more complex. My only immediate point of reference is my old AMD X2 XP-based machine that serves an offline purpose at the in-laws' - it booted faster, and felt faster in operation, than modern computers many times its speed.
I was thinking something like Windows 98 SE: new enough that it might stand a chance on modern hardware, but old enough that my modern CPU has more L3 cache than a Win98 machine would have expected to see as main memory.
I think I have a Red Hat Linux CD from that era as well; that might be more usable.