Something I've noticed after moving from a soldered-die Phenom II to a Core i7, with its much higher heat density and thermal paste under the heatspreader, is how poorly configured the stock fan profiles seem to be for high-performance cooling solutions.
Let me explain. With something like my Phenom II, with its fairly large soldered die and decent thermal transfer between the heatspreader and heatsink, the junction temperature stayed fairly close to that of the heatspreader thanks to the low thermal resistance path, and the same was true between the IHS and the heatsink (which is the whole point of the IHS). The reported junction temperature was therefore a fairly good representation, with an offset of course, of the temperature of the heatsink itself, which meant it was directly useful for controlling fan speed. Put another way, if things started to get warm, increasing the fan speed was likely to actually cool everything down.
However, this isn't quite so straightforward when you look at something like a Kaby Lake i7. Load it up to 100% and the junction temperature rockets up almost instantly to, say, 70C, yet temperatures from the heatspreader onwards are barely lukewarm. That high thermal resistance step between the die and IHS is throwing a spanner in the works. Fan profiles still see this junction temperature and whack fan speed up to hairdryer levels, and many people seem to be running out and spending far more money on things like AIO coolers to remedy this, but in many cases that seems wholly unnecessary. As an analogy: you have an obstruction in your plumbing and you're 'solving' it by installing wider and wider pipes after the obstruction. CPU TDPs haven't changed much for a long time, yet ever more expensive CPU coolers seem to be becoming almost the norm! We really shouldn't have to spend £100+ on a cooler just to maintain reasonable noise levels on a stock system! I almost feel like the odd one out for thinking £25 is more than enough to cool a stock CPU adequately.
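To put rough numbers on that thermal resistance step, here's a minimal series-resistance sketch. The resistance values are made-up ballpark figures purely for illustration, not measurements of any specific CPU; the point is what changes when only the die-to-IHS interface gets worse:

```python
# Illustrative only: these thermal resistances (K/W) are assumed ballpark
# figures, not measured values for any particular CPU.
def junction_temp(t_heatsink, power_w, r_die_to_ihs, r_ihs_to_sink):
    """Steady-state junction temperature from a simple series thermal-resistance model."""
    return t_heatsink + power_w * (r_die_to_ihs + r_ihs_to_sink)

# Same heatsink temperature and same 90 W load; only the die-to-IHS step differs.
soldered = junction_temp(35.0, 90.0, r_die_to_ihs=0.10, r_ihs_to_sink=0.15)
pasted   = junction_temp(35.0, 90.0, r_die_to_ihs=0.45, r_ihs_to_sink=0.15)
print(round(soldered, 1))  # 57.5 - junction tracks the heatsink fairly closely
print(round(pasted, 1))    # 89.0 - big jump across the paste; heatsink barely warmer
```

With the pasted interface, the heatsink sits at the same 35C in both cases; the extra ~30C lives entirely in the die-to-IHS step, which no amount of airflow can touch.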
But back to fan noise: because of this thermal 'obstruction', just ramming more air through the heatsink fins probably isn't making a whole lot of difference if the heatsink is already stone cold! Motherboard manufacturers have access to all of these CPUs, so it would be much better IMO if they could be a bit more intelligent about how their fan profiles work, increasing fan speeds only when it will actually make a significant difference, e.g. in line with estimated heatspreader temperature. Now, of course we don't usually *have* a sensor on the heatspreader, but we can ballpark it with an offset to the junction temperature. E.g. if we see an idle of 30C and a near-instant jump to 70C under load, we can be fairly confident that there's a ~40C delta across the die-to-IHS step, so the fan ramp should only start above this initial on-load temperature.
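A fan curve along those lines could look something like the sketch below. The 70C ramp-start, the 30% floor and the 100C Tj,max are assumptions taken from the example figures above; any real implementation would want these measured per system:

```python
# Hypothetical fan curve: hold a minimum duty until the junction temperature
# clears the near-instant on-load jump (where the die-to-IHS delta, not the
# heatsink, is the bottleneck), then ramp linearly towards Tj,max.
def fan_duty(t_junction, ramp_start=70.0, tj_max=100.0, min_duty=30.0):
    if t_junction <= ramp_start:
        return min_duty  # extra airflow wouldn't help much below this point
    frac = (t_junction - ramp_start) / (tj_max - ramp_start)
    return min(100.0, min_duty + frac * (100.0 - min_duty))

print(fan_duty(45.0))   # 30.0 - idle and light load: stay quiet
print(fan_duty(72.0))   # just past the on-load jump: barely above the floor
print(fan_duty(100.0))  # 100.0 - full speed as we approach Tj,max
```

The key difference from a typical stock profile is that the entire 30-70C band maps to the quiet floor, rather than being spread across the whole fan range.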
I've had a play with this myself, and reducing the fan speed from 100% around this point right back to idle speed makes barely any difference to load temps, so what exactly is the point of putting up with irritating fan noise?
When the tjmax of a CPU is 100C, does it really matter whether long-load temps are 75C or 78C, in exchange for much lower fan noise and/or saving the cost of a new cooler? Food for thought...
It would be interesting if there were an automated way of ramping through fan speeds with the CPU under load, both to demonstrate this better and to try it on individual systems, so people could make a better-informed decision about their fan curves. Any thoughts on how we could try this?
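One rough way to try it on Linux is via the kernel's hwmon sysfs interface, where fan PWM files take 0-255 and temperature files report millidegrees. A sketch, with plenty of assumptions: the hwmon paths below are placeholders (find yours under /sys/class/hwmon), pwm1_enable needs setting to manual mode first, it needs root, and you'd run a steady synthetic load (e.g. stress-ng) in another terminal for the whole sweep:

```python
# Rough sketch: step the fan down through duty levels while the CPU is under
# a steady load, logging the settled temperature at each step.
# The hwmon paths are ASSUMED - locate the right ones on your own system.
import time

PWM_PATH  = "/sys/class/hwmon/hwmon2/pwm1"         # assumed fan PWM node (0-255)
TEMP_PATH = "/sys/class/hwmon/hwmon2/temp1_input"  # assumed CPU temp (millidegrees)

def duty_to_pwm(duty_percent):
    """Map a 0-100% duty cycle to the 0-255 range hwmon pwm files expect."""
    return max(0, min(255, int(duty_percent * 255 / 100)))

def sweep(duties=(100, 80, 60, 40, 20), settle_s=120):
    """Step through duty levels, letting temperatures settle before sampling."""
    results = []
    for duty in duties:
        with open(PWM_PATH, "w") as f:
            f.write(str(duty_to_pwm(duty)))
        time.sleep(settle_s)  # wait for a rough steady state at this fan speed
        with open(TEMP_PATH) as f:
            temp_c = int(f.read()) / 1000.0
        results.append((duty, temp_c))
        print(f"{duty}% fan -> {temp_c:.1f}C")
    return results
```

If the logged load temps barely move as the duty drops, that's a good sign the heatsink isn't the limiting step and the quiet end of the curve is safe to use.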