Overclocking on Devil’s Canyon

There is a strange dichotomy in Intel’s product line. On the one hand, Intel offers a CPU engineered for better overclocking and encourages their motherboard partners to invest in overclocking features, but on the other, the warranty is technically void if the CPU fails while overclocked. A fair number of regular end-users (leaving business aside), such as my father, who knows how to build a computer but not enough to overclock, can be concerned about overclocking and warranties.

To put the concept of 'overclocking death' into perspective: I have been an amateur overclocker for almost a decade, competing in national and international competitions both live and over the internet. I mostly focus on air/water (i.e. 24/7 system) overclocking, especially when it comes to AnandTech CPU and motherboard reviews. Out of the 250+ CPUs I own, I have only ever had one CPU fail, and that was because I thought I had a different processor in the system and input inappropriate numbers. Those were not random numbers; I simply recalled an erroneous list from memory and past experience, and not something a user overclocking a single system would end up with. As part of our motherboard reviews here at AnandTech, I try to offer a scale showing how an overclock is built up over time, rather than jumping in at the deep end.

One of the easiest ways to do so is to leverage any automatic overclocking options on the motherboard. For example, ASUS offers OC Tuner and a new OC Wizard mode in their Z97 BIOSes to help with overclocking. ASUS’ software has an auto-tuning mode where you can select the maximum temperature or power draw you want. Overclocking can be as simple as using that automatic overclock feature, or as complex as you like. I typically use those automatic overclocking options as a reference point for manual overclocks.

Manual overclocking has advantages similar to using a manual transmission in a racing car. A manual transmission lets the driver select the gears, potentially with better precision than the automatic system. A racing driver can also use different gears for torque or engine braking, and lap times in manual race cars are usually quicker than those in automatic transmission vehicles. With manual overclocking, we can adjust the system to use the overclock we want/need at the lowest possible voltage. An automatic system may use a lot of voltage to ensure stability, especially if the CPU is marginal; manual selection can override this behavior.

Manually overclocking a CPU is not a dark art. With practice, guidance and reasonable expectations, a small overclock can be nurtured into a bigger one without fear of decreasing the longevity of a daily system. Most processors with a mild overclock will most likely be replaced before they become irreparably damaged. Venture into extreme, seat-of-the-pants overclocking and damage does become possible, but that is not recommended without experience.

Overclocking also introduces this strange notion of ‘stability’ – how stable is your overclock? The word ‘stable’ means different things to different people, but the basic assumption is that the system should be stable for everything you do. Intel and AMD ship their CPUs at a voltage and frequency which keeps them stable no matter the situation. Some users attempt to match that stability by stress testing their system, whereas others are satisfied with gaming stability and have no need for video transcoding stability. Testing the stability of a system typically requires some form of stress test, and again users will select a test that either emulates the real world (video transcoding, PCMark8, 3DMark) or attempts to find any small weakness (Prime95, XTU). The downside of the latter testing philosophy is that a harsh stress test has the potential to break a system. Personally, I shudder when a user suggests a system is not stable unless it passes ‘72hr Large FFT Prime95’, because I have seen users irreparably damage their CPUs with it.

My stress tests here at AnandTech typically consist of a run of the benchmark PovRay (3 minutes, probes CPU and memory) and a test using OCCT (5 minutes, probes mainly CPU). If there is weakness in the memory controller, PovRay tends to find it, whereas if the CPU does not have enough voltage for video transcoding, OCCT will throw up an error. There are outlier circumstances where these tests are not enough for 100% stability, but when my systems are stable with these tests, they tend to devour any gaming or non-AVX transcoding for breakfast.

Overclocking blurb aside, my usual procedure for the i7-4770K is as follows:

  1. In the BIOS, set the DRAM to XMP.
  2. Set the CPU to 4.0 GHz (40x multiplier) with CPU Core Voltage set to Manual and 1.000 volts.
  3. Save and Exit, and see if the system boots to the OS.
  4. If entering the OS fails, go back to the BIOS, raise the voltage by +0.025 volts and return to step 3.
  5. When in the OS, run the POV-Ray multithreaded benchmark and an OCCT test for five minutes. If either test fails, go back to the BIOS, raise the voltage by +0.025 volts and return to step 3.
  6. Monitor temperatures during the OCCT test. If the temperature is at the top of the user's limit, stop overclocking.
  7. If temperatures are low and both tests complete, then this CPU frequency/Core Voltage combination is stable; note it down. To continue overclocking, go back to the BIOS, raise the CPU multiplier by +1 and return to step 3.

For programmers, in pseudocode:

BIOS.XMP = true;
CPU.Multiplier = 40;
CPU.CoreVoltage = 1.000;
do {
    try {
        OS.Boot();
        OS.RunPOVRay();   // 3 minutes
        OS.RunOCCT();     // 5 minutes
    } catch {
        CPU.CoreVoltage += 0.025;
        continue;         // retry this multiplier at the higher voltage
    }
    OS.Log(CPU.Multiplier, CPU.CoreVoltage);
    CPU.Multiplier += 1;
} while (CPU.Temperature < 85);
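For the curious, the loop above can be turned into a runnable simulation. Everything below is a hypothetical stand-in – the `required_voltage` curve, the 1.30 V ceiling used as a stand-in for the thermal limit, and the function names are all invented for illustration; real tuning happens in the BIOS.

```python
# A runnable sketch of the overclocking search loop above, using a mock
# voltage/frequency curve. All numbers are illustrative, not measured.
def required_voltage(multiplier):
    # Hypothetical V/f curve: each extra multiplier bin needs ~0.03 V more.
    return round(0.95 + 0.03 * (multiplier - 40), 3)

def find_stable_settings(max_multiplier=47, voltage_limit=1.30):
    results = []
    voltage = 1.000
    for multiplier in range(40, max_multiplier + 1):
        # Step the voltage up in 0.025 V increments until the "tests pass".
        while voltage < required_voltage(multiplier):
            voltage = round(voltage + 0.025, 3)
        if voltage >= voltage_limit:
            break  # at the top of the voltage/thermal budget; stop here
        results.append((multiplier, voltage))
    return results

for mult, volts in find_stable_settings():
    print(f"{mult / 10:.1f} GHz stable at {volts:.3f} V")
```

The structure mirrors the BIOS procedure: the inner loop is steps 3-5 (bump voltage until stable), the append is step 7's logging, and the break is step 6's temperature cut-off.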

Intel i7-4790K Results

Our i7-4790K sample actually had a high stock voltage – 1.273 volts at load. In chatting with other reviewers and overclockers, it would seem that 1.190 volts is another common variant. This has repercussions for overclocking headroom, as it makes the CPU warm even at stock settings. By contrast, with manual overclocking we were able to achieve 4.4 GHz on all cores with 1.200 volts, suggesting that Intel was ultra-conservative with its stock voltage decisions.
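As a rough illustration of why the lower manual voltage matters: to first order, dynamic CPU power scales with frequency times voltage squared, so at the same clock the drop from 1.273 V to 1.200 V alone is worth roughly 11% in dynamic power. A back-of-the-envelope sketch (ignoring leakage and static power, which real chips also have):

```python
# First-order estimate: dynamic power ~ f * V^2, so at a fixed frequency
# the fractional saving from a lower voltage is 1 - (V_new / V_old)^2.
# Illustrative only; leakage and static power are ignored.
v_stock, v_manual = 1.273, 1.200
savings = 1 - (v_manual / v_stock) ** 2
print(f"~{savings:.0%} less dynamic power at the same clock")
```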

For consistency with the other Intel CPUs, we underclocked the sample to 40x for all cores to begin, and changed Load-Line Calibration to Level 8 for the ASUS Z97-Pro.

The results were as follows:

From these results, we can see the large voltage jump from 4.6 GHz to 4.7 GHz, similar to what launch Haswell CPUs seem to require. This causes an upswing in both temperatures and power draw, meaning that the user really needs the thermal headroom, or a nice CPU, to move closer to 5.0 GHz. For 4.7 GHz, we also added some CPU Cache voltage (+0.050 volts) to ensure benchmark stability, a secondary technique when pushing the voltage limits.

From the stock settings, manually overclocking the system to 4.4 GHz gave a 23W drop in power draw and an 8C drop in temperatures. I would happily take that for a daily system. The sweet spot for users with good cooling would seem to be 4.6 GHz with this CPU.

We asked Intel where this CPU sat in terms of their internal testing. It would seem our sample is actually below average (as were our i7-4770K and i7-3770K samples, incidentally). The couple of users I have spoken to who have reached 4.8 GHz suggest that CPUs capable of those frequencies are going to be more common than they were with launch Haswell.

If we compare the i7-4790K alongside our i7-4770K launch sample, in the same motherboard with the same cooling:

The temperature delta is awesome. We have a 10-16C delta up to 4.5 GHz, which is still 8C at 4.6 GHz. This gives us enough headroom for 4.7 GHz, showing that Intel has partly solved the problem when it comes to heat generation. I am sure that some users will want more and will still delid their CPUs to get another few degrees, or ask why Intel has not gone all the way and directly soldered the IHS onto the die. The businessman in me says it is all a matter of future headroom: should they get competition, they can perform ‘simple’ tweaks to get the best out of the situation and perhaps stay in the lead. A sort of ‘never show your full hand unless you need to’ mentality. The other argument is one of progress, and we could wonder how many extra adjustments remain in Intel’s bag of tricks.

Intel i5-4690K Results

Although we never had a sample of the i5-4670K in to test, I was less excited about the i5-4690K as it follows more of the Haswell Refresh line in having a small speed bump over the CPU it replaces. A move of 100 MHz means 2-3% in absolute terms, although changing the package to allow for more thermal headroom might make it more interesting. The i5 overclocking CPU makes more sense in terms of bang-for-buck if you are not running thread-heavy workloads, but it has to match up to the i7 in overclocking, otherwise the price difference could be justified by a +10% increase in clock speed and a +100% increase in threads.

Thankfully, our i5 sample overclocked as well as the i7 did – actually even more so. With no hyperthreading to deal with, we cannot load up a core with two simultaneous AVX threads to heat the CPU up as quickly as on an i7, which plays to the i5's advantage. It would seem that the voltage/frequency characteristics of our i5 sample were also better than those of our i7 sample.

Our i5 overclocking results were as follows:

Due to this CPU turboing to only 4.0 GHz, the temperatures and load voltage should be a lot lower than the i7's, which is shown above. The voltage scale follows a similar trend to the i7, although the jump from 4.7 GHz to 4.8 GHz is greater than +0.125 volts, as the system was still giving a BSOD on booting into the operating system. The temperature readings are still nice and low, with 79C at 4.7 GHz. Whereas with the i7 we were hitting the real upper limits and suggesting 4.6 GHz was a nicer position to be in, this CPU makes 4.7 GHz seem quite easy indeed. It might be worth noting that the power draw at 4.7 GHz for the i5 matches the stock power draw of the i7, but do not forget that while both the i5 and i7 have four cores, the hyperthreading on the i7 can really drive up the power consumption as it is doing more work (26.3% more POV-Ray for 26.9% more power at stock).
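That stock comparison works out to almost identical performance per watt, which is easy to verify from the quoted ratios alone. The baseline score and wattage below are arbitrary placeholders, not measured values; only the +26.3%/+26.9% ratios affect the result:

```python
# Perf-per-watt check using the quoted ratios: the i7 does 26.3% more
# POV-Ray work for 26.9% more power at stock. Baseline numbers are
# hypothetical placeholders; only the ratios matter.
i5_score, i5_watts = 1000.0, 100.0
i7_score, i7_watts = i5_score * 1.263, i5_watts * 1.269

i5_ppw = i5_score / i5_watts
i7_ppw = i7_score / i7_watts
print(f"i7 perf/W relative to i5: {i7_ppw / i5_ppw:.3f}")
```

The ratio comes out a shade under 1.0, i.e. hyperthreading here buys extra throughput at almost exactly proportional power cost.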

Devil’s Canyon Review: Intel Core i7-4790K and i5-4690K CPU Benchmarks
Comments

  • ZeDestructor - Friday, July 11, 2014 - link

    Couldn't you use something with a dedicated server combined with kb/mouse emulation on a bunch of extra PCs running at 640x480 nonsense kind of thing?

    It's potentially more work, especially the potential synchronisation and timing issues, but it should be doable to within 10ms of latency (on my LAN I see ping roundtrips in the sub-ms range), should it not?
  • Ryan Smith - Friday, July 11, 2014 - link

    In theory yes. But in practice most first-person games spawn you at a random point, which makes any kind of input track playback ineffective.

    The games where such a thing would work would tend to be games that already have better benchmark capabilities anyhow, such as racers and RTSes.
  • ZeDestructor - Friday, July 11, 2014 - link

    Couldn't you get a modified DS from a dev?
  • FlushedBubblyJock - Thursday, November 20, 2014 - link

    They just don't want to do it.
    They live for their claimed accurate scientific method.

    It will take a genius with guts or a brute with some money, then we can see the results we never get that we all want somewhere else.
    Let's face it, it could be done and there would be some +/- low end percentage variability, so frames could be rounded to whole numbers removing the tenths ( which are outside the current bench errors and variability ).

    It just isn't going to happen here, but someone should definitely do it, and we'd all love going there.

    Real life scenarios are just too scary for the cold and removed and protected elite. Politics.

    We all play our online games and know what frames we can count on given our vid cards and systems and current clocks and we all have our favorite maps and servers... etc.

    Remember all these websites run the highest end overclocked cpu and boards they possibly can, and that also is deceptive for most readers.
    They run SSD's now with clean installs and used to go with defragged and rebooted but now methods are equivalent so nothing extra is running best case scenario stuff...

    Yes another type of review site is needed, but then again the general idea is given for what info is available here, so downsize accordingly is the answer.
  • wallysb01 - Sunday, July 13, 2014 - link

    If this is something that’s really needed for a real test, why not just do it 50 times per processor and do some stats.
  • doggghouse - Friday, July 11, 2014 - link

    Out of curiosity, what do people consider to be safe voltages for Haswell? I recently had to replace my 4770K with a 4790K because the chip started to BSOD even when not overclocked... I don't know if I helped speed its demise after having tested it at 1.4V several times, and I think I settled on 1.3V 4.4GHz daily use (it was a mediocre chip apparently).

    I apparently lucked out on my 4790K because it is running stable at 4.6GHz with only 1.25V, and 4.7GHz at 1.3125V. I was thinking about testing its upper limits for fun--try for the mythical 5GHz--but I don't want to accidentally burn out an otherwise great chip. I very briefly ran it at 5GHz with 1.424V and HT disabled, it was stable enough to run a few benchmarks. But if I play around in the 1.4V range, am I potentially going to wreck it?
  • TheinsanegamerN - Saturday, July 12, 2014 - link

    Typically, for 22nm intel (ivy bridge and haswell) the typically regarded "safe 24/7 voltage" is only 1.3 volt on air, and 1.4 volt on water. for a very short period, higher than 1.4 can be used if you have VERY good cooling, but you may damage the chip even with ln2 cooling at anything above 1.4v.
    Also, it sucks you had to do 1.3v to get to 4.4Ghz....I hit 4.2Ghz at only 1.075v. apparently got a REALLY good chip somehow, although i heard that the costa rica chips, of which mine is one, always did overclock better.
  • doggghouse - Monday, July 14, 2014 - link

    Interesting... so is it the temps, and not the voltage directly, that eventually kills the chip? If so, would running lots of synthetic benchmarks that brings temps into the 90-100C range shorten its lifespan? I have an AIO water cooler, the Kraken X60, which can keep normal temps cool, but anything above 1.25V will still hit 100C on the latest Prime95 Small FFTs test.
  • FlushedBubblyJock - Thursday, November 20, 2014 - link

    Voltage can kill the cpu. You won't necessarily get the temp reading it can happen so fast.
    Secondarily, a higher voltage is more volatile with higher temps, so the combination can also cause electromigration blowouts.
  • Dustin Sklavos - Friday, July 11, 2014 - link

    "A contact at Corsair."


    Here are the complete results for my Devil's Canyon and Pentium chips:
    4790K #1: 4.7GHz @ 1.275V
    4790K #2: 4.7GHz @ 1.31V
    4690K #1: 4.7GHz @ 1.375V
    4690K #2: 4.8GHz @ 1.375V
    G3258 #1: 4.9GHz @ 1.4V
    G3258 #2: 4.7GHz @ 1.375V
