New to OCing, I’ve tinkered with BIOS and voltages (specifically undervolting) before on Intel laptop CPUs for myself, friends, and family. This is the first time I’ve OC’d a desktop.
I’m familiar with most aspects of PCs, but tinkering with BIOS and OCing in general are definitely a new territory for me.
MOBO: Gigabyte Z390 Aorus Master
GPU: GTX 960/4GB (This is an old card I’m using to set the CPU & system up while I wait for my 2080 to come in the mail)
RAM: Corsair Vengeance LPX 16 GB (2 x 8 GB), DDR4 3200 MHz, 16-18-18-36
PSU: Corsair RMx 850W 80+ Gold
CPU Cooler: Noctua NH-D15
Case: Fractal Meshify C
Additional Cooling: x2 Noctua 140mm intakes, x2 Fractal 120mm exhausts (above CPU cooler), x1 Noctua 120mm exhaust (behind CPU cooler). All fan curves determined by CPU temps. They max out once the CPU hits 50°C.
Thermals*: 85-95°C under 100% load on every core. RealBench sometimes spiked temps to 97°C just before a result hash match completed; the high-90s reading quickly fell back to the mid-80s afterward, then stabilized around 85-95°C.
* Using Core Temp 1.14 & CPU-Z to monitor CPU during tests.
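For reference, the fan behavior described above (all curves tied to CPU temperature, maxing out at 50°C) can be sketched as a simple linear ramp. This is a toy model; the minimum duty and ramp start point are hypothetical values, not settings from this build:

```python
def fan_duty(cpu_temp_c, min_duty=30.0, max_duty=100.0,
             ramp_start=30.0, ramp_end=50.0):
    """Linear fan curve: idle duty below ramp_start, 100% at/above ramp_end."""
    if cpu_temp_c <= ramp_start:
        return min_duty
    if cpu_temp_c >= ramp_end:
        return max_duty
    frac = (cpu_temp_c - ramp_start) / (ramp_end - ramp_start)
    return min_duty + frac * (max_duty - min_duty)

print(fan_duty(40.0))  # 65.0 — midpoint of the ramp
print(fan_duty(55.0))  # 100.0 — past 50°C the fans are pinned
```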
BCLK Adaptive Voltage: Disabled
SVID Offset: Disabled
AVX Offset: 0
XMP Profile 1
TjMAX Temp: 100°C
CPU Clock Ratio: 50
FCLK Frequency for Early Power On: 1 GHz
CPU Flex Ratio Override: Disabled
Intel Speed Shift Tech: Disabled
CPU Enhanced Halt (C1E): Disabled
C6/C7 State Support: Disabled
C8 State Support: Disabled
C10 State Support: Disabled
Voltage Optimization: Disabled
Above 4G Decoding: Disabled
| VCORE | Cinebench R20 | RealBench 2.43 (120m) |
| --- | --- | --- |
| 1.295 V | Stable* | 0m, Bluescreen |
| 1.315 V | Stable | 3m, Bluescreen |
| 1.340 V | Stable | 22m, Instability Detected |
| 1.345 V | Stable | 109m, Instability Detected |
| 1.350 V | Stable | 120m, Passed |

\* 1.295 V is the minimum stable for Cinebench R20; lower VCOREs either auto-closed the program or, if significantly lower than 1.295 V, blue-screened the computer.
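The process behind the table — nudging Vcore up in small steps until the long stress run finally passes — amounts to a simple upward scan. A minimal sketch, where `is_stable` is just a placeholder for an actual two-hour RealBench run:

```python
def find_min_stable_vcore(is_stable, start=1.295, vmax=1.40, step=0.005):
    """Walk Vcore upward in 5 mV steps until the stress test passes.

    `is_stable(vcore)` stands in for a real stability run; returns the
    first passing voltage, or None if nothing inside the ceiling works.
    """
    v = start
    while v <= vmax + 1e-9:
        if is_stable(round(v, 3)):
            return round(v, 3)
        v += step
    return None

# Toy model of the results table: stable only at >= 1.350 V.
print(find_min_stable_vcore(lambda v: v >= 1.350))  # 1.35
```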
I’ve read that 1.350 V is the max "safe" limit for Turbo LLC on Gigabyte boards, and my CPU will not be stressing itself too much — I don’t run it full tilt 24/7 or even for hours on end. Probably the most stress the CPU will see is CPU-intensive games, streaming for friends, or rendering a video. I’m OCing it because I’d like to give it as much headroom as possible.
Thoughts on these results? Should I lower the CPU to 4.9 GHz to be more safe (and adjust VCORE accordingly)?
How to Overclock Intel 12th Gen Alder Lake CPUs
Contrary to innumerable reports of its demise, overclocking is not dead — not by a long shot. Yes, the past several generations of Intel’s chips slowly lost overclocking headroom as the company folded more of its frequency headroom into stock performance levels while struggling to compete with AMD. However, Intel’s Alder Lake chips hit the reset button: The Intel 7 process has far more room for overclocking than prior generations, helping the chips take over our list of Best CPUs for gaming. In fact, we’ve found that thermals are often the limiting factor to 12th Gen Alder Lake overclockability, meaning that if you’re lucky enough to get a good chip, you’ll largely be held back by your ability to cool it. Indeed, our overclocking results below show that Intel’s Alder Lake chips have far more overclocking headroom than AMD’s Ryzen 5000 chips, and that translates into big performance speedups.
As always, you’ll be at the whims of the silicon lottery when it comes to the maximum overclock you can squeeze out of your chip. Still, all indications point to the Alder Lake chips being exceptional overclockers, even if you’re planning on standard ambient cooling (i.e., you’re not using liquid nitrogen or other sub-zero cooling methods).
Alder Lake does bring a lot of new wrinkles to overclocking, though. The chips come with Intel’s hybrid architecture that blends groups of big and fast Performance cores (P-cores) with groups of smaller, power-efficient Efficiency cores (E-cores), and both run at different clock rates. That adds a few more variables to the mix, so you’ll need to work on finding the right balance for your needs.
Today, we’ll show you the ropes and teach you how to unlock the hidden overclocking performance lurking under the heat spreaders in Intel’s 12th Gen Alder Lake. We also have examples of the performance increases we attained in gaming and single- and multi-threaded work via our own overclocking efforts.
Before we explain how to overclock your chip, let’s take a look at our performance results.
Here are the results of our overclocking with Intel 12th Gen Core Alder Lake chips compared to the Ryzen 5000 lineup in Windows 11, along with DDR4 vs DDR5 benchmarks and overclocked configurations. You can find more detailed breakdowns of our overclocking with the Core i9-12900K and i5-12600K here, and the Core i7-12700K details are here.
We generated these overall measurements of gaming performance as a geometric mean of our entire test suite. We also selected the most important single- and multi-threaded tests in our suite to generate those cumulative measurements. You can see an even more expansive view of the Intel vs AMD duel in our CPU Benchmark hierarchy article.
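A geometric mean, as used for these cumulative measurements, multiplies the per-test ratios and takes the n-th root, so no single outlier test dominates the aggregate. A quick sketch with hypothetical per-game speedups (the numbers below are illustrative, not from the review data):

```python
from math import prod

def geomean(values):
    """Geometric mean — the aggregation used for the cumulative scores."""
    return prod(values) ** (1.0 / len(values))

# Hypothetical per-game speedups (overclocked FPS divided by stock FPS):
speedups = [1.18, 1.12, 1.15, 1.16]
print(f"{(geomean(speedups) - 1) * 100:.1f}% cumulative gain")
```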
As you can see above, the Alder Lake chips profit more from overclocking than the AMD Ryzen models. The Core i5-12600K is a particular standout with a massive 15% gain in 1080p gaming, while the Core i7-12700K shows similar overclocking headroom to the more expensive Core i9-12900K, cementing its value proposition. Here’s what that looks like in table form:
| Tom’s Hardware — % Change | 1080p Gaming | Single-Thread | Multi-Thread |
| --- | --- | --- | --- |
| Core i9-12900K DDR4 / DDR5 | +9.7% / +5.2% | +1.6% / +3.2% | +3.3% / +7% |
| Ryzen 9 5950X | +5% | -2.3% | +5.7% |
| Core i7-12700K DDR4 / DDR5 | +9.8% / +7.1% | +2.3% / +2.1% | +3.9% / +6.4% |
| Ryzen 9 5900X | +3.7% | -0.6% | +2.1% |
| Core i5-12600K DDR4 / DDR5 | +15.2% / +12.9% | +4% / +4.2% | +8.8% / +11.3% |
| Ryzen 5 5600X | +6.7% | +3.8% | +2.7% |
Overclocking Prerequisites for Alder Lake
Before we start turning up the dial on the voltages (and fans), you’ll need to make sure that your system is ready for overclocking. As always, we have to caution you that overclocking voids the warranty on any processor, and you run the risk of damaging your chip if you apply excessive voltage. Excessive voltage and heat can also result in reduced chip lifespan due to degradation, so you’ll want to stay within reasonable boundaries.
First things first: You’ll need a K-series 12th Gen Alder Lake chip if you plan on increasing the chips’ core frequency, which is the most basic method of overclocking. That’s because K-Series chips, like the Core i9-12900K, i7-12700K and i5-12600K, have an unlocked multiplier that allows you to easily dial up the frequency on your chip. In addition, the graphics-less ‘KF’ models are also overclockable. If you don’t have a K-series chip, your options for overclocking Alder Lake will be far more limited, though you can still aim for higher memory clocks.
If you plan on doing full core frequency overclocking, you’ll also need a Z-series (Z690) motherboard, as Intel doesn’t allow you to change the chip’s frequency on cheaper B- and H-series motherboards — those don’t exist for socket LGA1700 yet, but they’re coming. Most Z-Series motherboards have robust power delivery subsystems, but performance varies, so pay attention to motherboard reviews to find your best option. You can hit our list of Best Motherboards to see the best models on the market.
Intel will also have its ‘locked’ non-K 12th Gen Alder Lake chips on the market soon, but you can only overclock the memory on those models (on Z-, B- and H-series 600-series motherboards alike, for a change), which ultimately limits the amount of performance uplift.
You’ll also need a suitable cooling solution, but the definition of sufficient cooling can vary based on your personal preference. Your overriding goal should be to prevent thermal throttling, a process that reduces the processor’s clock speeds and voltage to prevent damage (killing your chip) from excessive temperatures. Excess heat can also cause premature chip degradation.
Intel’s overclockable chips don’t come with a bundled cooler, and you’ll need at least a 240mm All-In-One (AIO) liquid cooler (or air cooler equivalent) to squeeze out any meaningful all-core overclocking with the Core i5-12600K. You’ll want a more powerful 280mm, 360mm AIO, or custom watercooling loop to wring out the most performance possible on the higher-end Core i7-12700K and i9-12900K SKUs. Check out our Best CPU coolers article for recommended options, and be sure to use one of the Best Thermal Pastes to ensure your cooler is effective. Also be sure to get a cooler that has a socket LGA1700 adapter available — most cooler companies offer those for free on their top AIOs, but you might need to wait a few extra days if it’s not in the box.
Naturally, more elegant overclocking approaches that don’t use brute-force all-core overclocking methods, like manipulating turbo ratios or only overclocking a few cores, can extract extra performance even if you’re using a lesser cooler. We’ll also cover those methods below.
You also need to ensure that you have one of the best power supplies for your system, but your requirements will vary based on the other components in your system. You can see the basic guidelines with a power supply calculator, but be sure to enter the maximum overclock frequency and voltage (max voltage for Alder Lake shouldn’t exceed 1.4V with conventional cooling) to ensure you have plenty of room for overclocking.
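When sizing PSU headroom, a common rule of thumb is that dynamic CPU power scales roughly with voltage squared times frequency. Real chips deviate (static leakage, current limits), so treat this as a back-of-the-envelope sketch with hypothetical baseline numbers:

```python
def scaled_power(base_watts, base_v, base_ghz, new_v, new_ghz):
    """Rough dynamic-power estimate: P scales with V^2 * f."""
    return base_watts * (new_v / base_v) ** 2 * (new_ghz / base_ghz)

# Hypothetical chip drawing 190 W at 1.20 V / 4.9 GHz, pushed to 1.33 V / 5.1 GHz:
print(round(scaled_power(190, 1.20, 4.9, 1.33, 5.1)))  # 243
```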
Measuring Baseline Thermals and Performance on Alder Lake
This guide assumes a basic understanding of common overclocking terms and concepts, which you can brush up on with our How to Overclock a CPU article.
Now that we have the prerequisites sorted, it’s important to establish a performance and thermal baseline. You’ll use this to measure how much impact an overclock has on both CPU heat and performance, allowing you to determine the acceptable tradeoffs for the amount of performance you gain.
There are a plethora of software options for stress testing and monitoring — see our how to stress test your CPU guide for additional details. Some, like AIDA or OCCT, have in-built stress testing and monitoring, while others, like HWInfo, are purely designed to monitor performance. For this article, we’ll use Intel’s eXtreme Tuning Utility (XTU) and AIDA64 for both performance monitoring and stress testing.
How to Change CPU Settings for Alder Lake Overclocking
Overclocking requires manipulating several system parameters, like voltages and clock speeds. You can make changes via software inside Windows with utilities like Intel’s XTU, or you can enter the values directly into the system BIOS/UEFI.
Both approaches have their strengths and weaknesses. Software overclocking Alder Lake with XTU is a bit simpler because it uses a standardized nomenclature for the various settings, whereas motherboard vendors can use different names for the same settings. Additionally, overclocking via software allows you to make changes in real-time. In contrast, changing the values in the BIOS requires a system reboot before you see the impact.
Overclocking Alder Lake CPUs via the BIOS does have one big advantage, though: There are far more fine-grained options available for more advanced tuners. That means experienced tuners are better off using the BIOS if they plan to use the more advanced features.
Here you can see the BIOS options for overclocking Intel Alder Lake CPUs on an MSI Z690 board. While the names for certain settings can vary somewhat based on your motherboard vendor, the major manufacturers (Asus, ASRock, Gigabyte, and MSI) all include a wealth of options in their enthusiast-class boards. Depending on your overclocking goals, you can go as deep as you want on a top-tier motherboard, but the basics aren’t nearly as daunting as the wealth of options might suggest.
First, Lift Power Limits for Alder Lake Overclocking
(Image credit: Future)
The first step to overclocking Alder Lake CPUs is to uncap the power limits imposed by the motherboard. For MSI motherboards, these settings are listed in the UEFI as the Long Duration Power Limit, Short Duration Power Limit, and the CPU Current Limit. You should enter the first two values as 4096W, and the latter value should be set to 512A. Finally, set the Long Duration Maintained value to the longest allowed (128 seconds).
The names of these settings can vary slightly by BIOS, but you can also change these same values in XTU — they’re listed as Processor Core IccMax (set to unlimited), Turbo Boost Power Max (set to unlimited), and Turbo Boost Power Window (128 seconds). Finally, disable "Turbo Boost Short Power Max Enable."
You’ll never reach these levels of power usage, but removing all power caps allows you to push your silicon to the limits.
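After lifting the limits, it's worth sanity-checking what the package actually draws under load. On Linux, the RAPL energy counter (typically `/sys/class/powercap/intel-rapl:0/energy_uj`) exposes this; the math behind two samples is shown below. The counter range default is an example — read the real one from `max_energy_range_uj`:

```python
def rapl_watts(energy_uj_start, energy_uj_end, seconds,
               max_range_uj=262_143_328_850):
    """Average package power from two RAPL energy_uj readings.

    The counter wraps around at max_range_uj (value varies by CPU),
    so handle a single wraparound between samples.
    """
    delta = energy_uj_end - energy_uj_start
    if delta < 0:
        delta += max_range_uj
    return delta / 1e6 / seconds

# Two samples taken one second apart, 241 J consumed -> 241 W package power:
print(rapl_watts(1_000_000_000, 1_241_000_000, 1.0))  # 241.0
```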
Overclocking P-Cores and E-Cores on Intel Alder Lake CPUs
Overclocking your system memory is a must-do item for any overclocking, particularly if you plan on gaming. Still, it’s best to handle memory overclocking after you find your preferred core overclock frequencies. This limits the number of variables you will have to troubleshoot as you dial in your Alder Lake overclock.
Alder Lake has P-cores for latency-sensitive work that tends to be lightly threaded, while the E-cores step in for multi-threaded work and background tasks. The E-cores can only be overclocked in groups of four, while P-Cores can be overclocked individually or in groups. Alder Lake provides plenty of options for fine-tuning — you can disable the E-cores entirely, which often allows you to eke out a slightly higher overclock (typically a single bin) on the P-Cores.
Disabling the E-cores also allows you to dial in a higher ring ratio (often referred to as the fabric), giving you a nice bit of supplementary performance. For example, if you disable the E-cores, you can push the ring up to 4.5 GHz. That said, ring overclocking on the Alder Lake chips is typically great, even with the E-cores active. You can push the ring up to 4.2 GHz on most chips without much fuss, which is significantly higher than we’ve seen with previous Intel chips. So it’s hardly worth disabling the E-cores for another few bins of ring frequency — the performance gain isn’t large enough to justify the loss of performance in threaded work.
Choosing whether to disable the E-cores will depend on your own personal preference, but leaving both the P-cores and E-cores active will offer the best blend of performance for most users. Interestingly, you can disable all of the E-cores if you’d like, but one P-core has to remain active regardless of the configuration — you can’t disable all of the P-cores.
It is noteworthy that, on some motherboards, disabling all E-cores will unlock AVX 512 support for the P-cores. AVX 512 instructions generate far more heat than the standard AVX2 supported by Alder Lake, so you’ll have to account for that increased heat and power consumption when you set your AVX offsets (if enabled, a separate AVX 512 offset will appear in the BIOS options). In most cases, you won’t gain enough performance in general-use applications or gaming to justify disabling the E-cores.
Alder Lake allows you to overclock the CPU frequency in three ways: All core, Per core, and via Turbo Ratios. The ‘all core’ setting is what we traditionally associate with overclocking. ‘All core’ is the simplest method by far because it assigns one static frequency to each type of core, be they P-cores or E-cores. Simplest doesn’t normally translate to best, however.
Overclocking Alder Lake via the Turbo Ratios is one of the best ways to dial in a refined overclock, as this allows you to define the peak boost frequency based on how many cores are active. This feature can help you eke out a slightly higher overclock, but just as importantly, it allows the processor to drop back into its base frequency when the chip isn’t under load.
This allows the chip to run cool when it isn’t busy and also reduces the amount of time the chip is at the highest frequencies, which is important for chip longevity. If you overclock via the turbo ratios, you’ll need to make sure that your Windows power profile is set to ‘Balanced’ or lower (the ‘High Performance’ profile keeps the chip at its peak turbo frequency at all times).
The ‘Per Core’ feature allows you to assign a unique frequency to each individual core. This can be helpful if you identify that some cores are more capable of sustaining a higher frequency than others. However, you’ll still be limited to changing E-core frequencies in groups of four. This setting is most useful for advanced tuners and can require a fair amount of investigative work to determine the appropriate clock speed for each core.
Alder Lake Core Frequency Targets and AVX Offsets
Given our own results and the results we’ve seen in enthusiast forums, it’s reasonable to expect that most Core i9-12900K chips can support up to an all-core 5.2 GHz P-core overclock while sustaining an all-core 4.0 GHz on the E-cores, but cooling will be your biggest challenge.
With custom watercooling, we’ve even seen up to 5.3 GHz on P-cores and 4.2 GHz on E-cores with the Core i9-12900K, albeit at the expense of higher heat output. The other Alder Lake chips, like the Core i7-12700K and Core i5-12600K, can also sustain up to 5.0~5.2 GHz on the P-cores and 4.0 GHz on the E-cores.
Your peak frequencies will be limited by your ability to cool the chip, so dialing in a lower E-core frequency can leave you a bit more thermal headroom for the P-cores, and vice versa.
Air coolers will offer very limited all-core overclocking capabilities — you’re better off using Turbo Ratio or Per Core overclocking approaches with air cooling. However, a basic rule of thumb for air cooling is to aim for up to a 4.9/3.9 GHz P-core/E-core all-core overclock with the Core i9-12900K. To exceed those limits, you’ll need a 280mm or 360mm AIO (or custom watercooling).
Additionally, taming Alder Lake often requires strict AVX offsets. These offsets reduce chip frequency during AVX workloads to allow higher overclocks when the chip executes standard instructions. For example, we used a -2 (200 MHz) AVX offset with our Core i9-12900K’s 5.1 GHz overclock, and a -3 (300 MHz) AVX offset with a 5.0 GHz Core i5-12600K overclock. Such an offset can of course negatively impact performance in AVX workloads, so if you run a lot of those you’ll want to experiment to determine the ideal setting.
You can determine the correct AVX offset by testing with AVX-infused applications and measuring the amount of heat generated. Adjusting the AVX offset will allow you to drop the voltage during everyday workloads, thus keeping thermals within a tenable limit.
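The arithmetic behind an offset is simple: each offset bin subtracts 100 MHz from the overclocked frequency whenever AVX code runs. A small sketch (the 5.1 GHz / 2-bin figures echo the example above):

```python
def effective_ghz(overclock_ghz, avx_offset_bins, avx_load=False):
    """Clock under load given an AVX offset; one bin = 100 MHz."""
    if avx_load:
        return round(overclock_ghz - avx_offset_bins * 0.1, 2)
    return overclock_ghz

# A 5.1 GHz overclock with a 2-bin AVX offset runs AVX code at 4.9 GHz:
print(effective_ghz(5.1, 2, avx_load=True))  # 4.9
print(effective_ghz(5.1, 2))                 # 5.1
```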
Core Voltage (Vcore) Targets for Alder Lake Overclocking
Naturally, being frugal with the voltage is among the most important factors to control thermals. You can dial in the Vcore, which controls the voltage sent to the P-cores, E-cores, and ring bus, as low as 1.05V. However, you definitely should not exceed 1.45V unless you’re cooling the chip with liquid nitrogen or another type of sub-zero cooling solution.
In our experience with a 280mm AIO watercooler, we obtained a 5.1/3.9 GHz P-core/E-core overclock on the Core i9-12900K and a 5.0/3.9 GHz overclock with the Core i7-12700K. We set the Vcore to 1.29V for both chips. We had to use a bit more voltage, to the tune of a 1.33V Vcore, with the Core i5-12600K at 5.0/3.9 GHz, but that’s well below the 1.45V that is considered the absolute highest acceptable voltage with standard cooling. We also dialed the ring bus to 4.2 GHz for all three chips.
With the Core i9 and i7 models, you should aim for a maximum of a 1.25V Vcore with air cooling, while a 280mm AIO can handle up to ~1.33V. In most cases, anything over 1.33V will require a 360mm AIO or custom watercooling. Due to its lower core count, the Core i5-12600K gives you a bit more thermal headroom with each respective cooling solution.
You can assign different Load Line Calibration (LLC) levels to improve stability, but the LLC implementations vary widely among the various motherboard vendors. We find that most higher-end motherboards tend to overclock Alder Lake fine with the ‘Auto’ LLC setting, but experimenting with different values can firm up an overclock and also avoid excessive voltages. We suggest sticking in the middle range of the LLC spread, and it’s always good to check enthusiast forums to see how others have fared with the same motherboard.
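What LLC actually fights is Vdroop: the voltage at the CPU sags below the BIOS setpoint by roughly the load current times the loadline resistance, and stronger LLC levels flatten that loadline. A sketch of the relationship — the resistance and current values are hypothetical, not vendor specs:

```python
def loaded_vcore(vset, load_amps, loadline_mohm):
    """Voltage seen by the CPU under load: Vdroop = I * R_loadline."""
    return round(vset - load_amps * loadline_mohm / 1000.0, 3)

# Hypothetical numbers: 1.30 V set in BIOS, 200 A full load.
print(loaded_vcore(1.30, 200, 1.1))  # 1.08  (weak LLC, large droop)
print(loaded_vcore(1.30, 200, 0.5))  # 1.2   (mid-range LLC level)
```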
Alder Lake uses a partial FIVR (Fully Integrated Voltage Regulator) power delivery subsystem, meaning some of the chip’s power delivery is in-built. The FIVR feeds several other voltage rails, like the System Agent and Memory Controller voltages that you can increase for bleeding-edge memory overclocks, and the L2 cache for the E-cores. Most enthusiasts using conventional cooling shouldn’t need to manipulate these voltages.
DDR4 Memory Overclocking with Alder Lake CPUs
Memory overclocking on Alder Lake is a bit more involved than we’ve seen with other chip generations due to its support for DDR5 memory. However, we expect most enthusiasts to stick with DDR4, given the current pricing trends.
Intel employs ‘Gear’ technology for memory overclocking, just like the previous-gen Rocket Lake chips and AMD’s Zen 3 processors. Gear 1 mode allows the memory controller and memory frequency to operate at the same speed (1:1), thus providing the lowest latency and best performance in lightly-threaded work, like gaming.
Gear 2 allows the memory to operate at twice the frequency of the memory controller (2:1) and results in higher data transfer rates (frequency), which can benefit some threaded workloads but also results in higher latency that can lead to reduced performance in some applications.
We’ve found that Gear 1 provides the best overall performance, particularly for gaming, so it is almost always the best choice. If you scored a cherry chip, Intel’s previous-gen Rocket Lake chips could push up to DDR4-3800 in Gear 1, though most chips were limited to DDR4-3600. That means you would have to shift to the less-desirable Gear 2 mode to exceed that transfer rate. Alder Lake is much more forgiving in this regard, with most chips supporting DDR4-3800 in Gear 1 and even DDR4-4000 with some chips.
Given the above, you shouldn’t spend too much money on a fancy DDR4 kit with extreme data transfer rates. Tight timings on a DDR4-3600 to DDR4-4000 kit should yield the best results. Overclocking DDR4 is fairly straightforward: We recommend using an XMP profile for Gear 1 operation. This is a simple one-click operation in your BIOS that will bring you up to the speed advertised with your kit, though you may have to bump up the voltage slightly with some kits. Naturally, manual tuning can yield a bit more headroom.
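When weighing a fast-but-loose kit against a slower-but-tight one, first-word latency in nanoseconds — roughly 2000 × CL divided by the data rate in MT/s — is a handy yardstick:

```python
def first_word_latency_ns(transfer_mts, cas_latency):
    """Approximate first-word latency: 2000 * CL / data rate (MT/s)."""
    return 2000.0 * cas_latency / transfer_mts

# Tight DDR4-3600 CL16 vs. a faster-but-looser DDR4-4000 CL19 kit:
print(round(first_word_latency_ns(3600, 16), 2))  # 8.89
print(round(first_word_latency_ns(4000, 19), 2))  # 9.5
```

By this measure the 3600 CL16 kit actually serves the first word sooner despite the lower transfer rate, which is why tight timings matter so much for gaming.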
Intel Alder Lake DDR5 Memory Overclocking
DDR5 opens up a whole new world of memory overclocking, and the new XMP 3.0 standard is a big part of that. XMP 3.0 brings support for up to five memory profiles (SPDs) that define frequency, voltage, and latency. That’s an increase from the previous maximum of two profiles with XMP 2.0 and DDR4 kits. XMP 3.0 also lets you write and name two of the profiles. That means you can adjust the frequencies and all the timings and voltages, assign a name, and save the settings directly to the XMP profile stored in the SPD.
DDR5 also brings power delivery onboard with PMICs (Power Management ICs) on the DIMMs. These new PMIC chips are instrumental in DDR5 overclocking because they control three on-DIMM voltage rails: VDD, VDDQ, and VPP. There are variances in PMIC quality, adding yet another variable to selecting the Best RAM for Alder Lake overclocking.
DDR5 operates in Gear 2 mode by default, and though Intel does have a Gear 4 mode available, it isn’t needed given the current DDR5 ceilings. We’re working on an overclocking guide specifically for DDR5 memory, so stay tuned for more detail. Intel also has a website that will show you which DDR5 memory kits are certified to work with certain motherboards using XMP 3.0 profiles.
Motherboard makers are also rolling out BIOS updates to support Intel’s new Dynamic Memory Overclocking feature. This new tech works with both DDR4 and DDR5. It allows the system to dynamically switch between standard memory frequencies and timings and an XMP profile, meaning it will auto-overclock the memory as needed based on the current usage pattern. And yes, this occurs while the operating system is running and doesn’t require a reboot — it’s a real-time dynamic adjustment.
Paul Alcorn is the Deputy Managing Editor for Tom’s Hardware US. He writes news and reviews on CPUs, storage and enterprise hardware.
AMD Ryzen overclocking options. Ryzen 7 3700X test on ASRock X570 Extreme 4 — i2HARD
May 7, 2020
Processors with an unlocked multiplier have always been prized by enthusiasts: a few simple tweaks raised their frequency, delivering performance comparable to the higher-end models in the lineup.
Today, though, the overclocking situation is not improving for users. In the competitive struggle, manufacturers already squeeze the maximum out of their chips at stock settings.
But is manual overclocking even necessary on a modern platform? Processors have gotten a lot smarter in the last couple of years: they can effectively overclock themselves via Intel’s Turbo Boost and AMD’s Precision Boost with Precision Boost Overdrive (PBO). Unlike manual overclocking, these technologies follow an algorithm fed by a variety of sensors, taking voltage, power consumption, and temperature into account.
AMD was especially successful in this with the release of the Zen 2 architecture. Let’s look at ways to overclock Matisse processors using the Ryzen 7 3700X as an example. Let’s evaluate their capabilities and discuss the relevance of overclocking as such.
And here, by the way, is our guide to overclocking AMD Ryzen processors.
Main characteristics of the processor
- Number of cores/threads: 8/16;
- Base frequency/Maximum frequency: 3.6/4.4 GHz;
- Process technology: TSMC 7nm FinFET;
- Default TDP: 65W;
- Maximum temperature: 95°C.
Test configuration
- Processor: AMD Ryzen 7 3700X;
- Motherboard: ASRock X570 Extreme 4, BIOS v2.30 from 03/16/20;
- RAM: XPG Spectrix D80 DDR4 RGB Red Edition AX4U320038G16-DR80;
- CPU Cooling: Thermaltake Pacific RL240 Water Cooling Kit;
- Power supply: Enermax Platimax D.F. 750W;
- Drive: Goodram PX500 NVMe PCIe Gen3 x4 512 GB;
- Operating system: Windows 10 Pro 64-bit version 2004.
AMD’s automatic overclocking, or boost, is constrained by several parameters:
- PPT Limit (Package Power Tracking) — the limit on processor package power in watts; when it is exceeded, frequencies decrease.
- TDC Limit (Thermal Design Current) — the limit on sustained current supplied to the processor, determined by the cooling capability of the motherboard’s VRM.
- EDC Limit (Electrical Design Current) — the limit on peak current supplied to the processor, determined by the electrical design of the motherboard’s VRM.
- Precision Boost Overdrive Scalar — a coefficient governing how voltage scales with frequency. When the three limits above are disabled, this limiter protects the processor from failure by capping the supplied voltage; the cap differs for one core versus all cores. In our case, with the maximum Scalar of ×10 and the limits disabled, the maximum per-core voltage was 1.49 V.
As you can see, auto-overclocking depends not only on the processor instance, but also on the motherboard, and specifically on its VRM power supply scheme, its cooling, and also on the cooling efficiency of the CPU itself.
Not only the total peak power of the chip is taken into account, but also the individual characteristics of each core: its frequency response to voltage, thermal interactions between neighboring cores, and power limits for each core.
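The three limits described above can be thought of as a simple check: whichever threshold the load hits first is the one capping boost. A sketch — the default limits here are hypothetical example values for a 3700X-class part, not official AMD figures:

```python
def pbo_limiter(package_watts, sustained_amps, peak_amps,
                ppt_w=105, tdc_a=70, edc_a=105):
    """Report which PBO limits (PPT/TDC/EDC) a given load runs into."""
    hits = []
    if package_watts >= ppt_w:
        hits.append("PPT")
    if sustained_amps >= tdc_a:
        hits.append("TDC")
    if peak_amps >= edc_a:
        hits.append("EDC")
    return hits or ["none"]

print(pbo_limiter(90, 65, 95))   # comfortably inside all three limits
print(pbo_limiter(110, 72, 95))  # power- and sustained-current-limited
```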
With automatic overclocking, the maximum frequency with 1-3 cores loaded was 4400 MHz; with four cores (eight threads) loaded, the cores ran at up to 4275 MHz; and at 100% load on all threads, every core ran at 3949 MHz. Peak power consumption was 90 W, with voltages ranging from 1.18 to 1.49 V. In the LinX stress test, the temperature rose to 68°C.
In single-threaded workloads, the Ryzen 7 3700X reaches the maximum frequency declared in its specifications. In multi-threaded workloads, auto-overclocking adds about 10% (3949 vs. 3600 MHz) to the processor’s base frequency.
Manual multiplier setting
This is the most popular method of overclocking: it requires no special knowledge, has been known for many years, and is the approach mainly used for overclocking Intel processors. It is also well suited to Ryzen processors without the X suffix.
Enter the BIOS and find the OC Tweaker tab (or its equivalent). Switch CPU Frequency to manual mode. We will change two parameters: the multiplier and the voltage.
By default, these figures for our processor are 36 and 1.1 V. Raise the multiplier one step at a time, save, boot Windows, and test stability. If the OS will not boot or the tests throw errors, increase the voltage. Voltages up to 1.45 V are considered safe.
Note that enabling manual multiplier mode disables dynamic frequency scaling: all cores will run at the manually set frequency even at idle, although the voltage will still vary with load.
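The stepping procedure described above can be sketched as a loop: bump the multiplier, and when a step is unstable, add voltage until it passes or the safe ceiling is reached. `boots_and_passes` is a stand-in for the real save/reboot/stress-test cycle:

```python
def tune_multiplier(boots_and_passes, start_mult=36, start_v=1.1,
                    vmax=1.45, vstep=0.025):
    """Raise the multiplier one step at a time; on instability, add
    voltage up to the 1.45 V safe ceiling, then stop.

    Returns the last stable (multiplier, voltage) pair found.
    """
    mult, v, stable_v = start_mult, start_v, start_v
    while True:
        ok = False
        while v <= vmax + 1e-9:
            if boots_and_passes(mult + 1, round(v, 3)):
                ok = True
                break
            v += vstep
        if not ok:
            return mult, round(stable_v, 3)
        mult, stable_v = mult + 1, v

# Toy model: each extra multiplier step needs ~0.04 V more, tops out at x43.
model = lambda m, volts: m <= 43 and volts >= 1.1 + (m - 36) * 0.04
print(tune_multiplier(model))  # (43, 1.4)
```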
As a result, we managed to increase the frequency of all cores to 4.3 GHz with a voltage of 1.42 V. At this frequency, the system worked stably, passed all tests without errors.
At frequencies of 4.4 and 4.45 GHz, Windows booted, but there were errors in the tests, and the system did not work stably. Increasing the voltage didn’t help.
Below is a graph of voltage versus frequency, along with temperature under load and power consumption.
As you can see, up to 4.2 GHz the voltage changes insignificantly and temperatures stay quite low, but already at 4.3 GHz the temperature and power consumption rise significantly.
What do we get as a result? All cores at 100% load run at 4300 MHz — a 20% gain over the nominal frequency. Power consumption rose to 137 W at 1.42 V, and the maximum temperature during the stress test was 82°C. The downside is that the frequency no longer drops at idle.
But that’s not all you can do with processors based on the Zen 2 architecture. Since the chip physically consists of separate CCX blocks of four cores each, every block can be overclocked separately — provided, of course, the BIOS offers the option.
Our 3700X has two such blocks, and one of them contains the better-binned cores; we will try to raise its frequency above the common 4300 MHz.
For these manipulations, we will find the appropriate parameters on the AMD Overclocking tab.
First, in the OC Tweaker tab, leave CPU Frequency in manual mode. Do not touch the multiplier, but set the voltage manually.
On the AMD Overclocking tab we are interested in two parameters, CCX0 Frequency and CCX1 Frequency. Since all cores were stable at 4300 MHz, we leave that value for the second block and raise the first block's frequency in 25 MHz increments.
The highest value that worked stably was 4350 MHz.
The gain is small, but the principle is what matters. The higher-end AMD Ryzen 9 3900X has four such CCX units of 3 cores each, leaving correspondingly more room for overclocking them separately.
Precision Boost Overdrive, BCLK and offset voltage changes
This feature applies to processors with the X suffix and is designed purely to enhance dynamic (boost) overclocking. It is disabled by default, and activating it voids the warranty.
We look for the Precision Boost Overdrive parameter in the BIOS. On our board it was hidden in the Advanced tab, under AMD Overclocking.
Here we set the PPT, TDC, and EDC parameters discussed above. Setting each to 1000 removes all limits on these items. You can also set more realistic limits; 105, 70, 105 is recommended for the 3700X and keeps the VRM protection intact.
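The two configurations above can be captured in a small sketch. The `within_limits` helper is ours, purely for illustration; per AMD's definitions, PPT is in watts while TDC and EDC are in amps.

```python
# PBO power limits from the text. Units follow AMD's definitions:
# PPT (package power) in W, TDC (sustained current) and EDC (peak current) in A.
PBO_UNLIMITED = {"PPT": 1000, "TDC": 1000, "EDC": 1000}  # removes all limits
PBO_SAFE_3700X = {"PPT": 105, "TDC": 70, "EDC": 105}     # keeps VRM protection

def within_limits(ppt_w: float, tdc_a: float, edc_a: float, limits: dict) -> bool:
    """True if a measured operating point respects the configured limits."""
    return (ppt_w <= limits["PPT"]
            and tdc_a <= limits["TDC"]
            and edc_a <= limits["EDC"])

within_limits(95, 60, 90, PBO_SAFE_3700X)   # True: inside the safe limits
within_limits(120, 60, 90, PBO_SAFE_3700X)  # False: PPT limit exceeded
```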
The voltage-versus-frequency coefficient, or Scalar, ranges from ×1 to ×10. In practice it had almost no effect on the achievable frequency, but a larger coefficient raises the maximum voltage. We set it to ×2.
We set the maximum boost override to 200 MHz, the largest possible value.
Below that, we set the temperature limit to 85 or 95 degrees.
Next, we need to adjust the CPU Core Voltage — Offset Mode values. Find the External Voltage Settings and LLC parameters in the OC Tweaker tab.
We set a minimal Offset Mode value in mV; this value is added to the base voltage at maximum processor load. A negative value is also possible, in which case it is subtracted from the base value.
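The offset arithmetic is simple. A minimal sketch (the `effective_vcore` helper and its example values are hypothetical, for illustration only):

```python
# Offset Mode arithmetic: the offset (in mV, possibly negative) is added to
# the base voltage the board applies under full load.
def effective_vcore(base_v: float, offset_mv: int) -> float:
    return round(base_v + offset_mv / 1000, 3)

effective_vcore(1.40, 25)    # 1.425 V with a +25 mV offset
effective_vcore(1.40, -50)   # 1.35 V with a -50 mV undervolt
```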
Here we can also set the LLC (Load-Line Calibration) level, which compensates for voltage droop under load and affects stability when overclocking. There are five levels in total, from 25 to 100%.
We leave the other CPU Over Protection values in automatic mode to protect the components.
We save and check stability. If behavior is unstable, we can raise the Offset Mode value or change the Scalar value and the LLC level.
Having achieved stable operation at these settings, we can push the frequency further by raising the BCLK base clock, which defaults to 100 MHz. Changing it affects not only the processor but also the memory, USB ports, PCIe bus, and SATA interfaces. Raising it overclocks almost every component on the motherboard, which can cause stability problems, especially with drives.
The stable value was 102 MHz. This number is multiplied by the dynamically changing multiplier to give the resulting maximum frequency in a given task. The maximum frequency on 1-3 cores rose to 4513 MHz; with all threads at 100% load, all cores peaked at 4308 MHz.
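The resulting frequency is simply BCLK multiplied by the current boost multiplier. A small sketch; the 44.25× multiplier is our inference from the observed peak (Ryzen multipliers move in 0.25 steps), not a value read from the BIOS:

```python
# Effective core frequency = BCLK x multiplier.
def effective_mhz(bclk_mhz: float, multiplier: float) -> float:
    return bclk_mhz * multiplier

effective_mhz(100, 44.25)  # 4425.0 MHz at the stock 100 MHz BCLK
effective_mhz(102, 44.25)  # 4513.5 MHz, consistent with the observed 4513 MHz peak
```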
How much did manual BIOS tuning add over automatic overclocking? In single-threaded mode, 100 MHz; in multi-threaded mode the gain is more substantial, almost 300 MHz, matching what we obtained with multiplier overclocking.
In contrast to the previous method, power consumption dropped to 119 W at an average voltage of 1.4 V; at load peaks the voltage briefly rose to a maximum of 1.49 V due to Offset Mode. Temperature under load also fell, peaking at 75°C.
Ryzen Master software overclocking
To overclock its processors from Windows, AMD offers the proprietary Ryzen Master utility.
In this utility, all types of overclocking discussed above are possible.
Automatic overclocking: in this tab we can change only the PPT, TDC, and EDC parameters and the boost value, again up to a maximum of 200 MHz. We cannot change the frequency or voltage.
The same values, minus the boost selection, can be changed in Precision Boost Overdrive mode. The default PPT, TDC, and EDC values are 1000, 380, and 380.
In both cases we got almost identical results. Compared with the automatic mode set from the motherboard BIOS, the gain was only 50 MHz in multi-threaded tasks and up to 300 MHz under mixed load. For one core, the same 4400 MHz. Power consumption and temperatures, however, increased.
We find the manual overclocking mode more interesting and practically useful. Here we can change not only the CCX values but also each core individually; moreover, the program marks the cores best suited for overclocking. Individual cores can even be disabled entirely. Most motherboard BIOSes lack these settings.
Setting all cores to the previously identified stable 4300 MHz gave the same results. Raising them to 4400 MHz caused the system to reboot as soon as the test utility started.
With separate overclocking of each CCX execution unit, we got the same results: 4350 and 4300 MHz, respectively.
We also noticed that the cores the program marked as most capable did not match those that actually reached higher frequencies in tests. Ryzen Master marked the 3rd core with a gold star, the 7th with a silver one, and the 2nd and 6th with a circle. In tests, cores 1, 3, and 8 reached the highest frequencies, while the second core ranked lower.
Let's look at the performance gains in test utilities across the various overclocking modes. In all tests, the RAM ran with its XMP profile: 3200 MHz, 16-18-18-36, CR1.
The first test is LinX 0.6.5 AMD Edition with AVX. This utility loads all threads; results are in GFlops.
The next test, Cinebench R20, also loads all cores. Rendering is one of the most popular multi-threaded workloads on a modern PC.
As you can see, in tasks that load all threads, multiplier overclocking has the advantage: frequency and voltage are fixed. The PBO + BCLK mode is slightly behind; although all cores run at the same 4300 MHz, they can dip periodically. Software overclocking trails slightly as well.
The following tests do not load all threads evenly; the WinRAR archiver and wPrime vary the load dynamically.
In these tests, multiplier overclocking loses performance because of its lower frequency when only 1-3 cores are in use.
Only the BCLK-based mode affects memory speed, since raising the bus frequency also raises the memory clock. Accordingly, we see gains in write and copy speeds.
Overclocking the AMD Ryzen 7 3700X turns out to be a dubious undertaking, and we have at least two reasons for this conclusion.
First, a motherboard based on the X570 chipset with an adequately implemented VRM, plus an efficient CPU cooling system, will cost as much as the processor itself.
Second, manual overclocking adds only 100-300 MHz over what the processor achieves automatically thanks to PBO. The performance gain from those extra couple hundred megahertz is visible only in benchmarks; you will not notice it in real tasks.
We also conclude that fixing the frequency with the multiplier is no longer relevant for Zen 2 processors; today you can forget about it. Raising the frequency on all cores improves performance only in multi-threaded workloads of 8 threads or more, and it reduces performance in single-threaded tasks.
Even with automatic overclocking, when four cores and eight threads were loaded, they all ran at 4300 MHz, the maximum achievable by multiplier overclocking, and two cores easily ran at 4400 MHz. This type of overclocking also blocks dynamic downclocking at idle, which increases power consumption.
The best approach seems to be modifying the existing boost through the processor's power settings. Changing voltages via offset mode, removing the PBO limits, changing the Scalar coefficient, selecting an LLC level, and raising the BCLK frequency can yield performance gains in both multi-threaded and single-threaded tasks.
The motherboard's VRM capabilities, the CPU cooling system, and the flexibility of a given board's BIOS settings all matter greatly for this type of overclocking.
Was overclocking effective? Judging by the 100 MHz gain at peak frequency, no. A figure of 4.5 GHz against a possible 5 GHz for Intel processors is not particularly impressive. But let's not be too categorical or hasty: boost modification gave us +300 MHz under multi-threaded load, which is in greater demand than single-threaded performance.
Technology moves on, and a simple multiplier increase is already a thing of the past. The manufacturer has squeezed the maximum out of the processor itself; further frequency gains depend on the motherboard's CPU power subsystem and the flexibility of its BIOS voltage settings. This creates room for competition among motherboard manufacturers, and perhaps in the near future we will see models capable of squeezing even more megahertz out of AMD processors.
Overclocking AMD processors is again becoming the preserve of enthusiasts; the average user clearly will not bother for an extra hundred megahertz when «smart» processors can effectively overclock themselves.
The First 1 Cent High Performance Plastic Processor / Habr
30-40 years ago, when personal computers were still new and the Internet as we know it did not exist, computing pioneers predicted that in the future electronic chips would become so cheap they would be everywhere: in homes, in vehicles, even in the human body. At the time the idea seemed fantastic, even absurd. PCs were very expensive, and most were not even networked. The idea that billions of tiny chips could ever be cheaper than seeds seemed ridiculous.
For decades, technologists have promised a world in which absolutely every object we encounter (furniture, dishes, clothes) has a «mind» thanks to ultra-cheap programmable processors. If you are wondering why this has not happened yet, it is because no one has built working processors that can be produced in the billions at a cost of 1 cent each.
Over time, absolutely everything around us will become «smart». Manufacturers that fail to make their products «smart» will eventually be forced out of the market by competitors who manage it. One way to achieve such cheap microprocessors is plastic microchips.
Nearly 50 years ago, Intel created the world's first mass-produced microprocessor, the 4004: a modest 4-bit CPU with 2,300 10-micron silicon transistors, capable of performing only simple arithmetic operations. Since that pioneering achievement, technology has developed continuously and grown in complexity to the point that modern 64-bit silicon microprocessors contain up to 30 billion transistors (for example, the AWS Graviton2, manufactured on a 7 nm process). Microprocessors are so ingrained in our lives that they have become a meta-invention: a tool that allows us to implement other inventions.
Microprocessors are at the heart of every electronic device, including smartphones, tablets, laptops, routers, servers, cars and, more recently, the smart objects that make up the Internet of Things. But although traditional silicon technology has put at least one microprocessor into every smart device on Earth, it faces key obstacles to making ordinary objects smarter. Cost is the most important factor keeping conventional silicon out of these everyday items: even though economies of scale have drastically reduced unit costs, the price of a microprocessor is still prohibitive. In addition, silicon chips are not naturally thin or flexible, both highly desirable characteristics for electronics embedded in everyday objects.
On the other hand, flexible electronics offer these desirable features. Over the past two decades, this technology has advanced to offer low-cost, thin, flexible, and user-friendly devices, including sensors, memory, batteries, light-emitting diodes, power harvesters, and printed circuits. These are the basic components for building any intelligent integrated electronic device. The missing element is a flexible microprocessor. The main reason there is still no viable flexible microprocessor is that a relatively large number of thin film transistors (TFTs) need to be integrated on a flexible substrate to perform any meaningful computation.
For example, in 2021 Arm reproduced its simplest 32-bit M0 microcontroller in plastic (PlasticARM), but even this could not meet the cost requirement. The problem, according to engineers at the University of Illinois Urbana-Champaign and the British flexible-electronics maker PragmatIC Semiconductor, is that even the simplest industry-standard microcontrollers are too complex to mass-produce in plastic.
Unlike conventional semiconductor devices, flexible electronic devices are built on substrates such as paper, plastic, or metal foil, and use active thin-film semiconductor materials such as organic compounds, metal oxides, or amorphous silicon. They offer a number of advantages over crystalline silicon, including low manufacturing costs: thin-film transistors (TFTs) can be fabricated on flexible substrates at far lower processing cost than metal-oxide-semiconductor field-effect transistors (MOSFETs) on crystalline silicon wafers. The goal of TFT technology is not to replace silicon. As both technologies continue to evolve, silicon will likely keep its advantage in performance, density, and energy efficiency, while TFTs enable electronic products with new form factors and costs unattainable for silicon, greatly expanding the range of potential applications.
8-bit and 4-bit microprocessors respectively
An intermediate approach, also called hybrid integration, embeds silicon microprocessor dies in flexible substrates: the silicon wafer is thinned and the dies are integrated into a flexible substrate. Although integrating a thin silicon die offers a short-term solution, the approach still relies on traditional, costly manufacturing processes, so it is not a viable long-term way to produce the billions of everyday smart objects expected in the next decade and beyond.
In a study presented at the ISCA 2022 International Symposium on Computer Architecture, the transatlantic team presents a simple yet fully functional plastic processor that can be made for less than 1 cent. The University of Illinois team designed 4-bit and 8-bit processors specifically to minimize size and maximize the percentage of working chips produced. The 4-bit version worked, with a yield of 81%, enough to break the 1 cent barrier.
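The arithmetic behind the 1-cent barrier is worth spelling out: the effective cost of a working chip is the per-die cost divided by yield. The $0.0081 figure below is an illustrative assumption, not a number from the paper:

```python
# Yield arithmetic: a chip that fails testing still cost money to make,
# so the effective cost per *working* chip is cost_per_die / yield.
def cost_per_good_die(cost_per_die: float, yield_fraction: float) -> float:
    return cost_per_die / yield_fraction

# At 81% yield, a per-die cost of $0.0081 still lands right on $0.01
# per working chip (hypothetical cost figure, for illustration).
cost_per_good_die(0.0081, 0.81)  # ~0.01
```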
Plastic microchip test rig
The 4-bit microchip is made of plastic and keeps working even when bent to a millimeter-scale radius. But while a reliable manufacturing process is a prerequisite, it was the design that mattered more.
Instead of adapting an existing microcontroller architecture for plastic, the Illinois team started from scratch to create a design called FlexiCore. Yield drops very quickly as the number of gates grows, so the team developed a design that minimizes the gate count. Using 4-bit and 8-bit logic instead of 16-bit or 32-bit helped, as did separating the memory that stores instructions from the memory that stores data. The team also reduced the number and complexity of the instructions the processor can execute.
Comparison of silicon and IGZO, using a TV display as an example
Why not silicon?
You might wonder why silicon processors cannot do the job of ultra-cheap flexible computing. Compared with plastic, silicon is expensive and inflexible, but if the chip were made small enough, plastic might not be needed. Silicon fails at this task for two reasons. First, although the circuit area can be made ultra-small, relatively large margins must still be left around the edges so the chip can be cut out of the wafer; for a microcontroller as simple as FlexiCore, there would be more edge than circuit. Second, even more space would be required for enough I/O pads to deliver data and power to the chip. Suddenly there is a large area of expensive, empty silicon, pushing costs past the critical $0.01 mark.
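A rough sketch of the geometry argument, with purely hypothetical dimensions (the 0.5 mm core and 0.25 mm margin below are illustrative, not figures from the study):

```python
# Why a tiny silicon die wastes area: assume a square circuit region of side
# `core_mm` plus a fixed margin `edge_mm` on every side for dicing lanes,
# the seal ring, and I/O pads.
def edge_overhead(core_mm: float, edge_mm: float) -> float:
    """Fraction of the total die that is non-circuit edge area."""
    total = (core_mm + 2 * edge_mm) ** 2
    return 1 - core_mm ** 2 / total

# A 0.5 mm circuit with a 0.25 mm margin: 75% of the die is empty edge,
# so the smaller the circuit, the worse the overhead fraction gets.
edge_overhead(0.5, 0.25)  # 0.75
```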
The team simplified things further by designing the processor to execute each instruction in a single cycle, instead of using the multi-stage pipelines of modern processors. They then developed logic that implements these instructions by reusing parts, further reducing the number of gates.
All this led to a 4-bit FlexiCore with an area of 5.6 square millimeters, consisting of only 2,104 semiconductor devices (about the same as the transistor count of the 1971 Intel 4004), compared with roughly 56,340 in PlasticARM. In gate count, that is an order of magnitude less than the smallest silicon microcontrollers. The team also developed an 8-bit version of FlexiCore, but it has not yet yielded positive results.
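The device-count comparison in the text checks out arithmetically:

```python
# FlexiCore's 2,104 semiconductor devices versus PlasticARM's ~56,340.
flexicore, plasticarm = 2104, 56340
ratio = plasticarm / flexicore
print(f"{ratio:.1f}x fewer devices")  # ~26.8x fewer devices
```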
Together with PragmatIC Semiconductor, the Illinois team produced plastic wafers filled with 4-bit and 8-bit processors and tested them at various voltages across several programs. The experiment sounds simple, but it is groundbreaking: most research processors built with non-silicon technologies yield so poorly that results are reported from one or, at best, a few working chips. This is the first work to gather data from many chips in any non-silicon technology.
Not content with this success, the team developed a design tool to explore architectural optimizations for various applications. For example, the tool showed that power consumption can be reduced significantly by slightly increasing the number of gates.
The chip industry has been focused on power and performance and, to some degree, reliability. Focusing on cost allows new computer architectures to be created and new applications to be targeted.