
Best CPU For Rendering [2023 Guide]


Have you ever wondered what Processor (CPU) is best for rendering?

Finding the best CPU for rendering that is also as cheap as possible is something you will want to do before building a new Computer for 3D Rendering or a dedicated Render Node / Renderfarm.

3ds Max, Maya, Cinema 4D, Blender, and many other 3D software packages have in-built or 3rd-party CPU Render Engines that all rely directly on your CPU’s multi-core performance.

But because there are so many CPUs with all kinds of clock speeds, core counts, hyperthreading features, and brands, it can become quite cumbersome to select the right one.

AMD Ryzen and Threadripper, Intel i5, i7, i9, Xeon, and Celeron; some with many cores, others with high core clocks.

Ultimately, it all comes down to raw CPU Rendering performance, which I will be measuring with Cinebench, the currently leading Benchmarking Software for CPU Rendering Performance.

Of course, there are lots of lists online for checking Cinebench scores, but even more important than raw scores is the performance-per-dollar ($) ratio, since spending an unnecessary amount on a CPU is something we all want to avoid.

This is why I have created a performance/dollar ($) table which you can sort to your liking.

This will show you the best Rendering CPU for the money:

Best CPU for 3D Rendering

Performance / Dollar ($): Higher is better.

CPU rendering performance based on Cinebench R23 scores: Higher is better.


CPU Name | CPU Rendering Performance (Cinebench R23) | Price ($)
AMD Threadripper PRO 5965WX 40535 2399
AMD Threadripper PRO 5975WX 53977 3299
AMD Threadripper PRO 5995WX 66403 6499
AMD Ryzen 7 1700X 8869 230
AMD Ryzen 5 5600G 11285 259
AMD Ryzen 7 5700X 14214 299
AMD Ryzen 7 5800X3D 15003 449
AMD Ryzen 5 7600X 14780 299
AMD Ryzen 7 7700X 20144 399
AMD Ryzen 9 7900X 30020 549
AMD Ryzen 9 7950X 40795 699
AMD Ryzen 5 5500 10710 159
AMD Ryzen 5 5600 11429 199
AMD Ryzen 3 3100 5423 99
AMD Ryzen 3 3300X 6787 120
AMD Ryzen 5 2600X 7523 229
AMD Threadripper 1900X 8979 299
AMD Ryzen 5 3600 9073 199
AMD Ryzen 5 3600X 9526 236
AMD Ryzen 5 3600XT 9945 249
AMD Ryzen 7 2700X 10140 329
AMD Ryzen 5 5600X 11201 230
AMD Ryzen 7 3700X 12195 329
AMD Ryzen 7 3800XT 12955 399
AMD Ryzen 7 3800X 13848 339
AMD Ryzen 7 5700G 14350 359
AMD Ryzen 7 5800X 14812 300
AMD Threadripper 1920X 15038 799
AMD Ryzen 9 3900XT 18511 499
AMD Ryzen 9 3900X 18682 434
AMD Threadripper 2950X 18797 899
AMD Threadripper 1950X 19635 999
AMD Ryzen 9 5900X 22046 450
AMD Ryzen 9 3950X 26375 749
AMD Threadripper Pro 3955WX 27175 1149
AMD Ryzen 9 5950X 28782 600
AMD Threadripper 2990WX 29651 1799
AMD Threadripper 3960X 34932 1399
AMD Threadripper Pro 3975WX 43450 2749
AMD Threadripper 3970X 46874 1999
AMD Epyc 7702P 48959 4425
AMD Threadripper Pro 3995WX 73220 5489
AMD Threadripper 3990X 75671 3990
Intel i5 13600K 24528 329
Intel i7 13700K 31069 409
Intel i9 13900K 41012 589
Intel i9 10980XE 25490 979
Intel i9 9920X 14793 1189
Intel i9 9960X 17953 1684
Intel i9 9980XE 27093 1979
Intel i9 9900X 13994 989
Intel i5 9600K 6596 262
Intel i7 9700K 9428 385
Intel i9 9900K 12470 499
Intel i7 10700K 13302 384
Intel i9 10850K 16820 464
Intel i9 10900K 18034 499
Intel i5 11600K 11277 272
Intel i7 11700K 14812 409
Intel i5 12400 12344 199
Intel i5 12400F 12321 174
Intel i9 11900K 16211 549
Intel i5 12500 12974 212
Intel i7 12700F 21568 324
Intel i5 12600K 17660 299
Intel i7 12700K 23488 419
Intel i9 12900F 26455 494
Intel i9 12900KF 27472 574
Intel i9 12900K 27483 589
Intel i9 12900KS 27796 739

 

Now you know the best performance/price ratio of different CPUs when it comes to pure CPU rendering.
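If you want to recompute or re-sort the performance-per-dollar numbers yourself, for example with current street prices instead of the launch prices above, a minimal Python sketch like the following does the job. The rows shown are just a few examples taken from the table above.

```python
# Recompute and sort performance per dollar from (name, Cinebench R23 score, price) rows.
cpus = [
    ("AMD Ryzen 9 7950X", 40795, 699),
    ("Intel i9 13900K", 41012, 589),
    ("AMD Ryzen 5 5500", 10710, 159),
    ("AMD Threadripper 3990X", 75671, 3990),
]

def perf_per_dollar(row):
    name, score, price = row
    return score / price

for name, score, price in sorted(cpus, key=perf_per_dollar, reverse=True):
    print(f"{name:<24} {score:>6} pts  ${price:>5}  {score / price:6.1f} pts/$")
```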

Keep in mind, to truly find not just the best performing CPU for rendering, but the best overall system for your rendering needs, you should also consider:

  • Power consumption: Does the CPU need lots of power and drive up your power bill?
  • Single- vs. multi-CPU systems: What is the overall system price per CPU? Some CPUs can be installed into multi-CPU systems, which might reduce the overall system price per CPU
  • Heat: Does the CPU get very hot? Will you need a loud and expensive cooling solution? Ryzen and Threadripper CPUs tend to be easily cooled
  • CPU-Cooler price: Some CPUs, such as the AMD Ryzen CPUs, have a CPU Cooler included in the package already, which has to be factored in when comparing CPUs
  • Motherboard/RAM price: A cheap CPU might not be such a great deal if you need an expensive motherboard or RAM for it
  • Number of cores (performance) per system: A Ryzen 5 5500 might have extremely high CPU Rendering value, but you will also need multiple of those CPUs (and therefore multiple systems) to get to the performance of a single Threadripper 3990X (see the quick estimate below this list)
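To make that last point concrete, here is a rough, back-of-the-envelope sketch. It counts CPU prices only and deliberately ignores the motherboard, RAM, PSU, and case that every extra node needs, which in practice tilts the math further toward the single big CPU.

```python
# Rough estimate: how many budget-CPU render nodes match one high-end CPU on raw
# Cinebench R23 score, and what the CPUs alone would cost. Per-node platform costs
# (board, RAM, PSU, case) are deliberately ignored here.
import math

budget_score, budget_price = 10710, 159   # AMD Ryzen 5 5500 (score / launch price)
big_score, big_price = 75671, 3990        # AMD Threadripper 3990X

nodes_needed = math.ceil(big_score / budget_score)
print(f"Budget-CPU nodes needed: {nodes_needed}")
print(f"CPU-only cost: ${nodes_needed * budget_price} vs. ${big_price} for the single CPU")
```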

If your rendering demands are high and a single PC may not be enough, be sure to check our guide on building your own Renderfarm.

AMD Ryzen 9 7950X vs Intel i9 13900K

I have been asked this several times, as both of these CPUs are extremely popular. 7950X vs 13900K. Which one is better for rendering?

So let’s make a quick comparison:

  • AMD Ryzen 9 7950X: 16 Cores, 32 Threads, draws less power, Stays cooler – 40795 Cinebench (R23) Points
  • Intel Core-i9 13900K: 24 Cores, 32 Threads, 10% higher single-core performance, can get hot with high power draw – 41012 Cinebench (R23) Points

If you put everything but performance aside, it usually comes down to the following:

  • Are you rendering a lot and want lower noise and power draw? Get a Ryzen 9 7950X.
  • Do you actively work on this PC a lot? Get a Core-i9 13900K.

One of these two CPUs is usually what you would choose when building a Computer for Animation or a Computer for 3D Modeling, as they are some of the highest-clocking CPUs out there.

What CPU Core-Feature is more valuable / important to you?

  • Having High Core-Clocks for fast active work (e.g. i9 13900K)

  • Having a lot of Cores for fast Multi-Processing (e.g. TR 3990X)


High Core-Counts vs. high Core-Clock

Both high core-counts and high core-clocks will improve your rendering speeds. Having more cores is usually the best price/performance way of increasing 3D CPU rendering speed.

Of course, rendering alone isn’t what you usually do on a typical workstation. When actively working on it, be it in 3D, Photo Editing, Graphic Design, or Video Editing, having high core-clocks will benefit you much more than having many cores.

This means it would be best to have both lots of cores and high core clocks. Since CPUs usually trade cores for clock speeds (because of thermal and power limits) you typically have to find a middle ground between the number of cores and clock-speed, though.
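To make the trade-off concrete, here is a toy model rather than a benchmark: multi-core render throughput scales roughly with cores times all-core clock, while viewport and active-work snappiness tracks the single-core boost clock (and IPC). The numbers below are illustrative assumptions, not measured values.

```python
# Toy model of the cores-vs-clocks trade-off; all figures are illustrative assumptions.
cpus = {
    # name: (cores, all-core clock in GHz, single-core boost in GHz)
    "many-core part": (64, 3.0, 4.3),
    "high-clock part": (8, 4.8, 5.5),
}

for name, (cores, all_core, boost) in cpus.items():
    render_proxy = cores * all_core   # proxy for multi-core rendering throughput
    active_proxy = boost              # proxy for viewport / active-work responsiveness
    print(f"{name:<16} render ~{render_proxy:6.1f}   active work ~{active_proxy:4.1f}")
```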

Best CPU for Rendering on a Laptop

Now, all of the above are CPUs that would be built into a 3D Rendering Computer or Workstation. If you are interested in using something more mobile, say, a Laptop for Animation, whilst enjoying fantastic CPU rendering speed, then the following list is for you:


CPU Name | Single-Core Performance | Multi-Core Performance (Cinebench R23)
Intel Core i9-12950HX 1927 23019
Intel Core i9-12900HX 1902 18845
Intel Core i9-12900HK 1938 18197
Intel Core i7-12700H 1806 16745
Intel Core i9-12900H 1917 16555
AMD Ryzen 9 6980HS 1669 14736
AMD Ryzen 9 6980HX 1669 14711
AMD Ryzen 9 6950HS 1662 14670
AMD Ryzen 9 6950H 1662 14670
AMD Ryzen 7 6800H 1499 13611
AMD Ryzen 9 6900HX 1662 14670
AMD Ryzen 9 6900HS 1579 13977
Intel Core i9-11980HK 1574 13977
AMD Ryzen 9 5900HX 1478 13875
AMD Ryzen 9 5980HX 1524 13460
AMD Ryzen 9 5980HS 1521 12844
Intel Core i9-11950H 1574 12836
Intel Core i9-11900H 1540 12354
Intel Core i7-11850H 1517 12354
Intel Core i7-11800H 1492 12180
AMD Ryzen 7 5800HS 1339 10472
AMD Ryzen 5 5600H 1370 10123
AMD Ryzen 7 5700U 1274 9555
Intel Core i5-11500H 1492 9532
AMD Ryzen 5 5600HS 1342 9439
AMD Ryzen 5 5500U 1165 6784
Intel Core i5-1185G7 1538 6264
Intel Core i5-1165G7 1504 6070
Intel Core i5-1135G7 1343 5913
Intel Core i7-11370H 1517 5812
Intel Core i5-1145G7 1419 5059
Apple M1 1528 7799
Apple M2 1701 8538
Apple M1 Pro 1543 12170
Apple M2 Pro 1701 15248
Apple M1 Max 1555 12422
Apple M2 Max 1701 15248

Note: the original sortable table also included a weighted total-performance column: performance relative to an Intel i7 11700K, weighted equally at 50% single-core and 50% multi-core. This weighting indicates good all-round performance for most workloads.

If you're running very specific tasks that you know demand many cores (such as CPU rendering), sort the table by multi-core performance. If you're certain your workloads only need high single-core performance, sort by that column instead.

The benchmark used for this list is Cinebench R23.
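For reference, here is a minimal sketch of how such a 50/50 weighted score could be computed. The multi-core baseline for the i7 11700K is the value listed in the desktop table above; the single-core baseline is my own placeholder assumption, so treat the exact outputs as illustrative.

```python
# Sketch of a 50/50 weighted score relative to a reference CPU (Intel i7 11700K).
REF_SINGLE = 1595   # assumed Cinebench R23 single-core score for the i7 11700K (placeholder)
REF_MULTI = 14812   # Cinebench R23 multi-core score for the i7 11700K (from the table above)

def weighted_score(single: float, multi: float) -> float:
    """Equal-weighted performance relative to the reference CPU."""
    return 0.5 * (single / REF_SINGLE) + 0.5 * (multi / REF_MULTI)

print(f"Apple M2 Pro:   {weighted_score(1701, 15248):.2f}x the reference")
print(f"Ryzen 9 6980HS: {weighted_score(1669, 14736):.2f}x the reference")
```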

Benchmarks vs. Real World

One should be aware that benchmarks are usually not representative of all types of real-world workloads.

A Threadripper 3990X, for example, is extremely fast at rendering scenes that would otherwise spend a huge amount of time in the bucket-rendering phase (the phase that is parallelized most easily; most modern render engines are transitioning to make this a "progressive" phase, where you see your rendered scene more clearly every few seconds).

When rendering frames that don’t take very long (< 1 min), having multiple lower-end CPUs instead of one very powerful CPU is usually better. This is because you can’t perfectly parallelize the entire rendering process! 

There are lots of steps involved in Rendering:

  • Preparation time
  • Mesh exporting
  • Texture loading time
  • Cache building time
  • Ray-Tracing tree-building time
  • Light-Cache and other GI-Caching times

…to name only a few. These are all rendering steps that are done before the better-known (visual) bucket/progressive rendering stage even starts.

Some of these stages might even be restricted to a single CPU core. And when you have 64 cores (as in the Threadripper 3990X), 63 of those Cores will have to wait idly until these preparation steps are done.
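This is essentially Amdahl's law: the serial preparation stages put a hard cap on how much extra cores can help per frame. A small sketch with assumed (not measured) timings shows the effect:

```python
# Amdahl's-law style estimate: single-threaded prep time caps the benefit of more cores.
# The timings below are illustrative assumptions, not measurements.
def frame_time(serial_s: float, parallel_s: float, cores: int) -> float:
    """Total frame time when only the parallel portion scales with core count."""
    return serial_s + parallel_s / cores

serial_prep = 20        # seconds of single-threaded prep (export, caches, BVH build)
parallel_render = 120   # seconds of bucket/progressive rendering on a single core

for cores in (8, 16, 64):
    t = frame_time(serial_prep, parallel_render, cores)
    speedup = (serial_prep + parallel_render) / t
    print(f"{cores:2d} cores: {t:6.1f} s per frame ({speedup:4.1f}x speedup vs. 1 core)")
```

With these assumed numbers, going from 16 to 64 cores shaves only a few seconds off each frame, because the 20 seconds of serial preparation never shrink.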

Lots of these benchmarks, such as Cinebench, mainly measure the visual bucket/progressive rendering phase, where a CPU with many cores pulls ahead easily, because the underlying scenes are usually not all that complex (read: there is almost no "single-core" preparation time involved in benchmarks).

Long story short:

Make sure to analyze the type of scenes you are planning to render. Measure what rendering stage usually takes up the most time in a few of your typical scenes. Keep an eye on the CPU-utilization in your Task Manager to see if the current rendering phase uses all CPU Cores or only a few to find out what has to be improved.
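If you want something a bit more systematic than watching Task Manager, a short script can log per-core utilization while a render runs; phases that sit at only one or two busy cores are your single-threaded bottlenecks. This sketch uses the third-party psutil package (pip install psutil):

```python
# Log per-core CPU utilization while a render is running, to spot single-threaded phases.
import psutil  # third-party: pip install psutil

for _ in range(30):  # sample roughly once per second for ~30 seconds
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    busy = sum(1 for u in per_core if u > 50)
    avg = sum(per_core) / len(per_core)
    print(f"{busy}/{len(per_core)} cores busy, average load {avg:5.1f}%")
```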

Most CPU render engines nowadays show the current rendering stage somewhere in the render window; Cinema 4D's Picture Viewer, for example, displays stages such as [Updating Geometry].

Be sure to also check our guide on how to render faster. It’s an in-depth primer on scene optimization and should help you speed up your renders, maybe even postponing the need to buy new & expensive components.

Custom PC-Builder

If you want to get the best compatible parts for your workloads that are also within a specific budget you’re working with, you should definitely have a look at our web-based PC-Builder Tool.

Select “CPU Rendering” as your main purpose and adjust your budget to create the perfect PC with part recommendations that will fit within your budget.

Need even more Rendering performance?

There are 4 popular ways to speed up your rendering performance.

  1. Optimize your scene, so it renders faster: Here’s our Guide on this.
  2. Buy a faster CPU or GPU for your workstation. You are already reading the CPU Guide; here’s the GPU Guide.
  3. If a single workstation doesn’t cut it anymore, build your own render farm with multiple render nodes: We wrote an in-depth Guide on that as well.
  4. And if all of the above still isn’t fast enough for you, you’ll need to utilize an online Render farm: Our Guide on online Renderfarms.

FAQ

Is Intel or AMD better for rendering?

AMD's Threadripper CPUs are clearly in the lead when it comes to CPU rendering. In core count, performance per dollar, and power consumption, AMD currently has the better CPUs for pure multi-core CPU rendering.

Is GPU rendering faster than CPU rendering?

GPU rendering is usually considerably faster than CPU rendering when comparing performance per dollar on a CPU and GPU. A GPU’s architecture and its thousands of CUDA / Stream-processing cores are purpose-made for parallel processing and easily outperform a CPU.

Faster interactive & real-time previews bring your 3D Projects to a whole new level of quality, thanks to the added iterative capabilities that are made possible with GPU rendering.

GPUs can also be more easily added to, scaled, and carried over throughout multiple motherboard/CPU generations, making them more cost-effective long-term.

Are you mainly rendering on the GPU or CPU?

  • Mainly GPU

  • Mainly CPU

  • Hybrid (CPU + GPU)


Direct performance comparisons, though, are challenging to conduct, as the feature-set of GPU and CPU render engines differ too much, and images don’t always look the same when rendered. Hybrid engines come close, but they, too, don’t always support every feature across both hardware components.

Does the CPU affect GPU rendering performance?

The CPU can affect GPU rendering performance. The CPU’s task is to prepare parts of the 3D Scene and send assets to the GPU. On very short renders, the CPU becomes a considerable factor. The longer your Bucket-rendering phase (pure GPU) lasts, the less the CPU impacts render-time.

If your entire scene fits into your GPU's VRAM, the CPU's impact on render time is lower than when you're utilizing out-of-core access to the system's RAM.
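If you want a quick sanity check before a long GPU render, you can compare your own estimate of the scene's footprint against the free VRAM reported by the driver. The sketch below only works on NVIDIA GPUs (it shells out to nvidia-smi), and the scene-size figure is an assumption you would supply yourself:

```python
# Rough check: does an estimated scene footprint fit into free VRAM? (NVIDIA only)
import subprocess

estimated_scene_gb = 9.5  # your own estimate: textures + geometry + acceleration structures

out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=memory.free", "--format=csv,noheader,nounits"],
    text=True,
)
free_gb = int(out.splitlines()[0]) / 1024  # nvidia-smi reports MiB
if estimated_scene_gb <= free_gb:
    print(f"Scene (~{estimated_scene_gb} GB) should fit in {free_gb:.1f} GB of free VRAM")
else:
    print("Scene will likely spill out-of-core into system RAM; expect slower GPU renders")
```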

Is RAM important for rendering?

Sufficient RAM is essential for CPU Rendering. The CPU holds your 3D Scene in memory and accesses its contents throughout the rendering phase. If your Scene is too large and does not fit into your RAM, it will be swapped out to your storage drive, which is considerably slower than RAM.

Always make sure you have enough RAM.
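The same kind of sanity check works for system RAM before a heavy CPU render; again, the scene-size figure is your own estimate, and the script relies on the third-party psutil package:

```python
# Quick check: is there enough available RAM for the scene before the OS starts swapping?
import psutil  # third-party: pip install psutil

estimated_scene_gb = 48  # e.g. a heavy archviz scene with lots of high-resolution textures
available_gb = psutil.virtual_memory().available / 2**30

print(f"Available RAM: {available_gb:.1f} GB")
if available_gb < estimated_scene_gb:
    print("The scene will likely be swapped to disk during rendering; add RAM or optimize.")
```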

Is Ryzen good for Rendering?

AMD Ryzen CPUs are great for rendering in CPU-based render engines. Because CPU render engines scale almost linearly with core count, however, high-core-count Threadripper CPUs (up to 64 cores) will fare even better.

When buying a Ryzen CPU to maximize CPU rendering performance, make sure to buy one with as many cores as possible.

Over to you

What kind of Computer are you building? Feel free to ask for help in the comments or in our expert forum.


Benchmarking Performance: CPU Rendering Tests

by Andrei Frumusanu & Gavin Bonshor on July 7, 2019, 9:00 AM EST




Rendering is often a key target for processor workloads, lending itself to a professional environment. It comes in different formats as well, from 3D rendering through rasterization, such as games, or by ray tracing, and invokes the ability of the software to manage meshes, textures, collisions, aliasing, physics (in animations), and discarding unnecessary work. Most renderers offer CPU code paths, while a few use GPUs and select environments use FPGAs or dedicated ASICs. For big studios however, CPUs are still the hardware of choice.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: Performance Render

An advanced performance-based renderer for software such as 3ds Max and Cinema 4D, the Corona benchmark renders a standardized generated scene under version 1.3 of the software. Normally the GUI implementation of the benchmark shows the scene being built, and allows the user to upload the result as a 'time to complete'.

We got in contact with the developer who gave us a command line version of the benchmark that does a direct output of results. Rather than reporting time, we report the average number of rays per second across six runs, as the performance scaling of a result per unit time is typically visually easier to understand.

The Corona benchmark website can be found at https://corona-renderer.com/benchmark
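If you want to replicate the run-it-several-times-and-average approach with whatever benchmark binary you have, a generic harness looks like this. The command name below is a placeholder, since the command-line build of the Corona benchmark used here is not publicly documented; substitute your own executable and flags.

```python
# Generic harness: run a command-line benchmark several times and report mean/stdev runtime.
# The binary name is a placeholder assumption; swap in whatever benchmark you actually have.
import statistics
import subprocess
import time

CMD = ["./benchmark-cli"]  # hypothetical executable
RUNS = 6

times = []
for i in range(RUNS):
    start = time.perf_counter()
    subprocess.run(CMD, check=True)
    times.append(time.perf_counter() - start)
    print(f"run {i + 1}: {times[-1]:.1f} s")

print(f"mean {statistics.mean(times):.1f} s, stdev {statistics.stdev(times):.1f} s")
```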

 

LuxMark v3.1: LuxRender via Different Code Paths

As stated at the top, there are many different ways to process rendering data: CPU, GPU, Accelerator, and others. On top of that, there are many frameworks and APIs in which to program, depending on how the software will be used. LuxMark, a benchmark developed using the LuxRender engine, offers several different scenes and APIs.

Taken from the Linux Version of LuxMark

In our test, we run the simple ‘Ball’ scene on both the C++ and OpenCL code paths, but in CPU mode. This scene starts with a rough render and slowly improves the quality over two minutes, giving a final result in what is essentially an average ‘kilorays per second’.

POV-Ray 3.7.1: Ray Tracing

The Persistence of Vision ray tracing engine is another well-known benchmarking tool, which was in a state of relative hibernation until AMD released its Zen processors, at which point suddenly both Intel and AMD were submitting code to the main branch of the open-source project. For our test, we use the built-in all-cores benchmark, called from the command line.

POV-Ray can be downloaded from http://www.povray.org/

Cinebench R15

The latest version of Cinebench has also become one of those 'used everywhere' benchmarks, particularly as an indicator of single-thread performance. High IPC and high frequency give performance in the ST test, whereas good scaling and many cores are where the MT test wins out.


Intel or Ryzen for Graphics, Rendering and Video Editing

With the introduction of the new line of Ryzen 3000-series processors (3900X, 3800X, 3700X, 3600), AMD has successfully closed the gap with Intel's best offerings. As a result, most of the new CPU models beat their direct competitors in every type of task.

Update, November 2020: there is a separate article about the new AMD Ryzen 5000 processor series.

The site has additional articles on the following topics:

Best computer for GPU rendering (Octane, Redshift, V-Ray GPU)

Best Computer for 3D Modeling and Rendering (CPU) in 2019

Best Computer for Video Editing

Let's take a closer look and compare AMD and Intel products to decide which current processor to use when building a computer at the end of 2019. Let the flame war begin ))

For those who want quick advice and don't care to dig into the "why," I'll start from the end and list the most capable options in each category.

Ryzen or Intel in 2019?

Video Editing Processor
Top Performance — Intel Core i9 9980XE
Best Value — AMD Ryzen 9 3900X
Budget Option — AMD Ryzen 7 2700X

Processor for graphics
Top performance — Intel Core i9 9900K
Best Value — AMD Ryzen 9 3900X
Budget Option — AMD Ryzen 5 3600

CPU Rendering
Top Performance — AMD Threadripper 2990WX
Best Value — AMD Ryzen 9 3900X
Budget Option — AMD Ryzen 5 3400G

Overall Performance
Top Performance — AMD Ryzen 9 3900X
Best Value — AMD Ryzen 7 3700X
Budget option — AMD Ryzen 5 2600X

History of confrontation AMD vs Intel

If you are interested in how the development of technology led to such conclusions and want to understand a little more about the difference between processors, read on.

The rivalry between AMD and Intel has been going on for almost half a century and is one of the oldest topics for flame wars between fans of their products. And the confrontation is unlikely to stop any time soon.

For years, Intel held the advantage in the CPU market; throughout the 21st century, or at least the last decade, AMD was trailing behind. However, everything changed with the release of the AMD Ryzen line in 2017.
Intel's lineup had long stood out for its higher performance and wide range of models for various market segments, which in general made it the better and easier choice for any purpose.

Whether the customer needed a gaming computer, a workstation or an office computer, the conclusion was clear — take Intel and don’t rack your brains.

Intel’s IPC advantage

The difference in overall performance is rooted in companies’ fundamentally different approaches to processor architecture that were introduced around 2010-2011.

Intel focused on improving IPC (instructions per cycle, i.e., operations per clock), while AMD focused on parallel data processing. One approach meant building processors with fewer but more powerful cores; the other, processors with more but individually weaker cores.

What is IPC?

Instructions per Cycle is the number of operations that the processor can perform during one cycle. What does this mean in practice?

Look at it this way: processors with a 3 GHz clock speed have existed for more than 10 years. Yet if we compare any modern processor with an older one running at the same frequency, the modern one turns out to be much faster. Why? Because of IPC. Per cycle, the newest processors perform more operations than the older ones. Increasing operations per clock directly increases the overall performance of the processor, which makes raising IPC the most obvious and important goal when designing faster CPUs.
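As a purely illustrative example of why IPC matters as much as frequency, consider two hypothetical CPUs running at the same 3 GHz clock:

```python
# Illustrative only: same clock, different IPC.
def per_core_throughput(ipc: float, clock_ghz: float) -> float:
    return ipc * clock_ghz  # instructions per nanosecond, in arbitrary units

old_cpu = per_core_throughput(ipc=1.0, clock_ghz=3.0)
new_cpu = per_core_throughput(ipc=1.5, clock_ghz=3.0)
print(f"Same 3 GHz clock, {new_cpu / old_cpu:.1f}x the per-core performance from higher IPC")
```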

AMD: low IPC and failure of the FX line

In 2011, AMD released the FX processor line, built almost from scratch, but it failed commercially. The problem was that the new processors were often slower than AMD's own previous-generation chips they replaced!
The FX-4000 series arrived a few months later but simply couldn't compete with Intel's Sandy Bridge CPUs. This allowed Intel to consolidate its dominance and grow its sales.

Over the next few years both companies kept increasing the performance of their processors with each new generation.
The result was predictable: Intel was the undisputed leader both in peak single-core performance and in tasks that required the power of multi-core processors.

Of course, sooner or later new developments from AMD had to shake the leadership of Intel. But it’s worth giving credit to Intel engineers: their decisions and strategic vision in 2011 predetermined the superiority of the company for many years to come.

First Gen Ryzen vs. Intel 7th Gen

When AMD first announced the Ryzen series of processors, people were pretty skeptical. Memories were still fresh of how the FX series had also been hyped as a gold mine, while in reality buyers got an 8-core processor that couldn't catch up with Intel's 4-core chips even in multi-threaded tasks. To complete the picture, add the crazy heat output and high operating temperatures of those processors, which spawned tons of memes on the Internet.

When Ryzen did hit the market, even diehard AMD fans were surprised. Compared to the FX series, per-clock performance had increased by 52%, bringing it very close to the performance of Intel's 7th-generation processors. Of course, this was not a victory yet, but after the failure of the FX series it was a huge step toward higher performance on the new architecture.

The low cost of production made it possible to position AMD's 8-core, 16-thread processor against Intel's 4-core, 8-thread processor. The Ryzen series offered users not only high frequency but also solid performance per clock, and coupled with a larger number of cores, the new AMD Ryzen easily came out on top in multi-threaded tasks.
However, Intel had been polishing the architecture and frequencies of its processors for years and still held the lead in maximum clock speeds and operations per clock, making its chips the best choice for single-core workloads.

In addition, since Intel had held the market leadership for years by this point, most applications were optimized specifically for their processors, which was also reflected in comparative tests.

In games and standalone graphics applications, Intel processors remained the best choice due to their superior single-threaded performance.

Ryzen vs. Intel in 2019

That all changed with the launch of AMD's updated line of Ryzen 3000-series processors based on the Zen 2 architecture. AMD was well aware of exactly where it lagged behind the competition. The objective of the upgrade was precisely to increase IPC in order to catch up with and overtake the ninth generation of Intel processors. As you can see, if it weren't for Intel's advantage in clock speeds, the competition would have been over a couple of years ago before it really started, and there would be nothing left to talk about.

Ryzen and Intel benchmarks in various applications

When it comes to building a computer, it is important to understand that the choice of configuration should be based on the profile of the main tasks the computer is purchased for. Below I will give several benchmarks comparing Intel and AMD Ryzen processors in video editing, rendering, and other graphics-related tasks.

Ryzen or Intel: Cinebench R20 tests

Although the main goal of the 12-core Ryzen 3900X was to beat the 8-core Intel i9 9900K in multi-threaded workloads, quite unexpectedly, Ryzen also outperformed its competitor in single-core performance.

Ryzen or Intel for Video Editing

In Adobe Premiere Pro 2019 and Vegas Pro 16 tests, 3rd-gen Ryzen delivers impressive results. Note that the Ryzen 3900X has 12 cores and 24 threads, while the top-end Intel 9980XE has 18 cores and 36 threads and costs far more, yet the Ryzen's results are very close to the leader's.

The Ryzen 3950X, with 16 cores and 32 threads, is due in the fall, and then things will look grim for Intel. Adjusted for cost, Ryzen already looks like the better buy for video editing.

Video editing hardware
Power without compromise Intel Core i9 9980XE + RTX 2080Ti/Titan Xp
Value for money: AMD Ryzen 3900X + RTX 2070
Budget solution Ryzen 7 2700X + GTX 1660Ti

In video editing tasks Intel is still in the lead, but the 12-core 3900X posts results close to the 18-core Intel chip, which hints that when the 16-core Ryzen 3950X comes out, Intel will be torn to shreds. For now, though, the i9 9980XE is still technically the leader.

Intel or Ryzen for graphics work, a fast viewport

For graphics work and a fast viewport, maximum single-core frequency and operations per clock are what matter, so Intel has held the advantage here for many years.

However, the release of Gen 3 Ryzen processors brings a significant change to this established paradigm, as the new processors deliver performance very close to Intel’s top processors.

Graphics hardware
Power without compromise Intel Core i9 9900K
Value for money: AMD Ryzen 3900X
Budget solution Ryzen 5 3600

Once again, I emphasize that Intel's lead here is very small, but in fairness the palm has to go to Intel. At the same time, I would not advise buying the Intel 9900K now: the Ryzen 3900X is only negligibly behind, and its additional cores will prove themselves in other tasks, such as rendering.

Ryzen or Intel for CPU rendering

For CPU rendering, the Ryzen 9 3900X seems to be the preferred choice right now. So if you render in V-Ray, Corona, etc., get the 3900X and you won't regret it. If necessary, you will later be able to upgrade to the Ryzen 3950X without changing the motherboard.

Speaking of motherboards.
Buying a Ryzen and a motherboard for it is also the more sensible purchase from a long-term, strategic perspective. The upgrade options on AMD's AM4 socket are much better than on Intel's LGA 1151. Intel has a bad habit of forcing users to replace a motherboard that is only 1-2 years old whenever they want to upgrade to a newer processor.

To sum up, the Ryzen 9 3900X looks like the best buy right now. If you are considering buying a new computer a little later, you should wait for the Ryzen 3950X to hit the market for maximum performance. The added benefit is that if you opt for a Ryzen based AM4 motherboard, you can easily upgrade to the new line of Ryzen Zen 3 processors coming in 2020.

The first 64-core workstation from AMD and Lenovo.

Man5ON

AMD recently announced a new line of AMD Ryzen Threadripper PRO processors with up to 64 cores and fairly high throughput performance. These processors are designed for OEM professional workstations. Manufacturers are confident that these processors will be the best choice for artists, architects, engineers and data scientists.

It became known that the first workstation with the presented processor will be released by Lenovo. The Lenovo ThinkStation P620 should be on sale in September this year, with an estimated starting price of $4,600. The station will be equipped with Threadripper Pro 3995WX, delivering unparalleled levels of power, performance and flexibility in a single processor chassis.

The workstation will support up to two NVIDIA Quadro RTX 8000 or four RTX 4000 graphics cards, up to 1 TB of RAM and up to 20 TB of storage, as well as PCIe 4.0 and a well-thought-out cooling system.
You could say the new workstation is universal enough for any industry; you no longer have to choose between core count and clock speed. All that remains is to wait for the release and the corresponding tests to see whether it really is that good.


2020-07-17

And the video cards are still Nvidia))) When will the Intel ones appear ???
Promised this year! I would like to read the first reviews already 🙂

Intel? Why not radeons? )

2020-07-17

Oh, if this had come a little earlier, an Apple Mac Pro at the price of an apartment wouldn't have been needed, with all the consequences ) although it remains to be seen how much this machine will cost.

2020-07-17

Suspiciously cramped case for such hardware

2020-07-17

In fact, apart from rendering, far from all software can actually use this much power. Optimized applications will hover somewhere between 8 and 16 cores, and After Effects will generally use only 2-4 cores.
Rendering will certainly fly, but when all the cores are loaded, the turbo frequency will drop by about 600 MHz.

2020-07-17

I don't know what they can put in there for the remaining 600 bucks of the total price, because the 3990X alone costs $4k now and it's the closest sibling of this processor

2020-07-17

Jen, 4k is retail.

2020-07-17

Shinetek, rendering, simulation, comp: those are the resource-intensive tasks, and that's what this is aimed at. Why load all 64 cores when we're just extruding a polygon on a cube?

2020-07-17

Simulation is far from optimized for that many cores everywhere. But compiling something like Unreal will probably use them to the fullest. And Nuke will probably load them too.

2020-07-17

Emin Yusifov
Intel? Why not radeons? )

Because it took Nvidia 15 years to disappoint me, and even then it was corporate policy that spoiled things, just like with Adobe and Autodesk, while an ATI Radeon didn't last me even a few months; I cursed it out, swapped it for an Nvidia card and was satisfied, and I don't even remember what exactly burned out in it. I have been using Intel processors for 25 years and am more than satisfied; lately I've had to use the integrated graphics, and I have no complaints about that either, and if Intel video cards prove themselves, I will opt for them. I will add that I am a very demanding PC user: I compile programs, render multimedia, play games, and watch Blu-ray-quality movies.

2020-07-18

I’ve never seen anything worse.. five minutes and hit the wall!!!

2020-07-18

Crazy Gamer
Because it took Nvidia 15 years to disappoint me, and even then it was corporate policy that spoiled things […]

Well, to be honest, you shouldn't judge the quality of the GPUs by the CPUs; different divisions work on them, after all. And even if AMD's GPUs lack something like CUDA support, their central processors are now simply a must-have when it comes to working with 3D graphics.
That you have been consistently satisfied with Intel processors is a little strange; just like AMD, this company has offered products of varying quality over the years.

2020-07-18

Jen, most likely the article is just sloppily written, and the starting price is surely for the 3945WX, which has 12 cores. Branded workstations from Dell, HP, or Lenovo usually carry a pretty hefty markup; not Apple-level, of course, but still.

2020-07-18

The person in the photo is just sitting in Photoshop, though

2020-07-19

Artyom, this is maya

2020-07-19

Alexander, Lenovo makes shit? Are you talking about the maker of the best Windows laptops ever?

2020-07-19

I’m talking about Lenovo, it says

2020-07-19

Alexander, yes, I said so, Thinkpads are the best Windows laptops ever

Alexander, clearly, the author is 10 years old

2020-07-19

What do laptops have to do with it? What is the post even about?

2020-07-19

Alexander, «Knowing what shit Lenovo is doing, I don’t feel like it anymore»
Lenovo also makes laptops, in the post we are talking about their new computer, what is the disagreement?

2020-07-19

and they are just as shitty, if that matters. Can you prove it?

2020-07-19

Alexander, yes, I can

2020-07-19

Prove that Lenovo is shit, then. You either never used them or you just don't know about them, which means you're 10 years old, because how can you be into tech without knowing about them…

2020-07-19

I haven't seen any proof, and your fantasies don't interest me. Are you going to provide some documented evidence, or keep lying?

2020-07-19

Alexander, you probably haven't even been in the army yet… And what does the army have to do with Lenovo making shit?

2020-07-19

Alexander, the army makes a man a man, and not this one in your picture …

2020-07-19


Alexander, yes, you are impenetrable, so I decided to just make a joke

Alexander, come on, give it a rest. Are you really 10 years old or something?

2020-07-19

well, can you prove it?

2020-07-19

Alexander, I never lie

Alexander, so you prove that I lied

2020-07-19

— you said something
— you insisted on it
— you can't prove it, therefore you lied

2020-07-19

Alexander, and if I say that there is a god, but I can’t prove it, am I lying or not?

2020-07-19

We are not talking about God now, or about your religious preferences, but about the fact that you lied about supposedly exact data

2020-07-19

Alexander, no, don't dodge the question; answer it

2020-07-19

I’m not interested in answering the questions of a liar, given the fact that it is you who are trying to avoid the answer

2020-07-19


I’m just not interested in talking to a liar. That’s all

2020-07-19

Alexander, obviously the author has gotten tired of this

2020-07-19

Awesome computer, judging by the specs. If I had spare money I didn't mind parting with, I'd get one without hesitation, even just to play on. It's a pity such hardware isn't offered for a "test drive" or, say, on lease for a year or two. I've recently been working on an i9-9940X, and that's only 14/28 cores, yet even Arnold, the slowest of renderers, is a pleasure; this monster would probably get close to real time.
p.s. By the way, I don't follow AMD at all and don't know the current pros and cons. Back around 2002-03 the office decided to save money and moved the whole production to AMD instead of Intel, and then the fun started: almost every project more complicated than cubes and spheres (I exaggerate) crashed five times an hour, especially during rendering. I don't know what the cause was; they said it was a bad batch of motherboards. It doesn't really matter, but since then I don't even consider AMD as a work CPU, although I understand a lot of water has passed under the bridge since then; the bad aftertaste remains. Lately, though, I hear that here and there Intel is getting beaten, and not only in games but in 3D applications too. But with GPU renderers they have a real problem, and that's a pity.

2020-07-19

Georgeman
Awesome computer, judging by the specs. If I had spare money I didn't mind parting with, I'd get one without hesitation, even just to play on. […]

I own a system with an AMD 3990X / 256 GB RAM / RTX 5000. The chip is overclocked to 4.2 GHz on all cores. Compared with my previous machine, an 18-core 2699v3 with the all-core boost unlock, I got a 6x (!) increase in scenarios with 100% CPU utilization. I don't think that needs any comment. I will only add that, compared with GPU rendering (E-Cycles), the CPU render speed is roughly comparable to an RTX 5000 or two overclocked 1080 Tis.

2020-07-19

Michael, it's an old Maya, somewhere around 2013-2016.

2020-07-19

Nadir Mursalov (Cooler3D)
I own a system with an AMD 3990X / 256 GB RAM / RTX 5000. The chip is overclocked to 4.2 GHz on all cores. […]

So two 1080 Tis are better than a 3990X?!? Then what's the point of overpaying for the 3990X???

2020-07-19

Max Vic
So two 1080 Tis are better than a 3990X?!? Then what's the point of overpaying for the 3990X???

If we're talking about hybrid renderers, then yes, it's better, but only as long as the scene fits in video memory. In out-of-core GPU mode, rendering slows down roughly tenfold. The CPU doesn't have that problem; with 256 GB the scene usually fits one way or another. A CPU like this should be considered primarily for tasks where the GPU is of little use.

2020-07-20

Now I'm scratching my head over what to invest in: either build a system around the 3950X, or simply pick up two (or four) used 1080 Tis from a mining farm for about 30 thousand rubles each on my existing platform; or, if 11 GB of memory is too little, then instead of replacing the whole platform I could get a Titan RTX, which already has 24 GB, right???

2020-07-20

I’m only interested — where can I get such a case?

2020-07-20

And what, roughly, can fit into the 11 GB of a 1080 Ti?

A single object fits fine. I'd model a character or a car, render it quickly, and set up the materials right away.

2020-07-21

Max Vic
Now I'm scratching my head over what to invest in: either build a system around the 3950X, or simply pick up two (or four) used 1080 Tis from a mining farm […]

I let mine go from the farms for almost 20 thousand, and even then people were reluctant to take them; they were going dirt cheap on the secondary market. IMHO an underrated card; at that price it's practically a gift.

2020-07-21

Nadir Mursalov (Cooler3D)
I let mine go from the farms for almost 20 thousand, and even then people were reluctant to take them; they were going dirt cheap on the secondary market. IMHO an underrated card; at that price it's practically a gift.

Yes, if it weren't for the bacchanalia of the crypto boom, we would never have seen prices like that. As for the RTX series, judging by the prices, things are bad.

2020-07-21

Andrej Logvin
A single object fits fine. I'd model a character or a car, render it quickly, and set up the materials right away.

A single object isn't much, though; what about interiors and exteriors?! I looked through the Redshift gallery; there are projects there, but what hardware they were made on is a mystery.

2020-07-22

Max Vic
A single object isn't much, though; what about interiors and exteriors?! I looked through the Redshift gallery; there are projects there, but what hardware they were made on is a mystery.

In practice, interiors, and especially exteriors, almost never fit into 11 GB of video memory unless you specifically put effort into optimization.

2020-07-23

So where is this supposedly well-thought-out cooling system? What a joke: one intake fan, one exhaust fan, and one more added for the drives, and that's all.

2020-07-28

Nadir Mursalov (Cooler3D)
In practice, interiors, and especially exteriors, almost never fit into 11 GB of video memory

I would very much like to see an interior that doesn't fit into 11 gigs.

2020-08-05

The computer is insanely powerful; I'd love to have one myself, but I realize that on my salary I'd have to start with AT LEAST the same computer desk as the girl in the photo 😀 I saw a similar one at a furniture store, but for the brand badge you have to pay an extra 7-10 thousand, and that's another $100 on top… Who knows, maybe I'll still decide to buy one, because mine is already dying. The computer is dying from rendering, and the desk from getting punched during crunches and blown deadlines, hah.

2020-08-21

I'm completely baffled by all this; I honestly don't know what explains it, and I suspect some kind of conspiracy. Anyway, last winter I bought a Ryzen 3900X and an Nvidia RTX 2070 Super. I was sitting making a video in 3ds Max and thought, let me try playing something while it renders on the V-Ray hybrid engine, and lo and behold: I jump into Warzone at max settings and there are no FPS drops, no freezes, and the render speed only dropped by 5-10%.