CPU comparison tool: AMD Ryzen 5 5500U vs Intel Core i5-1135G7 @ 2.40GHz [cpubenchmark.net] by PassMark Software

SilverBench · Online multi-core CPU benchmark & stress test · 100% JavaScript


SilverBench is an online multi-core CPU benchmarking service (pure JavaScript) that benchmarks computer (PC or mobile device) performance using a photon-mapping rendering engine. Three benchmark options are available: Performance, Extreme, and Stress test. The photon mapping is performed by the CPU alone (no GPU is used). The stress test is useful for CPU burn-in, temperature, and stability testing.
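The benchmark engine itself is photon mapping written in JavaScript; purely to illustrate the general shape of such a test — one CPU-bound task per logical core, no GPU involvement — here is a hypothetical sketch (our own code, not SilverBench's):

```python
from multiprocessing import Pool, cpu_count

def burn(limit: int) -> int:
    """CPU-bound kernel: count primes below `limit` by trial division."""
    count = 0
    for candidate in range(2, limit):
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # One worker per logical core, the way a stress test saturates the CPU.
    with Pool(cpu_count()) as pool:
        counts = pool.map(burn, [20_000] * cpu_count())
    # Every worker did identical work, so every count must match.
    assert all(c == counts[0] for c in counts)
```

Timing the `pool.map` call gives a throughput figure; repeating it indefinitely turns the same sketch into a burn-in loop.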



Last 10 benchmarks · Mixed

  • P46987i·15m ago

    tateW

  • S0.1047·31m ago
  • S0.0354·36m ago
  • S0.0708·36m ago
  • S0.1547·2h ago
  • P40913i·2h ago

    Anonymous · HP 840 G9

  • P28109·2h ago
  • P14177i·2h ago

    Anonymous · HP 830 G9

  • S0.0662·2h ago
  • P30616i·2h ago

    test2

Results are shown in submission order.

All time TOP 5 · Performance

  • P198743i·283d ago

    blaze · EPYC 7662

  • P193134i·303d ago

    Droidbot · Ampere Altra 80c

  • P166636i·2y+ ago

    NML2

  • P164166i·2y+ ago

    Elzele · Willy76

  • P160353i·394d ago

    Minzer · Threadripper 3960

Performance scores (higher is better).

All time TOP 5 · Extreme

  • X26608i·303d ago

    Droidbot · Ampere Altra 80c

  • X23590i·2y+ ago

    Droidbot · EPYC 7J13 ×2

  • X19316i·366d ago

    threadripper-3995wx

  • X18890·431d ago
  • X18053i·406d ago

    mahdimc · dual EPYC 7543

Extreme score (higher is better).

All time TOP 5 · Stress

  • S0.7008·86d ago
  • S0.6314·428d ago
  • S0.5519·2d ago
  • S0.5471·1d ago
  • S0.5425·73d ago

Stress scores (higher is better). Scores are collected automatically after the 10th rendered frame.


SilverBench stats

3,536,200 benchmarks done.

That’s about 45 Tera Rays!


CPU-Z benchmark tool is updated to version 1.79


Andres Alvarez Follow on Twitter Send an email 2 May 2017



All of us have at some point used benchmark tools such as AIDA64, 3DMark, or CPU-Z to test certain aspects of our PC, or simply to learn its general performance. The last of these programs has received an update that has caused some controversy by giving the same processors much lower scores than previous versions did.

The update completely changes the way CPU performance is measured. Previously, Ryzen chips had scored much higher than Intel Kaby Lake/Skylake parts clock for clock; this improvement of up to 30% made the old version a very misleading benchmark.

After downloading the new version of the software, results for Ryzen-range processors drop considerably, because the benchmark now gives greater weight to multi-threaded capacity, which better reflects a processor's true potential. The update was released almost 10 days ago, and many users have been wondering about the reason for the change.

Why are the scores much lower than the previous version, and can they be compared?

At the time the first benchmark was released in 2015, only a few parts included 8 cores (such as the 5960X). Since then, Ryzen has been introduced, making 6- and 8-core processors more and more prevalent, and models with 10, 12, and 16 cores will soon be released. More cores mean higher multi-threaded scores, and a lower scale makes comparisons easier. The new benchmark uses a new algorithm, and its scores cannot be compared with those of the previous version.

Why does Ryzen performance decrease relative to Intel processors with the new benchmark?

When the first version of the benchmark was released in 2015, it was tested on all existing architectures to check the relevance of the scores. Almost two years later, Ryzen was introduced, and scored — core for core and clock for clock — almost 30% higher than Intel Skylake. After a deep investigation, we found out that the code of the benchmark fell into a special case on the Ryzen microarchitecture because of an unexpected sequence of integer instructions. These operations added a noticeable but similar delay on all microarchitectures that existed when the previous benchmark was developed. When Ryzen was released, we found that its ALUs executed this unexpected sequence much more efficiently, leading to results that did not match the average performance of the new architecture. We reviewed many software and synthetic benchmarks without finding a single case where such a performance boost occurs. We are now convinced that this special case is very unlikely to happen in real-world applications. Our new algorithm, described below, does not exhibit this behavior.

What algorithm does the benchmark use, and what instruction set is used?

The new benchmark computes a two-dimensional noise function of the kind that could typically be used in a game to generate a procedural map. The code is written in C++ and compiled with Visual C++ 2008. No special instruction set is used, but the x64 version uses scalar SSE/SSE2 instructions for floating-point operations, whereas the 32-bit version keeps using the legacy x87 instructions, resulting in almost half the x64 performance.
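CPUID has not published the benchmark kernel itself; purely as an illustration of what a 2-D noise function usable for a procedural map can look like, here is a hypothetical value-noise sketch — all names and constants are ours, not CPU-Z's:

```python
import math

def hash2(x: int, y: int) -> float:
    """Hypothetical integer hash of a lattice point, mapped to [0, 1]."""
    h = (x * 374761393 + y * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 0xFFFFFFFF

def value_noise(x: float, y: float) -> float:
    """2-D value noise: bilinear interpolation between lattice-point hashes."""
    xi, yi = math.floor(x), math.floor(y)
    tx, ty = x - xi, y - yi
    # Smoothstep fade keeps the gradient continuous across cell borders.
    fx, fy = tx * tx * (3 - 2 * tx), ty * ty * (3 - 2 * ty)
    v00, v10 = hash2(xi, yi), hash2(xi + 1, yi)
    v01, v11 = hash2(xi, yi + 1), hash2(xi + 1, yi + 1)
    top = v00 + (v10 - v00) * fx
    bottom = v01 + (v11 - v01) * fx
    return top + (bottom - top) * fy
```

Sampling `value_noise` over a grid yields a smooth heightmap-style field; a benchmark would time millions of such evaluations per thread.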

When will the benchmark pages be updated with the new benchmark results?

The new benchmark was released 10 days ago, and thousands of scores have been recorded in the meantime. We are currently building a new database with these results and plan to put it online during this week.

 

Source: 3D overclock


Related publications

Hardware Benchmark Test — 2020

Third-party tools are available to test graphics, processors, and I/O, and the SOLIDWORKS RX Benchmark Tool tests hardware performance with SOLIDWORKS software.

Processor

This is an estimate of the time required for the computer to perform processor-based operations in the SOLIDWORKS software, such as rebuilding features or creating drawing views. A computer that passes this test in half the time will rebuild the part twice as fast.

Since the CPU is used in almost all operations, a faster processor improves the processor score and may also improve other results.
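The "half the time means twice as fast" relationship above is just the inverse-time ratio; a minimal sketch with our own helper, not part of the SOLIDWORKS RX tool:

```python
def speedup(reference_time_s: float, measured_time_s: float) -> float:
    """How many times faster than the reference a machine is, given that
    the test reports elapsed time (less is better)."""
    return reference_time_s / measured_time_s

# A machine that finishes the rebuild test in half the reference time
# is exactly twice as fast:
print(speedup(60.0, 30.0))  # 2.0
```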

Graphics

This is an estimate of the smoothness of rotating, zooming, and panning a model in the SOLIDWORKS software. A short time indicates the ability to zoom, pan, and rotate complex and large models with less noticeable delay.

Graphics performance depends partly on the processor and graphics card. Graphics processing efficiency can be improved with a faster graphics card and processor.

I/O

This is an estimate of how long it takes to open and save files on the computer. This test measures the read and write speed of a computer’s hard drive.

This does not take into account network speeds or data management systems, so this rating may not be a good indicator of open and save speed for network environments.

Rendering

This score is only available if you have PhotoView 360 installed on your computer. This test measures the time it takes to complete photorealistic rendering of models.

PhotoView 360 uses multiple processors if available; with more and faster processors, rendering time improves.

RealView Graphics Processing Efficiency

This rating is available only if your computer has a graphics card that supports RealView graphics. The short time means that with RealView graphics enabled, complex and large models can be zoomed, panned, and rotated with little or no latency.

RealView graphics processing performance depends partly on the processor and partly on the video card, with a heavier dependence on the video card than standard graphics has. RealView graphics processing efficiency can be improved with a faster graphics card and processor.

Simulation

This assessment is available only if the SOLIDWORKS Simulation application is installed on the computer. This test measures the time required to complete a static analysis design study.

SOLIDWORKS Simulation uses multiple processors if available; with more and faster processors, study execution time improves. SOLIDWORKS Simulation also requires frequent disk writes.

iXBT.com processor performance testing methodology

version 2.0 RC (2006)

A number of «general philosophical» arguments

Testing methods (of anything) can be very different. In view of this diversity, perhaps only one mistake is worth mentioning here — very significant on the one hand, widespread on the other — that haunts their authors. It consists in violating the single fundamental rule: at any time, from the beginning of the methodology's development to its publication, the author must clearly understand why he is doing all this. He must see the goal. That goal may be unrealistic (develop a methodology that satisfies everyone), realistic (develop a methodology that satisfies the audience he is targeting), or it may simply not exist — when the developer cannot picture a specific person who could be interested in the results of his tests, and tests everything in a row in the hope of an accidental hit. Unfortunately, the first and third options are the most common. The third, probably, even more often ;).

The basic principle of the desktop processor testing methods developed in the iXBT.com test laboratory is to provide, as far as possible, the maximum amount of information to users who operate their computers «in a serious way» — people who choose a certain processor not because «I saw a commercial recently and they said it was cool», not because «you need to outdo your neighbor», and not because «it's a fun thing, and it will be interesting to tinker with». We see our potential audience as users who, by and large, do not need a computer as such. They need a tool to solve their problems. It could be a computer, a slide rule, or a magic wand. The main properties of this tool are simple and uncomplicated:

  1. It must provide a solution to the tasks facing the user;
  2. It must be available;
  3. It should provide the solution of the above tasks in an optimal way — based on the ratio between the speed of solving problems and the price of the tool.

As you can see, everything is very simple, and it is the simplicity of this approach that is the main subject of our pride. Of course, it also has its drawbacks: this approach does not suit at all those users who are interested not so much in the practical application of their system as in the bragging rights associated with owning it. However, their interests are already served quite successfully by so many others that we, frankly, do not really want to compete in such an oversaturated market.

The new 2006 test methodology (the second in the family of unified x86 CPU performance test methods) was developed to be, on the one hand, an organic successor to the previous one — incorporating all its advantages and, where possible, eliminating its disadvantages — and, on the other hand, a reasonably modern way to determine the performance of today's processors, one that reflects the main trends in the development of the hardware and software industry as a whole. The tests carried out (you will see their results below) confirm, from our point of view, that the development went in the right direction. Let's put it this way: our purely subjective opinion, based on personal experience, about the comparative performance of various CPUs does not conflict with the results obtained. Which, by the way, is very good: after all, a real tester, woken up somewhere around 3 a.m., will tell the purest truth about which processor is faster — without looking at the diagrams, without remembering whose results they show, and without even really waking up :).

Hardware and software

Pentium XE 840

Gigabyte GA-G1975X

Corsair CM2X1024-6400

GeForce 7800GTX 256 MB

Pentium XE 955

Gigabyte GA-G1975X

Corsair CM2x1024-6400

GeForce 7800GTX 256 MB

Pentium XE 965

Gigabyte GA-G1975X

Corsair CM2X1024-6400

GeForce 7800GTX 256 MB

Athlon 64 FX-60

EPoX EP-9NPA3

Corsair CMX1024-3500LLPRO

GeForce 7800GTX 256 MB

  • Testbed memory — 2 GB (2 modules)
  • Hard drive — Samsung SP1614C (SATA)
  • Coolers — standard, as supplied with the processors

    The processor specification table is garbled; only the legible values are reproduced:

    Process technology: 90 nm / 65 nm / 65 nm / 90 nm
    Core frequency, GHz: 3.46 / 3.73 / 2.6 (other values garbled)
    Cores: 2 / 2 / 2 / 2 (other values garbled)
    L2*, KB: 2×1024 / 2×1024 / 2×1024 / 2×2048 / 2×2048 / 2×1024
    Bus frequency**, MHz: 133 QP / 200 QP (other values garbled)
    (unlabeled row): + / + / +
    Hyper-Threading: + / + / +
    Virtualization Technology: + / + (other values garbled)

    * — if «2×…» is indicated, the value is per core
    ** — for AMD processors — the memory controller bus frequency
    *** — measured differently for Intel and AMD processors, so it is not correct to compare directly

    It is easy to see that there are only two newcomers to this round of testing: the Pentium D 805 and the Pentium XE 965. They contain no conceptual differences from the previously considered Intel processors: the Pentium D 805 is the new junior model in the «budget» 800 series of dual-core Pentium D processors, moved to a slower bus and a slightly lower clock speed, and the Pentium XE 965 is a Pentium XE 955 overclocked to 3.73 GHz. Perhaps the most interesting newcomer is the Pentium D 805, which holds the price record among dual-core processors: it can quite realistically be bought at retail for less than $150.

    Software

    1. Windows XP Professional x64 Edition SP1.
    2. 3ds max 7.0
    3. Maya 6.5
    4. Lightwave 8.5 x64 Edition
    5. WinRAR 3.51
    6. 7-Zip 4.32 x64 Edition
    7. LAME 3.98
    8. Monkey Audio 4.01
    9. OGG Encoder 2.8 (Lancer)
    10. Windows Media Encoder 9 x64 Edition
    11. MATLAB 7.1
    12. Pro/ENGINEER Wildfire 2.0
    13. SolidWorks 2005
    14. Microsoft Visual C++ Professional 6.0
    15. CPU RightMark 2005 Lite x64 Edition
    16. F.E.A.R. 1.3
    17. Half-Life 2
    18. Unreal Tournament 2004 build 3339
    19. Quake 4 Point Release 1.1
    20. FineReader Professional 8.0
    21. Adobe Photoshop CS2 (9.0)
    22. Canopus Procoder 2.01.30
    23. Divx 6.1.1.1
    24. Windows Media Video 9 VCM
    25. x264 v.438
    26. XviD 1.1.0 Release
    27. APACHE 2.0.55 FOR Windows
    28. Intel INF Update 7.2.2.1006
    29. NVIDIA ForceWare 81.98
    30. NVIDIA nForce SMBus Driver 4.50

    Description of the methodology with examples of real tests

    Two things have changed in how results are presented: first, all data types are reduced to one — integer relative «points»; second, the form of presentation of detailed results has been changed — now, instead of many charts, one table in Microsoft Excel format will be available for download. Let us consider the reasons for these innovations in more detail.

    So, why «points»? First of all, because the dimensions of the compared values and their types often differ even within the groups of applications that are presented on one common diagram in the main part of the article. Take, for example, the summary diagram for 3D modeling packages. Each column in it is the average of the SPECapc results for 3ds max 7.0 and Maya 6.5 (value type — points) and the scene rendering times in Maya 6.5 and Lightwave 8.5 (value type — time). With points, more is better; with time it is the opposite, so the latter has to be taken into account as «1/X». The figure obtained by averaging these values expresses… nothing, strictly speaking. It is just a conditional score: the bigger it is, the faster the processor. Then why not give this score at least some meaning? We decided to do so: the performance of one processor is taken as 100 points, and it becomes the «performance benchmark» for everyone else. The Intel Pentium D 805 was taken as the benchmark — the slowest of all dual-core desktop x86-64 CPUs (at the moment and, apparently, forever). Thus, if you see on a diagram that one processor has 150 conditional points and another 120, it means that in this set of applications the first is on average 1.5 times faster than the Pentium D 805 and the second is 1.2 times faster. Accordingly, the first is 1.5/1.2 = 1.25 times faster than the second. The performance benchmark will, of course, remain unchanged for all materials released under this methodology, so cross-article comparison of processors will remain possible.
Scoring methods are open — they are built into the results table itself, so anyone with even minimal experience with Microsoft Excel formulas can examine them.
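The scoring scheme described above — time-valued results counted as 1/X, per-test ratios averaged, and the scale fixed so the baseline CPU lands at exactly 100 points — can be sketched as follows; the test names and numbers are invented for illustration:

```python
def normalize(value: float, lower_is_better: bool) -> float:
    """Convert a raw result to 'bigger is better' form: times become 1/X."""
    return 1.0 / value if lower_is_better else value

def score(cpu_results: dict, baseline_results: dict, lower_is_better: set) -> float:
    """Average of per-test ratios against the baseline CPU, scaled so the
    baseline itself scores exactly 100 points."""
    ratios = [
        normalize(cpu_results[test], test in lower_is_better)
        / normalize(baseline_results[test], test in lower_is_better)
        for test in baseline_results
    ]
    return 100.0 * sum(ratios) / len(ratios)

# Hypothetical numbers: SPECapc-style points (more is better) and a render
# time in seconds (less is better, hence counted as 1/X).
baseline = {"specapc_points": 200.0, "render_seconds": 120.0}
fast_cpu = {"specapc_points": 300.0, "render_seconds": 80.0}
lower = {"render_seconds"}

print(score(baseline, baseline, lower))  # 100.0 by construction
print(score(fast_cpu, baseline, lower))  # ~150: 1.5x faster in both tests
```

With this construction, the 150-vs-120 comparison from the text falls out directly: the ratio of two scores is the ratio of average speeds.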

    Why a table instead of a “sheet” with dozens of diagrams? Firstly, the number of results obtained within the framework of the new methodology has become even greater than before, therefore, the construction of such a number of diagrams (unfortunately, this process is currently not automated) takes a lot of time and resources, significantly hindering the efficiency of the release of materials. Secondly, and no less important: getting data from a table in order to somehow process it in your own way is much easier than from charts (in the latter case, you just have to rewrite them from the screen, there are no other options). Thus, we decided that for those of our readers who really pay attention to detailed test results and are ready to “tinker” with them on their own, the tabular form of data presentation will be even more convenient. In addition, as already mentioned above, it is more convenient for us at the moment. Subsequently, it is likely that the table will be replaced by an online results database, equipped with an interactive engine, thanks to which site visitors will be able to build any charts and tables in accordance with their own desires, simply by selecting several processors to compare using drop-down menus. This is unlikely to happen tomorrow, but work on such a project is underway.

    3D modeling packages

    In this diagram, the results of four tests are combined into an average relative score: SPECapc for 3ds max 7, SPECapc for Maya 6.5, and tests for scene rendering speed in Maya 6.5 and Lightwave 8.5 packages (the last package is represented by 64-bit version). Changes in relation to the previous version of the methodology are quite minor: a new revision of SPECapc for 3ds max 7 and a new version of Lightwave — 8.5 instead of 8.2 are used. In addition, Lightwave 8.5 has got support for AMD64/EM64T, which we did not fail to take advantage of, using the 64-bit version of this package in the tests.

    CAD (computer-aided design) packages

    This section was significantly smaller in the previous methodology — in fact, there was no «section»: a single application (SolidWorks 2003) personified the whole class with its presence. We considered it necessary and interesting to expand the presence of CAD/CAM/CAE packages in our performance testing methodology, and now there are three of them: the updated SolidWorks 2005 (using a test from SPEC), the world-famous Pro/ENGINEER Wildfire 2.0 package (test script also from SPEC), and the MATLAB 7.1 mathematical modeling package (using the built-in «bench» command).

    Compilation

    A completely new section of the methodology, which appeared according to the numerous wishes of our readers. Unfortunately, in the process of developing a compilation speed test, we encountered some difficulties. The first was that it would be at least strange to introduce a single test running under another OS into a methodology focused on the Microsoft Windows OS. On the one hand, its results will not say anything to the main audience that reads our test materials for the sake of results in Windows software, on the other hand, it is hard to imagine that fans of alternative operating systems will suddenly start reading us for the sake of a single test. Therefore, the compilation test must also be executed under Windows.

    The second problem was that finding open sources large enough (so that the compilation time is not counted in mere seconds) and designed for the standard Windows compiler, Microsoft Visual C++ (or .NET), was not such an easy task. Most either require additional tools like Cygwin or MinGW (which we considered clearly perverse, since those tools can themselves affect performance, and it would be completely unclear what we are measuring), or require the GCC compiler (show us the people who constantly, and not just for fun, use GCC under Windows…). Fortunately, we managed to find such a package — ACE + TAO, with open, downloadable sources of a fairly large size.

    The third problem is that open-source packages, even those adapted for native (without Cygwin/MinGW) compilation under Windows with the Microsoft compiler, are completely unsuited to its new versions and cannot be compiled with them without significant rework. Thus, at the moment, the compilation speed test is a compromise between what is desired and what is available: it is the compilation of the ACE + TAO package using Microsoft Visual C++ Professional 6.0 (without loading the IDE, by running the compiler and linker from the command line).

    RightMark Project

    CPU RightMark is considered by many to be a «synthetic» test, but from our point of view it is not exactly synthetic — or, if you look at things differently, it is a very peculiar kind of synthetic. On the one hand, it uses algorithms that are actually used, for example, for calculating physics and for rendering (in this respect it is very close to game engines); on the other hand, CPU RM can be called a kind of «performance pump»: it supports all modern extended instruction sets (with optimization done via hand-written assembler inserts), and its rendering engine supports up to 32 parallel threads. To continue the comparison with a game engine: it is a very carefully optimized engine that can squeeze the maximum out of any processor. The «syntheticity» of CPU RM, if it consists in anything, is only this: it demonstrates the maximum performance of the processor when all possible optimizations are used. This version of the methodology uses CPU RightMark 2005 Lite with improved multiprocessor support and adjusted default scene parameters (the number of objects has been increased to distribute the load between the physics and rendering modules more evenly).

    Raster image processing (photos)

    Raster digital image processing is still represented by only one program, but this is logical, since Adobe Photoshop has long been the de facto standard for professional work with photographs. The test script for Adobe Photoshop was not reworked but extended: a Filters block, containing a large number of the filters included with the program (several dozen), was added to the existing Blur, Color, Light, Rotate, Sharpen, Size, and Transform blocks. Those who wish can familiarize themselves with the test script by downloading it from our website. The image processed by the script has been enlarged; its size is now 4096×3072 pixels. The picture, like the script, can be downloaded by the especially curious (careful: it is about 20 megabytes). In addition, the program version was «modernized»: Adobe Photoshop CS2 (version 9.0) is now used in the methodology.

    WEB server

    Another completely new benchmark and task class. The measurement is performed using the Apache web server's bundled tool — ab (ApacheBench). The command is executed three times, on three different files, which we conventionally call «small» (33 kilobytes), «medium» (137 kilobytes), and «large» (1.8 megabytes). The small file is a typical front page, the medium file is one of the iXBT.com articles, and the large file is an entire novel (any reputable reference book would be about the same size). Requests go to localhost (127.0.0.1), and the evaluated parameter is the number of requests served per second. In the overall score, the results on the small and large pages are taken with coefficients of 0.15 (15% each), and the result on the medium page with a coefficient of 0.70 (70%). Multithreading in this test is, of course, active — it loads all the processors available in the system.
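The weighting just described can be sketched as follows (the throughput numbers are invented for illustration):

```python
# Weights from the methodology: small and large pages 15% each, medium 70%.
WEIGHTS = {"small": 0.15, "medium": 0.70, "large": 0.15}

def web_score(requests_per_second: dict) -> float:
    """Weighted requests/sec across the three ab (ApacheBench) runs."""
    return sum(WEIGHTS[page] * rps for page, rps in requests_per_second.items())

# Hypothetical throughput figures for the three pages:
print(web_score({"small": 2000.0, "medium": 1000.0, "large": 100.0}))  # ~1015 req/s
```

The heavy weight on the medium page reflects that a typical article-sized response dominates real traffic.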

    Packing files

    An old benchmark that has remained practically unchanged; only the software versions have been updated: 7-Zip is now represented by version 4.32 (the 64-bit version), WinRAR by version 3.51 (this archiver, unfortunately, has no 64-bit version). The file set (careful: 75 megabytes!) has remained the same and consists of 53 MB of BMP files, 48 MB of DBF, 49 MB of DLL, 49 MB of DOC, 49 MB of PDF, and 49 MB of TXT. RAR has still not gained multithreading support; it is present only in 7-Zip (and, as far as we understand, de facto this is not support for an arbitrary number of processors but for exactly two — that is, the ability to use a second CPU, but no more).

    Audio Data Coding

    Audio coding has (by popular demand) been moved to a separate section, and the test suite has been greatly expanded. We now test encoding speed to MP3 using LAME (version 3.98), to the lossless Monkey Audio and Windows Media Audio formats (MAC 4.01 and Windows Media Encoder 9 x64 Edition), to OGG using OGG Encoder 2.8 (Lancer), and to lossy WMA («CD Quality» in Windows Media Encoder terms, i.e. 64 kbps 2-pass VBR). In the final score, all tests have equal weight. Unfortunately, we cannot upload the file used in the audio encoding tests: distributing commercial Audio CD originals for free is prohibited by law, and we use a WAV image of the Jacques Loussier Trio album «The Best of Play Bach» for encoding. You can rip the specified album yourself: it will be a 100% exact copy of what we use (provided, of course, that you have the original, «proprietary» CD).

    Video encoding

    The video encoding speed test has undergone minor changes and for the first time acts as an independent subgroup (previously, audio and video encoding formed one group). Only the version of Canopus ProCoder has remained unchanged (no newer version has been released), along with the MPEG2 source file (a fragment of amateur footage from a digital video camera, kindly provided by Mikhail Afanasenkov). For MPEG4 encoding, the source is now a fragment (the trailer of the Taxi 3 movie) available for download from the Microsoft website (1080p, 00:02:42). The DivX codec has been updated to version 6.1.1, and XviD is represented by version 1.1.0 release (downloaded from Koepi's Media Development Homepage). The Windows Media Video (VCM) codec has not been updated in the meantime, so it remains unchanged. x264 version 438 was downloaded from http://x264.nl/. Profiles for VirtualDubMod can be downloaded from our site. A note on profiles: since the x264 codec only lets you set the number of encoding threads manually, there are three profiles for it: for one-, two-, and four-processor configurations. Separately, a word about the already-existing 64-bit version of VirtualDub: unfortunately, it is of no use at all in our case since, as far as we could tell, it can only work with 64-bit versions of codecs — and accordingly, the list of available ones included only those that ship with Windows XP x64 Edition.

    Text recognition system

    Another new «benchmarking group», so far represented by only one application. Judging by the developers' statements, FineReader 8.0 Professional supports multi-threading; however, watching the core-load graphs on dual-core CPUs, we honestly failed to notice it. A 200-page PDF file with text and graphics is used as the source for recognition. A PDF seemed reasonable to us because graphic files of scanned pages take up much more space and would accordingly increase the load on the disk subsystem, whereas we are interested in the speed of the processor. The PDF file used to run the tests is, traditionally, available for download.

    3D games

    Only the veteran Unreal Tournament 2004 remains in our list of games from the previous methodology, since a new version has not yet been released. The rest of the games are quite modern: F.E.A.R., Half-Life 2, Quake 4. As in the previous version of the methodology, three performance readings are taken: at 640×480 with minimal graphics settings, at 800×600 with medium settings, and at 1024×768 with high settings. Even higher resolutions still do not seem like a good idea to us, since we are examining, as a reminder, the performance of processors, not video cards. However, only the results in the «medium» (800×600) mode participate in the average score (with equal weights) — let us not forget that we test not only top-end processors but also low-end ones. The rest of the results are available in the XLS table. For F.E.A.R. the built-in benchmark is used; for UT 2004 — the ONS_dria demo; for Half-Life 2 — a demo from Andrey Vorobyov, editor of the video section; for Quake 4 — our own demo (downloading it out of curiosity is, frankly, not worth it: it is very long and takes 200 MB in the archive).

    Total scores

    Yes, we finally decided on it: each test will include overall-average diagrams — the proverbial «average temperature across the hospital» :). There will, however, be three of them: a «professional» total score, a «home» one, and an «overall» one. As you can easily see from the diagram titles, they differ in which test results are summarized. For the professional score, only the results of the 3D modeling packages, CAD, the compilation speed test, CPU RightMark, photo processing in Photoshop (it is, after all, a professional tool; amateurs have simpler ones) and the web server performance test are taken into account. The home score includes audio and video encoding, file packing, OCR, and of course games. The overall score takes all tests into account without exception.

    Of course, everyone determines the value of this information for himself, and some will surely consider it close to zero. That is their business; no one forces you to look at diagrams that do not interest you. On the other hand, these «parrots» are not plucked out of thin air by obscure formulas (the formulas are all open — download the table); they are numbers based on very real tests in very specific applications. Let's put it this way: for those who cannot or do not want to analyze anything more complex, it is better to be guided by our final scores than by advertising brochures. Isn't it?

    Test results

    As for the results shown in the diagrams, as mentioned above, they are more an illustration of the properties and «preferences» of the new methodology than something interesting in themselves. All the processors that took part in the tests, except for two (the Pentium D 805 and Pentium XE 965), are well known to us from previous tests, and the newcomers are simple and predictable rather than valuable in themselves. However, the time was, of course, not wasted: we think all these results will still be useful to us in other articles featuring newer, more interesting processors. The presence of only one AMD processor is also natural: the Athlon 64 FX-60 is a kind of embodiment of the achievements of modern computer technology in the multi-core x86 CPU sector, and it is against it that today's comparison is made. The FX-60 is the fastest dual-core processor in existence, both in our old tests and in our new ones. It seems to us that to some extent this symbolizes the close connection of the methodology under consideration with the real state of affairs in the processor market, and confirms the adequacy of the results obtained within its framework ;).