PassMark — CPU Benchmarks — List of Benchmarked CPUs

CPU Benchmark — CPU Benchmark Comparison — Processor Benchmarks

Server CPU | Desktop CPU | Laptop CPU | Mobile/Embedded CPU

Over 4.1K CPU models

Explore our list of CPU benchmarks built from millions of user results. We update the list daily to provide the most accurate and up-to-date information possible. We cover a large number of processor models, with detailed information about their performance and the essential details you need to know.

Top CPU

  • AMD EPYC 9654

    Benchmark: 124119

  • AMD Ryzen Threadripper PRO 5995WX

    Benchmark: 95539

  • AMD EPYC 7773X

    Benchmark: 90731

  • AMD EPYC 7773X 64-Core

    Benchmark: 90731

  • AMD EPYC 7763

    Benchmark: 85944

See more

Latest CPU

  • Intel Core i3-6167U @ 2.70GHz

    Benchmark: 3156

  • Intel Core i9-13900KS

    Benchmark: 62287

  • AMD EPYC 9654

    Benchmark: 124119

  • AMD Ryzen 5 7600X

    Benchmark: 28751

  • Intel Pentium Dual T2330 @ 1.60GHz

    Benchmark: 677

See more

Popular CPU

  • Intel Xeon Platinum 8370C @ 2.80GHz

    Benchmark: 11443

  • Unisoc ums9230

    Benchmark: 2908

  • Intel Xeon Platinum 8375C @ 2.90GHz

    Benchmark: 51836

  • Intel Core i5-4402E @ 1.60GHz

    Benchmark: 2681

  • Intel Xeon Platinum 8255C @ 2.50GHz

    Benchmark: 2112

See more

Latest Article

Choosing the Right Processor: Understanding the Different Types and Capabilities

When choosing a processor for a computer, it is important to consider the tasks the computer will be used for and select a processor that can perform those tasks efficiently. Different types of processors are available, each with their own strengths and weaknesses, such as the Intel Pentium, AMD Athlon, Intel Core i3, i5, i7, AMD Ryzen, GPU and TPU. These processors range from budget options suitable for basic tasks to high-performance options for gaming and resource-intensive tasks. It’s also important to note that new processors are being developed with improved performance and power efficiency, making the choice more difficult….

See more

Here at CPUBM, we have the latest and greatest CPU benchmarks for you! We know that deciding which chip to buy can be difficult, so we compiled this list of benchmarks. It gives you an idea of the performance each processor offers (with or without hyper-threading), based on its ranking in our tests, and it includes other key facts such as clock speeds and core counts: everything informed buyers need to pick a top-notch product with no fuss.


SPEC CPU® 2006

 


The SPEC CPU® 2006 benchmark is SPEC’s
industry-standardized, CPU-intensive benchmark suite, stressing a system’s processor,
memory subsystem and compiler.

BENCHMARK RETIREMENT: With the release of the SPEC CPU 2017
benchmark suite, the CPU 2006 suite has been retired.
See below for details on the retirement schedule and result publication.

This benchmark suite includes the SPECint® benchmarks and the SPECfp® benchmarks.
The SPECint® 2006 benchmark contains 12 different benchmark tests and the SPECfp® 2006
benchmark contains 17 different benchmark tests.

SPEC designed this suite to provide a comparative
measure of compute-intensive performance across the widest practical
range of hardware using workloads developed from real user applications.
These benchmarks are provided as source code and require the user to be
comfortable using compiler commands as well as other commands via a
command interpreter using a console or command prompt window in order
to generate executable binaries.

The SPEC CPU® 2006 benchmark has several different ways to measure
computer performance. One way is to measure how fast the computer completes a
single task; this is a speed measurement. Another way is to measure how many
tasks a computer can accomplish in a certain amount of time; this is called a
throughput, capacity or rate measurement.

  • The SPECspeed® metrics (e.g., the SPECint® 2006 benchmark) are
    used for comparing the ability of a computer to complete single tasks.
  • The SPECrate® metrics (e.g., the SPECint®_rate 2006 benchmark)
    measure the throughput or rate of a machine carrying out a number of tasks.

For more information about the SPECrate® and SPECspeed® metrics, see
the technical documentation.

The current version of the benchmark suite is V1.2, released in September 2011.

SPEC CPU2006 Retirement

With the release of SPEC CPU2017, SPEC has retired
SPEC CPU2006 as of January 9, 2018. No further result submissions are being
accepted and technical support has ended.

  • SPEC CPU2006 allows rule-compliant results to be published independently. Therefore, although
    SPEC will not be publishing results after this date, it is possible that licensees might choose
    to do so. The rules and license must still be followed and any such publication must plainly
    disclose that SPEC CPU2006 has been retired (see:
    http://www.spec.org/fairuse.html#Retired for more information
    on how SPEC addresses retired benchmarks).
  • Note that the requirements of the previous paragraph apply only to public use of the benchmark.
    Benchmark retirement has no effect on licensees’ internal (unpublished) use of the benchmark
    product.

Results

Submitted Results
Text, HTML, CSV, PDF, and Configuration file outputs for the SPEC CPU® 2006 metrics;
includes all of the results submitted to SPEC from the SPEC member companies
and other licensees of the benchmark.

Search across all SPEC CPU® 2006 results in SPEC’s online result database.

Information

Benchmark Press Releases
Press release material, documents, and announcements:

  • SPEC Ships V1.2 (09/07/2011) – What’s new in V1.2?
  • SPEC Ships V1.1 (06/03/2008) – What’s new in V1.1?
  • SPEC Ships V1.0 (08/24/2006)
  • Press Background (presented as Q&A)

Benchmark Documentation
Technical and support documents, Run and Reporting Rules, etc.
All documentation available on the DVD is
also available here.

Benchmark Descriptions
A survey of the
benchmarks comprising each of the SPEC CPU® 2006 component suites:

  • SPECint® 2006 — the integer benchmarks.
  • SPECfp® 2006 — the floating point benchmarks.
  • A PDF summary of all 29 benchmarks, as published in the ACM SIGARCH newsletter, Computer Architecture News, Volume 34, No. 4, September 2006.

Related Publications
An archive of selected publications related to the suite is available.

Support

Installation, build, and runtime issues raised by users of the benchmark software.

Flags

Flag Descriptions —
explanations from the testers for what all those cryptic flags in the
results’ notes section really mean.

We evaluate infrastructure performance with Phoronix Test Suite / Sudo Null IT News

Greetings to all Habr readers! I’m Vlad, a Cloud4Y system administrator. I want to tell you how and why we use the Phoronix Test Suite, and how you can easily and accurately determine the real (not just the advertised) performance of the hardware provided by a cloud service provider.

When choosing a site to host their infrastructure, customers often ask where Cloud4Y’s data centers are located and what the network infrastructure, virtualization stack, SLA, and so on look like. That is fair, but it is also important to pay attention to the data center’s hardware. How can you evaluate its performance? A good option is the Phoronix Test Suite.

What is Phoronix Test Suite

Phoronix Test Suite (PTS) is free (GNU GPLv3) cross-platform benchmarking software. PTS can run automated tests and is available on Windows, Linux, macOS, Solaris, and BSD. It evaluates the performance of system components comprehensively: it includes more than 600 different tests, ranging from processor benchmarks to application tests (for example, Apache or NGINX).

PTS is tightly integrated with the OpenBenchmarking.org site, where you can upload results, save them in a personal database, and share test configurations. The OpenBenchmarking.org test database groups tests into six types:

  1. Disk — tests of the disk subsystem. For example, Flexible IO Tester, better known as fio, a popular I/O tester.

  2. Graphics — tests of the graphics adapter. For example, Unigine Heaven, which measures the average frame rate in the Heaven scene on the Unigine engine. This engine is extremely demanding on the system’s video card.

  3. Memory — tests of RAM. For example, Stream, the most popular test for measuring RAM performance.

  4. Network — tests of the system’s network performance. For example, Loopback TCP Network Performance, which checks how efficiently the TCP/IP network stack works.

  5. Processor — CPU performance tests. For example, x264, which measures performance while encoding a sample file with the x264 codec on the CPU (OpenCL disabled).

  6. System — tests of overall system performance. For example, Apache HTTP Server, a test of the Apache HTTPD web server driven by the Golang program Bombardier.

Important: although Phoronix can be used on various operating systems, individual tests do not always support every platform. Therefore, before testing, I recommend that you read up on the selected test. There are two ways to check compatibility:

First: open the Tests tab on the OpenBenchmarking.org website, find the desired test, and look at the list of supported operating systems.

Second: open the test’s own page and check the list of supported operating systems there.

Now that we are familiar with the product, we can move on to working with it. Let’s go!

Installing Phoronix Test Suite

Let’s look at installing PTS on Linux and on Windows. Ubuntu 20.04 LTS is used as the Linux OS, and Windows Server 2019 as the Windows OS.

On Ubuntu, install the necessary packages:

sudo apt install php7.4-gd curl git sqlite3 bzip2 php-cli php-xml

Then download the PTS distribution from the official Phoronix GitHub repository, phoronix-test-suite/phoronix-test-suite.

Finally, go to the phoronix-test-suite directory and install PTS by running the bundled script:

cd phoronix-test-suite && sudo ./install-sh

If you have Windows Server 2019, do this:

  • Update the system using Windows Update.

  • Download the PTS distribution from the official GitHub repository.

  • Extract the downloaded archive to the root of drive C. Open a console with administrator rights and go to the PTS folder:

cd C:\phoronix-test-suite

Run the phoronix-test-suite.bat script and wait a bit; sometimes you will need to click through a few next-next-next prompts. To start the program, type phoronix-test-suite on the command line (the prompt should change its appearance).

Testing

Installed? Great, now on to testing. All PTS commands are the same on Windows and Linux; we will work on the Linux command line. A test can be run in two ways, a simple one and a more involved one. The involved way: first install the test (the test itself plus its dependencies):

phoronix-test-suite install <test name>

then run it:

phoronix-test-suite run <test name>

But I suggest a simpler option: use the command

phoronix-test-suite benchmark <test name>

which installs the test with its dependencies and then starts the run.

Let’s try running the hmmer test. Type in the console:

phoronix-test-suite benchmark hmmer

The necessary components will be downloaded, after which we will see the characteristics of the machine the test is running on, and we will be asked several questions.

Phoronix Test Suite offers to save the results; answer Y. The next question asks for the name of the results file: enter hmmer-test-result. Another question asks for the name of the test configuration; it must be unique, but you can accept the default by simply pressing Enter (Phoronix will generate a name automatically). Next comes a question about the configuration description; leave it at the default by pressing Enter as well.

Next, the test will start. While we patiently wait for the results, let’s get acquainted with some useful Phoronix Test Suite commands:

  • phoronix-test-suite help — will show all available commands.

  • phoronix-test-suite list-all-tests — will help you see all available tests for the current machine.

  • phoronix-test-suite list-all-tests | grep Processor on Linux, or .\phoronix-test-suite list-all-tests | Select-String -Pattern Processor on Windows — shows all processor tests.

  • phoronix-test-suite list-installed-tests — tests that are installed on the machine.

  • phoronix-test-suite list-recommended-tests — a list of recommended tests for your OS.

  • phoronix-test-suite info <test name> — shows technical information about a test.

  • phoronix-test-suite benchmark test1 test2 test3 — installs (if necessary) and runs the listed tests one after another.

Now that the test is over, let’s get back to the hmmer results.

The Phoronix Test Suite asks a few more questions after the test completes. Want to see the result in the console? Y. Want to upload the result to OpenBenchmarking.org? Y. Want to add technical information about the machine to the result? Y. Using the link you receive, you can view the results in a browser.

Too many questions? You can get rid of them by pre-answering all of the options in the batch settings. Enter on the command line:

phoronix-test-suite batch-setup


Attention! For the batch settings to be applied, run tests with the batch-benchmark command instead of benchmark; the rest of the syntax is unchanged. For example: phoronix-test-suite batch-benchmark hmmer.

Another handy feature of the Phoronix Test Suite is the ability to record data from various system sensors during a run, such as core frequencies and CPU utilization. To use this functionality, type in the console:

MONITOR=all phoronix-test-suite batch-benchmark hmmer

As a result, we get a hefty set of additionally recorded data.
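If recording everything is more than you need, the MONITOR variable also accepts a comma-separated list of sensors; cpu.temp and cpu.usage below are standard PTS sensor names, and the guard simply keeps the snippet harmless on a machine without PTS installed:

```shell
# Record only CPU temperature and utilization during the run.
if command -v phoronix-test-suite >/dev/null 2>&1; then
    MONITOR=cpu.temp,cpu.usage phoronix-test-suite batch-benchmark hmmer
else
    echo "install phoronix-test-suite first"
fi
```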

And now the most important thing — let’s talk about the results.

After executing the tests, PTS outputs information about the measurements to the console and/or to a file. Test results are saved to default directories:

for Linux: ~/.phoronix-test-suite/test-results

for Windows: C:\Users\<User>\.phoronix-test-suite\test-results

To view saved tests:

phoronix-test-suite list-results

To view test results in detail:

phoronix-test-suite show-result <result name>

for example:

phoronix-test-suite show-result hmmer-test-result
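Since saved results are just directories under the paths mentioned above, you can also sanity-check them straight from the shell; a small sketch, assuming the default Linux location:

```shell
# List saved PTS results, or report that none exist yet.
RESULTS_DIR="$HOME/.phoronix-test-suite/test-results"
if [ -d "$RESULTS_DIR" ]; then
    ls "$RESULTS_DIR"
else
    echo "no saved results in $RESULTS_DIR"
fi
```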

You can convert a result file to one of several convenient formats (CSV, JSON, PDF, etc.):

phoronix-test-suite result-file-to-csv <result name>

The result-file-to-json and result-file-to-pdf variants work the same way.

Let’s figure out how to interpret the received data. The simplest option is Excel, so that is what we will use. I ran many different tests, so here is what part of my table looks like (green is the best result, red is the worst):

Each test, when displaying its result, indicates how the numerical value should be interpreted:

  • Higher Is Better — the higher the value, the better.

  • Lower Is Better — the lower the value, the better.

The result of each test was recorded in a table and then analyzed. Each tested site received +1 for the best result in a given test and -1 for the worst, with every site starting from 0. After the tally, we get a relative number per type of load for each site, which gives an idea of the relative performance of the tested sites.
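The +1/-1 tallying described above is easy to automate. Here is a hypothetical sketch in awk; the input format (one line per test: test name, best site, worst site) is my own illustration, not something PTS produces directly:

```shell
# results.txt: one line per test - "test best_site worst_site".
cat > results.txt <<'EOF'
hmmer siteA siteB
fio siteA siteC
stream siteC siteB
EOF

# +1 to the site with the best result, -1 to the worst; print the totals.
awk '{ score[$2]++; score[$3]-- }
     END { for (s in score) print s, score[s] }' results.txt | sort
```

For this toy input, the tally comes out to siteA 2, siteB -2, siteC 0, matching the hand counting described in the text.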

The convenience of the resulting data is that we can say unambiguously which system is better: the numerical output makes the values easy to interpret. Results obtained with the Phoronix Test Suite can be used as metrics for comparative testing. If you need timely, up-to-date data on the state of your infrastructure, want to react faster to bottlenecks in the systems you use, and want to solve problems on time, then this testing methodology based on the Phoronix Test Suite is an easy and convenient approach.

In this article, I showed how we conduct one type of testing of various sites in data centers. You too can compare your current infrastructure with the cloud using the Phoronix Test Suite. May your choice be easy and profitable!

Thank you for your attention.

P.S. On December 28 at 15:00 we will hold a webinar on security in social networks. We invite you to register.


What else is interesting in the Cloud4Y blog

→ IT pizza quest. Summary

→ How I Accidentally Locked 10,000 Phones in South America

→ Keyboards That Failed

→ WD-40: A Tool That Can Do Almost Anything

→ Learn Your Hardware: Reset BIOS Passwords on Laptops

Subscribe to our Telegram channel so as not to miss the next article. We write no more than twice a week and only on business.

Ampere Altra test: results and comparison with AMD EPYC

Under the hood of the Ampere Altra are 80 cores at 3.0 GHz with a TDP of 210 W. None of the major vendors offer these specs. But what kind of performance does it deliver in practice? Below are the results of comparing the processor with an AMD EPYC, and conclusions about the potential of such a solution for data centers. With photos!


Hello! My name is Maxim, and I work as a hardware tester at Selectel Lab. I recently took a GIGABYTE E252-P30 server with an 80-core Ampere Altra Q80-30 processor for a test, and I decided to compare it with the AMD EPYC 7513, which has the closest characteristics. Below I share the main results and invite you to test the processor for free.

Use the navigation if you don’t want to read the entire text. The main test results are described in the conclusions.

  • Why we took an ARM processor for a test
  • Assembling the test config
  • Test plan
  • Preparing the operating system and software
  • Geekbench 5 results
  • GPU testing
  • Processor stress test
  • Temperature and power consumption
  • Conclusions

Why we took an ARM processor for a test

Selectel data centers already hosted members of the ARM family: for example, we have servers with Raspberry Pi boards and M1 processors. But there were no full-fledged server ARM chips yet, even though the architecture is promising. The Ampere Altra Q80-30, for example, performs well in HPC and cloud computing.

Rent a dedicated server with an ARM processor (Ampere Altra Max M128-30, 3 GHz, 128 cores), or test one for free in Selectel Lab (an 80-core processor is available for testing).

Amazon is the leader in the use of ARM chips in infrastructure: the company plans to move part of its cloud services to this architecture by 2025. Ampere and Huawei also produce ARM processors for the open market. By the way, AMD has tried to follow the trend too, but so far without success.

That is how interest in the ARM architecture is growing.

Most platforms, including the JVM, V8, and PVM, have been ported to ARM, and the free-software ecosystem is rapidly adapting to the architecture. But that is all on paper; let’s see what ARM is really capable of.

Assembling the test config

In addition to the processor itself, the test configuration includes:

  • motherboard GIGABYTE MP32-AR1-00,
  • 16 x 16 GB RAM (Micron DDR4 3200 MHz ECC),
  • 2 x Micron 5300 480 GB SSD,
  • 1 TB M.2 NVMe SSD (GIGABYTE GP-AG41TB).

Three Nvidia Tesla T4 graphics cards are also connected, to test the PCIe lanes at full load.

The config assembled for the tests is already working in a rack; that is the form in which we photographed it. Further in the text we show a photo of another build with an ARM processor, in a 1U case.

The tested configuration is a compact edge server in a shallow 2U chassis (439 x 86 x 449 mm). For convenience, a disk cage for six SFF drives (SAS-3/SATA-3) with hot-swap support, the motherboard’s IO panel, and all PCI Express slots are brought out to the front panel.

Test plan

In Selectel Lab, we test hardware not only to compare the declared characteristics with reality. It is important for us to find out how well the tested platforms and server components can integrate into the overall system of Selectel data centers. A potential configuration should fit our existing dedicated-server automation and meet energy-efficiency requirements.

We put together the following checklist for testing:

1. Use AI-Benchmark to check the speed, power consumption, and memory requirements of key artificial-intelligence algorithms.
2. Run tests in Geekbench 5.
3. Run a classic graphics-card stress test with gpu-burn.
4. Estimate the speed of video encoding and decoding with ffmpeg using NVENC.
5. Test in conjunction with the CUDA Toolkit.
6. Estimate power consumption and temperature; we used Grafana and Prometheus to get the desired graphs.

What we will compare

To compare with the Ampere Altra, we chose two AMD EPYC 7513 processors: 64 cores in total, base frequency 2.6 GHz (up to 3.65 GHz in Turbo CORE mode). This is a worthy competitor that sits in the top 3 of our single-core benchmark results. This is Selectel’s own rating of processors tested in Selectel Lab; the full list can be viewed in our Geekbench 5 profile.

In addition, the prices of the compared processors are approximately equal.

Preparing the operating system and software

The ARM architecture is supported by 24 distributions; at Selectel we provide four of them: Astra Linux, Debian, CentOS, and Ubuntu. For testing we used Ubuntu 22.04.1 LTS (kernel 5.15.0-50-generic, aarch64), the most recent distribution for the aarch64 architecture.

There are no surprises here: the operating system installed without problems, and there were no difficulties with the video-card drivers. Installation of cuDNN 11.7, NVIDIA-SMI 515.65.01, and libcudnn8 went smoothly.

Let’s move on to the test results.

Geekbench 5 results

Overall, the Ampere Altra processor is close in performance to the flagship AMD EPYC.

1 — https://browser.geekbench.com/v5/cpu/177, 2 — https://browser.geekbench.com/v5/cpu/19557141, 3 — https://browser.geekbench.com/v5/cpu/428800

The graph below compares the Ampere Altra Q80-30 and the AMD EPYC 7513 in Geekbench 5.

Result in Single-core mode.

In Single-core mode, AMD has a big advantage over the ARM processor. But everything changes if you test in Multi-core mode:

Result in Multi-core mode.

You can see that the ARM processor pulls decisively ahead on all criteria. This is especially noticeable in the parameters related to parallelization.

For parallelization, the rule holds: the more physical cores, the better. That is why ARM leads in a number of criteria, for example Gaussian blur, HDR, and Camera. See for yourself: all the magic happens in multi-threaded mode.

In single-threaded mode the processor does not post the highest results, but it is not that far behind the AMD EPYC. In multi-threaded mode, however, the Ampere Altra takes the lead, even though we used two AMD EPYC 7513 processors. At the time of publication, according to Selectel’s internal benchmarks, the Ampere Altra is second in performance only to the AMD EPYC 7742.

GPU testing

Not all of the planned GPU tests could be carried out, due to peculiarities of the ARM architecture. TensorFlow for aarch64, for example, is quite difficult to build; there was no time for that in the first iteration of testing, so we postponed the task, and as a result could not test the GPU through AI-Benchmark. Geekbench 5 currently does not support the CUDA benchmark or OpenCL GPU tests under aarch64. But we do have encoding and decoding results from ffmpeg.

Video decoding results

We wanted to see how the processors perform in these configurations, and to find out what works out of the box, without tweaking the software. For testing, we took a 618 MB 4K video.

Decode command:

time ffmpeg -y -i input.mp4 -preset fast -b:v 5M -profile:v high -bf 3 -rc-loo

AMD won the video decoding test, possibly because of its higher thread count. But it is worth remembering that the AMD configuration has two sockets, while the ARM machine has only one.

Processor stress test

Reference information on processor governor policies

For the ARM processor stress test, we used several governor modes: ondemand and performance. These modes are also available on x86 architectures.

Ondemand

This mode is the default. It is responsible for incrementally raising the frequency and voltage of the CPU depending on load. Every 20-200 ms, the CPU load (overall, system, and user) is measured. If the load at the current frequency is above 95%, the frequency steps up; if it is below 20%, the frequency drops by one step.

For example, if the available frequencies are 800, 2000, and 3000 MHz, then when the CPU is 95% loaded the frequency switches from 2000 to 3000 MHz. The sampling rate and the load thresholds for switching are set for all modes when the kernel is compiled.
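As a toy illustration of the stepping logic just described (the 95%/20% thresholds and the 800/2000/3000 MHz ladder come from the example above; a real governor is considerably more subtle):

```shell
# Step up at >95% load, step down one step at <20% load, otherwise hold.
next_freq() {
    load=$1; freq=$2
    up=$freq; down=$freq
    case $freq in
        800)  up=2000 ;;
        2000) up=3000; down=800 ;;
        3000) down=2000 ;;
    esac
    if [ "$load" -gt 95 ]; then echo "$up"
    elif [ "$load" -lt 20 ]; then echo "$down"
    else echo "$freq"
    fi
}

next_freq 96 2000   # prints 3000 (step up)
next_freq 10 2000   # prints 800 (step down)
next_freq 50 2000   # prints 2000 (hold)
```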

Performance

The name of the mode speaks for itself: the processor runs at the maximum available frequency, delivering its maximum performance.

There are other modes in which the processor behaves differently, but we did not test them, because we wanted to study the processor’s stable operation and its maximum capabilities.

Possible processor operating modes:

• powersave — energy-saving mode: minimum performance, maximum energy efficiency.
• schedutil — similar to ondemand, but can use data from the CFS task scheduler, which makes it act more “reasonably”; however, it does not work with schedulers other than CFS, such as Con Kolivas’ MuQSS.
• userspace — the processor runs at a user-specified frequency.
• conservative — similar to ondemand, but the threshold for stepping the frequency up or down is lower, usually 20%. For example, with 500, 1000, and 2000 MHz available and the CPU running at 500 MHz, when the load grows by 30% we switch to 1000 MHz.

To set the processor governor in Ubuntu, just enter the following command:

echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
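Before switching, it can be useful to see which governor each core is currently using. The cpufreq paths exist only on systems with frequency-scaling support; elsewhere the loop simply prints nothing:

```shell
# Print the current scaling governor for every core that exposes one.
for f in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    if [ -r "$f" ]; then
        echo "$f: $(cat "$f")"
    fi
done
```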

Stress test results

The stress-test results are interesting: the temperature graphs did not change, even though the core frequency in the first test was lower than in the second.

The graph above shows the average frequency across the cores; the second graph shows the frequency of each core.

The first test ran from 12:50 to 13:40 in ondemand mode; the repeated stress test ran from 13:40 to 14:50 in performance mode. Power consumption did not change between the two. As you can see, in ondemand mode the processor does not run at its maximum and drops the frequency on some cores.

There may be several reasons for this behavior in ondemand mode. The first is that the processor really does overheat and reduces the frequency on the hottest cores. The second is that ARM is “smart” enough to lower the frequency of underutilized cores to save power.

Before choosing a specific processor mode, determine what tasks the server will be used for; each mode has its own characteristics that should be taken into account.

Temperature and power consumption

As the graphs have already shown, the processor stays within its thermal envelope very well during the stress test, and switching the frequency mode did not affect performance. The processor’s power consumption was 150 W at the peak of the stress test, against a rated TDP of 210 W.

The temperature and energy-efficiency figures look quite favorable for data-center use, but we plan to double-check the data in other configs and builds. The processor is new to us and requires more extensive checking.

Conclusions

Basic tests of the Ampere Altra showed that the processor is efficient in terms of power consumption. Considering this and its price, it is cost-effective for the data center.

For the client, the Ampere Altra is an opportunity to evaluate the advantages of the ARM architecture in high-performance computing. In addition, a configuration with the new ARM is cheaper than a build with a comparable AMD EPYC.

We will continue testing the new platform, and we invite you to test it too.