CPU core meaning: What Is a CPU Core? A Basic Definition

What is a multicore processor and how does it work?

By Stephen J. Bigelow, Senior Technology Editor

What is a multicore processor?

A multicore processor is an integrated circuit that has two or more processor cores attached for enhanced performance and reduced power consumption. These processors also enable more efficient simultaneous processing of multiple tasks, such as with parallel processing and multithreading. A dual-core setup is similar to having two separate processors installed on a computer. However, because the two cores share the same package and socket, the connection between them is faster.

The use of multicore processors or microprocessors is one approach to boosting processor performance without exceeding the practical limitations of semiconductor design and fabrication. Using multiple cores also helps keep operation within safe limits in areas such as heat generation.

How do multicore processors work?

The heart of every processor is an execution engine, also known as a core. The core is designed to process instructions and data according to the direction of software programs in the computer’s memory. Over the years, designers found that every new processor design had limits. Numerous technologies were developed to accelerate performance, including the following:

  • Clock speed. One approach was to make the processor’s clock faster. The clock is the "drumbeat" used to synchronize the processing of instructions and data through the processing engine. Clock speeds have accelerated from several megahertz to several gigahertz (GHz) today. However, transistors use up power with each clock tick. As a result, clock speeds have nearly reached their limits given current semiconductor fabrication and heat management techniques.
  • Hyper-threading. Another approach involved the handling of multiple instruction threads. Intel calls this hyper-threading. With hyper-threading, processor cores are designed to handle two separate instruction threads at the same time. When properly enabled and supported by both the computer’s firmware and operating system (OS), hyper-threading techniques enable one physical core to function as two logical cores. Still, the processor only possesses a single physical core. The logical abstraction of the physical processor added little real performance to the processor other than to help streamline the behavior of multiple simultaneous applications running on the computer.
  • More chips. The next step was to add processor chips — or dies — to the processor package, which is the physical device that plugs into the motherboard. A dual-core processor includes two separate processor cores. A quad-core processor includes four separate cores. Today’s multicore processors can easily include 12, 24 or even more processor cores. The multicore approach is almost identical to the use of multiprocessor motherboards, which have two or four separate processor sockets. The effect is the same. Today’s huge processor performance involves the use of processor products that combine fast clock speeds and multiple hyper-threaded cores.

Multicore processors have multiple processing units incorporated in them. They connect directly with their internal cache, as well as with the system bus and memory.

However, multicore chips have several issues to consider. First, the addition of more processor cores doesn’t automatically improve computer performance. The OS and applications must direct software program instructions to recognize and use the multiple cores. This must be done in parallel, by directing various threads to different cores within the processor package. Some software applications may need to be refactored to support and use multicore processor platforms. Otherwise, only the default first processor core is used, and any additional cores are unused or idle.

Second, the performance benefit of additional cores is not a direct multiple. That is, adding a second core does not double the processor’s performance, nor does a quad-core processor quadruple it. This happens because of the shared elements of the processor, such as access to internal caches, external buses and computer system memory.
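A common back-of-the-envelope way to reason about this diminishing return is Amdahl's law, which the article does not mention but which fits the point: only the parallelizable fraction of a workload benefits from extra cores. The short Python sketch below is purely illustrative, with a made-up 90% parallel fraction.

    def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
        """Estimate speedup from Amdahl's law: only the parallel fraction
        of a workload benefits from extra cores; the serial part does not."""
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    if __name__ == "__main__":
        for n in (1, 2, 4, 8):
            # With a 90% parallel workload, 8 cores give roughly 4.7x, not 8x.
            print(f"{n} cores -> {amdahl_speedup(0.9, n):.2f}x estimated speedup")

With those assumed numbers, eight cores yield roughly a 4.7x speedup rather than 8x, which is exactly the non-linear scaling described above.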

The benefit of multiple cores can be substantial, but there are practical limits. Still, the acceleration is typically better than a traditional multiprocessor system because the coupling between cores in the same package is tighter and there are shorter distances and fewer components between cores.

Consider the analogy of cars on a road. Each car might be a processor, but each car must share the common roads and traffic limitations. More cars can transport more people and goods in a given time, but more cars also cause congestion and other problems.

What are multicore processors used for?

Multicore processors work on any modern computer hardware platform. Virtually all PCs and laptops today include some multicore processor model. However, the true power and benefit of these processors depends on software applications designed to emphasize parallelism. A parallel approach divides application work into numerous processing threads, and then distributes and manages those threads across two or more processor cores.
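As a rough, hypothetical illustration of that parallel approach, the Python sketch below splits a simple workload into chunks and hands one chunk to each worker process, so the operating system can schedule the workers across the available cores. The workload (summing squares) and the chunking scheme are invented for the example.

    from multiprocessing import Pool, cpu_count

    def sum_of_squares(chunk):
        # Each worker process handles one chunk independently.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        workers = cpu_count()  # one worker per logical processor
        # Split the data into one chunk per worker.
        chunks = [data[i::workers] for i in range(workers)]
        with Pool(processes=workers) as pool:
            partials = pool.map(sum_of_squares, chunks)
        print("total:", sum(partials))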

There are several major use cases for multicore processors, including the following five:

  1. Virtualization. A virtualization platform, such as VMware, is designed to abstract the software environment from the underlying hardware. Virtualization is capable of abstracting physical processor cores into virtual processors or central processing units (vCPUs) which are then assigned to virtual machines (VMs). Each VM becomes a virtual server capable of running its own OS and application. It is possible to assign more than one vCPU to each VM, allowing each VM and its application to run parallel processing software if desired.
  2. Databases. A database is a complex software platform that frequently needs to run many simultaneous tasks such as queries. As a result, databases are highly dependent on multicore processors to distribute and handle these many task threads. The use of multiple processors in databases is often coupled with extremely high memory capacity that can reach 1 terabyte or more on the physical server.
  3. Analytics and HPC. Big data analytics, such as machine learning, and high-performance computing (HPC) both require breaking large, complex tasks into smaller and more manageable pieces. Each piece of the computational effort can then be solved by distributing each piece of the problem to a different processor. This approach enables each processor to work in parallel to solve the overarching problem far faster and more efficiently than with a single processor.
  4. Cloud. Organizations building a cloud will almost certainly adopt multicore processors to support all the virtualization needed to accommodate the highly scalable and highly transactional demands of cloud software platforms such as OpenStack. A set of servers with multicore processors can allow the cloud to create and scale up more VM instances on demand.
  5. Visualization. Graphics applications, such as games and data-rendering engines, have the same parallelism requirements as other HPC applications. Visual rendering is math- and task-intensive, and visualization applications can make extensive use of multiple processors to distribute the calculations required. Many graphics applications rely on graphics processing units (GPUs) rather than CPUs. GPUs are tailored to optimize graphics-related tasks. GPU packages often contain multiple GPU cores, similar in principle to multicore processors.

Homogeneous vs. heterogeneous multicore processors

The cores within a multicore processor may be homogeneous or heterogeneous. Mainstream Intel and AMD multicore processors for x86 computer architectures are homogeneous and provide identical cores. Consequently, most discussions of multicore processors are about homogeneous processors.

However, dedicating a complex device to do a simple job, or to get the greatest efficiency, is often wasteful. There is a heterogeneous multicore processor market that uses processors with different cores for different purposes. Heterogeneous cores are generally found in embedded or Arm processors that might mix microprocessor and microcontroller cores in the same package.

There are three general goals for heterogeneous multicore processors:

  1. Optimized performance. While homogeneous multicore processors are typically intended to provide vanilla or universal processing capabilities, many processors are not intended for such generic system use cases. Instead, they are designed and sold for use in embedded — dedicated or task-specific — systems that can benefit from the unique strengths of different processors. For example, a processor intended for a signal processing device might use an Arm processor that contains a Cortex-A general-purpose processor with a Cortex-M core for dedicated signal processing tasks.
  2. Optimized power. Providing simpler processor cores reduces the transistor count and eases power demands. This makes the processor package and the overall system cooler and more power-efficient.
  3. Optimized security. Jobs or processes can be divided among different types of cores, enabling designers to deliberately build high levels of isolation that tightly control access among the various processor cores. This greater control and isolation offer better stability and security for the overall system, though at the cost of general flexibility.

Examples of multicore processors

Most modern processors designed and sold for general-purpose x86 computing include multiple processor cores. Examples of the latest Intel 12th-generation multicore processors include the following:

  • Intel Core i9 12900 family provides 8 cores and 24 threads.
  • Intel Core i7 12700 family provides 8 cores and 20 threads.
  • Top Intel Core i5 12600K processors offer 6 cores and 16 threads.

Examples of the latest AMD Zen multicore processors include:

  • AMD Zen 3 family provides 4 to 16 cores.
  • AMD Zen 2 family provides up to 64 cores.
  • AMD Zen+ family provides 4 to 32 cores.

This was last updated in March 2022



CPU Core, Multi-Core, Thread, Core vs Threads, Hyper-Threading

By Matthew Martin

What is Concurrency or Single Core?

In Operating Systems, concurrency is defined as the ability of a system to run two or more programs in overlapping time phases.

Concurrent execution with time slicing

As you can see, at any given time, there is only one process in execution. Therefore, concurrency is only a generalized approximation of real parallel execution. This kind of situation can be found in systems having a single-core processor.

In this Concurrency tutorial, you will learn

  • What is Concurrency or Single Core?
  • What is Parallel Execution or (Multi-Core)?
  • What is Thread?
  • What is Multithreading?
  • How Multithreading Works?
  • What is CPU Core?
  • What is the Main Issue with Single Core?
  • The Solution Provided by Multi-Core:
  • Benefits of Multi-core Processor
  • Difference between Core vs. Threads
  • What is Hyper-Threading?

What is Parallel Execution or (Multi-Core)?

In parallel execution, the tasks to be performed by a process are broken down into sub-parts, and multiple CPUs (or multiple cores) process each sub-task at precisely the same time.

Parallel Execution

As you can see, at any given time, all processes are in execution. In reality, it is the sub-tasks of a process which are executing in parallel, but for better understanding, you can visualize them as processes.

Therefore, parallelism is the real way in which multiple tasks can be processed at the same time. This type of situation can be found in systems having multicore processors, which includes almost all modern, commercial processors.

KEY DIFFERENCE

  • Cores increase the amount of work a CPU can accomplish at a time, whereas threads improve throughput and computational speed-up.
  • A core is an actual hardware component, whereas a thread is a virtual component that manages tasks.
  • A single core relies on context switching to juggle multiple threads, whereas truly parallel multithreading is achieved by using multiple CPUs or cores.
  • A core is itself a single processing unit, whereas running many threads in parallel requires multiple processing units.

What is Thread?

A thread is a unit of execution in concurrent programming. Multithreading is a technique which allows a CPU to execute many tasks of one process at the same time. These threads can execute individually while sharing their resources.

What is Multithreading?

Multithreading refers to running multiple threads of execution within an operating system. It can involve multiple system processes.

How Multithreading Works?

Most modern CPUs support multithreading. A simple app on your smartphone can give you a live demo of it.

When you open an app that requires some data to be fetched from the internet, the content area of the app is replaced by a spinner. The spinner rotates until the data is fetched and displayed.

In the background, there are two threads:

  • One fetching the data from a network, and
  • One rendering the GUI that displays the spinner

Both of these threads execute one after the other to give the illusion of concurrent execution.
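A minimal sketch of that two-thread pattern might look like the following Python snippet, where the network fetch is simulated with a sleep and the "spinner" is just a printed message; the function names are illustrative, not taken from any real app.

    import threading
    import time

    done = threading.Event()

    def fetch_data():
        # Simulate a slow network request.
        time.sleep(3)
        done.set()

    def show_spinner():
        # Keep "rendering" the spinner until the fetch thread finishes.
        while not done.is_set():
            print("loading...", end="\r")
            time.sleep(0.2)
        print("data loaded    ")

    threading.Thread(target=fetch_data).start()
    show_spinner()  # the main thread plays the role of the GUI thread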

What is CPU Core?

In general usage, a "core" is the central part of something. In a computer system, a CPU core is the processing unit at the heart of the CPU that actually executes instructions.

There are basically two types of core processor:

  1. Single-Core Processor
  2. Multi-Core Processor

What is the Main Issue with Single Core?

There are mainly two issues with Single Core.

  • To execute tasks faster, you need to increase the clock speed.
  • Increasing the clock speed increases power consumption and heat dissipation to extremely high levels, which makes the processor inefficient.

The Solution Provided by Multi-Core:

  • Creating two or more cores on the same die increases the processing power while keeping the clock speed at an efficient level.
  • A processor with two cores running at an efficient speed can process instructions at a rate similar to a single-core processor running at twice the clock speed, yet the multicore processor consumes less energy.

Benefits of Multi-core Processor

Here are some advantages of the multicore processor:

  • More transistors per chip
  • Shorter connections
  • Lower capacitance
  • Smaller circuits can operate at higher speeds

Difference between Core vs. Threads

Parameters | Core | Threads
Definition | A CPU core is an actual hardware component. | A thread is a virtual component that manages the tasks.
Process | The CPU is fed tasks from a thread; it only accesses a second thread when the information sent by the first thread is not reliable. | There are many different variations of how a CPU can interact with multiple threads.
Implementation | Achieved through interleaving (context switching). | Performed by using multiple CPUs or cores.
Benefit | Increases the amount of work accomplished at a time. | Improves throughput and computational speed-up.
Makes use of | Context switching. | Multiple CPUs for operating numerous processes.
Processing units required | Requires only a single processing unit. | Requires multiple processing units.
Example | Running multiple applications at the same time. | Running a web crawler on a cluster.

What is Hyper-Threading?

Hyper-threading was Intel’s first effort to bring parallel computation to end users’ PCs. It was first used on desktop CPUs with the Pentium 4 in 2002.

The Pentium 4s of that time featured only a single CPU core, so they could execute only one task at a time and could not truly run multiple operations in parallel.

A single CPU with hyper-threading appears as two logical CPUs to the operating system. The CPU is still physically single, and it has only one set of execution resources per physical core, but the OS sees two logical CPUs for each core.

As a result, the system behaves as though it has more cores than it physically does: the operating system schedules work onto two logical CPUs for each physical CPU core.

Summary:

  • A thread is a unit of execution in concurrent programming.
  • Multithreading refers to running multiple threads of execution within an operating system.
  • Today, many modern CPUs support multithreading.
  • Hyper-threading was Intel’s first effort to bring parallel computation to end users’ PCs.
  • A CPU core is the processing unit at the heart of the CPU that actually executes instructions.
  • In operating systems, concurrency is defined as the ability of a system to run two or more programs in overlapping time phases.
  • In parallel execution, the tasks to be performed by a process are broken down into sub-parts.
  • The main issue with a single-core processor is that, in order to execute tasks faster, you need to increase the clock speed.
  • Multicore resolves this issue by creating two or more cores on the same die to increase the processing power while keeping the clock speed at an efficient level.
  • The biggest benefit of a multicore system is that it allows more transistors per chip.
  • CPU cores are the actual hardware components, whereas threads are the virtual components that manage the tasks.

What Does Processor Count Mean?

Processor count refers to the number of cores a CPU has. A core is essentially a small processor built into a larger chipset that is capable of independent computations. The number of cores varies greatly, with some CPUs sporting two cores and beefier ones having up to 64 cores or more.

In the old days, when things were simpler, CPUs had just one core, meaning there was a single set of ALU, registers, cache memory and so on. However, as technology progressed, things changed and the CPU started featuring multiple physical entities called cores under a single chipset. When we talk about what processor count means, we are generally referring to these cores.

Each core handling one task will be independent of another core working on a different task. More cores let the CPU work on multiple tasks seamlessly.

In this article, we look further at what processor count means and what a core actually entails.


What Does Processor Count Mean?

As mentioned earlier, processor count is basically the number of cores on a processor.

Additionally, a single CPU core can be broken down into virtual processing units known as threads or logical processors. More on this below.

ONE CAVEAT – SERVER GRADE COMPUTERS!

Image: For server-grade systems, processor count may refer to the actual number of CPU sockets rather than the core count. Image source: Supermicro

In the server category, some motherboards have multiple CPU sockets which can take two or more separate CPUs for more demanding processing.

How Processor Count Influences CPU Performance

It’s time to see what processor count means for overall computer performance.

Multitasking is a staple of modern-day computers. It’s what lets you work, have multiple browser tabs open, watch a video, and do several other things on your computer at the same time seamlessly.

Higher core counts let you run multiple applications concurrently since each core handles a different data stream on its thread(s). In this kind of situation where you have multiple apps and services running, the more threads you have running different tasks, the better the performance.

Modern computers have a lot of background services and apps running without you knowing about them. Even when your PC is idle, there are OS-related services running in the background, all utilizing CPU resources.

Having many cores means you have more workers to handle the computation.

Additionally, some professional processes like encoding, rendering, machine learning or those that rely on massive amounts of computation require you to have many different workers (cores) processing small chunks of data simultaneously.


Figuring Out How Many Cores You Have

You can figure out the number of cores in your processor using two methods:

1. Through the Manufacturer’s Spec Sheet Online

You will need to know the CPU make and model to look up its spec sheet online.

Image: Under the CPU specifications, you can find the number of cores a processor has in the manufacturer’s spec sheet. Source: Intel

2. Through Task Manager

Another simple method is to open Task Manager (Ctrl + Shift + Esc), head over to the “Performance” tab, select the “CPU” section and look for the core count.

In the following image you can see that the Intel Core i7-7700HQ has 4 cores.

Image: Task manager showing the processor model as well as the core count for the processor.

You will also notice another field called Logical Processors (8 in this case); this, however, does not pertain to the actual physical cores on your CPU. Instead, it relates to the number of threads you have (a small code sketch after the link below shows how to check both counts programmatically). Read more about logical processors here:

  • How to Check How Many CPU Threads Do I Have
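If you would rather check these numbers from code than from Task Manager, a small Python sketch such as the one below can report both counts. It assumes the optional third-party psutil package is installed for the physical-core figure; os.cpu_count() alone only reports logical processors.

    import os
    import psutil  # third-party: pip install psutil

    logical = os.cpu_count()                    # logical processors (threads)
    physical = psutil.cpu_count(logical=False)  # physical cores

    print(f"Physical cores     : {physical}")
    print(f"Logical processors : {logical}")
    # On a hyper-threaded CPU such as the i7-7700HQ above, this would
    # typically report 4 physical cores and 8 logical processors.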

What Processor Count Do You Need?

The number of cores you need depends on your workload.

For Basic Computing 2-4 Cores

The most basic CPUs available at the moment for laptop and desktop are the Intel Celeron CPUs. These have 2 cores.

The next level of CPUs in the budget category are the AMD Athlon and the Intel Pentium series. These can feature up to 4 cores.

I generally do not recommend going for dual-core Intel Celeron CPUs, as they can show limitations even in the most basic work environments.

For basic computing, 4 cores is what I recommend.

Mainstream Computing

If you are a gamer, or a casual editor and a designer, 4-6 cores are recommended. You are looking into the likes of Intel Core i3/Core i5 and AMD Ryzen 3/Ryzen 5 CPUs here.

High Performance

These are often used by professional gamers and professionals needing a light workstation. Here you can expect to find CPUs offering 8 cores, such as AMD Ryzen 7 or Intel Core i7 processors.

Workstation Grade CPUs

Here, the sky is the limit. Workstation-grade CPUs can feature up to 64 cores.

We have covered the basics up to here, and by now you should have a good idea of what processor count means. However, to learn more about what cores are and what single-core and multi-core performance mean, we recommend you read the rest of the article.

What You Need to Know About Cores

Cores on a processor are by themselves individual processing units. This means that they come with actual capabilities to perform independent data processing. Think of them as a smaller processor within the main processor.

They share some resources like the Level 3 cache, memory controllers, and the system interface that connects to other devices; however, the ALU, control unit, and Level 1 and 2 caches are built into each core individually, as far as current architectures stand.

To understand the need for cores and how having more cores affects performance, let us start by understanding what single-core processors are (or were).

Single Core Processors

In the early days of computing, processors only had one core. This was responsible for being the brain of the processor.

Some of the important subunits of a CPU are as follows (you DON'T need to know these for the purposes of this article):

  • Arithmetic Logic Unit: Where the logical and arithmetic operations happen. If the CPU is the brain of a computer, the ALU is the brain of the CPU.
  • Floating Point Unit: A supporting unit for the ALU for performing computations with decimal (floating-point) numbers.
  • Registers: Temporary storage for executing operations. They also serve as status flags.
  • Control Unit: For instruction execution. Works as the orchestrator.
  • Cache: Very fast memory for fetching data and instructions.

All of these subunits within a single core are the key ingredients of the Fetch-Decode-Execute cycle.

Image: The Fetch-Decode-Execute cycle. Source: Christopher Kalodikis.

Since a single-core CPU has only a single set of these subunits, it could only perform a single Fetch-Decode-Execute cycle at a time.
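To make that idea concrete, here is a toy, purely illustrative sketch of the loop in Python: a single "core" that can only fetch, decode and execute one instruction per iteration. The tiny instruction set (LOAD, ADD, HALT) is invented for the example.

    # A toy single-core machine: one fetch-decode-execute loop, one instruction at a time.
    program = [("LOAD", 5), ("ADD", 3), ("ADD", 2), ("HALT", 0)]

    accumulator = 0
    pc = 0  # program counter

    while True:
        opcode, operand = program[pc]   # FETCH the next instruction
        pc += 1
        if opcode == "LOAD":            # DECODE the opcode ...
            accumulator = operand       # ... and EXECUTE it
        elif opcode == "ADD":
            accumulator += operand
        elif opcode == "HALT":
            break

    print("result:", accumulator)  # prints 10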

The scheduling algorithms used made it seem that the computer was multitasking, but in an actual sense, the core was just handling different processes and switching between them indiscernibly fast!

Limitations of Single Core and Introduction Of Multicore Processors

As the market demanded faster and faster performance from CPUs, the initial answer was to increase the clock speed of the single-core CPU.

Hence, the single core clock speed of a Pentium III released in 1999, for instance, drastically improved over the single core clock speed of the Pentium II released in 1997.

This introduced two problems:

  • Heat
  • Context Switch Overhead

As the clock speed became higher and higher, so did the heat generation. The cooling requirements and power consumption simply became impractical.

The other issue was Context Switch Overhead. A context switch overhead is basically a delay that occurs when a CPU has to switch from one task to another. So if you switch from a Word document to an Excel sheet, the CPU experiences a delay.

Now if you were to have two cores, you could have the Word Document loaded onto one core and the Excel Sheet onto the other, thus eliminating the Context Switch Overhead.
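Operating systems expose this idea directly through CPU affinity. The sketch below is a Linux-only illustration using Python's standard library: it pins the current process to core 0, much as a scheduler might keep one workload on one core while another workload runs on a different core.

    import os

    # os.sched_getaffinity / os.sched_setaffinity are available on Linux only.
    pid = 0  # 0 means "the current process"

    print("allowed cores before:", os.sched_getaffinity(pid))
    os.sched_setaffinity(pid, {0})   # pin this process to core 0
    print("allowed cores after: ", os.sched_getaffinity(pid))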

Therefore, in terms of multitasking efficiency, a single-core processor was simply not good enough.

Hence, as the prospects of multitasking and parallel processing gained momentum and as the market started demanding better performance, multiple cores turned out to be the answer.


MultiCores vs Multi-CPU Approach

Early attempts at adding more processing units to a computer had engineers redesign motherboards to accommodate multiple CPU sockets. More CPUs inherently meant higher operational efficiency.

This had issues, though. The first was the increased hardware requirements: each extra CPU needed cooling, and motherboards required new traces to connect all the sockets to the various I/O devices and controllers.

Ironically, the builds weren't as efficient as initially hoped, because latency issues came along with them. But with miniaturization, it became possible to fit multiple processors onto a single silicon chip, and this gave rise to multicore processors.


What is a Multi Core Processor?

Image: Quad-core die chart

A multicore processor is basically a CPU that has several independent smaller processors (cores) inside. The number of these cores is also referred to as the processor count.

Each core has its own ALU, FPU, registers, cache, etc. A few components are shared across cores, such as the L3 cache memory; however, for the most part, each core works as an independent CPU.

The immediate benefit here is that it drastically improves the multitasking performance of a PC.


Single Core vs Multicore Performance

While having multiple cores, or a higher processor count, can drastically improve the performance of a CPU, single-core performance is still a critical measurement of its prowess.

Single-Core Performance

As the name suggests, single core performance refers to how well a singular core performs.

This is an important measure since there are many applications and tasks out there that heavily utilize a single core and do not scale well with multiple cores.

For instance, many games, and tasks in professional software such as designing in AutoCAD, are heavily reliant on single-core performance rather than the multi-core performance of a CPU.

This is not a rare but a very common observation. Hence, when you look at the benchmark results for a CPU from a test suite such as Cinebench, you will see that they typically report single-core and multi-core performance separately.

CPU frequency is the most commonly used measurement of a computer's single-core performance. It is measured in gigahertz (GHz), and higher values mean more cycles per second, which can be interpreted as a faster chip.

In reality, the individual performance of CPU cores depends on a myriad of factors and not just the clock speed, such as the design of the core, the architecture used, transistor size, cache memory, etc.

Most individual cores on modern CPUs also leverage higher boost clock speeds when needed. The Ryzen 5 5600X, for instance, has a base frequency of 3.7 GHz. A single core can boost to 4.6 GHz, whereas all six cores combined max out at 4.2 GHz.


Multi Core Performance

Multi core performance is, again as the name suggests, the measure of how well the multiple cores working together perform.

Multiple cores have almost become a necessity, since typical computational needs have become complex even for an average person.

After all, a typical PC can have many background applications running at the same time. All those background operations would ideally have a core of their own to run smoothly.

Therefore, even if your game or your software uses a single core at most, the overall performance of the PC will benefit from a higher core count.

Where Multicore Performance Matters

If you need to perform multiple operations simultaneously, you will benefit from a multicore processor. With several cores onboard, one core can handle one instruction while another waits for resources, and you will still see good performance.

Workloads such as 3D modeling and rendering require a lot of parallel computing. The same applies to things like virtualization, simulation, and video editing and encoding.

They make multitasking seamless and offer better performance. Gamers can enjoy the benefits as well since many newer titles can access multiple cores. At the same time, many simulation games that require fast and complex calculations also perform better on multicore processors.

Multicore processors are, however, much more expensive. They are also harder to manage, and it is harder to build applications that take full advantage of them.

However, newbies often get carried away by the core count of a CPU, not realizing that if their computational requirements are basic, they will never utilize the full potential of their CPU anyway.


Threads, Multithreading and Logical Processor Count

As stated, cores are individual CPUs, each capable of performing its instruction cycle.

Multithreading (called Hyper-Threading on Intel CPUs and simultaneous multithreading, or SMT, on AMD CPUs) breaks the core down into smaller processing-capable subunits called threads.

This allows single cores to work on essentially two tasks at a time.

Every process your CPU performs gets assigned to a thread, and each core can have two threads if it has multi-threading enabled. This means that a four-core CPU with multithreading enabled will have eight threads.

Your computer may report these threads as processors as well. However, they are not physical processors but logical processors.

You can access the Task Manager to check your core and logical processor count. The processor shown above has 4 cores but 8 logical processors, since it has hyper-threading enabled.

Each process that you initialize creates a thread and the thread gets executed. Multiple threads executing different processes concurrently makes it look like the CPU is multitasking.

Unlike cores, threads aren’t physical segments on the processor. They are entirely logical units of processing, whereas cores are actual processors on the chip. The creation and disposal of such threads is handled by the operating system’s scheduler.


Benefits of Multithreading

Multithreading improves CPU performance particularly for multitasking and rendering work. It can be very beneficial for single-core CPUs by allowing the one core to handle multiple tasks simultaneously.


Final Words

Knowing what processor count means is essential if you’re planning on buying or building a PC. If you often do demanding productive work that involves complex calculations and a lot of virtualization, you’ll benefit from having many cores.

But for the majority of casual users, you don’t really need more than 4 cores. For gamers and professionals, six cores is a sweet spot.

Cores or threads: finding out what is more important for the processor

The specification of each processor necessarily contains information about the number of cores and threads. The rule of "the more the better" still applies here, but let’s find out in which tasks virtual cores can give a significant performance boost, and in which they remain useless.

Why does a processor need multiple cores?

The processor is the computing center of any computer, tablet, smartphone, and even a game console. It is the processor that receives user commands entered in various applications and programs, processes them and distributes tasks between other system nodes — a video card, RAM, a solid state drive.

That is why the processor is the brain center of each computer, responsible for its computing abilities and speed.

The first processors were single-core devices that received commands and executed them in strict order. With only one core, the only meaningful criterion when buying a processor was its frequency. The lack of performance was at first compensated for by building two- and multi-processor configurations. In such assemblies, the user’s commands were processed by the first processor, and the remaining operations, where possible, were evenly distributed among the others. To build such systems, dual-processor boards or multi-socket configurations were used.

As the next step, manufacturers created a multicore architecture that allows several computing centers, each in fact an independent processor, to be placed on the area of a seemingly small microchip. So two-, four- and eight-core devices appeared on sale, which processed several streams of information at once.

Later, in the Pentium processor line, Intel introduced the ability for one core to execute two instruction threads, which marked the beginning of a new era in computer technology: processor hyper-threading. The company’s specialists are now actively working on a technology for implementing four threads on a single core, and in the near future such processors will be presented to the public.

How cores and threads differ

The core is an independent computing unit in the processor architecture, capable of performing a linear sequence of tasks over a certain period of time. If you load one core with several task sequences, then it will alternately switch between them, processing one task from each thread. On a system scale, this slows down programs and services.

A thread is a programmatically allocated share of a physical processor core. This virtual division allows two different instruction sequences to share the core’s resources and run in parallel. The operating system perceives each thread as a separate computing unit, so the core’s resources are used more efficiently and the calculation speed increases.

Should we expect a doubling of performance?

The virtual division of the processing power of the processor into threads is called hyperthreading. In practice, this is not a physical increase in the number of cores, therefore, the computing potential of the processor remains constant.

Hyperthreading is a tool that allows the processor to more quickly execute operating system commands and allocate computing resources.

Thus, doubling the number of threads relative to the number of cores can increase the efficiency of the processor by letting it perform multiple tasks simultaneously on each core. But the increase, even according to the assurances of market leader Intel, stays within about 30%.

But you should not worry about increased power consumption or excessive heating. Since the virtual division is designed in at the factory, the manufacturer has already accounted for all the operating parameters, such as power and TDP, specified in the specification.

What to choose: cores or threads?

Since the cores are the physical "think tanks" involved in calculations, they are responsible for the overall performance of the central processor. Therefore, the number of cores, along with the frequency of the processor, determines its performance.

But the number of threads also deserves attention. Let’s look at an example:

A dual-core processor with two threads is loaded by the operating system with four parallel sequences of instructions, for example, from open games and programs. The commands wait in four "queues", and the cores alternately perform calculations from each. At the same time, the performance of a core is often more than is needed to process one instruction stream. Therefore, part of the computing potential of the core, and hence of the processor, remains in reserve.

If we take a similar processor with two cores but four threads, then all four queues will be serviced simultaneously, loading the cores to the maximum. Consequently, tasks will be solved faster, and idle computing power is avoided.

In practice, this gives us the opportunity to run several programs at the same time: work with documents, listen to music, chat in instant messengers and search in the browser. At the same time, programs will work efficiently and quickly, without lag or freezes.

In production environments, workstations and servers should also favor more threads for the same number of cores, with the exception of special cases, such as working with 1C, where the clock frequency plays the decisive role, and a number of other applications that make heavy use of the TCP/IP stack. In these cases, parallelization causes a significant delay in packet processing.

Thus, the more cores there are in the processor, the higher its performance and the speed of performing various tasks. And doubling the number of threads allows you to increase the efficiency of the processor and use its technical potential to the fullest.


What is a processor? Processor core. CPU frequency. – MediaPure.Ru

Probably every user who is even a little familiar with computers has come across a bunch of unfamiliar characteristics when choosing a central processor: process technology, cache, socket; and has sought advice from friends and acquaintances who are competent in computer hardware. Let’s look at the variety of possible parameters, because the processor is the most important part of your PC, and understanding its characteristics will give you confidence in the purchase and further use.

Central processing unit

The personal computer’s processor is a microcircuit that is responsible for performing any data operations and controls peripheral devices. It is built on a piece of silicon called a die. For short, the abbreviation CPU (Central Processing Unit) is used. In today’s computer hardware market, there are two competing corporations, Intel and AMD, which are constantly in a race for the performance of new processors, constantly improving the manufacturing process.

Process

The process (process technology) refers to the manufacturing scale used to make processors. It determines the size of the transistors, measured in nm (nanometers). Transistors, in turn, form the internal basis of the CPU. The bottom line is that continuous improvement in manufacturing techniques allows the size of these components to be reduced. As a result, many more of them fit on the processor chip. This helps improve the performance of the CPU, so the process technology used is always indicated in its parameters. For example, the Intel Core i5-760 is made on a 45 nm process, and the Intel Core i5-2500K on a 32 nm process; based on this information, one can judge how modern the processor is and whether it outperforms its predecessor, but when choosing you also need to take into account a number of other parameters.

Architecture

Processors are also characterized by their architecture: a set of properties inherent in a whole family of processors, usually produced over many years. In other words, the architecture is the organization or internal design of the CPU.

Number of cores

The core is the most important element of the central processor. It is a part of the processor capable of executing a single instruction stream. The cores differ in cache size, bus frequency, manufacturing technology, etc. Manufacturers assign new names to them with each subsequent technical process (for example, the AMD processor core is Zambezi, and Intel is Lynnfield). With the development of processor manufacturing technologies, it became possible to place more than one core in one package, which significantly increases the performance of the CPU and helps to perform several tasks simultaneously, as well as use several cores in programs. Multi-core processors will be able to handle archiving, video decoding, modern video games, etc. faster. For example, Intel’s Core 2 Duo and Core 2 Quad processor lines, which use dual-core and quad-core CPUs, respectively. At the moment, processors with 2, 3, 4 and 6 cores are widely available. Most of them are used in server solutions and are not required by an ordinary PC user.

Frequency

In addition to the number of cores, performance is affected by the clock frequency. The value of this characteristic reflects the performance of the CPU in the number of cycles (operations) per second. Another important characteristic is the bus frequency (FSB, Front Side Bus), which indicates the speed at which data is exchanged between the processor and the computer’s peripherals. The clock frequency is proportional to the bus frequency.

Socket

In order for a future processor to be compatible with your existing motherboard during an upgrade, you need to know its socket. The socket is the connector on the motherboard in which the CPU is installed. The socket type is characterized by the number of pins and the processor manufacturer. Different sockets correspond to certain types of CPU, so each socket accepts a certain type of processor. Intel uses the LGA1156, LGA1366 and LGA1155 sockets, while AMD uses AM2+ and AM3.

Cache

Cache is a small amount of memory with a very high access speed, needed to speed up access to data that otherwise sits in memory with a lower access speed (RAM). When choosing a processor, keep in mind that increasing the size of the cache improves the performance of most applications. The CPU cache comes in three levels (L1, L2 and L3), located directly on the processor. Data from RAM gets into it for higher processing speed. It is also worth considering that for multi-core CPUs, the amount of L1 cache is indicated per core. The second-level cache performs similar functions, differing in lower speed and larger volume. If you intend to use the processor for resource-intensive tasks, then a model with a large second-level cache will be preferable, given that the total amount of L2 cache is indicated for multi-core processors. The most powerful processors, such as AMD Phenom, AMD Phenom II, Intel Core i3, Intel Core i5, Intel Core i7 and Intel Xeon, are equipped with an L3 cache. The third-level cache is the slowest, but it can be up to 30 MB.

Power consumption

The power consumption of a processor is closely related to its manufacturing technology. Shrinking the process technology, increasing the number of transistors and raising the clock frequency all drive up the power consumption of the CPU. For example, Intel’s Core i7 processors require up to 130 watts or more. The voltage supplied to the core clearly characterizes the power consumption of the processor. This parameter is especially important when choosing a CPU for use in a multimedia center. Modern processor models use various technologies that help combat excessive power consumption: built-in temperature sensors, automatic voltage and frequency control systems for processor cores, and power-saving modes under low CPU load.

Additional features

Modern processors have acquired the ability to work with RAM in dual- and triple-channel modes, which significantly affects performance, and they also support larger instruction sets, raising their functionality to a new level. GPUs can process video on their own, thereby offloading the CPU, thanks to DXVA technology (DirectX Video Acceleration). Intel uses Turbo Boost technology to dynamically change the CPU clock speed. SpeedStep technology manages CPU power consumption depending on processor activity, and Intel Virtualization Technology provides hardware support for running multiple operating systems. Modern processors can also be divided into virtual cores using Hyper-Threading technology. For example, a dual-core processor can present each physical core as two logical cores, giving four virtual cores for better multitasking performance.

When thinking about the configuration of your future PC, do not forget about the video card and its GPU (Graphics Processing Unit), the processor of your video card, which is responsible for rendering (arithmetic operations with geometric and physical objects, and so on). The higher the frequency of its core and its memory, the lower the load on the central processor. Gamers should pay special attention to the GPU.

What is more important for the processor? Number of cores or threads?

06/30/2020


Processor cores vs. threads is a question that still gnaws at PC enthusiasts and hobbyists. What is more important for a good processor, the number of cores or threads? Well, as you might expect, this question cannot be answered directly. Threads basically help the cores process information in a more efficient manner. That being said, CPU threads bring real, visible performance gains only in very specific tasks, so a hyper-threaded CPU can’t always help you achieve better results.

What is a central processing unit?

The processor (central processing unit) is the core of every smartphone, tablet, computer and server. It is a critical component that determines how your computer will perform and how well it can do its job.

The processor takes the basic instructions you enter on your computer and distributes those tasks to other chips in your system. By redeploying complex tasks to the chips best equipped to handle them, it allows your computer to perform at peak levels.

The processor is sometimes called the brain of the computer. It is located on the motherboard (also called the main board) and is a separate component from the memory component.

It acts on the memory component that stores all the data and information in your system. The memory component and processor are separate from your graphics card. The sole function of the graphics card is to take the data and convert it into the images you see on the monitor.

As technology advances from year to year, we see processors getting smaller and smaller, and they work faster than ever before. You can get a sense of how much faster if you learn something about Moore’s law, named after Intel co-founder Gordon Moore, who observed that the number of transistors in an integrated circuit doubles roughly every two years.

What does the processor do?

As we said earlier, the processor is the brain of your computer. It takes data from a specific program or application, performs a series of calculations, and executes a command. It performs a three-part cycle, otherwise known as an iterative fetch, decode, and execute cycle. In the first step, the processor fetches instructions from your system’s memory. Once it receives instructions from memory, it moves on to the second stage. It is in this second stage that it decodes those instructions.

As soon as the machine decodes the instructions, it proceeds to the third stage: execution. The decoded information passes through the CPU to reach the units that actually perform the required function. During execution, the CPU carries out the required calculations and sends the appropriate signals to the rest of your system.

This cycle repeats over and over again for every action and command you perform. The processor is an important part of any system, and it works closely with threads. Different processors have different numbers of threads to limit or increase your computer’s performance.

What is multithreading?

A thread is a small sequence of programmed instructions. Threads represent the highest level of code that your processor can execute. They are usually managed by the scheduler, which is a standard part of any operating system.

To create a thread, a process must first be started. The process then creates a thread that runs; it may last for a short or long period of time, depending on the process. No matter how long a task takes to complete, it looks like your computer is doing many things at the same time.

Each process has at least one thread, but there is no maximum number of threads a process can use. For specialized tasks, the more threads you have, the better the performance of your computer. With multiple threads, one process can process different tasks at the same time.
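As a small, hypothetical example of one process using several threads for different tasks, the Python snippet below downloads a few pages concurrently with a thread pool; the URLs are placeholders and the thread count is arbitrary.

    from concurrent.futures import ThreadPoolExecutor
    import urllib.request

    urls = [
        "https://example.com",
        "https://example.org",
        "https://example.net",
    ]

    def fetch(url):
        # Each download is an I/O-bound task, so threads overlap the waiting time.
        with urllib.request.urlopen(url, timeout=10) as resp:
            return url, len(resp.read())

    with ThreadPoolExecutor(max_workers=3) as pool:
        for url, size in pool.map(fetch, urls):
            print(f"{url}: {size} bytes")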

You will also hear people use terms like "multithreading" and "hyper-threading". Hyper-Threading technology allows one CPU core to act as two cores, speeding up the execution of a particular program or application.

Even with one core, it can simulate performance as if you had two cores. The more cores in the processor, the more threads. The more threads you have, the better the performance of your system will be.

What is Hyper-Threading?

Hyper-threading debuted in 2002 and was Intel’s attempt to bring parallel computing to users. It is a bit of a trick, as the OS recognizes the threads as separate processor cores. When you use an Intel chip with Hyper-Threading, your task manager will show you twice the number of cores and treat them as such. This allows the threads to exchange information and speeds up processing by sharing the resources of the core. Intel claims that this technology can improve performance by up to 30%.

How do processor cores and threads work?

Processor cores are hardware. They do all the hard work. Threads are used to help the processor perform many parallel tasks more efficiently at the same time. If the CPU does not have hyper-threading or multi-threading, tasks will be scheduled less efficiently, forcing it to work harder to access information that is important to run certain applications.

One core can work on one task at a time. Multiple cores will help you run various applications more smoothly. For example, if you are planning to run a video game, it will need multiple cores to run it, while other cores can run background apps like Skype, Spotify, Chrome, or whatever. Multithreading only makes processing more efficient. This will of course improve performance, but it will also cause the processor to consume more power, but since multithreading is already enabled in the chips, this is not a cause for concern. Although the processor consumes more power, this rarely causes the temperature to rise.

In short, when you’re considering upgrading, more threads means more performance or better multitasking, depending on which applications you’re using. If you use multiple programs at the same time, it will definitely result in a performance boost.

Multicore

Initially, processors had one core. This meant that the physical processor contained a single CPU. To improve performance, processors were replaced with models with more "cores", or additional central processors were added, if the manufacturer provided such an option. A dual-core processor has two CPUs, so it appears to the operating system as two processors. For example, a processor with two cores can run two different processes at the same time. This speeds up your system because your computer can do several things at once.

Unlike multithreading, there are no tricks here — a dual-core CPU literally has two CPUs on a CPU chip. A quad-core processor has four CPUs, an octa-core CPU has eight CPUs, and so on.

This helps greatly improve performance while keeping the physical CPU small enough to fit in a single socket. Only one CPU socket with one CPU package is needed, rather than four different CPU sockets with four different CPUs, each requiring its own power, cooling and other hardware. Latency is also lower because the cores can communicate faster, since they are all on the same chip.