AMD Releases Bristol Ridge to Retail: AM4 Gets APUs

The focus for AMD’s AM4 platform is to span a wide range of performance and price points. We’ve had the launch of the Ryzen CPU family, featuring quad cores up to octa-cores with the new Zen microarchitecture, but AM4 was always designed to be a platform that merges CPUs and integrated graphics. We’re still waiting for the new Zen cores in products like Ryzen to find their way down into the desktop in the form of the Raven Ridge family, however those parts are going through the laptop stack first and will likely appear on the desktop either at the end of the year or in Q1 next year. Until then, users get to play with Bristol Ridge, originally released back in September 2016, but finally making its way to retail.

First the OEMs, Now Coming To Retail

Back in 2016, AMD released Bristol Ridge to OEMs only. These parts were the highest-performing iteration of AMD’s Bulldozer design, using Excavator v2 cores on an AM4 motherboard with DDR4. We saw several systems from HP and others that combined these CPUs with proprietary motherboard designs (as the major OEMs do) at entry-level price points. For example, a base A12-9800 system with an R7 200-series graphics card sold for around $600 at Best Buy. Back at launch, Reddit user starlightmica saw this HP Pavilion 510-p127c in Costco:

$600 gets an A12-9800, 16GB of DDR4, a 1TB mechanical drive, an additional R7 2GB graphics card, 802.11ac WiFi, a DVDRW drive, and a smattering of USB ports.

Initially AMD’s focus on this was more about B2B sales. AMD’s reasoning for going down the OEM only route was one of control and marketing, although one might suggest that by going OEM only, it allowed distributors to clear their stocks of the previous generation APUs before Ryzen hit the shelves.

Still, these were supposed to be the highest-performing APUs AMD had ever made, and users still wanted a piece of the action. If you were lucky, a part might pop up on eBay from a broken-down system, but for everyone else the question has always been when AMD would make them available through regular retail channels. The answer is today, with a worldwide launch alongside Ryzen 3. AMD states that the Bristol Ridge chips aren’t designed to be hyped as the next big thing, but to fill in the stack of CPUs below $130, an area where AMD has had a lot of traction in the past, while still providing the best performance-per-dollar APUs on the market.

The CPUs

The eight APUs and three CPUs being launched span from a high-frequency A12 part down to the A6, and they all build on the Bristol Ridge notebook parts launched in 2016. AMD essentially skipped the 6th Generation, Carrizo, for desktop, as the Carrizo design was significantly mobile-focused (for Carrizo we ended up with one CPU, the Athlon X4 845 (which we reviewed), with DDR3 support but no integrated graphics). Using GlobalFoundries’ updated 28nm process, AMD was able to tweak the microarchitecture and build fully-fledged desktop APUs on a similar design.

The table of ‘as many specifications as we could get our hands on’ is as follows:

AMD 7th Generation Bristol Ridge Processors

                Modules /  CPU Base /    GPU        GPU Base /    TDP
                Threads    Turbo (MHz)              Turbo (MHz)
A12-9800        2M / 4T    3800 / 4200   Radeon R7  800 / 1108    65W
A12-9800E       2M / 4T    3100 / 3800   Radeon R7  655 / 900     35W
A10-9700        2M / 4T    3500 / 3800   Radeon R7  720 / 1029    65W
A10-9700E       2M / 4T    3000 / 3500   Radeon R7  600 / 847     35W
A8-9600         2M / 4T    3100 / 3400   Radeon R7  655 / 900     65W
A6-9550         1M / 2T    3800 / 4000   Radeon R5  576 / 800     65W
A6-9500         1M / 2T    3500 / 3800   Radeon R5  720 / 1029    65W
A6-9500E        1M / 2T    3000 / 3400   Radeon R5  576 / 800     35W
Athlon X4 970   2M / 4T    3800 / 4000   -          -             65W
Athlon X4 950   2M / 4T    3500 / 3800   -          -             65W
Athlon X4 940   2M / 4T    3200 / 3600   -          -             65W

AMD’s new entry-level processors will hit a maximum of 65W in their official thermal design power (TDP), with the launch offering a number of 65W and 35W parts. There was the potential to offer CPUs with a configurable TDP, as with previous APU generations, however much like the older parts that supported 65W/45W modes, it was seldom used, and chances are we will see system integrators stick with the default design power windows here. Also, the naming scheme: any 35W part now has an ‘E’ at the end of the processor name, allowing for easier identification.
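The naming convention above is mechanical enough to express in a few lines. The sketch below is a hypothetical helper (not an AMD tool) that applies the rule as stated: a trailing ‘E’ marks a 35W part, everything else in this launch is 65W.

```python
# Hypothetical helper illustrating the Bristol Ridge naming rule described
# above: a trailing "E" on the model number marks the low-power 35W variant,
# while the remaining launch parts carry a 65W TDP.
def bristol_ridge_tdp(model: str) -> int:
    """Infer the TDP class of a Bristol Ridge part from its retail name."""
    return 35 if model.strip().endswith("E") else 65

for name in ["A12-9800", "A12-9800E", "A10-9700E", "Athlon X4 950"]:
    print(f"{name}: {bristol_ridge_tdp(name)}W")
```

This only holds for the launch lineup listed here; it is a naming heuristic, not a general rule across AMD generations.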

Back when these CPUs were first launched, we were able to snag a few extra configuration specifications for each of the processors, including the number of streaming processors in each, base GPU frequencies, base Northbridge frequencies, and confirmation that all the APUs launched will support DDR4-2400 at JEDEC sub-timings.

The A12-9800 at the top of the stack is an interesting part on paper. If we do a direct comparison with the previous high-end AMD APUs, the A10-7890K, A10-7870K and A10-7860K, a lot of positives end up on the side of the A12.

AMD Comparison

              Ryzen 3 1200     A12-9800    A10-7890K / A10-7870K / A10-7860K   A10-9700
MSRP          $109             ?           $165 / $137 / $117                  ?
Platform      Ryzen            Bristol     Kaveri Refresh                      Bristol
uArch         Zen              Excavator   Steamroller                         Excavator
Threads       4C / 4T          2M / 4T     2M / 4T                             2M / 4T
CPU Base      3100             3800        4100 / 3900 / 3600                  3500
CPU Turbo     3400             4200        4300 / 4100 / 4000                  3800
IGP SPs       -                512         512                                 384
GPU Turbo     -                1108        866 / 866 / 757                     1029
TDP           65W              65W         95W / 95W / 65W                     65W
L1-I Cache    4×64 KB          2×96 KB     2×96 KB                             2×96 KB
L1-D Cache    4×32 KB          4×32 KB     4×16 KB                             4×32 KB
L2 Cache      4×512 KB         2×1 MB      2×2 MB                              2×1 MB
L3 Cache      8 MB             -           -                                   -
DDR Support   DDR4-2667/2400   DDR4-2400   DDR3-2133                           DDR4-2400
PCIe 3.0      x16              x8          x16                                 x8
Chipsets      B350, A320,      B350, A320, A88X, A78, A68H                     B350, A320,
              X/B/A300         X/B/A300                                        X/B/A300

The frequency range of the A12-9800 gives it a greater dynamic range than the A10-7870K (3.8-4.2 GHz rather than 3.9-4.1 GHz), and it pairs the Excavator v2 microarchitecture with an improved L1 cache, AVX2 support and a much higher integrated graphics frequency (1108 MHz vs. 866 MHz), all while coming in at a 30W lower TDP. That 30W drop is the most surprising part: we’re essentially getting better than the previous A10-class performance at lower power, which is most likely why AMD named the best APU in the stack an ‘A12’. The A12-9800 will be an extremely interesting part to review given its smaller L2 cache but faster graphics and DDR4 memory.

One thing users will notice is the PCIe support: these Bristol Ridge APUs only have PCIe 3.0 x8 for graphics. This means that most X370 motherboards that have two GPU slots will leave the second slot useless. AMD suggests moving to B350 instead, which only allows one add-in card.

The Integrated GPU

For the A-Series parts, integrated graphics is the name of the game. AMD configures the integrated graphics in terms of Compute Units (CUs), with each CU having 64 streaming processors (SPs) using third-generation GCN (GCN 1.2, also called GCN 3.0), the same architecture found in AMD’s R9 Fury line of GPUs. The lowest processor in the stack, the A6-9500E, has four CUs for 256 SPs, the A12 APUs have eight CUs for 512 SPs, and the remaining processors have six CUs for 384 SPs. In each case, the higher-TDP processor typically has the higher base and turbo frequencies.
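The CU-to-shader math above is a fixed multiplication, which a short sketch makes explicit (part names and CU counts are taken from the table in this article):

```python
# Each GCN Compute Unit contains 64 streaming processors, so the SP count
# for any Bristol Ridge part is simply its CU count times 64.
GCN_SPS_PER_CU = 64

# CU counts per part, as listed in the specification table above.
lineup = {"A12-9800": 8, "A10-9700": 6, "A6-9500E": 4}

for part, cus in lineup.items():
    print(f"{part}: {cus} CUs -> {cus * GCN_SPS_PER_CU} SPs")
# A12-9800: 8 CUs -> 512 SPs
# A10-9700: 6 CUs -> 384 SPs
# A6-9500E: 4 CUs -> 256 SPs
```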

AMD 7th Generation Bristol Ridge Processors

               GPU        GPU SPs  GPU Base  GPU Turbo  TDP  NB Freq
A12-9800       Radeon R7  512      800       1108       65W  1400
A12-9800E      Radeon R7  512      655       900        35W  1300
A10-9700       Radeon R7  384      720       1029       65W  1400
A10-9700E      Radeon R7  384      600       847        35W  1300
A8-9600        Radeon R7  384      655       900        65W  1300
A6-9550        Radeon R5  384      576       800        65W  1400?
A6-9500        Radeon R5  384      720       1029       65W  1400
A6-9500E       Radeon R5  256      576       800        35W  1300
Athlon X4 970  -          -        -         -          65W  1400?
Athlon X4 950  -          -        -         -          65W  1400
Athlon X4 940  -          -        -         -          65W  ?

The new top frequency of 1108 MHz for the A12-9800 is an interesting element in the discussion. Compared to the previous A10-7890K, we have a +28% increase in raw GPU frequency with the same number of streaming processors, but at a lower TDP. This means one of two things: either the 1108 MHz mode is a rarely-hit turbo state, as the TDP has to be shared between the CPU and GPU, or the silicon can sustain a 28% higher frequency with ease. Based on the overclocking results seen previously, it will be interesting to see how the GPU frequency scales without a TDP barrier and with sufficient cooling. For comparison, when we tested the A10-7890K in Grand Theft Auto at 1280×720 and low-quality settings, we saw an average of 55.20 FPS.
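The +28% figure quoted above falls straight out of the two GPU turbo clocks:

```python
# Percentage uplift in GPU clock, A12-9800 (Bristol Ridge) versus
# A10-7890K (Kaveri Refresh), using the turbo frequencies from the tables.
a12_9800_mhz = 1108    # A12-9800 GPU turbo
a10_7890k_mhz = 866    # A10-7890K GPU clock

uplift = (a12_9800_mhz - a10_7890k_mhz) / a10_7890k_mhz
print(f"GPU frequency uplift: {uplift:.1%}")  # -> 27.9%, i.e. ~+28%
```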

Bearing in mind the change in cache configuration moving to Bristol Ridge (from a 4 MB L2 down to a 2 MB L2, but with memory support rising from DDR3-2133 to DDR4-2400), that figure should improve, making this distinctly the most cost-effective part for gaming.

Each of these processors supports the following display modes:

— DVI, 1920×1200 at 60 Hz

— DisplayPort 1.2a, 4096×2160 at 60 Hz (FreeSync supported)

— HDMI 2.0, 4096×2160 at 60 Hz

— eDP, 2560×1600 at 60 Hz

Technically the processor will support three displays, with any mix of the above. Analog video via VGA can be supported by a DP-to-VGA converter chip on the motherboard or via an external dongle.

For codec support, Bristol Ridge can do the following (natively unless specified):

— MPEG2 Main Profile at High Level (IDCT/VLD)

— MPEG4 Part 2 Advanced Simple Profile at Level 5

— MJPEG 1080p at 60 FPS

— VC1 Simple and Main Profile at High Level (VLD), Advanced Profile at Level 3 (VLD)

— H.264 Constrained Baseline/Main/High/Stereo High Profile at Level 5.2

— HEVC 8-bit Main Profile Decode Only at Level 5.2

— VP9 decode is a hybrid solution via the driver, using CPU and GPU

AMD continues to support HSA, and the arrangement between the Excavator v2 modules in Bristol Ridge and the GCN graphics inside is no different: we still get full HSA 1.0 specification support. With the added performance, AMD claims equal scores for the A12-9800 against a Core i5-6500 ($192 tray price) in PCMark 8 Home with OpenCL acceleration, and lists the A12-9800E as a 17% increase in performance over the i5-6500T. In synthetic gaming benchmarks, AMD claims 90-100% better performance for the A12 over the i5 competition.

Performance Preview

Back when Bristol Ridge first launched to OEMs, several users managed to benchmark the processors. We cannot confirm these results, but they paint an interesting picture.

NAMEGT, a South Korean overclocker with ties to ASUS, has pushed the A12-9800 APU to 4.8 GHz by adjusting the multiplier. To do this, he used an early ASUS AM4 motherboard and AMD’s 125W Wraith air cooler.

Credit: NAMEGT and HWBot

NAMEGT ran this setup on multi-threaded Cinebench 11.5 and Cinebench 15, scoring 4.77 and 380 respectively for a 4.8 GHz overclock. If we compare this to our Bench database results, we see the following:

For Cinebench 15, this overclocked score puts the A12-9800 above the Haswell Core i3-4360 and the older AMD FX-4350, but below the Skylake i3-6100TE. The Athlon X4 845 at stock frequencies scored 314 while running at 3.5 GHz, which would suggest that a stock A12-9800 at 3.8 GHz would fall around the 340 mark.
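The "around the 340 mark" estimate above is simple linear frequency scaling from the Athlon X4 845's stock result, which is an optimistic simplification (cache and memory configurations differ between the two chips):

```python
# Frequency-scaled estimate of a stock A12-9800 Cinebench R15 nT score,
# using the Athlon X4 845 (same Excavator core family) as the baseline.
# Linear clock scaling ignores cache/memory differences, hence "~340".
x4_845_score = 314     # Athlon X4 845, Cinebench R15 nT at stock
x4_845_ghz = 3.5       # X4 845 operating frequency during the run
a12_9800_ghz = 3.8     # A12-9800 base frequency

estimate = x4_845_score * (a12_9800_ghz / x4_845_ghz)
print(f"Estimated A12-9800 stock score: ~{estimate:.0f}")  # -> ~341
```

The Bodnara result of 334 at stock, quoted below, lands close to this back-of-the-envelope figure.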

A preview by Korean website Bodnara, using the A12-9800 in a GIGABYTE motherboard, scored 334 for a stock Cinebench 15 multithreaded test and 96 for the single threaded test.

When we previously tested the Excavator architecture for desktop on the 65W Athlon X4 845, overclocking was a nightmare, with stability being a large issue. At the time, we suspected that due to the core design being focused towards 15W, moving beyond 65W was perhaps a bit of a stretch for the design at hand. This time around, as we reported before, Bristol Ridge is using an updated 28nm process over Carrizo, which may have a hand in this.

Price

Prices were not disclosed at the time of writing, although all the chips should be in the $50-$110 range. Certain models will be shipped with AMD’s 65W and 95W near-silent coolers, as we saw on the Kaveri refresh CPUs early last year.

AMD’s main competition in this space will be Intel’s Kaby Lake Pentium and Celeron lines, with AMD pitching its integrated graphics performance as being on a much higher level. Intel would counter with stronger single-thread performance in more office-type workloads.

AMD is planning to launch Raven Ridge for desktops sometime at the end of the year or Q1, after the laptop launch. These processors fill in that hole for the time being, although we’re all ready to experience Zen in an APU.

Some parts of this news were posted when Bristol Ridge originally launched. We’re still waiting on some of the processor specifications and will update when we get them.

AMD Launches 7th Generation A-Series And Athlon Processors To Retail

AMD originally detailed its 7th Generation Bristol Ridge A-Series APUs and Athlon processors during Computex 2016. The OEM and system integrator launch followed in September 2016 and was most noteworthy for heralding the first details of the AM4 socket and the B350/A320 chipsets. HP, Lenovo, and a number of system integrators soldiered on with systems in the interim. Unfortunately, AMD never specified a launch date or pricing for the PIB (Product In Box) units that you could buy at retail.

That changes today. AMD announced that it is shipping the PIB units, but the company didn’t give us any pricing information. The company also added three new models to the lineup, the Athlon X4 970, X4 940 and the A-Series A6-9550, but didn’t provide even the most basic specifications.

That means we’ll be investigating further as products pop up on shelves, but for now, we can give you the details of the existing models.

The 7th Generation Bristol Ridge A-Series And Athlon Lineup

The A-Series processors don’t come wielding the 14nm Zen architecture. Instead, they are an incremental evolution of the existing Carrizo design that fields Excavator cores. The Bristol Ridge APUs leverage the 28nm process and third-generation GCN (Graphics Core Next) graphics. AMD didn’t transition to a new node, but it did tweak the existing 28nm process to boost performance and reduce power consumption. AMD claims that, as a result, the 65W models provide the same performance as their 95W predecessors.

Bristol Ridge / Athlon  Cores  CPU Boost/Base  Processor Graphics  GPU CU / Max Frequency  TDP
A12-9800                4      4.2 / 3.8GHz    Radeon R7 Graphics  8 / 1,108MHz            65W
A12-9800E               4      3.8 / 3.1GHz    Radeon R7 Graphics  8 / 900MHz              35W
A10-9700                4      3.8 / 3.5GHz    Radeon R7 Graphics  6 / 1,029MHz            65W
A10-9700E               4      3.5 / 3.0GHz    Radeon R7 Graphics  6 / 847MHz              35W
A8-9600                 4      3.4 / 3.1GHz    Radeon R7 Graphics  6 / 900MHz              65W
Athlon X4 970           ?      ?               -                   -                       ?
Athlon X4 950           4      3.8 / 3.5GHz    -                   -                       65W
Athlon X4 940           ?      ?               -                   -                       ?
A6-9550                 2?     ?               Radeon R5 Graphics  ?                       ?
A6-9500                 2      3.8 / 3.5GHz    Radeon R5 Graphics  6 / 1,029MHz            65W
A6-9500E                2      3.4 / 3.0GHz    Radeon R5 Graphics  4 / 800MHz              35W

The processors drop into the AM4 socket and, true to AMD’s value proposition, expose unlocked multipliers. You can also overclock the graphics cores to your liking, and existing AM4 motherboards come with the requisite video outputs to support the APUs. The A-Series APUs provide eight lanes of PCIe 3.0, dual-channel DDR4 (up to 2,400MHz), USB 3.1 Gen 2, and storage connectivity of two SATA ports plus either x2 NVMe or x2 PCIe.

AMD offers a range of A6, A8, A10, and A12 models. The A12-9800 offers the best performance with four cores and a 3.8/4.2GHz base/boost clock. It comes armed with eight CUs that operate at a maximum frequency of 1,108MHz. The A12, A10, and A8 models come equipped with Radeon R7 graphics, while the A6 models come with Radeon R5 graphics. The 'E' designation denotes the low-power 35W TDP models. The Athlon models come bereft of integrated graphics.

AMD originally compared the A12-9800 to the Skylake Core i5-6500 and i5-6500T back in September 2016. AMD provided a few updated performance comparisons (generated in its test lab) against the Kaby Lake Pentium G4560. Intel added hyperthreading to the G4560, so it’s a much more capable model than in the past.

The company lines the A12-9800 up to tackle the G4560, claiming up to 204% more performance in 3DMark 11’s performance benchmark. AMD also claims the A12 matched the G4560 in the PCMark 8 Home test. It’s notable that AMD loaded the Intel system up with 16GB of DDR4, whereas it equipped the A12 with 8GB.

Thoughts

As we see in the Ryzen 3 1300X review, AMD’s bulkier Ryzen models address Intel’s i3 models, so a challenger for the Pentium lineup is a key requirement as AMD expands its new products to fill all market segments. Considering the seemingly late nature of the PIB launch, the processors will have a relatively short life. AMD has announced that Raven Ridge, which will come bearing the Zen microarchitecture and Vega cores, will come to market either by the end of the year or in early Q1 2018.

In the meantime, we’ll be collecting a few of the new APUs to run through our test suite. Stay tuned.

Paul Alcorn is the Deputy Managing Editor for Tom’s Hardware US. He writes news and reviews on CPUs, storage and enterprise hardware.

AMD Bristol Ridge APUs Launched For AM4 Desktop PCs

AMD has officially launched their 7th Generation Bristol Ridge APUs for desktop platforms. The latest APUs will be available through several OEM manufacturers. The new series utilizes the Excavator CPU and GCN graphics architectures to deliver increased performance on the same process node. AMD has done some fine-tuning with these processors, and that should be visible on Bristol Ridge.

AMD Launches Bristol Ridge APUs For AM4 Based Desktop PCs

AMD has launched a total of 8 new Bristol Ridge processors for the AM4 platform, comprising A-Series and Athlon-branded SKUs. AMD had already launched Bristol Ridge for notebooks but is now bringing it to the masses on desktop platforms. AMD has chosen OEMs to ship its AM4 platform to the mass market; the DIY market will get a taste of AM4 a bit later, when Zen is closer to launch. Until then, those interested in AM4 PCs can grab a ready-to-use PC from a local PC OEM.

“The consumer release of these new HP and Lenovo designs is an important milestone for AMD on two fronts. First, it marks a major increase in productivity performance, streaming video and eSports gaming experiences sought after by today’s consumers, delivered through our new 7th Generation AMD A-Series desktop processors. Second, because these new OEM designs also feature our new AM4 desktop platform, the motherboard ecosystem shows its readiness for our upcoming high-performance “Summit Ridge” desktop CPUs featuring “Zen” cores, which share the same platform,” said Kevin Lensing, Corporate VP and general manager of Client Computing at AMD.


Some features of Bristol Ridge include up to four x86 Excavator cores with 2 MB of shared L2 cache. The chips support HSA compute acceleration and the latest DDR4 memory standard. The Excavator core delivers better IPC (instructions per clock) than previous-generation cores. AMD Bristol Ridge A-Series APUs also come with built-in, high-throughput media accelerators for 4K H.264 and HEVC (H.265) support. The beefier details concern the AM4 platform itself, so let’s get into those.

The Promontory Chipset SKUs — Powering the AM4 Desktop Motherboards

AM4 is a big deal for team red, and they are going all in with feature support. First up is the fact that AM4 will be a unified socket for both CPUs and APUs, supporting both Zen-based processors and Excavator-based APUs. This means it will be able to support the entire stack of processors, ranging from entry level all the way up to the enthusiast parts, offering users more longevity and future upgrade support. AMD plans to support AM4 for many years, so that’s a plus point.


In general, the AM4 platform offers new I/O capabilities. We are looking at faster DDR4-2400 memory, PCIe Gen 3.0, USB 3.1 Gen 2, NVMe and SATA Express support. These features have been missing on AMD platforms for a while, so it’s nice that AMD is finally making a proper comeback with modern feature support.

Next up, we will be looking at four chipsets on AM4 motherboards. These are part of the Promontory stack, the codename for all chipsets used on AM4 boards. The two chipsets available today are B350 (Mainstream) and A320 (Essential). There’s also the A300 for small form factor PCs and a TBA chip for high-end, enthusiast processors that will launch later. The Bristol Ridge APUs feature 8 PCIe Gen 3 lanes, dual-channel DDR4 support, 4 USB 3.1 channels, and 2 SATA plus 2 NVMe or 2 PCIe lanes for expansion. Aside from that, B350 offers more PCIe Gen 2.0 lanes and more USB 3.1/3.0/2.0 ports than A320.


The AMD B350 chipset features a 70% power reduction over its AM3+ predecessor (5.8W vs. 19.6W). The latest DDR4 memory controller also offers 22% more bandwidth than DDR3.

The 7th Gen Bristol Ridge APU Lineup Detailed — A12-9800 Leads The Pack

The A12-9800 leads the Bristol Ridge lineup with a boost clock of up to 4.2 GHz. It also comes with the latest GCN core architecture, with 512 stream processors running at 1108 MHz, in a 65W TDP. The rest of the lineup comprises regular and 'E' variants, the latter tuned for a lower 35W TDP.

  • A12-9800: 4C4T, base 3.8GHz, up to 4.2GHz, Radeon R7 Graphics (8 CUs, up to 1108MHz), TDP 65W
  • A12-9800E: 4C4T, base 3.1GHz, up to 3.8GHz, Radeon R7 Graphics (8 CUs, up to 900MHz), TDP 35W
  • A10-9700: 4C4T, base 3.5GHz, up to 3.8GHz, Radeon R7 Graphics (6 CUs, up to 1029MHz), TDP 65W
  • A10-9700E: 4C4T, base 3.0GHz, up to 3.5GHz, Radeon R7 Graphics (6 CUs, up to 847MHz), TDP 35W
  • A8-9600: 4C4T, base 3.1GHz, up to 3.4GHz, Radeon R7 Graphics (6 CUs, up to 900MHz), TDP 65W
  • Athlon X4 950: 4C4T, base 3.5GHz, up to 3.8GHz, no integrated GPU, TDP 65W
  • A6-9500: 2C2T, base 3.5GHz, up to 3.8GHz, Radeon R5 Graphics (6 CUs, up to 1029MHz), TDP 65W
  • A6-9500E: 2C2T, base 3.0GHz, up to 3.4GHz, Radeon R5 Graphics (4 CUs, up to 800MHz), TDP 35W


AMD also provided some internal benchmarks comparing their new processors against Intel’s Core i5 solutions. In PCMark 8 and 3DMark 11, the AMD A12-9800 performed well against a Skylake Core i5 CPU, in both raw performance and performance-per-watt tests. The Bristol Ridge lineup is looking great, but I guess most enthusiasts will be waiting to upgrade to the high-end parts when Zen arrives early next year.

AMD Bristol Ridge Desktop AM4 SKUs:

SKU                Cores  Base/Boost Clock  GPU CUs  GPU SPs  GPU Clock  Memory     TDP
AMD A12-9800       4      3.8/4.2 GHz       8 CUs    512 SPs  1108 MHz   DDR4-2400  65W
AMD A12-9800E      4      3.1/3.8 GHz       8 CUs    512 SPs  900 MHz    DDR4-2400  35W
AMD A10-9700       4      3.5/3.8 GHz       6 CUs    384 SPs  1029 MHz   DDR4-2400  65W
AMD A10-9700E      4      3.0/3.5 GHz       6 CUs    384 SPs  847 MHz    DDR4-2400  35W
AMD A8-9600        4      3.1/3.4 GHz       6 CUs    384 SPs  900 MHz    DDR4-2400  65W
AMD A6-9500        2      3.5/3.8 GHz       6 CUs    384 SPs  1029 MHz   DDR4-2400  65W
AMD A6-9500E       2      3.0/3.4 GHz       4 CUs    256 SPs  800 MHz    DDR4-2400  35W
AMD Athlon X4 970  4      3.5+/3.8+ GHz     N/A      N/A      N/A        DDR4-2400  65W
AMD Athlon X4 950  4      3.5/3.8 GHz       N/A      N/A      N/A        DDR4-2400  65W
AMD Athlon X4 940  2      3.2/3.5 GHz       N/A      N/A      N/A        DDR4-2400  35W

AMD Bristol Ridge Family Could Feature a 16 CU APU

A rumor has emerged from Bitsnchips alleging that AMD might introduce an insanely powerful APU in their Bristol Ridge family. Bristol Ridge is the upcoming family of mainstream APUs that will be featured on the AM4 platform. We have previously detailed a range of SKUs bound for the family, but the rumored APU might pack a serious punch at consoles, if it’s actually planned by AMD.

The AMD Bristol Ridge family will be the first to hit the AM4 platform that is expected to launch in June 2016. AMD may use Computex as a platform to showcase their new motherboards and their first processor family to be featured on AM4. We have already seen a couple of leaks surrounding Bristol Ridge and rather than being a revolution to AMD’s APU family, the Bristol Ridge will mostly remain an evolution to the current Carrizo APUs.

The main thing to note is that Carrizo was only limited to mobility platforms (notebooks). We saw some late introductions of Excavator powered Athlons in the lineup but those are limited to current sockets. AM4 will be a huge departure from AM3+ and FM2+ as it brings the latest I/O and feature support to AMD motherboards with updated chipsets that allow the next iteration of storage and PCI-E capabilities. AM4 will be the first and foremost platform to support AMD’s latest Bristol Ridge APU family in mid-2016 and Zen based Summit Ridge FX family in Q4 2016.

Bristol Ridge’s Flagship Part Rumored To Feature 16 GCN 3.0 Compute Units

Straight from Bitsnchips, the rumor states that AMD is preparing a specific Bristol Ridge APU with 16 compute units based on the GCN 3.0 architecture. The 16 CUs mean that the chip would feature a total of 1024 stream processors, the same number featured on the Radeon HD 7850. The Radeon HD 7850 graphics card launched back in 2012 and was particularly great in the budget department; fast forward four years and we could be on our way to seeing similar performance from an APU.

This specific SKU would be based on the confirmed 28nm process node from GlobalFoundries. The site reports that the Bulk High Density design used on the Bristol Ridge family, which allows greater transistor density in a small die package, could let AMD incorporate beefier embedded GPUs on their APUs. A 1024 SP part would mean double the shaders of the current-generation Kaveri and Godavari APUs.

The massive GPU will require a lot of bandwidth to keep bottlenecks as low as possible, since bandwidth was a noticeable bottleneck even for the 8 CU Kaveri APUs on FM2+. AM4 will support dual-channel DDR4 memory, which can provide around 50 GB/s of bandwidth. That isn’t as much as the 153.6 GB/s of the discrete HD 7850 cards, but lower clocks and power requirements will compensate, keeping the efficiency of the chip largely unaffected.
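The bandwidth figures above can be sanity-checked with the standard DRAM peak-bandwidth formula (channels × bus width in bytes × transfer rate). At JEDEC DDR4-2400 the dual-channel peak works out to 38.4 GB/s; rates closer to DDR4-3200 are needed to approach the ~50 GB/s cited, so that figure assumes faster-than-JEDEC memory.

```python
# Theoretical peak DRAM bandwidth: channels * bus_width_bytes * MT/s.
# Each DDR4 channel is 64 bits (8 bytes) wide.
def ddr_bandwidth_gbs(mt_per_s: int, channels: int = 2, bus_bytes: int = 8) -> float:
    """Peak bandwidth in GB/s for a DDR memory configuration."""
    return channels * bus_bytes * mt_per_s / 1000

print(ddr_bandwidth_gbs(2400))  # dual-channel DDR4-2400 -> 38.4 GB/s
print(ddr_bandwidth_gbs(3200))  # dual-channel DDR4-3200 -> 51.2 GB/s
```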

Since this is a rumor, we wanted to speculate whether a 16 CU part is even possible on the 28nm process, and whether the rumor makes sense. A 16 CU part doesn’t seem out of proportion, as AMD already ships beefier chips in the PlayStation 4. A quad-core chip with a 1024 SP GPU is very possible given the density increase offered by the BHD process node. However, another possibility exists that also makes a lot of sense.

AMD’s marketing team loves to add up the CPU cores and GCN CUs when talking about total ‘compute cores’ on APUs. With Carrizo, AMD announced a 12 Compute Core chip that had 4 Excavator cores and 8 GCN CUs. AMD’s Bristol Ridge is expected to keep the CPU core count at a maximum of 4, which would leave 12 compute units on the GPU side. If that’s the case, we could be looking at a 768 stream processor GPU, still higher than the current Carrizo offerings, making for a fast desktop-class APU about as powerful as Microsoft’s Xbox One console. But since the rumor specifically says CU in the title, we will have to wait for confirmation from AMD to see if the SKU actually exists.

A quick glance at the technical features of Carrizo and Bristol Ridge reveals no differences between the two APU families. Both are based on third-generation GCN (GCN 1.2), the same version incorporated in the Fiji and Tonga GPUs. Both APU families have full support for DirectX 12; provide audio, UVD and DCE features; and support Dual Graphics, Panel Self Refresh and Dynamic Bezel Adjust, along with the ability to run up to 3 simultaneous displays. Bristol Ridge APUs will feature up to four x86 Excavator cores with 2 MB of shared L2 cache. They will support HSA 1.0 and the latest DDR4 memory standard. The Excavator core delivers better IPC (instructions per clock) than previous-generation cores.


Image Credits: Benchlife

The FP4 platform is solely for mobility products and will include several Bristol Ridge SoCs. The lineup will include several quad- and dual-core SKUs. The maximum base and boost clock speeds are 3.0 GHz and 3.7 GHz on the top-end chip, and the TDPs range from 15W to 35W. The Stoney Ridge family will have lower TDPs than the mainstream family, as those chips are geared for low-power devices. All chips feature a graphics processing unit that comes in 8/6/4 CU configurations, clocked in the range of 600 to 900 MHz. The 16 CU part was not listed in this lineup, but the slides themselves are old, so the new SKU may appear in later details. Detailed specifications of these chips can be seen below:

SKU                            Cores  Base/Boost Clock  L2 Cache  GPU CUs  GPU SPs  GPU Clock  Memory                 TDP/cTDP
AMD FX-9830P / Pro A12-9830B   4      3.0/3.7 GHz       2 MB      8 CUs    512 SPs  900 MHz    DDR4-2400 / DDR3-2133  35W/25W
AMD FX-9800P / Pro A12-9800B   4      2.7/3.6 GHz       2 MB      8 CUs    512 SPs  758 MHz    DDR4-1866 / DDR3-1866  15W/12W
AMD A12-9730P / Pro A10-9730B  4      2.8/3.5 GHz       2 MB      6 CUs    384 SPs  900 MHz    DDR4-2400 / DDR3-2133  35W/25W
AMD A12-9700P / Pro A10-9700B  4      2.5/3.4 GHz       2 MB      6 CUs    384 SPs  758 MHz    DDR4-1866 / DDR3-1866  15W/12W
AMD A10-9630P / Pro A8-9630B   4      2.6/3.2 GHz       2 MB      6 CUs    384 SPs  800 MHz    DDR4-2400 / DDR3-2133  35W/25W
AMD A10-9600P / Pro A8-9600B   4      2.3/3.2 GHz       2 MB      6 CUs    384 SPs  686 MHz    DDR4-1866 / DDR3-1866  15W/12W
AMD Pro A6-9500B               2      2.3/3.2 GHz       1 MB      4 CUs    256 SPs  800 MHz    DDR4-1866 / DDR3-1866  15W/12W

The Bristol Ridge AM4 APU family will contain 8 SKUs. There’s no name defined for them yet, but they will use a new branding scheme, most probably A-Series 9000, as revealed in a previous leak. Seven SKUs are based on a quad-core design, while one chip retains a dual-core design. Clock speeds range from 2.5 to 3.6 GHz base and 2.8 to 4.0 GHz boost across the processors listed. The graphics chips, ranging from 256/384/512 stream processors (Radeon R7/R5/R3), will come with clock speeds ranging from 900 to 948 MHz. All chips will support DDR4 memory clocked at 2400 MHz (native speeds). Surprisingly, there are also a few models without graphics units, which means they will either be Athlon or FX branded chips. The TDPs will range from 35W to 65W. More details can be seen in the table below:

AMD Bristol Ridge Desktop AM4 SKUs:

SKU                Cores  Base/Boost Clock  GPU CUs  GPU SPs  GPU Clock  Memory     TDP
AMD A12-9800       4      3.8/4.2 GHz       8 CUs    512 SPs  1108 MHz   DDR4-2400  65W
AMD A12-9800E      4      3.1/3.8 GHz       8 CUs    512 SPs  900 MHz    DDR4-2400  35W
AMD A10-9700       4      3.5/3.8 GHz       6 CUs    384 SPs  1029 MHz   DDR4-2400  65W
AMD A10-9700E      4      3.0/3.5 GHz       6 CUs    384 SPs  847 MHz    DDR4-2400  35W
AMD A8-9600        4      3.1/3.4 GHz       6 CUs    384 SPs  900 MHz    DDR4-2400  65W
AMD A6-9500        2      3.5/3.8 GHz       6 CUs    384 SPs  1029 MHz   DDR4-2400  65W
AMD A6-9500E       2      3.0/3.4 GHz       4 CUs    256 SPs  800 MHz    DDR4-2400  35W
AMD Athlon X4 970  4      3.5+/3.8+ GHz     N/A      N/A      N/A        DDR4-2400  65W
AMD Athlon X4 950  4      3.5/3.8 GHz       N/A      N/A      N/A        DDR4-2400  65W
AMD Athlon X4 940  2      3.2/3.5 GHz       N/A      N/A      N/A        DDR4-2400  35W


Raven Ridge (AMD)

Raven Ridge is the codename for AMD’s series of mainstream mobile and desktop APUs based on the Zen CPU and Vega GPU microarchitectures, succeeding Bristol Ridge. Raven Ridge processors are fabricated on GlobalFoundries’ 14 nm process and incorporate up to four cores.

Raven Ridge is an SoC for the mobile segment based on the Zen microarchitecture incorporating a Vega GPU.

Model               Launch        Cores/Threads  L3 Cache  Base Clock  Boost Clock  TDP   Integrated GPU     GPU Clock  Launch Price
Athlon 200GE        6 Sep 2018    2/4            4 MiB     3.2 GHz     —            35 W  Radeon Vega 3      1,000 MHz  $55
Athlon 220GE        21 Dec 2018   2/4            4 MiB     3.4 GHz     —            35 W  Radeon Vega 3      1,000 MHz  $65
Athlon 240GE        21 Dec 2018   2/4            4 MiB     3.5 GHz     —            35 W  Radeon Vega 3      1,000 MHz  $75
Athlon 3000G        20 Nov 2019   2/4            4 MiB     3.5 GHz     —            35 W  Radeon Vega 3      1,100 MHz  $49
Athlon PRO 200GE    6 Sep 2018    2/4            4 MiB     3.2 GHz     —            35 W  Radeon Vega 3      1,000 MHz  —
Ryzen 3 2200G       12 Feb 2018   4/4            4 MiB     3.5 GHz     3.7 GHz      65 W  Radeon Vega 8      1,100 MHz  $99
Ryzen 3 2200GE      19 Apr 2018   4/4            4 MiB     3.2 GHz     3.6 GHz      35 W  Radeon Vega 8      1,100 MHz  —
Ryzen 3 2200U       8 Jan 2018    2/4            4 MiB     2.5 GHz     3.4 GHz      15 W  Radeon Vega 3      1,000 MHz  —
Ryzen 3 2300U       8 Jan 2018    4/4            4 MiB     2.0 GHz     3.4 GHz      15 W  Radeon Vega 6      1,100 MHz  —
Ryzen 3 PRO 2200G   10 May 2018   4/4            4 MiB     3.5 GHz     3.7 GHz      65 W  Radeon Vega 8      1,100 MHz  —
Ryzen 3 PRO 2200GE  10 May 2018   4/4            4 MiB     3.2 GHz     3.6 GHz      35 W  Radeon Vega 8      1,100 MHz  —
Ryzen 3 PRO 2300U   8 Jan 2018    4/4            4 MiB     2.0 GHz     3.4 GHz      15 W  Radeon Vega 6      1,100 MHz  —
Ryzen 5 2400G       12 Feb 2018   4/8            4 MiB     3.6 GHz     3.9 GHz      65 W  Radeon RX Vega 11  1,250 MHz  $169
Ryzen 5 2400GE      19 Apr 2018   4/8            4 MiB     3.2 GHz     3.8 GHz      35 W  Radeon RX Vega 11  1,250 MHz  —
Ryzen 5 2500U       26 Oct 2017   4/8            4 MiB     2.0 GHz     3.6 GHz      15 W  Radeon Vega 8      1,100 MHz  —
Ryzen 5 2600H       10 Sep 2018   4/8            4 MiB     3.2 GHz     3.6 GHz      45 W  Radeon Vega 8      1,100 MHz  —
Ryzen 5 PRO 2400G   10 May 2018   4/8            4 MiB     3.6 GHz     3.9 GHz      65 W  Radeon RX Vega 11  1,250 MHz  $169
Ryzen 5 PRO 2400GE  10 May 2018   4/8            4 MiB     3.2 GHz     3.8 GHz      35 W  Radeon RX Vega 11  1,250 MHz  —
Ryzen 5 PRO 2500U   8 Jan 2018    4/8            4 MiB     2.0 GHz     3.6 GHz      15 W  Radeon Vega 8      1,100 MHz  —
Ryzen 7 2700U       26 Oct 2017   4/8            4 MiB     2.2 GHz     3.8 GHz      15 W  Radeon Vega 10     1,300 MHz  —
Ryzen 7 2800H       10 Sep 2018   4/8            4 MiB     3.3 GHz     3.8 GHz      45 W  Radeon Vega 11     1,300 MHz  —
Ryzen 7 PRO 2700U   8 Jan 2018    4/8            4 MiB     2.2 GHz     3.8 GHz      15 W  Radeon Vega 10     1,300 MHz  —

AMD’s Bristol Ridge AM4 CPUs will soon be hitting retail

Published: 27th July 2017 | Source: AMD | Author: Mark Campbell

 

Today AMD announced that its Bristol Ridge APUs will see a retail release, filling in the market segment below Ryzen 3 at price points said to be under £100. 

 

These APUs will be the first released on the AM4 platform outside of the OEM space. They are based on AMD’s older Excavator CPU architecture and GCN GPU architecture, which means the performance of these parts will not be revolutionary.  

 

Even so, these CPUs will offer users attractive price points, with the most expensive model said to cost less than £100. Sadly, final pricing has not been confirmed at this time, though these CPUs are expected to hit retail shelves within the next few weeks.  

 

 

AMD 7th Generation Bristol Ridge Processors

SKU            Modules/Threads  CPU Base/Turbo (MHz)  GPU        GPU Cores  GPU Base/Turbo (MHz)  TDP
A12-9800       2M / 4T          3800 / 4200           Radeon R7  512        800 / 1108            65W
A12-9800E      2M / 4T          3100 / 3800           Radeon R7  512        655 / 900             35W
A10-9700       2M / 4T          3500 / 3800           Radeon R7  384        720 / 1029            65W
A10-9700E      2M / 4T          3000 / 3500           Radeon R7  384        600 / 847             35W
A8-9600        2M / 4T          3100 / 3400           Radeon R7  384        655 / 900             65W
A6-9550        1M / 2T          3800 / 4000           Radeon R5  384        800 / 1108            65W
A6-9500        1M / 2T          3500 / 3800           Radeon R5  384        720 / 1029            65W
A6-9500E       1M / 2T          3000 / 3400           Radeon R5  256        576 / 800             35W
Athlon X4 970  2M / 4T          3800 / 4000           N/A        —          —                     65W
Athlon X4 950  2M / 4T          3500 / 3800           N/A        —          —                     65W
Athlon X4 940  2M / 4T          3200 / 3800           N/A        —          —                     65W

 

  

 

Being based on Excavator, these CPUs are not expected to offer revolutionary performance, though they will certainly offer a lot of value for money when building basic office or web-browsing machines. It will be interesting to see how these APUs perform in modern games, though they certainly will not compete with any modern CPU + dedicated GPU setup. 

 

You can join the discussion on AMD’s Bristol Ridge APUs coming to retail shelves on the OC3D Forums

 


Socket FP2

Socket FP2, or µBGA-827, is a processor socket for notebooks, released by AMD in May 2012 with the APU processors codenamed Trinity and Richland.

Trinity APUs integrate Northern Islands (VLIW4 TeraScale) graphics with UVD 3 and VCE 1 video acceleration, plus AMD Eyefinity-based multi-monitor support for up to two non-DisplayPort or up to four DisplayPort monitors.

  • Socket FS1 Design Specification

AMD Raven Ridge: No More Budget Graphics Cards?

In the stream of announcements from AMD at CES 2018, the presentation of new desktop processors from the Raven Ridge family deserves special attention. These are the first Ryzen hybrid chips to feature Radeon RX Vega integrated graphics. At the start, the manufacturer offered two processor models for the Socket AM4 desktop platform: the Ryzen 5 2400G and Ryzen 3 2200G. Sales officially begin on February 12, but the information already available is enough to quickly evaluate the capabilities of Raven Ridge. The senior model, the Ryzen 5 2400G, runs its x86 cores at 3.6/3.9 GHz and is equipped with a GPU with 11 Compute Units (CUs) containing 704 shader units with a peak operating frequency of 1250 MHz.
In turn, the Ryzen 3 2200G runs its x86 cores at 3.5/3.7 GHz, and its graphics module includes 8 CUs with 512 shader units operating at frequencies up to 1100 MHz. Both processors are equipped with 4 MB of L3 cache and have a TDP of 65 W. The Ryzen 5 2400G is priced at $169, while the Ryzen 3 2200G has an MSRP of $99.

It is curious that the table of chip price adjustments in the manufacturer’s presentation does not list the Ryzen 5 1400 (4/8; 3.2/3.4 GHz) or the Ryzen 3 1200 (4/4; 3.1/3.4 GHz). Does this mean the new APUs will nominally replace these models? In all likelihood, yes.

                      Ryzen 5 2400G                       Ryzen 3 2200G                      Ryzen 5 1400  Ryzen 3 1200
Family                Raven Ridge                         Raven Ridge                        Summit Ridge  Summit Ridge
Production technology 14 nm                               14 nm                              14 nm         14 nm
Cores/threads         4/8                                 4/4                                4/8           4/4
Base/boost clocks     3.6/3.9 GHz                         3.5/3.7 GHz                        3.2/3.4 GHz   3.1/3.4 GHz
L2 cache              4×512 KB                            4×512 KB                           4×512 KB      4×512 KB
L3 cache              4 MB                                4 MB                               8 MB          8 MB
Memory support        DDR4-2933                           DDR4-2933                          DDR4-2666     DDR4-2666
Integrated graphics   Radeon RX Vega 11 (11 CU, 704 SPs)  Radeon RX Vega 8 (8 CU, 512 SPs)   —             —
Processor socket      Socket AM4                          Socket AM4                         Socket AM4    Socket AM4
Unlocked multiplier   +                                   +                                  +             +
TDP                   65 W                                65 W                               65 W          65 W
Launch price          $169                                $99                                $169          $109

Even if AMD does not formally remove these positions from its price list, this will happen naturally. In terms of computing performance, the Ryzen 5 2400G and Ryzen 3 2200G are at least as good, and in most cases will be faster due to their increased clock speeds.

In general, AMD does not try to hide this. Even the presentation slides reflect the balance of power between the new APUs and the base Summit Ridge chips in multi-threaded computing tasks. However, there are nuances. Recall that Raven Ridge processors carry 4 MB of L3 cache, while the 4-core Summit Ridge parts have twice as much, 8 MB. For some types of tasks the larger cache will surely play an important role. Still, without additional price adjustments (Ryzen 5 1400 at $169, Ryzen 3 1200 at $109), the junior Summit Ridge parts have no chance.

Radeon RX Vega 11 and Radeon RX Vega 8 graphics

A clear advantage of the new models is the integrated Radeon RX Vega graphics. For general-purpose systems, a processor with an integrated GPU is an opportunity to do without a discrete graphics card, which is a direct saving when assembling a PC.
While Raven Ridge's computing performance is more or less clear from the outset, the capabilities of the integrated graphics are of maximum interest, and here too the manufacturer provides initial guidance.

The Ryzen 5 2400G with its integrated Radeon RX Vega 11 graphics core looks very good against the Core i5-8400 with integrated Intel UHD 630: in the manufacturer's charts we see a 2-3x advantage for Raven Ridge. Given the capabilities of the Intel UHD 630, that puts Radeon RX Vega 11 performance close to discrete graphics cards of the GeForce GT 1030 level. Early comparisons have pitted Core i5-7600/i7-7700 platforms with a GeForce GT 1030 against Ryzen 5 2400G-based platforms using integrated graphics in 3DMark 11 Performance, and graphics performance turned out to be similar at all the graphics stages of the benchmark. However, the operating frequencies of the RAM and the integrated GPU in that comparison remain behind the scenes, so the figures can only be used as rough estimates, and performance in a synthetic test does not always reflect the state of affairs in real games.

AMD also claims the same performance level, assuring that in 3DMark Time Spy the Ryzen 5 2400G will deliver results comparable to a Core i5-8400 + GeForce GT 1030 bundle. The manufacturer is being somewhat cunning with this example: a Core i3-8100 combined with the same graphics card would certainly have delivered similar test performance, but then the price difference would not have looked so impressive.

Anyway, GT 1030-level integrated graphics in a processor with a TDP of 65 W is very respectable. Of course, even for a basic gaming system this is still not quite enough, but such built-in GPU performance delivers an acceptable frame rate even in new titles and feels quite comfortable in most games from three to five years ago.

When using integrated graphics, remember that the memory for textures and other GPU data lives in system RAM. For a PC with 8 GB, this is an additional burden: that amount is already barely enough for new games, and if another 1.5–2 GB is set aside for the integrated video core, hitting the swap file cannot be avoided. In practice, though, a system relying on integrated graphics will most likely run at the most sparing low-quality settings or launch older titles that use memory more economically.
The graphics capabilities of the Ryzen 3 2200G are somewhat more modest due to the reduced number of compute units and lower clock speeds. We will not rely entirely on AMD's data, because the figures presented raise questions; for now, let's just assume the Radeon RX Vega 8 is 1.5-2 times faster than the Intel UHD 630. Also quite good.

For the new chips, AMD has not only retained the ability to overclock the x86 cores but also lets you experiment with the operating frequency of the GPU, and there looks to be plenty of potential for extra speed here as well. Support for high-speed memory modules should also improve matters: for integrated graphics, RAM bandwidth is very important and has a significant impact on performance, and a dual-channel configuration with a pair of memory modules is simply a must. As the slide shows, overclocking the integrated video core and using DDR4-3600 (wow!) instead of DDR4-2400 can improve graphics performance by almost 40%, at least in the 3DMark Fire Strike test. And if CPU overclocking is added on top… In general, the field for experiments here is very large.
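
A rough back-of-the-envelope calculation shows why faster memory matters so much to an iGPU. This is a raw peak-bandwidth estimate only (8 bytes per transfer per 64-bit channel); the ~40% figure quoted above is a measured benchmark gain, not this theoretical number:

```python
# Peak dual-channel DDR4 bandwidth: transfers per second times 8 bytes
# per 64-bit channel, times the number of channels.

def ddr4_bandwidth_gbs(mega_transfers: int, channels: int = 2) -> float:
    bytes_per_transfer = 8  # each DDR4 channel is 64 bits wide
    return mega_transfers * 1e6 * bytes_per_transfer * channels / 1e9

bw_2400 = ddr4_bandwidth_gbs(2400)  # 38.4 GB/s
bw_3600 = ddr4_bandwidth_gbs(3600)  # 57.6 GB/s
extra = bw_3600 / bw_2400 - 1       # +50% raw bandwidth for the iGPU to share
```

Since the iGPU shares this bandwidth with the CPU cores, a 50% raw-bandwidth uplift translating into roughly 40% more graphics performance is plausible.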

At first glance, the Ryzen 5 2400G and Ryzen 3 2200G are quite curious and viable. The older model attracts with slightly better computing performance and integrated graphics comparable to entry-level discrete cards. The Ryzen 5 1400 is definitely not the most popular model in the Ryzen family: those shopping around $170 are more tempted to stretch to the Ryzen 5 1600 ($219, or $189 after the price cut) and get a 6-core, 12-thread CPU. The powerful integrated graphics will be an argument for those choosing a chip for an all-in-one system in this category, although I would still have liked to see the Ryzen 5 2400G priced at $150.

The quad-core Ryzen 3 2200G is certainly interesting at $99, as is its overall combination of cost and features. The Ryzen 3 1200 costs $109, yet the 2200G is a faster processor out of the box, adds a good integrated GPU, and costs less. There is simply no contest.

With the arrival of the junior Ryzen APUs, the Bristol Ridge family drops into the lowest segment, where performance requirements are modest and price is the determining factor. The cost of the older A12/A10 processors will surely be revised further (A12-9800 at $99, A10-9700 at $79).

Prospects

The further development of the Raven Ridge line will be interesting to watch. We will surely see economical versions of the presented models with reduced CPU/GPU frequencies and a declared 35 W thermal package, but in the next six months the manufacturer is unlikely to offer even cheaper chips in this series. The lower segment of the APU market will be covered by Bristol Ridge, while the Ryzen badge is now worn by processors starting at $99, and the Ryzen 3 2200G cannot be simplified much further without noticeable losses. With the A12/A10/A8/A6 parts still around, few desktop users would be interested in, say, dual-core Zen chips with weaker graphics, except perhaps as a replacement for the A6-9500 at the same money ($47).
Six- and eight-core processors with a relatively powerful video core would surely find buyers. As Intel's practice has shown, an integrated GPU is never superfluous: owners of powerful CPUs do not always need discrete graphics, but with the classic Ryzen parts there is no choice. Alas, we will not see chips with more than four x86 cores yet. The "large" Ryzen die (Summit Ridge) contains 8 compute cores with 16 MB of L3 and requires about 200 mm².

The die of the 4-core Ryzen Mobile chips with integrated Vega graphics has similar dimensions, and about a third of the area is occupied by the GPU compute units alone (blue, on the right). The desktop Raven Ridge is structurally very similar; the differences are only in frequencies, voltages, and which modules are enabled. So if you imagine adding a GPU block with its supporting logic to the 8-core Summit Ridge, the total die area would clearly have to grow by about one and a half times. AMD will certainly not go to such expense.

Perhaps a successful transition to a 12-nanometer manufacturing process will open up additional possibilities, but more likely, chips combining a GPU with a larger number of cores will appear only in the next generation of APUs. Even in its current form, though, Raven Ridge is interesting in its own way, so we await the chance to get to know the Ryzen 5 2400G and Ryzen 3 2200G better.

Overclocking AMD Processors: Frequencies, Terms, and Step-by-Step Guides for Athlon, Phenom, and Ryzen

The idea of overclocking a computer occurs to almost any user, but is it worth it? Overclocking a video card is routine for most users; the processor is a different matter, partly because you can lose more than you gain, especially when overclocking causes a large rise in core temperature. But software keeps evolving while the hardware's technical parameters stay fixed. Older processor models are in dire need of overclocking, since the latest drivers cannot work miracles. Proper overclocking can.

When overclocking is required

For hardware released after 2021, the procedure may not be necessary. Slow data processing and general lags and freezes do not always depend on the processor; before overclocking, rule out other factors that could be slowing the PC. If the slowdown was not caused by a lack of clock speed, the procedure will only exacerbate the problem and lead to early wear. The latest processor models do not need overclocking at all; they are already capable of a lot. It is also worth checking whether overclocking is possible in principle on your machine: if the motherboard chipset was not designed with it in mind, it is better to forget about overclocking, though most motherboards do not block it.


Frequencies and terms

The frequencies involved in processor operation have different designations. For proper overclocking, you need to understand which function each frequency performs and what it is called; confusing them can seriously damage the PC.

  1. CPU frequency. The frequency of the core itself, also called CPU clock speed or CPU speed. This is the rate at which the central processing unit executes instructions, the value quoted in product catalogs, and the figure raised during overclocking to increase overall performance.
  2. Base frequency. Also called the reference frequency; 200 MHz by default. It participates in the formulas from which the other frequencies are calculated.
  3. HyperTransport frequency. The clock of the link between the CPU and the chipset. It should not exceed the northbridge frequency (as a rule, the two are equal).
  4. Northbridge frequency. Can be greater than or equal to the HyperTransport frequency, but never less. Raising it improves memory-controller performance.
  5. DRAM frequency, also known as memory speed/frequency, measured in MHz. This clocks the memory bus.
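
As a sketch, the relationships in the list above reduce to simple arithmetic on the 200 MHz reference clock. The multipliers below are illustrative values for this example, not read from any real CPU:

```python
# Each clock domain runs at the reference clock times its own multiplier.
REF_CLOCK_MHZ = 200  # the base ("reference") frequency

def derived_clock_mhz(multiplier: float) -> float:
    """Derive a clock domain's frequency from the reference clock."""
    return REF_CLOCK_MHZ * multiplier

cpu_clock = derived_clock_mhz(17.5)  # core clock: 3500 MHz
nb_clock = derived_clock_mhz(10)     # northbridge clock: 2000 MHz
ht_clock = derived_clock_mhz(10)     # HyperTransport link clock: 2000 MHz

# Rule from the list above: HT must not run above the northbridge clock.
assert ht_clock <= nb_clock
```

This is also why raising the reference clock overclocks everything at once: CPU, northbridge, HyperTransport, and memory all scale with it unless their multipliers are lowered to compensate.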

AMD graphics card cooler control with MSI Afterburner

Readers immediately pointed me to the existing MSI Afterburner program, which handles all of this sensibly. It turns out this miracle of programming thought really does work great, and judging by available information, the software was even written on the engine of the aforementioned RivaTuner.

You can download it, say, from this link. Installation is extremely simple and I will not dwell on it (after installation and/or at first start it may ask to restart the computer, which is useful and highly recommended).

The program is easy to operate and very pleasant, with several sliders in the control window on the left and a monitoring window on the right. Among them is a fan-speed slider whose control can be switched from automatic to manual. Most likely, the first time you enable this feature you will be asked whether you want to go to the advanced graph to adjust rotation speed versus temperature. Agree. If you refused, or this did not happen, use the "Settings" button and the "Cooler" tab (it appears after pressing "Settings").

In the window that appears, set the parameters that suit you, remember to tick the "Enable user auto mode" box, and enable autoloading of the program at system startup, which is done on the main settings tab.
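
The rotation-speed-versus-temperature graph amounts to a piecewise-linear fan curve, which can be sketched like this. The (temperature, duty %) points are made-up examples, not Afterburner's defaults:

```python
# A minimal fan-curve sketch: interpolate fan duty between user-chosen
# (temperature °C, fan duty %) points, flat below the first point and
# flat-out beyond the last.

CURVE = [(30, 20), (50, 20), (70, 60), (85, 100)]  # (°C, duty %)

def fan_duty(temp_c: float) -> float:
    """Piecewise-linear interpolation along the fan curve."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return CURVE[-1][1]  # beyond the last point: run flat out
```

Note that this example curve keeps the fan flat at 20% duty until 50 °C, in line with the general tip at the end of this guide about not ramping the fan below that temperature.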

And that's all there is to it. Use it in good health.

Overclocking Athlon

There are various working methods for overclocking Athlon chips (of varying complexity for the user and the PC itself), but AMD has finally addressed ordinary users' problems and released a program for overclocking its temperamental products: AMD OverDrive. This is the safest and easiest route, since most modern chipsets are supported and the interface is simple and straightforward.

After installation, you only need:

  1. Activate Performance Control.
  2. Select the "Select all Cores" option and move the slider labeled "CPU Core 0 Multiplier". The current speed (which changes as you move it) is displayed under Current Speed.
  3. Watch the current processor temperature and repeat the small increases gradually, in small steps. The maximum allowable overclock should not push heating above 60 degrees; it is best to move the slider little by little.
  4. Voltage adjustment. Changing the frequency alone is not enough; for stable operation, the voltage must be adjusted too, by moving the CPU VID control. If you do not raise the voltage, overclocking will lead to emergency system shutdowns.

After each movement of the slider, evaluate the computer's stability by more than temperature alone: the built-in Performance Control/Stability Test is suitable, or you can run tests in AIDA64 or Prime95.
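
The stepwise procedure above can be sketched as a loop: raise the multiplier a little, nudge the voltage if the system is unstable, and back off once the core passes 60 °C. Here `read_temp`, `set_multiplier`, `set_voltage`, and `stability_test` are hypothetical stand-ins for whatever tool actually does the work (AMD OverDrive, AIDA64, Prime95), not real APIs:

```python
# Sketch of the step-by-step overclock described above. All four callbacks
# are hypothetical placeholders, injected so the logic can be tested.

TEMP_LIMIT_C = 60  # the guide's ceiling: heating must stay below 60 degrees

def step_overclock(read_temp, set_multiplier, set_voltage, stability_test,
                   start_mult=17.0, step=0.5, volts=1.30, max_volts=1.45):
    """Raise the multiplier in small steps until heat or instability stops us."""
    mult = start_mult
    while True:
        set_multiplier(mult + step)
        if read_temp() > TEMP_LIMIT_C:
            set_multiplier(mult)          # too hot: keep the last good step
            return mult, volts
        if not stability_test():
            if volts + 0.025 > max_volts:
                set_multiplier(mult)      # no voltage headroom left
                return mult, volts
            volts += 0.025                # nudge CPU VID and retry this step
            set_voltage(volts)
            continue
        mult += step                      # stable and cool: lock in this step
```

The loop encodes the two exit conditions from the steps above: the 60-degree ceiling and running out of safe voltage headroom.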
Overclocking through the BIOS is a simple way to accelerate the processor without loading Windows. The main condition is that the motherboard must support the procedure. Regardless of the BIOS type, the basic sequence of actions does not change; the differences are only in the interface.

The second condition is that the BIOS should be on the latest firmware version. Difficulties here are mostly practical: flashing the BIOS calls for a backup power source. You can take the risk and flash without one, but if the voltage sags, a breaker trips, or the power simply cuts out mid-process, the computer becomes a brick, unable to perform its basic startup routines. Attempting to overclock on an outdated BIOS version often results in hardware wear, critical errors, or, at best, no overclocking at all.

The following steps are followed:

  1. To speed up the core, enter the BIOS and adjust the value in the Frequency column. Increasing it by 100 MHz (for example, from 3500 to 3600) is enough. This is the final frequency.
  2. The CPU Ratio and BCLK Frequency fields show the multiplier value and the bus frequency, respectively. Changes must follow the formula "final frequency = multiplier × bus".
  3. To check the effect of the changes, save them before restarting. After booting, run a test: you can launch a demanding game, but it is more convenient to use utilities such as AIDA64 or Prime95.
  4. Voltage adjustment. Changing frequencies in a utility or in the BIOS affects the system the same way, and it will most likely crash to a blue screen. This is normal: starved of power, the system either resets the BIOS to default settings or performs an ordinary emergency shutdown. Either way, it is "treated" in the BIOS's Voltage column: raise the voltage slightly and test again until you reach a stable value.

The advantage of overclocking an AMD processor through the BIOS is its safety: even if you hit a critical error, resetting the settings to their defaults is a matter of twenty seconds. By spending time on the selection of settings, you can ensure safe overclocking.
An overclocking program, and sometimes even the BIOS, will not help if the installed processor is a Duron or Athlon (Thunderbird). Hardware of this type requires a 462-pin socket on the motherboard; this PGA socket fits both types, which differ only in the size of their L2 cache.

Otherwise the processors are similar, and difficult overclocking is a common problem for both. The processor socket is not adapted to resistor modifications, which limits overclocking. Acceleration is done by raising the bus frequency; depending on the chipset, this option may be available in the BIOS (but very rarely). In this case, a voltage increase of more than 10% is unacceptable. Trying to overclock processors of this type yourself without the necessary options is not worth it: there is a risk of doing damage rather than making improvements.

There are no utilities for full-fledged, across-the-board overclocking of these processors; their design simply does not allow it. Some enthusiasts speed these models up through patient hardware modification, soldering iron in hand. For an amateur user, such overclocking will be an impossible task.

How to use the program

When you start AMD OverDrive, the following menu appears on the screen.

This window provides basic information on all the computer's components. On the left side there is a menu for moving to other subsections. To overclock the processor, go to the Clock/Voltage tab, since all further actions take place there.

The standard overclocking mode simply involves shifting the slider next to the desired parameter to the right.

For users with Turbo Core technology activated, you must first press the green "Core Control" button. A menu will open in which you need to check the "Enable Turbo Core" box, and then start overclocking.

The principle of overclocking the processor is the same as that of the video card. Some recommendations:

  1. Move the slider smoothly, without big steps, and save each change by pressing "Apply".
  2. After each action, it is recommended to test system stability.
  3. Monitor the heating temperature (via the "Status Monitor" → "CPU Monitor" tab).
  4. When overclocking, do not aim for the extreme right position of the slider. In some cases that point will not increase performance at all and will simply harm the computer; sometimes the first step is enough.

You can also select «CPU Core 0 Multiplier», which lets you raise the multiplier for all cores at once.

Overclocking Phenom

Phenom processors overclock well with AMD OverDrive, with rare exceptions, and the procedure follows the same algorithm. It only makes sense to overclock Phenom II processors: the first generation, even at the maximum available overclock, gives no noticeable performance improvement; it is hopelessly outdated. Second-generation chips have real potential: they are competitive on their own, and when overclocked they outperform the Intel Core 2 Quad, although they still do not reach Core i7 level.

When improving a Phenom, keep in mind that the core will heat up considerably, so before overclocking make sure the cooling works properly. The sequence of steps for overclocking Athlon and Phenom chips is the same.

The main caveat is that although the core can be pushed to just under 4 GHz, above 3.8 GHz the Cool’n’Quiet option is disabled. This makes the chip run very hot, so cooling is critical when raising the performance of Phenom processors. The new cooling system should act as directly as possible on the core itself, and the motherboard should have its own cooling so that overheating components do not cause errors.

Phenom products are in high demand among AMD buyers: despite the overheating problems, overclocking «Phenoms» lets you squeeze out maximum performance.

General tips

  1. If you have used one of the programs to adjust cooler behavior, set the temperature at which the fan speed ramps up to no higher than 50 °C. Raising the fan speed below this temperature is wasteful: up to 50 degrees the system runs normally, and a higher speed only increases the computer’s power consumption and wears out the cooler prematurely.
  2. Before overclocking, clean and lubricate the fan and replace the thermal paste. If you raise the fan speed without doing this, the device will most likely burn out or simply break.
  3. Check which connector the cooler uses to connect to the computer or laptop. If it is a 3- or 4-pin connector, download the control software; if it is a two-pin connector, only mechanical adjustment is possible.

Download software only from the official websites of the manufacturer of your personal computer — this is a guarantee that you will not catch a virus. It also increases the chance of utility compatibility with your PC.

Frequent problems

When using the program, there are two main problems:

  • AMD OverDrive does not detect the processor;
  • it does not overclock the processor.

Often the problem lies in the preparatory stage: either the processor is not on the supported list, or not all BIOS settings have been configured. Updating the motherboard drivers also helps; in most cases a driver update solves the problem.


Heterogeneous System Architecture

Heterogeneous System Architecture (HSA) is a cross-vendor set of specifications that allows the integration of CPUs and GPUs on the same bus, with shared memory and tasks. [1] HSA is being developed by the HSA Foundation, which includes (among many others) AMD and ARM. The platform’s stated goal is to reduce communication latency between the CPU, GPU, and other computing devices, and to make these various devices more compatible from a programmer’s point of view, [2] :3 [3] relieving the programmer of the task of scheduling data movement between non-overlapping device memory regions (which currently has to be done with OpenCL or CUDA). [4]

CUDA and OpenCL, as well as most other sufficiently advanced programming environments, can use HSA to improve execution performance. [5] Heterogeneous computing is widely used in system-on-a-chip devices such as tablets, smartphones, other mobile devices, and game consoles.


    The stated aim of HSA is to ease the burden on programmers when offloading computation to accelerators. Originally developed exclusively by AMD and called FSA, the idea was later broadened to cover processing units other than GPUs, such as DSPs from other vendors.

    Modern GPUs are very well suited to single instruction, multiple data (SIMD) and single instruction, multiple threads (SIMT) workloads, while modern CPUs remain optimized for branch-heavy, latency-sensitive code.

    Overview

    HSA aims to make heterogeneous computing more common. Heterogeneous computing refers to systems that contain multiple processing units: central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), or any type of application-specific integrated circuit (ASIC). The system architecture allows any accelerator, such as a GPU, to operate at the same processing level as the system’s CPU.

    Among its main features, HSA defines a unified virtual address space for computing devices: where GPUs traditionally have their own memory separate from main (CPU) memory, HSA requires these devices to share page tables so that they can exchange data by sharing pointers. This must be supported by special memory management units. [2] :6–7 To enable interoperability, and to simplify various aspects of programming, HSA is designed to be ISA-agnostic for both CPUs and accelerators, and to support high-level programming languages.
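As a loose analogy for the shared-pointer idea above (this is plain Python, not a real HSA or OpenCL API; the function names are invented), compare handing a reference to one buffer against copying data into a separate "device memory":

```python
# With a shared address space, "CPU" and "GPU" code can exchange a
# pointer to one buffer (zero-copy); without it, data must be duplicated
# into device memory and copied back. Illustrative names only.

def gpu_kernel_shared(buf: memoryview) -> None:
    """Operates in place on the same memory the CPU allocated."""
    for i in range(len(buf)):
        buf[i] *= 2

def gpu_kernel_discrete(data: bytes) -> bytes:
    """Must copy data into 'device memory' and copy the result back."""
    device_copy = bytearray(data)      # host -> device copy
    for i in range(len(device_copy)):
        device_copy[i] *= 2
    return bytes(device_copy)          # device -> host copy

host = bytearray([1, 2, 3])
gpu_kernel_shared(memoryview(host))    # no copies: the pointer is shared
assert host == bytearray([2, 4, 6])

result = gpu_kernel_discrete(bytes([1, 2, 3]))  # two copies needed
assert result == bytes([2, 4, 6])
```

The first path is what shared page tables enable: both sides see the same bytes, so nothing moves. The second path models the traditional discrete-GPU flow that HSA is designed to avoid.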

    So far, the HSA specifications cover:

    HSA Intermediate Layer

    HSA Intermediate Layer (HSAIL), a virtual instruction set for parallel programs

    • similar in spirit to the LLVM intermediate representation and to SPIR (used by OpenCL and Vulkan)
    • finalized to a specific instruction set by a JIT compiler
    • defers the decision of which core(s) run a task until runtime
    • explicitly parallel
    • supports exceptions, virtual functions, and other high-level features
    • debugging support

    HSA memory model

    • compatible with the C++11, OpenCL, Java, and .NET memory models
    • relaxed consistency
    • designed to support both managed languages (e.g. Java) and unmanaged languages (e.g. C)
    • will greatly simplify the development of third-party compilers for a wide range of heterogeneous products programmed in Fortran, C++, C++ AMP, Java, etc.

    HSA dispatcher and runtime

    • designed to provide heterogeneous task queuing: a work queue per core, runtime work queuing, and load balancing by work stealing
    • any core can schedule work for any other core, including itself
    • significant reduction in kernel-scheduling overhead
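The queuing model in the list above — a queue per core, any-to-any dispatch, and work stealing — can be sketched in a few lines. This is a deliberately single-threaded illustration with invented names, not the actual HSA runtime, which does this in hardware and driver code:

```python
# Minimal sketch of per-core work queues with any-to-any dispatch and
# work stealing, executed sequentially for clarity.
from collections import deque

class Core:
    def __init__(self, name: str):
        self.name = name
        self.queue = deque()

    def dispatch(self, target: "Core", task) -> None:
        """Any core can schedule work for any other, including itself."""
        target.queue.append(task)

    def run_one(self, peers: list):
        if not self.queue:                 # idle: steal from a busy peer
            for peer in peers:
                if peer.queue:
                    self.queue.append(peer.queue.popleft())
                    break
        return self.queue.popleft()() if self.queue else None

cpu, gpu = Core("cpu"), Core("gpu")
cpu.dispatch(gpu, lambda: "render")        # CPU queues work on the GPU
cpu.dispatch(cpu, lambda: "physics")       # ...and on itself
print(gpu.run_one([cpu]))                  # -> render
print(gpu.run_one([cpu]))                  # gpu idle -> steals "physics"
```

Work stealing is what gives the load balancing: an idle unit pulls a pending task from a loaded one instead of waiting for a central scheduler.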

    Mobile devices are one application area for HSA, where it provides improved energy efficiency. [6]

    Block diagrams

    The following block diagrams provide general illustrations of how HSA works and how it compares to traditional architectures.

    • Standard architecture with a discrete GPU attached to the PCI Express bus. Zero copy between GPU and CPU is not possible due to different physical memory.

    • HSA provides unified virtual memory and makes it easy to pass pointers over PCI Express instead of copying all data.

    • With partitioned main memory, one portion of system memory is allocated exclusively to the GPU. As a result, zero-copy operation is not possible.

    • Unified main memory, made possible by combining a GPU and an HSA-capable CPU. As a result, zero-copy operations are possible. [8]

    • Both CPU MMU and GPU IOMMU must meet HSA hardware specifications.

    Software Support

    AMD GPUs contain certain additional function blocks for use within the HSA. On Linux, the amdkfd kernel driver provides the necessary support. [9] [10]

    Some of the HSA-specific features implemented in the hardware must be supported by the operating system kernel and specific device drivers. For example, support for AMD Radeon and AMD FirePro graphics cards and for APUs based on Graphics Core Next (GCN) was merged into mainline Linux kernel version 3.19, released on February 8, 2015. [10] Programs do not interact with amdkfd directly, but queue their jobs using the HSA runtime. [11] This very first implementation focuses on the «Kaveri» and «Berlin» APUs and works alongside the existing Radeon kernel graphics driver.

    In addition, amdkfd supports heterogeneous queuing (HQ), which aims to simplify the distribution of computational jobs among multiple CPUs and GPUs from a programmer’s point of view. Support for heterogeneous memory management (HMM), suitable only for graphics hardware with version 2 of the AMD IOMMU, was accepted into mainline Linux kernel version 4.14. [12]

    Integrated HSA platform support was announced for the «Sumatra» release of OpenJDK, due in 2015. [13]

    The AMD APP SDK is AMD’s proprietary software development kit for parallel computing, available for Microsoft Windows and Linux. Bolt is a C++ templating library optimized for heterogeneous computing. [14]

    GPUOpen comprises a couple of other HSA-related software tools. CodeXL version 2.0 includes an HSA profiler. [15]

    Hardware support

    AMD

    As of February 2015, only AMD’s «Kaveri» A-series APUs (see Kaveri desktop processors and Kaveri mobile processors) and Sony’s PlayStation 4 allowed the integrated GPU to access memory via version 2 of the AMD IOMMU. Earlier APUs (Trinity and Richland) included IOMMU version 2 functionality, but only for use by an external GPU connected via PCI Express. [citation needed]

    Post-2015 Carrizo and Bristol Ridge APUs also include IOMMU version 2 functionality for the integrated GPU. [ citation needed ]

    The following table shows the features of AMD with APU (see also: List of AMD accelerated processors).

    DRM (Direct Rendering Manager) is a component of the Linux kernel. The support listed in this table refers to the most recent version.

    ARM

    ARM’s Bifrost microarchitecture, as implemented in the Mali-G71, [30] fully complies with the HSA 1.1 hardware specification. As of June 2016, ARM had not announced software support that uses this hardware feature.

    External links

    • Overview of Heterogeneous System Architecture (HSA) on YouTube, by Vinod Tipparaju at SC13, November 2013
    • HSA
      23 Boards at CES 2019


      New platform chips haven’t spurred motherboard makers to redesign their lines, but that hasn’t stopped ASUS, Gigabyte, and ASRock from rolling out intriguing, and in some cases borderline-crazy, boards.

      • By John Burek
      • January 12, 2019 11:00 am EST


      The two major PC-tech trade shows, CES and Computex, are usually where Intel and AMD launch new platform processors. And that usually means motherboard manufacturers roll out fresh motherboards based on the new chipsets at the same time.

      CES 2019 saw its fair share of processor action, especially AMD’s announcement of its third-generation Ryzen desktop processors, but their launch is still months off. That meant motherboard action was light.

      Still, that hasn’t stopped the big three board makers from showing a bunch of brand-new boards, some value-oriented (B365 and B450 chipsets), some in odd sizes, and some extremely bold (ranging from Extreme to Xtreme, Alpha to Omega). These are the most intriguing ones we’ve seen.

    • ASRock Z390M-STX MXM motherboard: Micro-STX soldiers on
    • ASRock A300M-STX motherboard: tiny Ryzen
    • ASRock B450M Steel Legend and B365 boards
    • ASUS ROG Dominus Extreme
    • ASUS ROG Dominus Extreme (RAM detail)
    • ASUS ROG Zenith Extreme Alpha
    • ASUS ROG Rampage VI Extreme Omega
    • ASUS ROG Rampage VI Extreme Omega: slots and shielding
    • Best of CES 2019
    • Gigabyte Z390 Aorus Xtreme Waterforce: the ultimate enthusiast motherboard

      Gigabyte actually rolled out this board in the final weeks of 2018, but CES was our first opportunity to eyeball its $900 beast. What you see here is an extended ATX board with a giant built-in CPU water block that also includes liquid cooling for the Z390 chipset.

      Now, you ask, why would anyone pay that kind of money for a board that isn’t even on the high-end-enthusiast X299 (Core X-Series) or X399 (AMD Threadripper) platforms? Good question, but the Aorus Xtreme Waterforce piles on everything modern motherboard design has to offer: RGB throughout, Thunderbolt 3 and 10-gigabit Ethernet (based on an Aquantia controller, backed by an Intel one), dual BIOS, and monitoring points and connectors galore.

    • Gigabyte Z390 Aorus Xtreme Waterforce: «monoblock» cooling

      Gigabyte calls the huge cooler spread across the center of the board a «monoblock», like a liquid cooler’s unibrow. It covers the CPU socket area and the surrounding power circuitry. Obviously, this board is only workable with liquid cooling, so only hardcore tweakers need apply.

    • Gigabyte Z390 Aorus Xtreme Waterforce: chipset

      Chipset cooling over this area is as much a design element as a practical necessity for most builds. Also under the cooling/radiator cover is a trio of PCI Express M.2 slots.

      Beyond that, more justification for the price tag: you get tons of potential for board-controlled cooling and lighting. Among the possible add-ons? Eight fans, a host of temperature-measurement points and sensor attachments, and four headers for RGB light strips (two of them addressable-RGB). The board also supports dual BIOS for inveterate tweakers.

    • Gigabyte Z390 Aorus Xtreme Waterforce: I/O

      Here’s a quick look at the I/O panel, which is about the busiest you’ll see on any flagship board. This is partly because Z390 boards need to support the CPU’s on-chip graphics, as opposed to the X299 or X399 platforms. Note the antenna connection points for the built-in Wi-Fi (which works with Intel’s CNVi scheme), Thunderbolt 3, and two Ethernet connectors. Also note that the I/O shield is an integral part of the board, and it glows with RGB lighting.

    • Gigabyte Z390 Aorus Xtreme: the Waterforce, minus the water

      Next to any other board, the Z390 Aorus Xtreme would be the pinnacle; only next to the Waterforce version does it look modest. This $289.99 flagship Z390 is also very good value compared with its Waterforce sibling: it offers most of the same connectivity (including 10-gigabit Ethernet, a Thunderbolt 3 port, and the integrated I/O shield), only without the built-in water block.

    • ASRock Z390M-STX MXM motherboard: Micro-STX soldiers on

      Now here is a complete departure from the above. This board uses the unusual Micro-STX motherboard format, which you’ll need to mount in Silverstone’s purpose-built Micro-STX cases. (It’s far from a common format.) But it’s intriguing in that the platform here has already advanced to Z390; at this time last year, it had just moved to Coffee Lake/Z370. This extremely niche platform soldiers on, now with support for Core i5, i7, and i9 processors in the 9th Generation line.

      The MXM part of the name, meanwhile, refers to the video cards this board accepts. MXM is a standard for modular laptop graphics and is essentially a stripped-to-the-PCB mini version of a desktop graphics card. In an unusual arrangement, ASRock will sell you the MXM module with the purchase of the board, as these modules are difficult for end users to obtain on their own. You can get it in GTX 1060 and GTX 1080 flavors.

    • ASRock A300M-STX motherboard: tiny Ryzen

      This is the smallest Ryzen-compatible AMD motherboard you can buy. The processor socket you see is AM4, and it works with chips up to Ryzen 5 (65W maximum). You need to pick a chip with integrated graphics, such as a «Bristol Ridge» or «Raven Ridge» APU, as there is no provision for a graphics card. Also note that the board, as shown above, uses laptop-style SO-DIMM modules, not larger desktop memory.

      Built around the basic A300 chipset, this board is also Micro-STX and focuses more on performance in a small box than on graphics. ASRock offers it in a case with a low-profile cooler option; the whole bare-bones system is dubbed the DeskMini A300 or A300W (the latter with Wi-Fi, the former without). It’s an amazingly powerful and flexible little platform, with two M.2 PCI Express slots (one on top, one on the bottom).

    • ASRock B450M Steel Legend

      The B450M Steel Legend is designed as a cost-effective AM4 Ryzen board for PC builders who aren’t interested in advanced overclocking tools but want a dandy-looking board that punches above its price. (The B450 chipset is a step down from the X470 enthusiast platform.) The board sports a nifty digital-camo pattern and nice metal heatsinks on the chipset and power-delivery components. It also comes in a microATX version.

    • ASRock B365 Phantom Gaming series

      In the same vein as the B450M Steel Legend, here we have the new Phantom Gaming B365 line, in microATX and Mini-ITX forms from left to right. The B365 is a minor evolution of the budget B360 chipset and works with the latest generation of Intel chips. The B365/B360 difference comes down to the manufacturing process Intel shifted these chipsets to last year.

    • ASRock B365 Pro4 series

      The B365 chipset is also found in these new Pro4 boards. (ASRock offers Pro4 boards in Z390-chipset form for the same Intel processors.) More conservative than the Phantom Gaming boards, these B365-based models are for mainstream users and casual builders who don’t plan to overclock.

    • ASUS ROG Dominus Extreme

      Now here is the monster mobo that made the rounds at CES! This 14-by-14-inch monster of a board is designed for the 28-core Intel Xeon processor teased in mid-2018. Details of the Xeon W-3175X chip, as well as official pricing, remain undisclosed, despite some apparent pricing leaks in the $4,000-to-$5,000 zone.

      This board has been shown at several venues while details about the chip remain under wraps. Needless to say, given the huge banks of DIMM slots and the profusion of power-delivery real estate (note the size of the heat-dissipation hardware along the top edge of the board!), this will be far from cheap.

    • ASUS ROG Dominus Extreme (RAM detail)

      Here is a close-up of the 12 RAM banks from a different angle. Notice also the status screen that decorates the top of the I/O area. This board will support a staggering 192GB maximum of memory across its banks, and the PCI Express slots are spaced to accept up to four dual-slot cards. You might not think there’s much room for other connections on board, but there is: two U.2 ports for storage drives, plus support for two «DIMM.2» modules (ASUS-specific vertical riser boards that each mount two M.2 SSDs), among much more.

    • ASUS ROG Zenith Extreme Alpha

      Could this be the beginning of the end for high-end X299 and X399 boards? The ROG Zenith Extreme Alpha clearly aims to be the first and last word in high-end desktop (HEDT) boards. It is one of a pair of boards dubbed Alpha and Omega.

      The Alpha board is built for X399/AMD Threadripper and packs features designed for extreme overclockers, including the competitive liquid-nitrogen crowd, as well as a software suite that includes an auto-tuning overclocking mode that spares you tedious manual trial and error. The board has additional active fans for the power-delivery components, which matters in liquid-cooled system designs that reduce the actual airflow around the CPU area. You also get a fan-extension card that adds a bunch of extra temperature headers and fan connectors for really extreme buildouts.

    • ASUS ROG Rampage VI Extreme Omega

      Of course, where there is an Alpha, there is an Omega. In the Intel X299 flavor of this board, the feature set is much the same, with the expected differences in CPU socket and PCI Express lanes. In essence, as we put it when naming these boards our best motherboards of the show, they are not twins but «brothers from another motherboard».

    • ASUS ROG Rampage VI Extreme Omega: slots and shielding

      One thing to note is that the Omega has one fewer PCI Express x16 slot. Otherwise, the boards are very similar, with slight differences in SATA ports, M.2 slot count, and USB ports. Both boards should roll out in Q1. If they turn out to be the last word for X299 and X399 HEDT, the platforms will go out in style, rest assured.

    • Best of CES 2019

      For the very best of everything, check out our picks for the best in the show.


    NVIDIA G-Sync vs. AMD FreeSync: What Is the Best Variable Refresh Rate Solution?


    When it comes to gaming, nothing beats the PC Master Race. Let me explain. The level of customizability, as well as the raw power that a well-designed gaming rig can achieve, is something one can only dream of. That said, even PC games are prone to certain problems that can ruin the gaming experience. If you are an avid gamer or someone who follows gaming forums closely, you must have heard of one of the biggest problems for any gamer: screen tearing. While there is a traditional solution in the form of V-Sync, newer technologies have led to other solutions in the form of NVIDIA G-Sync and AMD FreeSync. Today, we’ll pit these two against each other, G-Sync vs FreeSync, to see which one comes out on top. But first, let us shed some light on exactly what the problem is.

    What is screen tearing?

    If you’ve played on a rig without a very capable monitor, you’ve probably come across the annoying phenomenon of screen tearing. Screen tearing is an artifact that occurs when parts of two or more video frames are displayed together in a single screen refresh, producing a torn look. You see, as GPUs become more and more powerful, they will want to push as many frames as possible in the shortest amount of time. That sounds great, but if your monitor’s refresh rate is fixed at, say, 75Hz, the extra frames arrive before the monitor is ready for them.

    For example, imagine you are playing a game on a GPU pushing 100 frames per second to that monitor. The monitor updates 75 times per second, but the graphics card delivers 100 frames per second, 33% faster than the monitor. As a result, between two screen updates the video card renders one full frame and a third of the next one. That third of the next frame overwrites the top third of the previous frame and is then displayed on screen. The graphics card then completes the remaining two-thirds of that frame and renders the first two-thirds of the following frame before the screen refreshes again.

    You only ever see part of what is happening: part of the current frame and part of the next frame(s). As a result, the image on the screen appears divided into several pieces, which breaks the look of the game. Tearing can also occur when the system’s GPU is under heavy load, whether from intensive graphics processing or poor programming: an overloaded GPU cannot synchronize its output video, resulting in screen tearing.
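The arithmetic in the 100 fps / 75 Hz example above can be made explicit:

```python
# Quantifying the tearing example: a GPU at 100 fps feeding a 75 Hz
# monitor draws 100/75 = 1.33 frames per refresh interval, so a refresh
# can show the tail of one frame plus the top third of the next.

def frames_per_refresh(gpu_fps: float, monitor_hz: float) -> float:
    """How many frames the GPU produces per monitor refresh."""
    return gpu_fps / monitor_hz

ratio = frames_per_refresh(100, 75)
print(round(ratio, 2))              # 1.33: one frame plus a third
print(f"{(ratio - 1) * 100:.0f}%")  # the GPU runs 33% faster
```

Any ratio above 1.0 means frame boundaries land mid-refresh, which is exactly where the tear line appears.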

    V-Sync and the need for an alternative

    For any gamer, screen tearing is an unfortunate event. A perfectly rendered title can be completely ruined by rough horizontal lines and frame stutter. The developers soon realized this problem and released V-Sync. Vertical Sync or V-Sync is designed to solve the screen tearing problem by using double buffering.

    Double buffering is a technique that alleviates the tearing problem by giving the system a frame buffer and a back buffer. Whenever the monitor grabs a frame to refresh with, it pulls it from the frame buffer. The video card draws new frames into the back buffer, then copies them to the frame buffer when done. Under V-Sync’s rules, the back buffer cannot be copied to the frame buffer until the monitor refreshes. So the back buffer is filled with a frame, the system waits, and after the refresh the back buffer is copied to the frame buffer and a new frame is drawn into the back buffer, effectively capping the frame rate at the refresh rate.

    While all of this sounds good and helps eliminate screen tearing, V-Sync has its drawbacks. With V-Sync, the frame rate can only take values from the discrete set refresh/N, where N is a positive integer. For example, if your monitor’s refresh rate is 60Hz, the frame rates your system can run at are 60, 30, 20, 15, 12, and so on. As you can see, the drop from 60fps to 30fps is big. Moreover, any frame rate between 60 and 30 that your system could otherwise push gets dropped all the way to 30.
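The refresh/N behavior described above is easy to compute:

```python
# V-Sync frame-rate quantization: with double buffering the GPU must
# wait whole refresh intervals, so achievable rates are refresh/N, and
# any in-between rate falls to the next step down.

def vsync_rates(refresh_hz: int, count: int = 5) -> list:
    """The first few achievable V-Sync frame rates for a given refresh."""
    return [refresh_hz / n for n in range(1, count + 1)]

def effective_fps(raw_fps: float, refresh_hz: int) -> float:
    """Largest refresh/N step not above what the GPU can sustain."""
    n = 1
    while refresh_hz / n > raw_fps:
        n += 1
    return refresh_hz / n

print(vsync_rates(60))        # [60.0, 30.0, 20.0, 15.0, 12.0]
print(effective_fps(45, 60))  # 30.0: anything between 60 and 30 drops to 30
```

This quantization is precisely the drawback the article describes: a GPU capable of 45 fps still delivers only 30 under V-Sync on a 60Hz panel.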

    The biggest problem with V-Sync, though, is input lag. As mentioned above, with V-Sync the frames the GPU wants to push are first held in the back buffer and only moved to the frame buffer when the monitor allows it. This means that any input you give the system is also reflected only in those buffered frames. Only when those frames reach the front buffer does your input show on screen. As such, systems can experience input lag upwards of 30ms, which can really hinder your gaming experience.

    Alternatives: G-Sync and FreeSync

    You see, whether traditionally or via V-Sync, it was always the monitor causing the problems. The power always lay with the monitors, and they used it to limit the number of frames delivered to them. No matter how many software-layer changes you make, hardware will always have its limits. But what if there were another solution, one that handed the ultimate power to the GPU? Enter variable refresh rate monitors.

    As the name suggests, variable refresh rate monitors have a refresh rate ceiling but no fixed refresh rate. Instead, they rely on the GPU to set the refresh rate. This is achieved with one of two technologies: NVIDIA G-Sync or AMD FreeSync.

    NVIDIA’s G-Sync, launched back in 2013, aims to solve this problem by giving the GPU full authority over how frames reach the screen. Instead of holding a fixed refresh rate, the monitor adapts to the GPU’s output and matches its frames per second. So if you are playing a game at 120fps, your monitor also refreshes at 120Hz (120 times per second). And when graphics load is high and your GPU drops to 30fps, the monitor changes its refresh rate to 30Hz accordingly. Thus no frames are lost, data goes straight to the display, and both tearing and added input lag are eliminated.
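The matching behavior just described can be sketched as a simple clamp; the panel range numbers below are illustrative, not part of the G-Sync or FreeSync specifications:

```python
# A variable-refresh monitor follows the GPU's frame rate, clamped to
# the panel's supported range (illustrative 30-144 Hz range here).

def vrr_refresh(gpu_fps: float, panel_min: float = 30,
                panel_max: float = 144) -> float:
    """Refresh rate a VRR panel would run at for a given GPU frame rate."""
    return max(panel_min, min(gpu_fps, panel_max))

print(vrr_refresh(120))  # 120.0: the monitor refreshes in step with the GPU
print(vrr_refresh(30))   # 30.0: still matched at the low end
print(vrr_refresh(200))  # 144.0: capped at the panel's maximum
```

Inside the supported range the refresh rate equals the frame rate, which is why neither the refresh/N quantization nor the tear line of fixed-refresh operation appears.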

    NVIDIA may be the king of gaming, but its biggest competitor, AMD, is not far behind. So when NVIDIA released G-Sync, how could AMD sit still? To stay competitive, AMD introduced FreeSync, its own alternative to V-Sync. Launched in 2015, AMD FreeSync works on the same principle as NVIDIA G-Sync, letting the GPU take over and control the monitor’s refresh rate. While the goal of G-Sync and FreeSync is the same, the difference between them lies in how they achieve it.

    G-Sync vs. FreeSync: how do they work?

    NVIDIA designed G-Sync to fix problems on both ends. G-Sync is NVIDIA’s proprietary adaptive-sync technology, and it relies on a dedicated hardware module. This chip is built into every supported monitor and lets NVIDIA fine-tune behavior based on characteristics such as maximum refresh rate, IPS or TN panel type, and voltage. Even when your frame rate gets very low or very high, G-Sync can keep your game running smoothly.

    NVIDIA G-Sync module built into the monitor

    AMD’s FreeSync requires no such module. In 2015, VESA announced Adaptive-Sync as a component of the DisplayPort 1.2a specification, and FreeSync uses the DisplayPort Adaptive-Sync protocols to let the GPU control the refresh rate. Support was later extended to HDMI ports, which makes it attractive to more consumers.

    Ghosting

    In displays, «ghosting» describes an artifact caused by slow pixel response time. When the screen refreshes, the human eye still perceives the previously displayed image, causing a smearing or blurring effect. Response time is a measure of how quickly a given pixel can change from one color to another. If your display’s response time is out of step with the frames the GPU is pushing, you are more likely to see ghosting. The effect is common on LCD and other flat panels. While it isn’t screen tearing as such, ghosting isn’t far off the mark, given that new frames are superimposed on previous frames that haven’t completely faded from the screen.

    Because G-Sync relies on a dedicated hardware module, NVIDIA can prevent ghosting by tuning how the module behaves on each monitor. With AMD FreeSync, these adjustments are made within the Radeon driver itself, delegating the task to the monitor. This is hardware versus software control, and NVIDIA wins easily here: although ghosting is not common on FreeSync monitors, it is still present. G-Sync panels, each physically tuned and configured, exhibit none of these side effects.

    Flexibility

    In the pursuit of eliminating screen tearing, we decided to hand maximum control to the GPU. But, as Uncle Ben once said, «with great power comes great responsibility». In this case, the GPU takes more or less all the power away from the monitor. Consider, for example, that most monitors, beyond the usual brightness and contrast adjustments, have their own features that let the display dynamically adjust its settings based on the input it receives.

    EIZO Gaming Monitor Custom Color Settings

    Since NVIDIA G-Sync uses a dedicated proprietary module, it takes this capability away from the display, handing dynamic tuning to the GPU. AMD FreeSync, on the other hand, makes no such change and lets the screen keep its own dynamic color-adjustment features. Having their own modifications available matters to manufacturers, as it helps them gain an edge over competitors; that is why many manufacturers prefer FreeSync over G-Sync.

    G-Sync vs. FreeSync: Compatible Devices

    For any device to be compatible with G-Sync, it must have NVIDIA’s proprietary module chip embedded in its display. AMD FreeSync, on the other hand, can be used by any monitor with a variable refresh rate and a DisplayPort or HDMI port.

    However, your GPU must also be compatible with the corresponding technology (no, you can’t mix one manufacturer’s GPU with another manufacturer’s sync technology). Introduced nearly two years ahead of its competitor, NVIDIA G-Sync has quite a few GPUs under the supported tag: all mid-range and high-end GPUs from the GTX 600 series through the GTX 1000 series support G-Sync.

    In comparison, at the time of this writing, AMD supports only 9 GPUs with FreeSync technology, compared to NVIDIA’s 33 for G-Sync.

    NVIDIA G-Sync compatible GPUs (partial list):

    - GTX 600 series: GeForce GTX 670, GeForce GTX 680
    - GTX 700 series: GeForce GTX 760, GeForce GTX 770
    - GTX 900 series: GeForce GTX 965M, GeForce GTX 970, GeForce GTX 970M
    - GTX 1000 series: GeForce GTX 1060, GeForce GTX 1070, GeForce GTX 1080
    - Titan: GeForce GTX Titan X, GeForce GTX Titan Xp, GeForce GTX Titan Z
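    The compatibility rules above boil down to a simple lookup: variable refresh rate works only when both the monitor’s sync technology and the GPU come from the matching camp. A minimal sketch (the GPU sets below are abbreviated, and the AMD entries are illustrative examples, not taken from this article):

```python
# Abbreviated support lists; the real lists are longer.
GSYNC_GPUS = {"GTX 670", "GTX 680", "GTX 760", "GTX 770", "GTX 970",
              "GTX 1060", "GTX 1070", "GTX 1080", "GTX Titan X"}
FREESYNC_GPUS = {"RX 480", "RX 580"}  # illustrative AMD examples

def vrr_available(gpu: str, monitor_tech: str) -> bool:
    """Variable refresh rate works only if the GPU matches the
    monitor's sync technology -- no mixing vendors."""
    if monitor_tech == "G-Sync":
        return gpu in GSYNC_GPUS
    if monitor_tech == "FreeSync":
        return gpu in FREESYNC_GPUS
    return False

print(vrr_available("GTX 1070", "G-Sync"))    # True
print(vrr_available("GTX 1070", "FreeSync"))  # False
```

    In other words, buying a FreeSync monitor for an NVIDIA card (or the reverse) gets you an ordinary fixed-refresh display.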

    G-Sync uses additional hardware, which basically means that display manufacturers have to free up more space inside the monitor case. Although it may seem insignificant, creating a custom product design for one type of monitor greatly increases development costs. On the other hand, AMD’s approach is much more open, in which display manufacturers can incorporate this technology into their existing designs.

    To show you the bigger picture (no pun intended), LG’s 34-inch FreeSync-enabled ultra-wide monitor will cost you just $397, making it one of the cheapest ultra-wide monitors currently available, while LG’s G-Sync-enabled 34-inch variant will set you back $997. That is almost a $600 difference, which could easily be the deciding factor when making your next purchase.

    G-Sync vs. FreeSync: Best Variable Refresh Rate Solution?

    Both NVIDIA G-Sync and AMD FreeSync successfully fix the screen-tearing issue. While G-Sync is definitely more expensive, it is supported on a wider range of GPUs and also delivers zero ghosting. AMD FreeSync, on the other hand, aims to provide a cheaper alternative, and while the number of monitors that support it is quite large, not as many mainstream GPUs are supported at the moment.
