
Supermicro SYS-5038K-I-ES1 — Intel Xeon Phi x200 Developer Workstation Review

Supermicro SYS 5038K I ES1 Internal

Today we have our review of the Supermicro SYS-5038K-I-ES1 developer workstation for the Intel Xeon Phi x200 series. In terms of hours spent, this review took longer than most of our product reviews to complete. As is standard at STH, we will go through the hardware. We will also look at why the developer workstation is awesome, as we have had the chance to work with several companies who purchased them. We also have experience managing larger systems, namely the popular 4-node-in-2U systems that are the de facto standard cluster compute building blocks of today. As such, we are going to review this machine specifically in the context of a developer workstation, either next to one's desk or in a data center as a prelude to a larger cluster.

Supermicro SYS-5038K-I-ES1 – “Ninja” Developer Workstation Hardware

For some context, the Supermicro SYS-5038K-I-ES1 is a system meant to come in (configured) at just below $5,000, which is a departmental procurement threshold in many organizations. It is very much an entry point into Knights Landing without having to spend tens of thousands on a standard 2U 4-node machine. Intel and Supermicro built these systems to be office friendly and to exhibit panache when opened up. The chassis itself is based on the Supermicro CSE-GS50-000R gaming chassis. Since this is the official Intel developer workstation, the red accents have been replaced with blue.

Supermicro SYS 5038K I ES1 Front Three Quarters

Overall, the outside is subtle enough to look like a fancy office tower workstation, and there is little signage to tell an office visitor that underneath the steel is a water-cooled 64-core behemoth.

Cooling is provided by a CoolIT Systems closed-loop system with a combined pump and water block. LGA 3647 is a big change for cooling, as we highlighted in our LGA 3647 socket post. The radiator has two fans and vents out the top of the chassis. That configuration works well for desktop use but is a bit awkward when the machine is mounted on a shelf in the data center. We have seen these systems in the same data center as our DemoEval lab, and that system has since been replaced by a Xeon Phi/Omni-Path cluster. These development systems lead to clusters.

Supermicro SYS 5038K I ES1 Internal

There are two other fans to provide internal airflow, with all four being PWM-controlled fans that keep noise down. In addition, Supermicro adds sound deadening material to the chassis panel we removed, which keeps the water pump hum to a minimum.

Zooming in, the water cooler looks, well, “cool” flanked by six 16GB DDR4 RDIMM modules. If you had this in your office, it is likely to invoke Phi Envy when you open it up to show co-workers.

Supermicro SYS 5038K I ES1 CoolIT Systems And RAM

Around the CPU socket, there are six DDR4 RDIMM slots to fill the Xeon Phi x200's memory channels. The standard system comes with 16GB DDR4 RDIMMs, but you can add 32GB RDIMMs to hit 192GB of RAM. The fact that your many-core compute chip has direct access to six DDR4 channels and 96GB or 192GB of RAM is an advantage Intel touts with its Xeon Phi x200 series processors.

Supermicro SYS 5038K I ES1 Storage Bays

The chassis has a ton of drive bays. There are 6x 3.5″ internal bays, 4x 2.5″ internal bays and 2x 5.25″ external bays to go with the system's 10x SATA III connectors. We do wish a future iteration would get hot-swap external drive bays, even if that lowers the total bay count. They are significantly more convenient if you need to sneakernet large datasets from the data center to a remote office. Our advice is to use an external NAS for 3.5″ storage to keep power consumption and noise down on the KNL tower.

Supermicro SYS 5038K I ES1 SATA Ports

With a total of three PCIe slots on the internal Supermicro K1SPE motherboard, there is a decent amount of expansion opportunity onboard. The PCIe 3.0 x4 slot is perfect for an NVMe PCIe SSD. The two PCIe 3.0 x16 slots do support dual-width cards. One is likely to be used for networking. Be forewarned that if you are thinking about adding a GPU to the system, you will need to adjust the factory default BIOS settings. Options such as Above 4G Decoding need to be changed to support modern GPUs. If you try this and get a hang at DXE PCIe enumeration, you know what went wrong.

Supermicro SYS 5038K I ES1 PCIe

Power is provided by a Seasonic 600W 80 Plus Gold PSU with modular cables. The system does come with the modular cables that are not already pre-wired, in the event you want to add drives or PCIe devices that require additional power. At STH we prefer Seasonic PSUs and use them in virtually every workstation we build.

Supermicro SYS 5038K I ES1 Seasonic Gold PSU

The rear of the system has a standard server I/O layout including dual 1GbE, USB 3.0, VGA out and an IPMI management port.

Supermicro SYS 5038K I ES1 Rear

We suspect most users will utilize IPMI management to perform console actions, as it is much easier than finding a VGA monitor. The KNL system firmware has features such as HTML5 iKVM, so you do not need to use a Java applet to access the remote console. You can even reboot the system and access the console via the mobile IPMIView app, as we did over VPN on a recent international trip.

Supermicro SYS 5038K I ES1 Remote Management

Overall, the Supermicro SYS-5038K-I-ES1 strikes a good balance, providing a subtly insane Xeon Phi server clothed in an office-acceptable desktop tower.

What Would We Recommend Adding?

One of the first things we noticed with the Supermicro SYS-5038K-I-ES1 is that the system is shipped with dual 1GbE networking. This actually makes sense on the developer workstation since the included Intel i350 NICs work on just about every OS out of the box. Realistically, just about everyone will want to add some sort of higher-speed networking. With the Intel Xeon Phi x200 series there are only a few options:

  • 10GbE – only if that is your only network option
  • 25/40/50GbE and FDR Infiniband – better options to feed data and if you are going to use MPI
  • 100GbE/ EDR Infiniband – better yet
  • Omni-Path – While you can get external Omni-Path cards, it may be worth seeing if you can get an on-package solution

Even doing lower-end work (e.g. training on Wikipedia data sets), we saw 1GbE getting saturated and choking the Xeon Phi 7210. We have the advantage of using a lab in the same data center as one of the major hyper-scalers' AI labs. Before their full Omni-Path enabled 4-node-in-2U cluster went live, they had a high-speed network card in their Supermicro SYS-5038K-I-ES1 so they could access a faster fabric before their Omni-Path switches arrived.

Supermicro SYS 5038K I ES1 SSDs Added

On the internal storage side, we wish the hard drive that comes with the unit were swapped out for an SSD. CentOS load times were not quick with the spinning media, and we noticed a significant boot speed improvement using a lower-cost SSD. This is important because developers are likely to want to change the core cluster and MCDRAM settings on the box, which requires a reboot for each iteration. Colfax does include an Intel P3520 SSD in its system, which is super.

What OSes Can the Xeon Phi x200 Series Run?

We did quite a bit of testing regarding which operating systems will work with the Intel Xeon Phi x200. In terms of Linux, CentOS 7 came pre-installed on the system's hard drive. We also ran Ubuntu 16.04.1 without issue. These two OSes also supported Docker, so we were able to run Alpine Linux containers on the Xeon Phi 7210 without issue. You can see that we were able to add the system easily to our Rancher-based Docker Swarm and push containerized workloads directly to the machine.

Supermicro SYS 5038K I ES1 Rancher Docker Swarm

If you are using Linux, this is awesome as it means you can run services and push workloads (potentially with different compiler flags) directly to the machine. We highly recommend using Docker as the Xeon Phi x200 is compiler sensitive and it makes life much easier on a developer workstation like the Supermicro SYS-5038K-I-ES1. If you are not comfortable with the Docker CLI, you can use a GUI management tool. We recently looked at Rancher, Shipyard and Portainer Docker management tools so you can get a feel for each.

Outside of Linux, we were able to get the Intel/Supermicro SYS-5038K-I-ES1 to boot an existing Windows Server 2012 R2 installation and run programs. This was an awesome feat of compatibility.

Along with Windows Server 2012 R2, we were also able to get Windows Server 2016 Datacenter working on the KNL system.

With DPDK, a super fast 16GB byte-addressable MCDRAM memory pool, a large number of cores, and other features, we have seen many FreeBSD shops start working on KNL systems. We tried four popular FreeBSD-based versions: FreeBSD 11, FreeBSD 10.3, pfSense 2.3 and FreeNAS 9.10.

Our key takeaway with the Intel Xeon Phi x200 and FreeBSD is that you will want to start development using FreeBSD 11 or later. The FreeBSD 10.x-based OSes failed to boot. That is fine, as FreeBSD 11 will replace older versions as time passes.

Overall, we were extremely impressed with the breadth of what worked on the KNL developer workstation. It speaks a lot to out-of-the-box code compatibility. The biggest catch is that one needs proper development tools (e.g. icc) for an environment and there is still a large gap between gcc and icc performance.

Performance

The Supermicro SYS-5038K-I-ES1's Intel Xeon Phi 7210, with its 64 cores and 256 threads, is an interesting chip. To get the most out of it, you need to utilize the onboard 16GB of high-speed MCDRAM as well as its AVX512 vector units.

KNL is still a platform that is rapidly getting better. AVX512 instructions are going “mainstream”. GCC supports automatic AVX512 vectorization and can even provide vectorization reports via the “-fopt-info-vec-all” option. If you are interested in getting started with AVX512 development, we highly recommend Colfax’s guide. They have a great table that breaks down Xeon Phi versus future Xeon (e.g. Skylake-EP) AVX512 instructions and compiler flags. Here is the summary table you need to know to start KNL optimization:

  • Cross-platform: Intel compilers -xCOMMON-AVX512 / GCC -mavx512f -mavx512cd
  • Xeon Phi processors: Intel compilers -xMIC-AVX512 / GCC -mavx512f -mavx512cd -mavx512er -mavx512pf
  • Xeon processors: Intel compilers -xCORE-AVX512 / GCC -mavx512f -mavx512cd -mavx512bw -mavx512dq -mavx512vl -mavx512ifma -mavx512vbmi

You can see the four gcc flags in the lscpu output for the Intel Xeon Phi 7210 CPU:

Intel Xeon Phi 7210 CPU Flags
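
To make the flags above concrete, here is a minimal sketch of a trivially vectorizable kernel and the compile lines we would use; this is our own illustrative example (the file name and function are hypothetical), not code from Colfax or Intel.

/* saxpy.c - hypothetical example loop for AVX512 auto-vectorization on KNL.
 *
 * GCC, using the Xeon Phi flags from the table plus a vectorization report:
 *   gcc -O3 -mavx512f -mavx512cd -mavx512er -mavx512pf -fopt-info-vec-all -c saxpy.c
 * Intel compiler equivalent with its own optimization report:
 *   icc -O3 -xMIC-AVX512 -qopt-report=5 -c saxpy.c
 */
#include <stddef.h>

void saxpy(size_t n, float a, const float *restrict x, float *restrict y)
{
    /* A dependence-free loop like this should compile to 16-wide AVX512
     * operations; the report options above tell you whether it actually did. */
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}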

Single core performance of the Knights Landing CPU is far from excellent. The Intel Xeon Phi 7210 has a base clock speed of 1.3GHz (1.5GHz Turbo), low by modern standards. In fact, if you have a <44 thread workload that cannot use AVX512, we suggest simply using standard Xeon E5 CPUs. Our first attempts at porting code from our Intel Xeon E5 benchmark suite to Xeon Phi were bad (okay, really bad.) Here are our tips for getting better:

  1. Use icc over gcc if possible (e.g. via Intel Parallel Studio XE Cluster Edition). We saw performance increase in some tests by 30% or more. That is huge.
  2. Intel provides tools like Intel Python and Intel Caffe that provide a speedup. Use them!
  3. Intel is releasing MKL-optimized libraries. If you are using a deep learning framework (e.g. TensorFlow) you will see a fairly large out-of-the-box boost when using the appropriate MKL builds (see the sketch after this list).
  4. If you have existing code, run automatic vectorization reports. If you have code that is not going to work well on AVX512 (and do not want to port it) use Intel Xeon E5 chips instead.
  5. Adjust the MCDRAM and cluster modes.
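
As a small illustration of the MKL tip above, here is a hedged sketch of calling MKL's DGEMM directly from C; the file name and matrix sizes are hypothetical, and the -mkl link flag assumes an icc install from this era. Frameworks with MKL backends do essentially this under the hood.

/* mkl_gemm.c - hypothetical sketch of a direct MKL BLAS call.
 * Build (typical icc invocation; adjust paths/flags for your installation):
 *   icc -O3 -xMIC-AVX512 -mkl mkl_gemm.c -o mkl_gemm
 */
#include <stdio.h>
#include <mkl.h>

int main(void)
{
    const MKL_INT m = 2048, n = 2048, k = 2048;
    double *A = (double *)mkl_malloc((size_t)m * k * sizeof(double), 64);
    double *B = (double *)mkl_malloc((size_t)k * n * sizeof(double), 64);
    double *C = (double *)mkl_malloc((size_t)m * n * sizeof(double), 64);
    if (!A || !B || !C) return 1;

    for (MKL_INT i = 0; i < m * k; i++) A[i] = 1.0;
    for (MKL_INT i = 0; i < k * n; i++) B[i] = 2.0;
    for (MKL_INT i = 0; i < m * n; i++) C[i] = 0.0;

    /* C = 1.0 * A * B + 0.0 * C; MKL selects AVX512 code paths on KNL at runtime. */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k, 1.0, A, k, B, n, 0.0, C, n);

    printf("C[0][0] = %.1f\n", C[0]);
    mkl_free(A); mkl_free(B); mkl_free(C);
    return 0;
}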

You may be wondering what is meant by the MCDRAM and cluster modes. MCDRAM is 16GB of high-speed on-package memory that offers performance in a tier between on-die cache and the hex-channel DDR4 DRAM. There are four different options to set the system to: Flat, Cache, Hybrid and Auto. We are going to cover the first two. Flat means the 16GB of MCDRAM becomes byte addressable, so you can run your application using 16GB of extremely fast RAM; if you have 96GB of DDR4 installed and set the system to Flat, you will see 112GB of addressable memory. Cache gives you something akin to a huge L3 cache that is slower than on-die cache but still significantly faster than DDR4 system memory.

Supermicro SYS 5038K I ES1 Uncore Memory Mode Set
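
If you set the MCDRAM to Flat mode, it shows up as its own NUMA node, so you can either bind a whole process to it with numactl or place individual hot buffers there from code. Below is a minimal sketch using the memkind library's hbwmalloc interface; it is our own hypothetical example and assumes memkind is installed on the box.

/* hbw_alloc.c - hypothetical sketch: put one hot buffer in MCDRAM (Flat mode).
 * Build: gcc -O3 hbw_alloc.c -lmemkind -o hbw_alloc
 * No-code-change alternative: numactl --membind=<MCDRAM node> ./your_app
 * (the MCDRAM NUMA node number depends on the cluster mode in use).
 */
#include <stdio.h>
#include <stdlib.h>
#include <hbwmalloc.h>

int main(void)
{
    size_t n = 1UL << 28;   /* 256M doubles = 2GB, comfortably inside the 16GB MCDRAM */

    if (hbw_check_available() != 0) {
        fprintf(stderr, "No high-bandwidth memory exposed (Cache mode, or not a KNL host)\n");
        return 1;
    }

    double *hot  = hbw_malloc(n * sizeof(double));  /* allocated from MCDRAM */
    double *cold = malloc(n * sizeof(double));      /* stays in DDR4 */
    if (!hot || !cold) return 1;

    for (size_t i = 0; i < n; i++) hot[i] = cold[i] = (double)i;
    printf("hot[1] = %.1f\n", hot[1]);

    hbw_free(hot);
    free(cold);
    return 0;
}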

Beyond this, there are six different options for the on-die cluster configuration. These options essentially describe how memory accesses and cache coherency traffic are distributed across the on-die mesh of cores.

Supermicro SYS 5038K I ES1 Uncore Cluster Mode Configuration

Since we ran different workloads and different sets of code, we ended up running them through three different memory mode configurations times five different cluster mode configurations, for a total of fifteen iterations per workload. Although the system is easy to power cycle and access BIOS on remotely, changing these values does require a system reboot. We do hope that Supermicro adds a reboot-to-BIOS feature in the future, as the initialization time of KNL platforms is quite long. If you are thinking that, as a developer, you will be rebooting the server often to test core cluster modes and MCDRAM modes, that is likely, at least during the learning process. We had a simple AVX2 workload that we ran out of the box on the system before we even touched AVX512 optimizations.

Xeon Phi 7210 AVX2 Sample Out Of Box Cluster MCDRAM 2×2

In the end, we got the fastest performance from the All2All cluster mode once we made the slight change to use the MCDRAM in Flat mode, although Cache mode already had a big impact on the out-of-box run performance. Each iteration (and the 11 more not shown here) took a few minutes for the BIOS change and reboot. This process of testing different cluster and MCDRAM configurations is very common.

There are a ton more performance tips out there, and there are experts in the field who are far better at this than we are. As a result, we are going to publish a few figures based simply on what we were able to achieve relative to dual Intel Xeon CPUs using a few of the tricks discussed here. We are still profiling our AI/machine learning benchmarks, so these are a bit of a work in progress. It is a huge effort in which we profile many different types of systems. Here is an example from when our Docker Swarm cluster was only 1024 cores and 17 machines (and that does not include the 18 GPUs in the cluster). The Xeon Phi machines require additional effort for benchmarking, but we have about 13,000 runs completed. That is also why we have become big fans of using Docker as a framework for machine learning development.

1024 Cores On The STH AI Benchmark Validation Cluster

Suffice it to say, we are still in progress, but these runs can take upwards of 24 hours each, so iterations can be slow.

We are going to use Intel Xeon E5-2698 V4s as our comparison point. Each Intel Xeon E5-2698 V4 has a TDP of 135W versus a 215W TDP for the Xeon Phi 7210. Likewise, each Xeon E5-2698 V4 lists for around $3,200 compared with approximately $2,400 for the Xeon Phi 7210.

Development Task: Linux Kernel Compile Benchmark

Our standard gcc Linux Kernel Compile Benchmark is an interesting use case here. The Supermicro SYS-5038K-I-ES1 is meant to be a developer workstation. Despite the platform having an HPC chip, users are likely to spend quite a lot of time compiling different software builds on the machine.

Xeon Phi 7210 PyKCB Comparison

As you can see, we added a few comparison points to the benchmark so you can get a feel for where the system falls. The Intel Xeon D-1587 is only a 65W TDP chip; however, its suggested selling price is around 2/3 that of the Xeon Phi 7210, making them reasonably close competitors in terms of price. Looking at the dual Xeon E5 results, the dual Xeon E5-2698 V4 platform is a racecar in comparison. It also uses about twice the power during benchmark runs. The Intel Xeon E5-2630 V4 is closer in terms of power per run when housed in a 1U system and is slightly faster. For compile jobs that take a matter of minutes, KNL compile performance is very reasonable. We also wanted to highlight that switching from Quadrant to All2All shaved an appreciable 8% off of the compile time.

Stream MCDRAM Testing

We did want to provide a quick look at the STREAM performance of the MCDRAM in Flat mode and compare it to the hex-channel DDR4. Our Intel Xeon Phi 7210 is limited to 2133MHz DDR4 RDIMM operation, which is a small handicap Intel put on its lowest-end part. We used gcc with -DSTREAM_ARRAY_SIZE=6400000. The results were somewhat shocking.
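
For reference, here is a minimal STREAM-style triad sketch showing roughly how such a run is built and invoked; this is our own simplified illustration (not the official stream.c), and the MCDRAM NUMA node number is an assumption that varies with the cluster mode.

/* triad.c - simplified STREAM-style triad, for illustration only.
 * Build (mirroring the array size used in the text):
 *   gcc -O3 -fopenmp -DSTREAM_ARRAY_SIZE=6400000 triad.c -o triad
 * Flat-mode MCDRAM run: numactl --membind=<MCDRAM node> ./triad
 */
#include <stdio.h>
#include <omp.h>

#ifndef STREAM_ARRAY_SIZE
#define STREAM_ARRAY_SIZE 6400000
#endif

static double a[STREAM_ARRAY_SIZE], b[STREAM_ARRAY_SIZE], c[STREAM_ARRAY_SIZE];

int main(void)
{
    const double scalar = 3.0;

    /* Touch the pages in parallel first so the timed loop measures bandwidth,
     * not page faults, and so pages land near the threads that use them. */
    #pragma omp parallel for
    for (long i = 0; i < STREAM_ARRAY_SIZE; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.0; }

    double t0 = omp_get_wtime();
    #pragma omp parallel for
    for (long i = 0; i < STREAM_ARRAY_SIZE; i++)
        c[i] = a[i] + scalar * b[i];
    double t1 = omp_get_wtime();

    /* Triad touches three arrays of doubles per element: two reads and one write. */
    double gbytes = 3.0 * sizeof(double) * STREAM_ARRAY_SIZE / 1e9;
    printf("Triad: %.1f GB/s with %d threads\n", gbytes / (t1 - t0), omp_get_max_threads());
    return 0;
}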

Xeon Phi 7210 MCDRAM Stream Impact

We knew the MCDRAM was going to be fast, but this was still shocking. As we raised the thread count we were able to get over 300GB/s. Using icc provides significant performance boosts with STREAM, especially at lower thread counts, and Intel publishes >450GB/s figures. Still, the bandwidth potential of the onboard KNL memory is awesome. In a follow-up piece focused on the Xeon Phi 7210 we will have more optimized icc results, but we just wanted to show what is possible.

Torch – Intel MKL – Component Failure Prediction

We were able to let a local Silicon Valley startup try out their model on the Supermicro SYS-5038K-I-ES1. They are doing component failure prediction using images taken of assemblies. Hopefully, we can talk more about their application in the coming quarters, but we wanted to share their results on the machine.

Xeon Phi 7210 Component Failure Prediction Model Training

The result was somewhat predictable. In terms of speed, once their code was built with icc and AVX512 vectorization, the Xeon Phi speedup was apparent. We should also note that it is not just the raw performance that was impressive. The water-cooled tower was using about half the power of the slower dual Intel Xeon E5-2698 V4 setup in the process.

After giving our test machine a try in the STH lab, the startup bought two of these for their cramped office.

Environmental – Power Consumption and Sound

While we do not publish sound levels for data center gear, we do for products that are intended for habitable space. We have seen plenty of these machines on rack shelves in data centers but they are clearly targeted at office space.

  • Idle: 109W
  • Average workload: 248W
  • Maximum observed: 331W

In terms of sound levels, the system hovered in the 33-37 dBA range during testing. There were no high-pitched noises or jet engine sounds like you would hear with a 2U 4-node system sitting in your office.

Final Words

Overall, we were extremely impressed by the Intel Xeon Phi "Ninja" developer platform based on the Supermicro SYS-5038K-I-ES1. If you do not have the budget (yet) for a 2U 4-node solution, this is a great option. It costs more than many single-CPU Xeon solutions, but it is just about the lowest-cost way to get into Knights Landing that there is. We also like the fact that it comes pre-assembled, as LGA 3647 is harder to work with than older generation sockets. The fact that upon opening the chassis you have something you can show off to envious colleagues, thanks to the clean water cooling solution, adds to the cachet of the workstation. At the same time, the system looks sleek but is far from some of the outlandish gaming towers we have seen. The fact that it is black and blue, and not overly loud, also makes the system inconspicuous if that is desired. We would have liked to see external hot-swap bays for sneakernet multi-TB data loads. We would have also liked to see the radiator moved to the front or the rear of the chassis so it would be easier to rackmount. We have seen several in data centers now, and it is not uncommon to see a quarter rack taken up by this configuration. Still, this is the best sub-$5,000 system out there for developers to start working with KNL and its associated MCDRAM memory and AVX512 capabilities.

Intel Xeon Phi Performance — Xeon Phi: Intel’s Larrabee-Derived Card In TACC’s Supercomputer


Throughout its press day, Intel repeated over and over the importance of optimized code when comparing the performance of a CPU to an accelerator. One of the company's first examples involved a bit of Fortran code. First, we saw results from the unoptimized single-threaded code, followed by a simple Xeon Phi port. The difference showed the Phi to be somewhere around 300x faster. Then, the Intel team demonstrated why its first comparison was flawed. When the same code was optimized and re-run on dual Xeon E5s, the Phi was only about twice as fast.

The purpose of this exercise seemed to be expectation management. It's in the best interest of companies like Nvidia to run parallelizable code in a single thread as a baseline, and then run the same code on a graphics processor to claim more than two orders of magnitude of improvement. But if you allow optimized code to take advantage of a multi-core CPU's resources, the real delta between the two is much smaller.

Then, Intel shared some of the real-world performance improvements seen from comparisons between dual-socket Xeon-based machines and Xeon Phi.

Financial services professionals are probably salivating at these numbers. Monte Carlo models are often used to solve problems using a bunch of unknown inputs and probability. I've personally used them to estimate the risk and financial impact of large projects and product programs. And, after the 2001 dot-com crash, Black-Scholes became a preferred option valuation method. This was a huge deal in the mid-2000s because Silicon Valley companies that gave employees options instead of higher salaries were under increased pressure to pin a value on those options.

Intel also brought in representatives from Altair, a software and technology provider, to suggest how easy it was for them to port code to the Xeon Phi architecture and show examples of workloads like crash test simulations, which generally saw a 2.5x performance improvement.

Without hardware and software we can test ourselves, we can only say that Intel's discussion of performance is plausible. Optimization can move the needle in one direction or the other, and certain applications are going to realize more gain from what Intel is doing with Xeon Phi than others. With that said, a 2-2.5x improvement seems reasonable in environments able to benefit from parallelized computing.


Intel will release processors with 528 cores next year? The description of the new Xeon CPUs is striking

Intel seems to have decided to give AMD a serious fight in a market where AMD has been actively pushing it for the past few years. In the coming years, Intel will release several lines of server Xeon CPUs, and in one of them the top models may get up to 528 cores!

Recall that the Sapphire Rapids parts that have entered the market top out at 60 cores, and even then only in one model. Against the 96-core Epyc Genoa and the upcoming Epyc Bergamo with its 128 cores, that is very few, and tests show the shortfall is not only a matter of core counts. But next year, Intel may release Sierra Forest Xeon CPUs, which will have many times more cores, nearly an order of magnitude more (well, not quite).

According to recent data, Sierra Forest can get four tiles of 86 cores each, that is, a total of 344 cores in the maximum configuration. But perhaps Intel will also release a variant with 132 cores per tile, that is, with 528 cores. This is an incredible number even for an Arm-based CPU, not to mention x86-compatible solutions. You do need to understand that we are talking about small (efficiency) cores here, but even so it is still an enormous count.

Of course, such CPUs will be more specialized than solutions with large cores. Epyc Bergamo, as AMD itself has explained, will also be aimed at specific tasks.

Intel's more conventional CPUs will also get a lot of cores. The Xeon Granite Rapids-SP line will have only large cores based on the Redwood Cove+ architecture. Such processors will include up to three tiles with 44 cores each, that is, up to 132 cores in total.

In addition, Granite Rapids-WS will be released for the consumer and workstation market. Such processors will have two tiles, that is, up to 88 cores, although supposedly real configurations will only offer up to 86 cores. In any case, this is also a lot. However, it is precisely in this segment that Intel may not end up outperforming AMD, since the Threadripper 7000, according to available data, will get the same 96 cores as the related Epyc Genoa.

All these processors are supposed to be released next year, and some may appear at the end of this year. However, the catch is that they must be built on the Intel 3 process technology, even though the company has so far only mastered Intel 7, with Intel 4 still to come in between. Moreover, the latest Intel roadmap for 2023 did not contain the Meteor Lake CPUs based on Intel 4, which means that even those processors could slip to 2024. And then it turns out that in 2024 the company would need to release products on two different process nodes at once. You can also recall that the Sapphire Rapids CPUs were delayed from the company's original plans by more than two years. Thus, it is not yet clear how much you can believe that the incredible Intel CPUs described above will be released next year.

DELL PRECISION M7520 (Intel Xeon E3-1505M v6 3000 MHz/15.6″/3840×2160/16Gb/2256Gb HDD+SSD/DVD no/NVIDIA Quadro M2200/Wi-Fi/Bluetooth/Windows 10 Pro)

  • Storage configuration: HDD + SSD
  • Video card type: discrete
  • NVIDIA Quadro M2200
  • Screen resolution: 3840×2160
  • Type: notebook
  • Windows 10 Pro

  • Processor: Kaby Lake, Intel Xeon E3-1505M v6, 3000 MHz
  • Number of processor cores: 4


Specifications DELL PRECISION M7520 (Intel Xeon E3-1505M v6 3000 MHz/15.6″/3840×2160/16Gb/2256Gb HDD+SSD/DVD no/NVIDIA Quadro M2200/Wi-Fi/Bluetooth/Windows 10 Pro)
Storage Devices
*

Total drive capacity 2256 GB
Drive interface Serial ATA
Optical drive no
Flash card reader yes

Power supply
*

Battery capacity 91 Wh

Input devices
*

Positioners PointStick and Touchpad

Sound
*

Built-in microphone yes
Built-in speakers Yes

Optional
*

Weight 2.8 kg
Webcam yes
Additional information TPM support; optional contactless smart card reader and/or fingerprint reader; support for two storage devices: one M.2 PCIe SSD and one 2.5″ SATA/M.2 PCIe drive; 4 DIMM slots: DDR4 ECC/non-ECC 2400MHz up to 64GB or DDR4 2667MHz up to 32GB
Features Kensington lock slot
Dimensions (LxWxD) 378x261x27.76 mm

* Check with the seller for exact specifications.
