[Official] NVIDIA RTX 4090 Owner’s Club | Page 302
Last Updated: January 22, 2023
Note: This content is licensed under Creative Commons 3.0. This means that you are free to copy and redistribute this material, but only if the following criteria are met: 1) You must give appropriate credit by linking back to this thread. 2) You may not use this material for commercial purposes or place this on a for-profit website with ads. 3) You cannot create derivative work based on this material.
NVIDIA GeForce® RTX 4090
⠀⠀RTX 4080 Owner’s Club
→ RTX 4090 Owner’s Club
Click here to join the discussion on Discord or join directly through the Discord app with the code kkuFR3d
Source: NVIDIA
SPECS
Rich (BB code):
Architecture: Ada Lovelace
Chip: AD102-300
Transistors: 76,300 million
Die Size: 608 mm²
Manufacturing Process: 4nm
CUDA Cores: 16384
TMUs: 512
ROPs: 176
SM Count: 128
Tensor Cores: 512
GigaRays: -- GR/s
Core Clock: 2230 MHz
Boost Clock: 2520 MHz
Memory: 24GB GDDR6X
Memory Bus: 384-bit
Memory Clock: 1313 MHz / 21008 MHz effective
Memory Bandwidth: 1008 GB/s
External Power Supply: 12-Pin
TDP: 450W
DirectX: 12.2 Ultimate
OpenGL: 4.6
OpenCL: 3.0
Vulkan: 1.3
CUDA: 8.9
Interface: PCIe 4.0 x16
Connectors: 1x HDMI 2.1, 3x DisplayPort 1.4a
Dimensions: 304 x 137mm (3-Slot)
Price: $1599 US
Release Date: October 12, 2022
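As a sanity check on the spec sheet, the 1008 GB/s bandwidth figure follows directly from the bus width and the effective data rate. A rough sketch in Python (the x16 multiplier is an assumption inferred from the 1313 MHz base clock and the 21008 MHz effective figure above):

```python
# Sketch: derive the RTX 4090's memory bandwidth from the spec-sheet numbers.
# The x16 multiplier is inferred from 1313 MHz -> 21008 MT/s effective (GDDR6X).
mem_clock_mhz = 1313
effective_mts = mem_clock_mhz * 16                  # 21008 MT/s effective
bus_width_bits = 384
bandwidth_gbs = effective_mts * bus_width_bits / 8 / 1000  # bits -> bytes, MB/s -> GB/s
print(round(bandwidth_gbs))                         # 1008
```

The same arithmetic reproduces the 716 GB/s and 1008 GB/s figures for the other cards in the tables below.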
Rich (BB code):
RTX 4090 | AD102-300 | 4nm | 608mm² | 76.3 BT | 16384 CCs | 512 TMUs | 176 ROPs | 128 SMs | 2520 MHz | 24GB | 2048MB x 12 | GDDR6X | 384-bit | 1008 GB/s | 450W
RTX 4080 | AD103-300 | 4nm | 379mm² | 45.9 BT | 9728 CCs | 304 TMUs | 112 ROPs | 76 SMs | 2505 MHz | 16GB | 2048MB x 8 | GDDR6X | 256-bit | 716 GB/s | 320W
RTX 3090 Ti | GA102-350 | 8nm | 628mm² | 28.3 BT | 10752 CCs | 336 TMUs | 112 ROPs | 84 SMs | 1865 MHz | 24GB | 2048MB x 12 | GDDR6X | 384-bit | 1008 GB/s | 450W
RTX 3090 | GA102-300 | 8nm | 628mm² | 28.3 BT | 10496 CCs | 328 TMUs | 112 ROPs | 82 SMs | 1695 MHz | 24GB | 1024MB x 24 | GDDR6X | 384-bit | 936 GB/s | 350W
RTX 3080 Ti | GA102-250 | 8nm | 628mm² | 28.3 BT | 10240 CCs | 320 TMUs | 112 ROPs | 80 SMs | 1665 MHz | 12GB | 1024MB x 12 | GDDR6X | 384-bit | 912 GB/s | 320W
RTX 3080 | GA102-200 | 8nm | 628mm² | 28.3 BT | 8704 CCs | 272 TMUs | 96 ROPs | 68 SMs | 1710 MHz | 10GB | 1024MB x 10 | GDDR6X | 320-bit | 760 GB/s | 320W
Note: Gaming performance on Ampere and later does not scale linearly with CUDA core count when compared with previous generations.
Rich (BB code):
RTX 2080 Ti | TU102-300 | 12nm | 754mm² | 18.6 BT | 4352 CCs | 272 TMUs | 88 ROPs | 68 SMs | 1635 MHz | 11GB | 1024MB x 11 | GDDR6 | 352-bit | 616 GB/s | 250W
RTX 2080 S | TU104-450 | 12nm | 545mm² | 13.6 BT | 3072 CCs | 192 TMUs | 64 ROPs | 48 SMs | 1815 MHz | 8GB | 1024MB x 8 | GDDR6 | 256-bit | 496 GB/s | 250W
RTX 2080 | TU104-400 | 12nm | 545mm² | 13.6 BT | 2944 CCs | 184 TMUs | 64 ROPs | 46 SMs | 1710 MHz | 8GB | 1024MB x 8 | GDDR6 | 256-bit | 448 GB/s | 215W
GTX 1080 Ti | GP102-350 | 16nm | 471mm² | 12.0 BT | 3584 CCs | 224 TMUs | 88 ROPs | 28 SMs | 1582 MHz | 11GB | 1024MB x 11 | GDDR5X | 352-bit | 484 GB/s | 250W
GTX 1080 | GP104-400 | 16nm | 314mm² | 7.2 BT | 2560 CCs | 160 TMUs | 64 ROPs | 20 SMs | 1733 MHz | 8GB | 1024MB x 8 | GDDR5X | 256-bit | 320 GB/s | 180W
GTX 980 Ti | GM200-310 | 28nm | 601mm² | 8.0 BT | 2816 CCs | 172 TMUs | 96 ROPs | 22 SMs | 1076 MHz | 6GB | 512MB x 12 | GDDR5 | 384-bit | 336 GB/s | 250W
GTX 980 | GM204-400 | 28nm | 398mm² | 5.2 BT | 2048 CCs | 128 TMUs | 64 ROPs | 16 SMs | 1216 MHz | 4GB | 512MB x 8 | GDDR5 | 256-bit | 224 GB/s | 165W
GTX 780 Ti | GK110-425 | 28nm | 551mm² | 7.1 BT | 2880 CCs | 240 TMUs | 48 ROPs | 15 SMs | 928 MHz | 3GB | 256MB x 12 | GDDR5 | 384-bit | 336 GB/s | 250W
GTX 780 | GK110-300 | 28nm | 551mm² | 7.1 BT | 2304 CCs | 192 TMUs | 48 ROPs | 12 SMs | 900 MHz | 3GB | 256MB x 12 | GDDR5 | 384-bit | 288 GB/s | 250W
GTX 680 | GK104-400 | 28nm | 294mm² | 3.5 BT | 1536 CCs | 128 TMUs | 32 ROPs | 8 SMs | 1058 MHz | 2GB | 256MB x 8 | GDDR5 | 256-bit | 192 GB/s | 200W
GTX 580 | GF110-375 | 40nm | 520mm² | 3.0 BT | 512 CCs | 64 TMUs | 48 ROPs | 16 SMs | 772 MHz | 1.5GB | 128MB x 12 | GDDR5 | 384-bit | 192 GB/s | 250W
ASUS
AsusTek Computer (stylised as ASUS) was founded in Taipei, Taiwan in 1989 and is still headquartered there.
Model | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB | PWM | GPU Stage | VRAM Stage | MPN |
---|---|---|---|---|---|---|---|---|---|---|---|
Strix OC | 358mm | 3.50 | 3 | 2 | 2 | 500/600W | Custom | MP2981 | 24×70A (1680A) FDMF3170 | 4×70A (280A) FDMF3170 | 90YV0ID0-M0NA00 |
TUF OC | 349mm | 3.65 | 3 | 2 | 2 | 450/600W | Custom | MP2888A | 18×70A (1260A) TDA21570 | 4×50A (200A) SIC639 | 90YV0IE0-M0NA00 |
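The GPU and VRAM stage columns in these tables are simply stage count times per-stage current rating; a trivial check of the parenthesised totals, as a hedge against typos:

```python
# The parenthesised totals in the VRM columns are phases x amps-per-stage.
def total_current(stages: int, amps_per_stage: int) -> int:
    """Combined current capability of a multi-phase VRM, in amps."""
    return stages * amps_per_stage

assert total_current(24, 70) == 1680  # Strix OC GPU stage: 24x70A
assert total_current(4, 70) == 280    # Strix OC VRAM stage: 4x70A
assert total_current(18, 70) == 1260  # TUF OC GPU stage: 18x70A
```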
COLORFUL — Not available in Europe or North America
Colorful Group (referred to as CFG) was founded in Shenzhen, China in 1995 and is still headquartered there.
Model | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB | PWM | GPU Stage | VRAM Stage | MPN |
---|---|---|---|---|---|---|---|---|---|---|---|
Neptune OC | 254mm | 2.00 | AIO | 1 | 2 | 550/630W | Custom | uP9512U | 24×55A (1320A) AOZ5311NQI-03 | 4×55A (220A) AOZ5311NQI-03 | N/A |
Vulcan OC | 349mm | 3.50 | 3 | 1 | 2 | 515/550W | Custom | uP9512U | 24×55A (1320A) AOZ5311NQI-03 | 4×55A (220A) AOZ5311NQI-03 | N/A |
NB EX | 327mm | 3.35 | 3 | 1 | 1 | 450/480W | Reference | uP9512U | 14×55A (770A) AOZ5311NQI-03 | 3×55A (165A) AOZ5311NQI-03 | N/A |
EVGA
EVGA Corporation was founded in California, United States in 1999 and is still headquartered there.
Model | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB | PWM | GPU Stage | VRAM Stage | MPN |
---|---|---|---|---|---|---|---|---|---|---|---|
FTW3 Ultra | 318mm | 3.65 | 3 | 1 | 2 | 500W/600W | Custom | uP9512U | 24×50A (1200A) SiC653A | 4×50A (200A) SiC653A | Prototype |
GALAX | KFA2 — Not available in North America
GALAXY was founded in Hong Kong, China in 1994. GALAXY and its European brand KFA2 (Kick Friggin Ass) merged in 2014 to form GALAX as a single unified brand; the KFA2 name still exists for the European market, but all designs are GALAX. The company is currently headquartered in Hong Kong, China.
Model | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB | PWM | GPU Stage | VRAM Stage | MPN |
---|---|---|---|---|---|---|---|---|---|---|---|
HOF | 344mm | 3.85 | 3 | 1 | 2 | 550/666W | Custom | XDPE10281B | 28×70A (1960A) TDA21570 | 4×70A (280A) TDA21570 | 49NXM5MD6PHE |
SG | 336mm | 3.70 | 3 | 1 | 1 | 450/510W | Reference | uP9512U | 18×50A (900A) NCP302150 | 4×50A (200A) NCP302150 | 49NXM5MD6DSG |
SG | 336mm | 3.70 | 3 | 1 | 1 | 450/510W | Reference | uP9512U | 18×50A (900A) NCP302150 | 4×50A (200A) NCP302150 | 49NXM5MD6DSK |
GIGABYTE
GIGA-BYTE Technology (stylised as GIGABYTE) was founded in Taipei, Taiwan in 1986 and is currently headquartered in Taipei, Taiwan and California, United States.
Model | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB | PWM | GPU Stage | VRAM Stage | MPN |
---|---|---|---|---|---|---|---|---|---|---|---|
Xtreme Waterforce | 238mm | 2.00 | AIO | 1 | 2 | 450/600W | Custom | uP9512U | 24×50A (1200A) SiC653A | 4×50A (200A) SiC653A | GV-N4090AORUSX W-24GD |
Master | 359mm | 3.75 | 3 | 1 | 2 | 450/600W | Custom | uP9512U | 24×50A (1200A) SiC653A | 4×50A (200A) SiC653A | GV-N4090AORUS M-24GD |
Gaming OC | 340mm | 3.75 | 3 | 1 | 2 | 450/600W | Custom | uP9512U | 20×50A (1000A) SiC653A | 4×50A (200A) SiC653A | GV-N4090GAMING OC-24GD |
Windforce | 331mm | 3.50 | 3 | 1 | 2 | 450/480W | Custom | uP9512U | 14×50A (700A) SiC653A | 4×50A (200A) SiC653A | GV-N4090WF3-24GD |
INNO3D
InnoVISION Multimedia was founded in Hong Kong, China in 1989 and is primarily recognized for its graphics cards marketed under the Inno3D brand. It was acquired by PC Partner in 2008 and is currently headquartered in Hong Kong, China.
Model | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB | PWM | GPU Stage | VRAM Stage | MPN |
---|---|---|---|---|---|---|---|---|---|---|---|
iCHILL Frostbite | 200mm | 2.00 | Water | 1 | 1 | 450/450W | Reference | uP9512U | 14×55A (770A) AOZ5311NQI | 3×55A (165A) AOZ5311NQI | C4090-246XX-1833FB |
iCHILL Black | 280mm | 2.00 | AIO | 1 | 1 | 450/450W | Reference | uP9512U | 14×55A (770A) AOZ5311NQI | 3×55A (165A) AOZ5311NQI | C4090-246XX-18330005 |
iCHILL X3 | 334mm | 3.00 | 3 | 1 | 1 | 450/450W | Reference | uP9512U | 14×55A (770A) AOZ5311NQI | 3×55A (165A) AOZ5311NQI | C40903-246XX-1833VA47 |
X3 OC | 336mm | 3.00 | 3 | 1 | 1 | 450/450W | Reference | uP9512U | 14×55A (770A) AOZ5311NQI | 3×55A (165A) AOZ5311NQI | N40903-246XX-18332989 |
MSI
Micro-Star International (stylised as MSI) was founded in Taipei, Taiwan in 1986 and is still headquartered there.
Model | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB | PWM | GPU Stage | VRAM Stage | MPN |
---|---|---|---|---|---|---|---|---|---|---|---|
Suprim Liquid X | 280mm | 2.15 | AIO | 1 | 2 | 480/600W | Custom | MP2981 | 26×70A (1820A) MP86957 | 4×70A (280A) MP86957 | V510-007R |
Suprim X | 336mm | 3.90 | 3 | 1 | 2 | 480/520W | Custom | MP2981 | 26×70A (1820A) MP86957 | 4×70A (280A) MP86957 | V510-001R |
Gaming X Trio | 337mm | 3.85 | 3 | 1 | 2 | 450/480W | Custom | MP2981 | 18×50A (900A) NCP303151A | 4×50A (200A) NCP303151A | V510-006R |
Ventus OC | 322mm | 3.10 | 3 | 1 | 1 | 450/450W | N/A | N/A | N/A | N/A | V510-023R |
NVIDIA
Nvidia Corporation (stylised nVIDIA) was founded in California, United States in 1993 and is still headquartered there.
Model | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB | PWM | GPU Stage | VRAM Stage | MPN |
---|---|---|---|---|---|---|---|---|---|---|---|
Founders Edition | 304mm | 3.05 | 2 | 1 | 1 | 450/600W | Custom | MP2981 | 20×70A (1400A) MP86957 | 3×70A (210A) MP86957 | 900-1G136-2530-000 |
PALIT | GAINWARD — Not available in North America
Palit Microsystems (stylised PaLiT) was founded in Taipei, Taiwan in 1988, acquired the Gainward brand and company in 2005, and is currently headquartered in Taipei, Taiwan.
Model | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB | PWM | GPU Stage | VRAM Stage | MPN |
---|---|---|---|---|---|---|---|---|---|---|---|
GameRock OC | 330mm | 3.60 | 3 | 1 | 2 | 450/500W | Custom | uP9512U | 16×50A (800A) NCP302150 | 3×50A (150A) NCP302150 | NED4090S19SB-1020G |
Phantom GS | 330mm | 3.50 | 3 | 1 | 2 | 450/500W | Custom | uP9512U | 16×50A (800A) NCP302150 | 3×50A (150A) NCP302150 | NED4090S19SB-1020P |
PNY
PNY Technologies was founded in New York, United States in 1985 and is currently headquartered in New Jersey, United States.
Model | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB | PWM | GPU Stage | VRAM Stage | MPN |
---|---|---|---|---|---|---|---|---|---|---|---|
XLR8 OC | 332mm | 3.55 | 3 | 1 | 1 | 450/450W | Custom | uP9512U | 14×50A (700A) NCP302150 | 3×50A (150A) NCP302150 | VCG409024TFXXPB1-O |
Uprising | 351mm | 3.15 | 3 | 1 | 1 | 450/450W | Custom | uP9512U | 14×50A (700A) NCP302150 | 3×50A (150A) NCP302150 | VCG409024TFXMPB |
Verto | 337mm | 2.95 | 3 | 1 | 1 | 450/450W | Custom | uP9512U | 14×50A (700A) NCP302150 | 3×50A (150A) NCP302150 | VCG409024TFXPB1 |
ZOTAC
ZOTAC, under the umbrella of PC Partner, was founded in Hong Kong, China in 2006 and is still headquartered there.
Model | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB | PWM | GPU Stage | VRAM Stage | MPN |
---|---|---|---|---|---|---|---|---|---|---|---|
AMP Extreme | 356mm | 3.60 | 3 | 1 | 2 | 450/495W | Custom | uP9512U | 24×55A (1320A) AOZ5311NQI-03 | 4×55A (220A) AOZ5311NQI | ZT-D40900B-10P |
Trinity OC | 356mm | 3.60 | 3 | 1 | 2 | 450/495W | Custom | uP9512U | 14×55A (770A) AOZ5311NQI-03 | 4×55A (220A) AOZ5311NQI | ZT-D40900J-10P |
TECHPOWERUP | GPU-Z
Download TechPowerUp GPU-Z
NVIDIA | NVFLASH
Download NVIDIA NVFlash
BIOS | ROM
TechPowerUp BIOS Collection (Verified)
TechPowerUp BIOS Collection (Unverified)
OVERCLOCKING | TOOLS
Download ASUS GPUTweak III
Download Colorful iGame Center
Download Gainward EXPERTool
Download Galax/KFA2 Xtreme Tuner Plus
Download Gigabyte AORUS Engine
Download Inno3D TuneIT
Download MSI Afterburner
Download Palit ThunderMaster
Download PNY Velocity X
Download Zotac FireStorm
6021 — 6040 of 13459 Posts
vigorito said:
So if we clock VRAM over +1200, scaling is bad in games, and after that number there is no impact or benefit for gaming? I can't see any fps increase with +2000 VRAM and +199 core over stock settings.
It's probably diminishing returns at play: at some point the cores are doing all they can and more bandwidth doesn't help. Still, if your card can do it, and you've 100% confirmed it's artifact free, why not enjoy your extra 1 or 2 fps lol. Just make sure it's stable, because I think running an unstable memory OC for a long duration may cause permanent damage.
vigorito said:
So if we clock VRAM over +1200, scaling is bad in games, and after that number there is no impact or benefit for gaming? I can't see any fps increase with +2000 VRAM and +199 core over stock settings.
At +2000 you are almost certainly kicking in ECC and losing performance. Everyone's card will be different, so only you can invest the time in running a given bench and increasing memory clocks until the score no longer improves or starts falling. Easy to do, but very time consuming. A couple of weeks back, when I took delivery of my 4090, I started with a blank slate: I opened up Port Royal and increased the memory slider in MSI AB +100 at a time, leaving core speed alone, and jotted down the score. I did this 13 times until I got to +1300 and stopped, because I was tired of watching Port Royal run and wanted to give my GPU a break, but performance was still scaling, as my scores were still climbing. Try this with your card and report back.
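The sweep described above is easy to script against any benchmark that returns a score. A minimal sketch (the `run_benchmark` callable is hypothetical, standing in for a Port Royal run plus an Afterburner offset change):

```python
def find_memory_offset(run_benchmark, step=100, max_offset=2000):
    """Raise the memory offset step by step, benchmarking each time,
    and stop as soon as the score no longer improves (core left alone).
    run_benchmark(offset_mhz) -> score."""
    best_offset = 0
    best_score = run_benchmark(0)
    for offset in range(step, max_offset + step, step):
        score = run_benchmark(offset)
        if score <= best_score:  # scaling stopped or went negative: back off
            break
        best_offset, best_score = offset, score
    return best_offset
```

With a score curve that peaks at +1300, as in the post above, this returns 1300. Note it treats scores as noise-free; in practice you would average a few runs per step.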
Here's the Neptune OC 630 W BIOS!
I uploaded it, have fun.
But the fan curve is the same **** as on the 600 W MSI BIOS: only 2400 RPM at 100%!
So this BIOS is only good with a waterblock.
It's artifact free, I didn't notice anything, everything runs smoothly as hell at 4K 144 Hz, all ultra, native, 7600X paired with a 4090 Strix. I thought I could get 6-8% in gaming but obviously I can't. I wanted to test +2100/+2200 with Precision X1 tonight but it's pointless; I'll just leave it stock. For gaming I think I'll even roll the power slider back to 70-80%.
Guys, is this score normal for my clocks?
Or a bit low?
So my preliminary testing/playing around is telling me that these things aren't nearly as power hungry as Ampere. I pushed mine to the highest stable benchmark clocks I could get with the power limit and voltage slider maxed and was only hitting low-to-mid 500 W. Is this similar to what everyone else is seeing? That Samsung 8nm must have been absolute trash; my 3090 Strix was power limited no matter what BIOS I had on it, save for the 1000 W one.
Azazil1190 said:
Guys, is this score normal for my clocks?
Or a bit low?
Normal
mirkendargen said:
Nice collab, ty! Definitely something still limiting power consumption, presumably voltage, although GPU-Z reads the perf cap reason as Pwr. It wouldn't go past 106% on this BIOS when it should go up to 114%. 582.5 W max in FurMark, up 5 watts from the gigaoc 600 W BIOS.
Have you guys tried Miles Morales with every setting cranked up (including ray tracing) without DLSS and DLSS 3 (aka "frame generation")? There are scenes where it dips below 60 fps with the 4090!
Baasha said:
Have you guys tried Miles Morales with every setting cranked up (including ray tracing) without DLSS and DLSS 3 (aka "frame generation")? There are scenes where it dips below 60 fps with the 4090!
It's CPU bound. Turn on frame gen; it alleviates the CPU bottleneck and you'll hit 120 fps with the GPU still not breaking a sweat (under 300 W for me). For someone who thought he wouldn't be bottlenecked by a 5800X3D for at least a year or so, frame gen has been a godsend. It lets me turn off supersampling completely and often add DLAA, all with nearly double the fps.
What vBIOS are you guys with Suprims running? Is the 600 W Liquid X worth it, or does it mess up fan speeds?
Or possibly the Strix?
Fire2 said:
What vBIOS are you guys with Suprims running? Is the 600 W Liquid X worth it, or does it mess up fan speeds?
Or possibly the Strix?
I haven't seen a 600 W Liquid X BIOS. The one on TPU is 530 W. Supposedly the 600 W one only shipped on some early cards.
The Neptune BIOS wouldn't run Crysis 3 Remastered with graphics maxed out at +188 core, +1553 memory. DXGI errors.
The original V1 Strix BIOS runs it at +220/+1607.
mirkendargen said:
Certainly should, I flashed it with NVFlash.
Now the bad news: it didn't change my effective clocks or power consumption noticeably, if at all, so the theory of there being some phantom power limiter is a nope, or it's something at a lower level than the BIOS.
Flashed it on my Inno3D X3 OC and it worked as expected; it allowed for at least 20 W more in games compared to the 600 W Liquid X BIOS (the 1 next to mV indicates the power limit is hit).
The problem on air is that the fan RPM is capped to 2400 RPM, so the card cannot cool itself now (the left picture is at roughly 3200 RPM), but that will go away with a waterblock, and much lower temps will lower power draw as well, so maybe it will stop throttling.
There are definitely some phantom limiters though, as it doesn't really want to go past the 102% PL point; it always stops at 102% in both games and FurMark, even when set to 114%. On the MSI 600 W BIOS it would stop at 113% no matter what, even though the max was 125%.
It works better than the Gigabyte and MSI BIOSes on my card for one other reason though: it does not lose signal during boot. I have an external HDD connected and there is a black screen with just "-" for some seconds before the boot screen appears; the original Inno3D and Neptune BIOSes show it properly, while Gigabyte and MSI would lose signal twice during that period and miss the screen where you can enter BIOS.
dr/owned said:
I haven't seen a 600 W Liquid X BIOS. The one on TPU is 530 W. Supposedly the 600 W one only shipped on some early cards.
VGA Bios Collection: MSI RTX 4090 24 GB | TechPowerUp
Fire2 said:
What vBIOS are you guys with Suprims running? Is the 600 W Liquid X worth it, or does it mess up fan speeds?
Or possibly the Strix?
Folks have reported it doesn’t max out the fan speeds at 100%. Try the Strix BIOS. Note that with that BIOS you will lose one of the DP outputs.
KedarWolf said:
The Neptune BIOS wouldn't run Crysis 3 Remastered with graphics maxed out at +188 core, +1553 memory. DXGI errors.
The original V1 Strix BIOS runs it at +220/+1607.
+0 is 30 MHz higher on the Neptune BIOS, so talk in absolute clocks, not offsets. And +188 and +220 don't make sense; clocks change in 15 MHz increments, so you're actually at +195 and +225, which is a 30 MHz difference.
I'm surprised multiple people are seeing higher power usage. Did you actually have perfcap reason = power before? No matter what I do with a 600-630 W power limit, my perfcap reason is reliability voltage and I max out around 560-570 W.
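The 15 MHz quantisation is easy to model. A small sketch (rounding to the nearest bin is an assumption, consistent with the +188 to +195 and +220 to +225 examples above):

```python
STEP = 15  # NVIDIA boost clocks move in 15 MHz bins

def applied_offset(requested_mhz: int, step: int = STEP) -> int:
    """Round a requested clock offset to the bin the driver actually applies."""
    return round(requested_mhz / step) * step

assert applied_offset(188) == 195
assert applied_offset(220) == 225
assert applied_offset(220) - applied_offset(188) == 30  # the real gap between the two
```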
Got the free cable from Seasonic today.
Used this link on the 4090's release day:
https://seasonic.com/cable-request/
bmagnien said:
It's CPU bound. Turn on frame gen; it alleviates the CPU bottleneck and you'll hit 120 fps with the GPU still not breaking a sweat (under 300 W for me). For someone who thought he wouldn't be bottlenecked by a 5800X3D for at least a year or so, frame gen has been a godsend. It lets me turn off supersampling completely and often add DLAA, all with nearly double the fps.
Yes, playing it with DLSS 3 is an amazing experience, and I hardly feel any of the induced latency that people (reviewers etc.) keep talking about.
Still, with a 12900KF @ 5.2 GHz, having a game dip below 60 fps with the 4090 at 4K seems insane. I don't think the 13900K would fare that much better.
Nizzen said:
Got the free cable from Seasonic today.
Used this link on the 4090's release day:
https://seasonic.com/cable-request/
That cable looks MUCH nicer than the Corsair cable!
Graphics Performance – Razer Goes Ampere
by Brett Howse on March 11, 2021 9:30 AM EST
Introduction & Design | System Performance | Graphics Performance – Razer Goes Ampere | Display Analysis | Battery Life and Charge Time | Wireless, Audio, Thermals, and Software | Final Thoughts
Just announced at CES in January, NVIDIA’s latest laptop graphics card lineup is now based on their latest generation Ampere platform, built on the 8 nm Samsung process. Razer is one of the first out of the gate to ship laptops with the new GPU.
When NVIDIA launched Pascal two generations ago, they were very happy to conclude that the laptop and desktop variants were similar enough in performance to drop the M badge (for Mobile) on the laptop-destined GPUs. Thanks to the low TDP and high efficiency of Pascal-based GPUs, performance was similar, even if the power output would be near the top of what a laptop could handle. NVIDIA also released Max-Q versions, which are the same basic GPU, binned a bit better, and run at a lower power limit.
With the heat limitations of laptop chassis, matching the performance of laptops and desktops is a tough proposition. With Ampere, it gets even harder, as the power limits for the higher-end parts have gone up dramatically: an RTX 3070 desktop GPU has a higher TDP than an RTX 2080, all while laptop cooling is all but unchanged. So, for this generation, NVIDIA has gone back to explicitly designating their laptop GPUs, literally calling them "GeForce RTX 30[xx] Laptop GPU", indicating that the parts are destined for a thermally constrained environment. And, with less than half the TDP of the desktop cards, the performance of this generation of laptop parts is going to diverge more widely from the desktop than in past generations.
There are ways to combat this TDP discrepancy. The embarrassingly parallel nature of graphics workloads means that NVIDIA could, in theory, build wider GPUs that run at a lower frequency and avoid the steep increase in power that comes with higher voltage. This is something they even did with the Pascal launch, with the GTX 1070 for laptops offering a few more CUDA cores than its desktop counterpart; this time, however, the laptop GPUs offer far fewer CUDA cores than the desktop models.
The news is not all bad. Despite the Ampere-based laptop GPUs not being able to compete against their desktop counterparts, they are still a big upgrade over the RTX 20-series. Across the board, the new Ampere GPUs are about twice as wide as the outgoing models, and are based on the latest GPU architecture. It can easily get forgotten that while Ampere desktop parts reaped a good deal of their performance gains from increased power consumption, the laptop variants are still very much an upgrade over the outgoing RTX 20-series.
NVIDIA GeForce RTX 30 Series Laptop Specifications
Spec | RTX 3080 Laptop GPU | RTX 3070 Laptop GPU | RTX 3060 Laptop GPU |
---|---|---|---|
CUDA Cores | 6144 | 5120 | 3840 |
Boost Clock | 1245 - 1710 MHz | 1290 - 1620 MHz | 1283 - 1703 MHz |
Memory Bus Width | 256-bit | 256-bit | 192-bit |
VRAM | 8GB / 16GB GDDR6 | 8GB GDDR6 | 6GB GDDR6 |
TDP Range | 80 - 150W+ | 80 - 125W | 60 - 115W |
GPU | GA104 | GA104 | GA106 |
Architecture | Ampere | Ampere | Ampere |
Manufacturing Process | Samsung 8nm | Samsung 8nm | Samsung 8nm? |
Launch Date | 01/26/2021 | 01/26/2021 | 01/26/2021 |
The Razer Blade 15 comes with RTX 3060 and RTX 3070 options in the Base model, and RTX 3070 or RTX 3080 options in the Advanced model. Our Base model comes with the RTX 3070. Unfortunately, we don't have an RTX 2070 laptop on hand for comparisons. As usual, we'll start with a few synthetics, then some games.
3DMark
UL's 3DMark suite offers a range of tests of varying complexity. The Razer Blade 15 with the RTX 3070 finished about mid-pack compared to the other gaming systems we've seen. Gaming systems can be thin and light designs like the Acer Predator Triton 500 and the Razer Blade series, or more of a desktop replacement such as the Clevo and MSI models shown here, which have more thermal headroom. The RTX 3070 paired with the Core i7-10750H manages to stay fairly close to the RTX 2080 models despite their higher thermal capacities.
GFXBench
GFXBench added a DirectX 12 suite with the launch of version 5 and is cross-platform as well, although on the PC it renders at 32-bit precision, compared to 16-bit in the smartphone versions. In both tests the RTX 3070, despite sitting in the thin and light Razer Blade 15, was able to keep up with the RTX 2080 in much beefier laptops.
Tomb Raider
Although several years old now, the first game in the rebooted Tomb Raider franchise can still be demanding on laptops, though not for anything as powerful as this. Even at our maximum settings at 1920×1080, the laptop easily outpaces its 165 Hz display refresh. Although we don't normally test at 2560×1440, mostly because most gaming laptops don't support that resolution, the Razer Blade 15 still hit a 138 FPS average at the higher resolution. No issues here.
Rise of the Tomb Raider
The second game in the series added DirectX 12 and was far more demanding on the graphics side. Here the Razer Blade 15 dips below all of the RTX 2080 laptops, although it can still easily handle this game. At the native QHD resolution, the Razer Blade still managed a 94 FPS average.
Shadow of the Tomb Raider
The most recent game in the franchise is even more demanding, but whether due to drivers or the new Ampere platform, the RTX 3070 slots in with the RTX 2080 again: slightly ahead of the Max-Q variant, and slightly behind the larger form factor gaming laptops. What holds it back the most at this resolution, though, is the CPU, with the benchmark being GPU bound only 28% of the time. Bumping it up to QHD changes that to 82% GPU bound, with an average framerate of 79 FPS.
Strange Brigade
Another DirectX 12 game in the suite is Strange Brigade, set in 1930s Egypt. This is one of the newest games in the suite, so we have the least amount of data for it. It is not the most demanding game, with the Razer Blade hitting 118 FPS even at QHD resolution.
Far Cry 5
Ubisoft's Far Cry franchise is one of their most successful, and even though there is a newer New Dawn version of this game, it is based on the same engine. Far Cry can also be somewhat CPU bound, which shows why the Clevo with its desktop processor is so far ahead, but the Razer Blade still does very well here, outperforming several RTX 2080 laptops. At QHD the framerate only dips to a 91 FPS average, showcasing again that this game is very much CPU limited.
Shadow of War
The results in Shadow of War are very familiar, with the Razer Blade 15’s RTX 3070 really in the mix with the RTX 2080 laptops from a year ago. At QHD, the framerate dropped to a very playable 78 FPS average.
GPU Conclusion
Officially, NVIDIA's naming scheme for the latest generation of Ampere GPUs bound for notebooks is the RTX 30-Series Laptop GPUs. After a couple of generations of NVIDIA dropping the extra branding on their laptop GPUs, the difference in power requirements is just too large to ignore. Even with Pascal and Turing, desktop cards would always outperform their laptop brethren, but for Ampere the performance gap is too wide for the parts to share a name, so NVIDIA has switched back to explicit laptop designations.
That being said, the RTX 3070 has shown itself to be very capable, mixing it up with the bigger, more expensive RTX 2080 from last generation. The Razer Blade Base model we are testing is also somewhat held back by the Core i7-10750H, as most of the time we are sampled the very top spec. It is refreshing for Razer to offer the Base model, as it does highlight the differences. Although the less-expensive Razer Blade 15 Base model is a thin and light design that does not offer the biggest, fastest CPU, the hex-core i7 and RTX 3070 are still a great combo and fit the laptop very well.
Razer’s inclusion of a QHD display in the review unit seems to be the perfect fit for this combo as well, and although the Advanced model also offers a 1080p unit with an even higher refresh rate, the RTX 3070 Laptop GPU feels like it is a perfect fit with a high-refresh QHD panel. Even at the native resolution, framerates are more than acceptable.
GPU-Z and NVIDIA News — NVIDIA WORLD
TechPowerUp has released a new version of its popular GPU-Z utility designed to get all available information about your video card and monitor its parameters.
The new version of the GPU-Z utility, numbered 2.50, adds support for new video cards from Intel and the NVIDIA GeForce RTX 4090.
The full list of changes is presented below.
You can download the free GPU-Z utility from our website.
TechPowerUp has released a new version of its popular GPU-Z utility designed to get all available information about your video card and monitor its parameters.
The new version of the GPU-Z utility, number 2.46, has received support for new video cards from both AMD and NVIDIA, added support for Alder Lake Mobile integrated graphics, and made numerous fixes to the utility.
See the full list of changes below:
- Added support for AMD Radeon RX 6950 XT, RX 6750 XT, RX 6650 XT.
- Improved support for Intel ARC.
- Added support for NVIDIA GeForce RTX 2050 (GA107), NVIDIA A30.
- The updated driver no longer requires a processor with SSE2 support.
- Fixed 2022 AMD drivers being tagged as "Crimson".
- Resizable BAR detection on systems with AGP video cards has been fixed.
- Fixed «send me my validation id» option not sending email.
- Added iGPU support to Alder Lake Mobile.
- Added support for Glenfly GPUs.
You can download the free GPU-Z utility from our website.
TechPowerUp has released a new version of its popular GPU-Z utility designed to get all available information about your video card and monitor its parameters.
The new version of the GPU-Z utility, number 2.44, received changes related to reporting of Resizable BAR technology, and also added support for a huge number of video cards from both AMD and NVIDIA.
GPU-Z
The list of changes in GPU-Z 2.44.0 is as follows:
- Improved Resizable BAR detection.
- Resizable BAR is now reported in the advanced panel.
- GPU-Z will report "Vista 64" as the operating system, not "Vista64".
- Screenshots are now uploaded via https.
- Added vendor definition for Vastarmor.
- Fixed some GeForce RTX 3060 cards being labeled as LHR.
- Updated AMD Radeon RX 6600 release date.
- Added support for NVIDIA GeForce RTX 3050, RTX 3080 12 GB, RTX 3070 Ti Mobile, RTX 3050 Ti Mobile (GA106), RTX 2060 12 GB, GT 1010, MX550, GTX 1650 Mobile (TU117-B), RTX A2000 (GA106-B), RTX A4500, A10G, A100 80 GB PCIe, CMP170HX, CMP70HX.
- Added support for AMD Radeon RX 6400, RX 6500 XT, RX 6300M, RX 6500M, W6300M, W6500M, W6600M.
- Added support for non-K Intel Alder Lake processors, mobile Alder Lake, and Rocket Lake Xeon.
You can download the free GPU-Z utility from our website.
TechPowerUp has released another update of its popular GPU-Z utility, designed to get all available information about your video card and monitor its parameters.
The new version of the GPU-Z utility, number 2.43, received only 5 changes, which is not surprising, since only 4 days have passed since the last release. However, the application contains not only bug fixes, but also additions to the database.
GPU-Z
The list of changes in GPU-Z 2.43.0 is as follows:
- It is now possible to read power consumption limits in NVIDIA Ampere cards for laptops in the Advanced -> NVIDIA BIOS menu.
- Fixed a crash on startup on some older Radeon cards.
- Fixed execution block counter for Intel Rocket Lake.
- Fixed a crash in the screenshot function under Windows XP. The bug first appeared in version 2.39.
- Added support for NVIDIA Quadro RTX 3000 (TU106-B).
You can download the free GPU-Z utility from our website.
The TechPowerUp website has prepared another update of its popular GPU-Z utility, designed to get all available information about your video card and monitor its parameters. The update is numbered 2.42.0.
In anticipation of the release of a new series of central processors, the release of a fresh version of the utility seems to be quite reasonable. As you might expect, it adds support for Intel Alder Lake-S CPU integrated graphics, as well as several new graphics cards from both NVIDIA and AMD.
GPU-Z 2.42.0
Changes in GPU-Z 2.42.0 are listed below:
- Added support for Intel Alder Lake and Tiger Lake Server.
- Added an indication of reduced-hashrate NVIDIA cards in the GPU name field, for example "GA102 (LHR)".
- Added support for RTX 3060 variant based on GA104.
- Added support for detecting Resizable BAR technology in Radeon RX 5000 series cards.
- Added «-log» command line option that sets the name of the sensor log file and starts logging after the utility is run.
- Improved read stability for EVGA iCX sensors.
- The Radeon HD 5000 Series will now display the ATI logo.
- Fixed an issue where DirectX 12 support was not displayed on AMD Navi 2x cards.
- Fixed a crash when taking a screenshot.
- Fixed crash in render test.
- Fixed a crash on some systems when reporting on Resizable BAR.
- Fixed memory clock reading on some AMD APUs.
- Added Intel Tiger Lake release date.
- Added support for NVIDIA RTX 3050 Ti Mobile (GA106), T1200 Mobile, GRID K340, GRID M30, Q12U-1.
- Added support for AMD Radeon Pro W6800X, Barco MXRT-8700.
You can download the free GPU-Z utility from our website.
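The -log option mentioned above starts sensor logging to a plain text file. A minimal sketch of how such a log could be post-processed afterwards; the column names here are assumptions for illustration, and the real file's layout may differ:

```python
import csv
import io

# Hypothetical excerpt of a GPU-Z sensor log (comma-separated readings).
log = """Date,GPU Clock [MHz],Memory Clock [MHz],GPU Temperature [C]
2021-10-01 12:00:00,1900,9501,64
2021-10-01 12:00:01,1905,9501,65
"""

rows = list(csv.DictReader(io.StringIO(log)))
# Average the temperature column across all logged samples.
avg_temp = sum(float(r["GPU Temperature [C]"]) for r in rows) / len(rows)
print(avg_temp)  # 64.5
```

In a real run you would open the file named by `-log` instead of the inline string.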
TechPowerUp has updated GPU-Z to version 2.41.0.
The utility, which provides detailed information about the video card and its modes of operation, has also received database additions covering new video cards from both AMD and NVIDIA.
GPU-Z 2.41.0
Changes in GPU-Z 2.41.0 are listed below:
- Windows 11 detection added.
- Improved TMU prediction for unknown (future) NVIDIA GPUs.
- Improved frequency reporting on AMD RDNA2 professional cards.
- The installer no longer adds a version number in the program manager, which improves Winget support.
- Always displays the advertised Navi frequencies in the extended panel, even if some report 0.
- Fixed «Reading BIOS is not supported on this device» error on some laptops with NVIDIA dGPUs.
- Fixed «Browse» button on ASUS ROG version with non-standard DPI settings.
- Updated Chinese translation.
- Fixed frequency calculation on old ATI Radeon DDR / 7200 DDR cards.
- Added transistor count and die size for AMD Cezanne and ATI R100 & RV100.
- Added support for AMD Radeon RX 6600 XT, Pro W6800, W6600, Radeon HD 7660G (AMD R-464L APU).
- Added support for NVIDIA CMP 90HX, 50HX, 40HX, 30HX, T1000, T400, A100-SXM-80 GB, A10, A5000, A4000, A3000, A2000, RTX 3050 Mobile Series (GA107-B).
You can download GPU-Z v2.41.0 on our website.
An information utility that gives detailed information about the video card and its modes of operation, GPU-Z, has been updated to version 2.31.0.
The new version of the utility has improved monitoring capabilities for video cards from Intel and AMD, fixed errors in the operation and launch of the utility, and added a huge number of new models of video cards from both NVIDIA and AMD.
GPU-Z
The full list of changes in GPU-Z v2.31.0 is as follows:
- Fixed DirectML detection on new builds of Windows Insider.
- Added GPU voltage monitoring for Intel integrated graphics.
- The AMD Radeon Pro driver now reports version number information.
- Added command line arguments: -install and -installSilent.
- Replaced installer with InnoSetup.
- Improved driver version detection on some systems with NVIDIA GPUs.
- On the "Advanced" tab, if Vulkan or OpenCL cannot be detected, the message "not supported" is displayed instead of "not found".
- GPU-Z startup is now delayed on slow machines to avoid errors.
- Added support for graphics cards: NVIDIA GeForce RTX 2070 Super Mobile, RTX 2080 Super Mobile, RTX 2060 Max-Q, RTX 2070 Super Max-Q, RTX 2080 Super Max-Q, RTX 2070 Mobile Refresh, RTX 2060 Mobile Refresh, GTX 1650 Mobile, GTX 1650 Ti Mobile, GeForce MX350, GRID RTX T10 (GeForce Now), Quadro RTX 8000, Tesla P40, Quadro 500M, GeForce GTX 1060 (Microsoft), GeForce GT 610 (GF108), GeForce GT 730M.
- Added support for AMD Radeon Pro 580, Radeon Pro V340, Apple 5300M and 5500M.
You can download GPU-Z v2.31.0 on our website.
TechPowerUp has updated GPU-Z to version 2.30.0.
The new version of the utility fixes bugs in its operation and adds some new models of video cards.
GPU-Z 2.30.0
Full list of changes is as follows:
- Added an advanced tab for the GPU hardware-accelerated scheduler (Windows 10 20H2).
- The Advanced tab now shows WDDM 2.7, Shader Model 6.6, DirectX Mesh Shaders, and DirectX Raytracing Tier 1.1.
- Fixed a DirectML bug on Windows 10 Insider build 19041.
- The graphics device driver registration path is now located in the Advanced -> General tab.
- NVIDIA VDDC sensor renamed to GPU Voltage.
- The AMD-only GPU Power Draw sensor was renamed to GPU Chip Power Draw for clarity.
- The Windows Basic Display Driver no longer appears in WHQL/Beta status.
- Updated Renoir 7nm process information.
- Added support for AMD Radeon RX 590 GME, Radeon Pro W5500, Radeon Pro V7350x2, FirePro 2260, Radeon Instinct MI25 MxGPU, AMD MxGPU.
- Added support for Intel UHD Graphics (i5-10210Y).
- Added support for NVIDIA GTS 450 Rev 2.
- Fixed crash when detecting DirectX 12.
You can download GPU-Z v2.30.0 on our website.
Another update of the popular TechPowerUp GPU-Z utility, designed to obtain all available information about your video card and monitor its parameters. The update is numbered 2.25.0.
The new version boasts the placement of information about supported graphics technologies, improved stability, improved and expanded hardware databases.
GPU-Z
The full list of changes in GPU-Z 2.25.0 is below:
- The first tab now displays the status of Vulkan, DirectX Raytracing, OpenGL and DirectML support.
- Fixed blue screen in QEMU/KVM virtual machines caused by MSR access.
- Improved frequency display for AMD Navi.
- The Advanced tab now displays base, game and boost frequencies in Navi.
- Added an exception for stuck fan frequencies when fan stop is activated on AMD graphics cards.
- Added exception for 65535 rpm fan speed displayed in Navi.
- The message Finished is displayed when the BIOS has finished uploading to the site.
- Added support for NVIDIA Quadro P2200, Quadro RTX 4000 Mobile, Quadro T1000 Mobile.
- Added support for AMD Radeon Pro WX 3200, Barco MXRT 7600, 780E Graphics, HD 8330E.
- Added support for Intel Ice Lake.
You can download the GPU-Z 2.25.0 utility from our website.
Another update of the popular diagnostic program from TechPowerUp.
In addition to fixing bugs, the new version adds support for EVGA iCX on the RTX 2080 FTW3 and RTX 2080 Ti FTW3 graphics cards, as well as support for the NVIDIA GeForce RTX 2060. Plus, minor improvements have been made, as usual.
GPU-Z version 2.16
You can download the GPU-Z utility from our website.
MSI GTX 1080 Ti Gaming X in mining
The power subsystem has been redesigned and strengthened; it now uses an "8 + 2" scheme, where eight phases feed the graphics processor and two feed the video memory. There is a definite power headroom compared to the reference design. A uP9511P PWM controller from uPI Semiconductor handles control.
NVIDIA RTX 3000 Series Overclocking Guide to Increase Mining Profitability
- Click the Wallet menu button.
- Click on Add Wallet.
- Enter a name for the wallet, for example, Ethereum.
- Select a pool from the provided list. For the free plan — 2Miners.
- Select the servers that are closest to you.
- Enter the wallet address in the Wallet section. Alternatively, you can create a wallet on the EXMO or Binance exchange.
- Select a mining program from the list. The most popular option is Claymore Dual 15.0.
The Gigabyte GTX 1070 Ti Gaming 8G turned out to be a pretty good option: it is a whopping 75% more powerful than the GTX 1060 in terms of the GPU alone (not to mention the extra memory and the number of TMUs and ROPs, which also affect performance). It is also 27% more powerful than the regular GTX 1070, and was found for the same price.
Mining on GTX 1080 Ti and 1080. Tablet, overclocking, profitability, consumption, comparison
3. Go to the Tuning tab and set the overclocking values. In the "Core Clock offset, MHz" field set 80, i.e. add 80 MHz to the core frequency; in the "Memory Clock offset, MHz" field set 800. This works the same as in Hive OS: divide the value by 2, so a value of 800 increases the memory frequency by 400 MHz.
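The divide-by-two rule described above can be expressed as a one-line sanity check; this is only an illustrative sketch of the arithmetic, not part of any mining tool:

```python
# Sketch of the halving rule: on these tools the entered memory offset is
# divided by two to get the actual memory clock increase.
def actual_memory_increase_mhz(offset_mhz: float) -> float:
    return offset_mhz / 2

print(actual_memory_increase_mhz(800))  # 400.0
```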
The only thing that put us off this option was, of course, the higher price. Compared to the reference model, only the order of these ports has been changed.
How to overclock GTX 1080Ti for Ethereum mining: instruction
Commands in RaveOS that will help you set up your video card — Forum for Miners
Energy Efficiency in Ethereum Mining. A good GTX 1060 6 GB MSI Gaming X or ASUS ROG STRIX could be taken for 357 24 thousand
Asus 1080 Ti Turbo do not take Ether for mining. For this card, the tablet actually does not work, when you try to turn on the tablet or overclock, the card cuts the Power Limit and gives out 31-32 Mh / s. If you just turn on the tablet, you will get the same 31 Mh/s.
GDDR5X ETH mining hashrate booster (pill for 1080, 1080 Ti) NVIDIA download
RaveOS for mining: what it is, how to set it up, requirements, pros and cons
Consider video card hashrate and consumption. This table should serve as a benchmark for you: your values should be within roughly 3-5% of those listed. Below we will analyze how to achieve such or better results with the help of the pill, settings, overclocking, and power limits.
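The 3-5% tolerance mentioned above can be checked mechanically. A small sketch (the function name and sample numbers are illustrative, not taken from any benchmark table):

```python
def within_benchmark(measured_mh: float, reference_mh: float,
                     tolerance: float = 0.05) -> bool:
    """True if the measured hashrate is within ±tolerance of the reference."""
    return abs(measured_mh - reference_mh) / reference_mh <= tolerance

print(within_benchmark(31.0, 32.0))  # True: ~3.1% below the reference
print(within_benchmark(28.0, 32.0))  # False: 12.5% below the reference
```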
Unlocking LHR 3060, 3070, 3080 video cards for mining. Instructions, hashrate and profitability comparison 2022
- Install GPU-Z.
- Launch GPU-Z.
- Make sure the correct graphics card is selected (bottom left).
- Click the Sensors tab.
- Scroll down and find the PerfCap Reason sensor.
The final window is similar to the AURA utility, but as already mentioned, everything is combined here and you do not have to install anything separately. When changing colors, the glow changes in both lighting zones at once. You can turn the backlight off, apply a static glow, or give the backlight some kind of dynamic effect. Fans of RGB lighting will be delighted.
Gigabyte Aorus 1080 Ti Xtreme Edition in mining (GV-N108TAORUS-X-11GD) • Overclocking settings for mining with RTX 3080 • Gigabyte Aorus 1080 Ti in mining (GV-N108TAORUS-11GD). Write if you have any questions, and we will figure it out!
After overclocking the video card
Raster operation units (ROPs) are responsible for anti-aliasing in games, and their number is the same for the GTX 1070, 1070 Ti and 1080; only the GTX 1060 lags behind. This means that with anti-aliasing enabled its performance will sag more, although it already lags seriously in other, more important parameters: the number of shader units, TMUs, and memory bandwidth.
Adding 3 more MH/s to the 1080 Ti on Windows using Nvidia Profile Inspector. Note that this can lead to video card throttling after a few minutes of mining.
Overclocking settings for mining with RTX 3060 Ti
Overclocking 1080 TI for mining ether (ETH) on Windows
First download the program — Download, or r2 — Download. Next, unpack it to a convenient place, run the file ETHlargementPill-r2.exe, and then start the miner. That's it. (Before launching, be sure to extract all files from the archive and only then launch.) PASSWORD — 000000
In the future, NBMiner will be improved, and the developers themselves promise to "squeeze out" almost all of the hashrate that the LHR limiter takes away from miners. New updates are promised as early as 2023, so we recommend everyone involved to follow the news.
How do you identify an LHR video card?
RaveOS is a mining OS that allows you to control, configure and manage a GPU or ASIC farm. Already in the free version, management of one miner is available, statistics and a free mobile application are provided, support for many models of video cards and ASICs is available. The official page of the project is raveos.com/ru/.
Additional secrets
Use the GPU-Z program to check the card's revision and whether it is an LHR model.
You can also specify Power Limit, W, indicated in watts. If you start limiting the card, its hashrate will drop, but consumption and heating will decrease too. You can see the card's minimum and maximum power draw in watts in the rig window (Overview).
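When dialing down the power limit, the figure that matters is hashrate per watt. A tiny helper sketch; the sample numbers are illustrative only, not measured values:

```python
def efficiency_mh_per_w(hashrate_mh: float, power_w: float) -> float:
    """MH/s per watt — higher is better when lowering the power limit."""
    return hashrate_mh / power_w

# Illustrative comparison: 45 MH/s at 220 W vs 43 MH/s at 180 W.
print(round(efficiency_mh_per_w(45, 220), 3))  # 0.205
print(round(efficiency_mh_per_w(43, 180), 3))  # 0.239 — more efficient
```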
Calculation of mining efficiency on 1080 and 1080 Ti
- Number of GPC clusters: 6.
- Streaming multiprocessors: 28.
- CUDA cores: 3584.
- Texture blocks: 224.
- ROP: 88.
- GPU base frequency: 1480 MHz.
- GPU boost frequency: 1582 MHz.
- Video memory frequency: 5505 MHz.
- L2 cache: 2816 KB.
- Total video memory: 11264 MB GDDR5X.
- Memory bus: 352-bit.
- Video memory bandwidth: 484 GB/sec.
- Texture fill rate: 331.5 GigaTexel/sec.
- Tech. process: 16 nm.
- Number of transistors: 12 billion.
- Ports: 3 DP, 1 HDMI.
- Recommended PSU for the system: 600 W.
- TDP: 250 W.
- Maximum operating temperature: 91 degrees Celsius.
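Two of the figures above can be cross-checked from the others: texture fill rate is base clock × TMUs, and memory bandwidth is the effective (doubled) memory clock × bus width in bytes. A quick sketch:

```python
# Cross-check of derived GTX 1080 Ti figures from the spec list above.
base_clock_mhz = 1480
tmus = 224
texture_fill_gt = base_clock_mhz * tmus / 1000  # GigaTexels/s
print(texture_fill_gt)  # 331.52 — matches the quoted 331.5 GigaTexel/sec

mem_clock_mhz = 5505                 # GDDR5X command clock
effective_mt_s = mem_clock_mhz * 2   # 11010 MT/s data rate
bus_bytes = 352 // 8                 # 352-bit bus = 44 bytes per transfer
bandwidth_gb_s = effective_mt_s * bus_bytes / 1000
print(round(bandwidth_gb_s, 1))  # 484.4 — matches the quoted 484 GB/sec
```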
Two 100 mm fans are responsible for active cooling; the blades themselves are slightly shorter at 95 mm, but still impressive compared to competitors, and are made using Torx 2.0 technology. These are fourteen-blade Power Logic PLD10010B12HH fans, capable of operating in the 0-2500 rpm range.
EVGA GTX 1080 Ti ELITE in mining?
RaveOS is a mining operating system created by representatives of the cryptocurrency community. When developing the system, the shortcomings of existing platforms, the experience of many years of mining cryptocurrencies and the recommendations of miners were taken into account. The new OS supports over 600 pools and over 50 miners.
In the Farms tab, select the desired farm where the 1080 Ti cards are. For comparison: the RTX 3070 Ti LHR gives 55 MH/s, while the RTX 3070 without LHR gives 65 MH/s.
| GTX 1060 6G | GTX 1070 | GTX 1070 Ti | GTX 1080 |
Number of shaders | 1280 | 1920 | 2432 | 2560 |
Shader frequency, MHz | 1506-1708 | 1503-1683 | 1607-1683 | 1607-1733 |
Memory size | 6 GB | 8 GB | 8 GB | 8 GB |
Bus and memory type | 192-bit GDDR5 | 256-bit GDDR5 | 256-bit GDDR5 | 256-bit GDDR5X |
Memory frequency, MHz | 8000 | 8000 | 8000 | 10000 |
Memory bandwidth, GB/s | 192 | 256 | 256 | 320 |
Number of texture units (TMU) | 80 | 120 | 152 | 160 |
Number of ROPs | 48 | 64 | 64 | 64 |
Power consumption, W | 120 | 150 | 180 | 180 |
Recommended PSU power, W | 400 | 500 | 500 | 500 |
Stronger than previous | — | >48% | >27% | >8% |
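The "Stronger than previous" row roughly follows shader count × peak boost clock from the rows above. A sketch reproducing it; this is a crude throughput proxy, and real mining uplift also depends on memory bandwidth, so the match is approximate:

```python
# Approximate uplift between neighbouring cards in the table above,
# using shaders × peak boost clock (MHz) as a crude throughput proxy.
def uplift_pct(shaders, boost, prev_shaders, prev_boost):
    return (shaders * boost / (prev_shaders * prev_boost) - 1) * 100

print(round(uplift_pct(1920, 1683, 1280, 1708)))  # 48 (table: >48%)
print(round(uplift_pct(2432, 1683, 1920, 1683)))  # 27 (table: >27%)
print(round(uplift_pct(2560, 1733, 2432, 1683)))  # 8  (table: >8%)
```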
Contents of the article:
- 1 NVIDIA RTX 3000 Series Overclocking Guide to Increase Mining Profitability
- 2 Mining on GTX 1080 Ti and 1080.