Graphics Card Comparison
Top 5 graphics cards
You can see the full review of each video card on its review page.
NVIDIA GeForce RTX 4090
NVIDIA GeForce RTX 4080
AMD Radeon PRO W7800
NVIDIA GeForce RTX 4070 Ti
AMD Radeon RX 7900 XTX
Go to the graphics card performance ranking →
Top 5 best-value graphics cards
NVIDIA TITAN Xp
NVIDIA Tesla P40
NVIDIA GeForce GTX 1070 SLI (portable)
NVIDIA Tesla P4
Go to the best-value graphics card ranking →
Here are some of the most popular graphics card comparisons of recent months.
1060 6 GB
3050 8 GB
Popular graphics cards
These graphics cards have attracted the most attention in recent months.
Comparison / 24 graphics cards tested, July 2023 — Les Numériques
Choosing well: the key points, our tests
By Guillaume Henri
The graphics card is the key component for displaying video games smoothly. But choosing the right model is often difficult: 3D performance, power consumption, noise… which card best balances these criteria?
AMD and Nvidia share the market with their GeForce RTX, GeForce GTX and Radeon RX ranges; Intel will soon return with its Xe line. The offering is broad and the battle is fought on several fronts, first and foremost in-game performance. With each generation, chip performance rises, clock speeds climb and graphics memory evolves. Yet these ever-faster graphics cards do not drive up system power consumption: the race for energy efficiency benefits the consumer, since lower consumption means less cooling and therefore more silence.
The rise in graphics power is necessary with the democratization of 4K. Playing in UHD at higher frame rates and with more detail demands considerable power. Alongside this performance increase, AMD and Nvidia also offer technological advances that make games ever more realistic: ray tracing, AI-based image enhancement, and improved handling of shadows and fine elements (hair, grass, etc.). The choice is also vast, with AMD and Nvidia gaming cards priced from around a hundred euros to well over a thousand. Our graphics card comparison is here to guide your choice, whether you play on a Full HD or Ultra HD screen. Every GPU range (such as Nvidia's Ampere) is tested in our laboratory to help you make the best choice when buying a new Nvidia or AMD graphics card.
The test procedure
Every graphics card that passes through our test laboratory goes through the same protocol. We first evaluate gaming performance by measuring the frame rate in 11 games. These games are run at three resolutions, from Full HD to 4K. Depending on the title, we run these tests with classic rendering (rasterization), then with ray tracing enabled and, where possible, with DLSS or FSR. Using dedicated equipment, we also measure each model's power consumption, in games and at idle. Finally, with our sound level meter, we evaluate the noise produced by the cooling system.
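Frame-rate measurements like these are usually folded into a single performance index. A minimal sketch of one common approach, a geometric mean of per-test ratios (the aggregation method and all the sample numbers here are illustrative assumptions, not the lab's actual formula):

```python
from statistics import geometric_mean

def performance_index(fps_by_test: dict, reference: dict) -> float:
    """Index a card against a reference card using the geometric mean
    of per-test frame-rate ratios, scaled so the reference card = 100."""
    ratios = [fps_by_test[test] / reference[test] for test in reference]
    return 100 * geometric_mean(ratios)

# Hypothetical frame rates (fps) for a handful of game/resolution tests.
reference = {"Game A 1080p": 120.0, "Game A 4K": 40.0, "Game B 1080p": 90.0}
candidate = {"Game A 1080p": 150.0, "Game A 4K": 50.0, "Game B 1080p": 108.0}
print(round(performance_index(candidate, reference)))  # 123
```

A geometric mean is a common choice here because one outlier title cannot dominate the index the way it would with an arithmetic mean of raw frame rates.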
Compare graphics cards
- Nvidia GeForce RTX 4070: from €635.89
- Nvidia GeForce RTX 4060: from €327.98
- Nvidia GeForce RTX 4060 Ti 8 GB: from €439.99
- Nvidia GeForce RTX 4070 Ti 12 GB: from €898.89
- Nvidia GeForce RTX 3060 Ti: from €385.94
- AMD Radeon RX 6700 XT: from €348.89
- Nvidia GeForce RTX 3080 Ti: from €1445.99
- AMD Radeon RX 6600 XT: from €449.95
- Nvidia GeForce RTX 3060: from €307.90
- Nvidia GeForce RTX 3080: from €900.01
- Nvidia GeForce RTX 3070: from €458.98
- AMD Radeon RX 6800 XT: from €583.89
- AMD Radeon RX 7600: from €294.99
- Nvidia GeForce RTX 4090: from €1708.89
- AMD Radeon RX 6600: from €213.94
- AMD Radeon RX 7900 XTX: from €1105.94
- Nvidia GeForce RTX 4080: from €1298.98
- Nvidia GeForce RTX 3090: from €1872.60
- AMD Radeon RX 7900 XT: from €970.98
- AMD Radeon RX 6900 XT: from €929.98
- Nvidia GeForce RTX 3070 Ti: from €543.77
- Nvidia GeForce RTX 3050: from €228.89
- Nvidia GeForce RTX 3080 12 GB: from €1197.07
- AMD Radeon RX 6800: from €587.47
- AMD Radeon RX 6500 XT: from €190
Comparisons & buying guides
The editorial team's buying guides
What are the best graphics cards?
What are the best AMD Radeon graphics cards?
What are the best graphics cards under €500?
What are the best Nvidia GeForce RTX graphics cards?
Best Video Card for Video Editing and Rendering
Buying a new graphics card (GPU) can be a daunting task, especially if you’re not familiar with all the tech jargon.
Most consumers in the graphics card market need only know how the graphics card will perform in their favorite games and their purchase decision will be made. But if you’re looking to buy a GPU for, say, video editing or 3D rendering, finding relevant information will be significantly more difficult.
Particularly in the case of 3D rendering, you will save a lot of time and money if you can render as quickly as possible.
We are here to help you make the right choice.
If you already feel like you know everything you need to know, you can also click the button below to continue with our recommendations.
Jump straight to the rankings
What makes a graphics card different for rendering
If you’re new to graphics technology, chances are you have a lot of questions.
This guide assumes a basic level of familiarity, i.e. you know that GPU means graphics processing unit and that a graphics card is an expansion card containing one or more GPUs.
Professional vs. Consumer GPUs
Nowadays, consumer GPUs and professional GPUs look more similar than ever before.
Some GPUs even blur the line a bit, like AMD’s Radeon VII or Nvidia’s Titan line.
After all, these are consumer cards with professional-level prices and unusually large amounts of video memory.
Let’s take a look at Nvidia RTX GPUs today. What is the difference between a consumer GeForce RTX card and a professional Quadro RTX card?
They both use the same hardware architecture and can sometimes have identical specifications down to the cores and video memory, yet the Quadro costs several times more… why?
Let’s see:
The big difference between consumer and professional GPUs is software.
Nvidia’s Quadro cards and AMD’s FirePro cards are optimized specifically for high-end applications, with guaranteed compatibility with industry-leading software.
In addition, they are backed by years of vendor support and are viewed as a long-term investment, which is not the case with consumer graphics cards.
Professional GPUs are tested with industry applications and ship with drivers optimized to perform at their best in them. Many high-end industry applications, such as the popular Solidworks CAD package, have special features (such as RealView in Solidworks) that are only supported with a professional GPU.
Some software vendors will only support you and help with bugs or maintenance on your workstation if you are using a professional graphics processor.
This is critical for large companies where server or workstation uptime is paramount to keep their costly staff on functional PCs at all times.
Companies with sufficient funds buy Quadro GPUs. Software developers support companies with sufficient funds. These companies also usually have specialized IT staff with sufficient experience.
Which would you say is "less hassle and more efficient":
- The Solidworks support engineer talks to an IT person in the company, who can then troubleshoot Solidworks issues on all company PCs.
- Many Solidworks support engineers talk to hundreds of individual users who have no understanding of PCs, technology or IT.
When you buy a Pro-GPU, you buy a Pro-Support. (In addition to some hardware features)
A consumer GPU is great for games and consumer applications.
They can also be really good at photo and video editing. Consumer GPUs are also great for graphics rendering, since rendering engines usually don’t rely on features that only professional GPUs provide.
A professional GPU, in general, is not great at gaming but is superior for editing, rendering and just about any other professional-level task; it will, however, cost much more for the same raw performance.
Therefore, if you know that your chosen software does not use features that require a professional GPU, and you do not need the huge amounts of video memory found on professional GPUs, a consumer GPU will almost always be the best choice, especially when looking at the price-to-performance ratio.
Let’s go into some details.
CUDA Cores or Why Choose Nvidia
CUDA cores are the special compute cores inside Nvidia GPUs, and they are exclusive to Nvidia.
CUDA stands for Compute Unified Device Architecture, and these cores essentially provide raw computing power rather than pure graphics power.
This is why they are used to implement resource-intensive effects in supported games, such as Nvidia HairWorks, where ordinary graphics horsepower alone is not enough to do the job.
For editing and rendering, CUDA cores are an indispensable source of additional computing power.
Most editing and rendering applications are optimized to use CUDA cores in one way or another, so having more of them in your system will allow you to achieve better and faster rendering of your models, videos and more.
Some popular GPU rendering engines such as Octane and Redshift are built on top of Nvidia’s CUDA, which means that you can only use them if you have an Nvidia GPU. In such engines, rendering performance is almost linearly dependent on the number of CUDA cores available to your GPU.
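That near-linear relationship makes back-of-the-envelope estimates easy. A minimal sketch (the core counts and the 120 s timing are illustrative, not measured data):

```python
def estimate_render_time(base_time_s: float, base_cores: int, target_cores: int) -> float:
    """Estimate per-frame render time on another GPU, assuming render
    time is inversely proportional to the number of CUDA cores (the
    near-linear scaling described above; real scaling is approximate)."""
    return base_time_s * base_cores / target_cores

# If a 2560-core GPU takes 120 s per frame, a 5120-core GPU of the
# same architecture should take roughly half as long.
print(estimate_render_time(120.0, 2560, 5120))  # 60.0
```

In practice the model only holds within one GPU architecture; comparing across generations still requires engine benchmarks like OctaneBench.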
Some applications, such as Adobe After Effects or Premiere Pro, support both Nvidia and AMD GPUs, but often run faster on Nvidia GPUs.
GeForce or Quadro?
GeForce gives you the best value for money when it comes to things like video editing and raw 3D application performance.
However, since GeForce is a brand aimed at gamers and general consumers, some features that high-end professionals need may be missing.
Quadro can provide high performance in many applications , but the main attraction is the software support for enterprise users.
Any performance gain comes at the cost of a significant price premium over GeForce.
Whether you need ECC (error-correcting code) memory or the most dependable drivers for your professional applications, the Quadro is the better choice.
GeForce is aimed at games and consumers, Quadro is aimed at business users and enterprise users.
In addition, Quadro will have a much larger number of CUDA cores and video memory, and will also sometimes have exclusive features such as ECC, which we will discuss in more detail later.
We recommend the Quadro to users who:
- Don’t mind the high prices
- Need ECC, more video memory, higher floating-point precision or higher monitor resolutions
- Require special software features that are only supported on professional-grade GPUs (e.g. Solidworks, AutoCad…)
- Need regular maintenance and support from the software vendor
- Require rigorous hardware testing for reliability and stability in a corporate or server environment, even with 24/7 uptime
We recommend GeForce to users who:
- Do not use features that are only supported on professional GPUs
- Want more bang for their buck
- Do not need a lot of VRAM or ECC
- Do not rely on regular software support
- Want to play games from time to time
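The two checklists above collapse into a simple decision rule. A sketch (the function name and the 11 GB consumer-VRAM cutoff are illustrative assumptions based on the cards discussed in this article):

```python
def recommend_gpu_line(needs_ecc: bool, needs_certified_drivers: bool,
                       needs_vendor_support: bool, vram_needed_gb: int) -> str:
    """Map the Quadro/GeForce checklists to a product line. Any single
    professional requirement pushes the recommendation to Quadro."""
    if (needs_ecc or needs_certified_drivers or needs_vendor_support
            or vram_needed_gb > 11):
        return "Quadro"
    return "GeForce"

print(recommend_gpu_line(False, False, False, 8))  # GeForce
print(recommend_gpu_line(True, False, False, 8))   # Quadro
```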
Do I need an RTX?
Nvidia’s Turing architecture was the first to introduce RTX, which adds several new features on top of the CUDA cores, namely RT cores and Tensor cores.
RT cores are built for ray tracing and are designed exclusively for that purpose.
For professional rendering, a GPU with more ray-tracing power can significantly speed up the workload, at least in supported applications.
If you don’t need ray tracing (especially if you’re focused on, say, video editing rather than 3D rendering), then having RT cores is unlikely to make much of a difference.
Tensor cores are a different story and are getting a little more interesting.
Consumer GPUs use Tensor cores to achieve things like DLSS (Deep Learning Super Sampling), which uses artificial intelligence to improve image quality.
For professional use, Tensor cores offer excellent FP16/FP32 and INT4/INT8 throughput, making them ideal for neural networks, deep learning, artificial intelligence and so on.
If these areas seem like something your business wants to explore, the Quadro RTX might be what you’re looking for.
RT cores can also slightly speed up rendering, at least in supported render engines. For example, Octane and Redshift are working on implementing RT-core support.
To sum up, RT and Tensor cores add some great features that may or may not affect your workload.
However, we still recommend RTX GPUs over older Nvidia GPUs because, even leaving these additional compute cores aside, the latest RTX GPUs boast a significant performance improvement over their non-RTX predecessors.
What do you need from a GPU for editing?
Fortunately, video editing requires a much less demanding GPU than professional rendering.
Even Nvidia’s base consumer GPUs with CUDA cores can handle this task, especially if you’re the only content creator doing freelance work or posting to sites like YouTube.
Take a look at typical Premiere Pro video editing benchmarks, which show clearly where the GPU sweet spot lies: a low- or mid-range RTX.
If your needs are a little more enterprise-level (e.g. 4K/8K HDR video), you can opt for a more powerful GeForce RTX GPU.
What do you need from a GPU for rendering?
For rendering — namely 3D rendering in a professional environment — you will need much more from the GPU.
The most important things you need from a GPU for rendering (assuming it is compatible with the render engine you are using) are the maximum possible number of CUDA cores and plenty of video memory.
The time it takes to render an average frame on your GPU is almost inversely proportional to the number of CUDA cores your GPU has.
However, the GPU can only bring its CUDA-core performance to bear if the 3D scene data fits into its VRAM (the video memory on the GPU).
This means that if you know you have very complex scenes with millions of polygons, sub-poly offset or things like high resolution textures, you will need a lot more video memory than if your scenes were fairly simple with a few objects.
Most GeForce RTX GPUs already have a decent amount of VRAM, typically 8 GB to 11 GB, but if you need even more, you’ll need a Quadro RTX GPU with up to 48 GB of VRAM.
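To see whether your scenes actually need that much, you can estimate the footprint of the heaviest assets up front. A minimal sketch (the 4-bytes-per-texel figure assumes uncompressed RGBA; real engines add geometry and working buffers on top):

```python
def texture_bytes(width: int, height: int, bytes_per_texel: int = 4) -> int:
    """Approximate footprint of one uncompressed RGBA texture."""
    return width * height * bytes_per_texel

def fits_in_vram(asset_sizes_bytes, vram_gb: float) -> bool:
    """True if the summed asset footprint fits into the card's VRAM."""
    return sum(asset_sizes_bytes) <= vram_gb * 1024**3

scene = [texture_bytes(8192, 8192) for _ in range(20)]  # twenty 8K textures
print(fits_in_vram(scene, 8))   # True: 20 * 256 MB = 5 GB fits in 8 GB
print(fits_in_vram(scene, 4))   # False: 5 GB does not fit in 4 GB
```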
On Quadro GPUs, you also get ECC, which we will cover in a moment.
ECC — what is it and why you may need it
ECC memory detects and corrects data errors that naturally occur during long and intensive workloads.
It is these errors that cause seemingly random events such as data corruption or system crashes, and they should be avoided at all costs when dealing with fragile data.
This is why ECC is most commonly used on servers and corporate PCs to prevent these errors from occurring when they can cause the most damage.
In GPUs, ECC is exclusive to professional GPUs from Nvidia and AMD.
In the case of Nvidia, ECC is only present on Quadro GPUs and helps prevent fatal errors in certain scenarios.
However, most consumers and creators who are not part of a corporate workflow can safely ignore ECC.
GPU performance rating
The first and best way to evaluate the performance of a GPU is to study benchmarks.
Consumers typically look at benchmarks of games and other applications to get a better idea of how a given component can perform.
You just need to know which benchmarks you should be looking at.
For GPU rendering, the popular engine benchmarks to check are OctaneBench, Redshift and VRAY-RT.
Other resources may help, including Passmark GPU Compute Benchmark (for DirectCompute/OpenCL performance).
In addition to the benchmarks, there are also key specifications that we list below for each of our options.
The main specs we’ll be looking at are:
- CUDA cores: raw processing power (a good proxy for 3D rendering performance)
- Tensor cores: deep learning/AI as well as FP32/FP16 workloads
- RT cores: ray-tracing performance, which can speed up 3D rendering in supported render engines
- VRAM: for handling large scenes without running out of memory
- GPU frequency: the clock speed of the GPU core
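Benchmark scores are easiest to read when normalized against a baseline card. Using the OctaneBench scores quoted later in this article (a sketch; absolute scores vary with OctaneBench version and drivers):

```python
# OctaneBench scores quoted in this article (higher is better).
octanebench = {
    "RTX 2060 Super": 205,
    "RTX 2080 Ti": 302,
    "Quadro RTX 6000": 308,
    "RTX 3090": 661,
}

baseline = octanebench["RTX 2060 Super"]
for gpu, score in octanebench.items():
    # Relative rendering performance versus the entry-level pick.
    print(f"{gpu}: {score / baseline:.2f}x the 2060 Super")
```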
Should I bother with Dual GPU?
If you were building a gaming system, our answer would be very simple: no, absolutely not.
When it comes to gaming, multi-GPU support has been heavily deprecated and is not recommended.
But working performance… that’s another story.
While games must use standards such as SLI to make multiple GPUs render a single scene, most editing and rendering applications are built with distributed workloads in mind.
This means that not only do the GPUs not need to work in perfect harmony for you to benefit from running two cards at once, you can see close to a linear 2x performance improvement when you add a second GPU to your workstation!
A little about Nvidia’s NVLINK:
You will need higher-end GPUs than the RTX 2070 Super to pool memory over NVLINK. Also, you cannot share memory between more than two GPUs at a time using NVLINK, and render engine support is required to use these features.
You will need NVLINK bridges to connect your two cards.
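The memory implications of the rules above can be sketched like this (the 11 GB card is hypothetical; actual pooling behavior also depends on the render engine):

```python
def effective_vram_gb(per_card_gb: float, num_gpus: int, nvlink_pooling: bool) -> float:
    """Largest scene footprint a render job can address.

    Without pooling, each card needs its own full copy of the scene, so
    a single card's VRAM is the limit no matter how many GPUs you add.
    NVLINK can pool memory, but only across exactly two GPUs, and only
    if the render engine supports it.
    """
    if nvlink_pooling and num_gpus == 2:
        return per_card_gb * 2
    return per_card_gb

print(effective_vram_gb(11.0, 2, nvlink_pooling=False))  # 11.0
print(effective_vram_gb(11.0, 2, nvlink_pooling=True))   # 22.0
print(effective_vram_gb(11.0, 3, nvlink_pooling=True))   # 11.0 (no pooling past two GPUs)
```

Note that adding GPUs without NVLINK still multiplies rendering throughput; it just never raises the per-scene memory ceiling.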
Dual or multi-GPU setup for video editing in Adobe Premiere Pro?
Premiere Pro does not use multiple GPUs on your system, so you will not benefit from more than one GPU.
Best graphics card for editing and rendering — our pick
Keep in mind: here we recommend GPU options such as the RTX 2060 Super. Many different board partners offer GPUs based on this chip from Nvidia, including MSI, Gigabyte, EVGA, Asus and others.
GPUs based on the same chip will perform about the same, so you can easily get the EVGA RTX 2060 Super and expect it to perform practically the same as the MSI RTX 2060 Super, plus or minus 2-3%.
The main differences here come down to cooling solutions, factory overclocking, RGB and appearance, and monitor connectors. However, the underlying chip is the same.
Entry level graphics card for editing and rendering: Nvidia RTX 2060 Super
If you’re on a tight budget but still want good editing and rendering performance for your money, the RTX 2060 Super is your first choice.
Compared to other GPUs in this price range, it offers superior performance across the board for both gaming and professional work. (However, for gaming, the AMD RX 5700 is by far the best option.)
With a modest number of RT and Tensor cores and a fairly substantial number of CUDA cores, the RTX 2060 Super is more than enough for 1080p and 1440p video editing.
In OctaneBench, the card scored approximately 205 points, which is significantly better than the Quadro RTX 3000 (149) and Quadro RTX 5000 (184).
This means that the raw processing power of the 2060 Super surpasses even Quadro RTX cards that cost several times more, which is certainly not bad.
These scores place the RTX 2060 Super in the mid-range of single GPU graphics cards for compute performance.
For those just getting started with editing and rendering, or who don’t yet have the financial headroom for bigger hardware investments, the RTX 2060 Super is the perfect place to start.
High performance graphics card for editing and rendering: Nvidia RTX 3090
If you have more to spend but don’t need ECC and don’t want to sell a kidney to afford a graphics card, get the Nvidia RTX 3090.
For gaming purposes, the RTX 3090 is hard to justify in terms of performance per dollar compared to lower-end counterparts.
However, it boasts quite a significant raw performance advantage over those lower-tier cards, making it a more attractive option for editing and rendering.
In OctaneBench, the RTX 3090 scores a good 661 points, placing it in first place among single-GPU cards.
All this makes the RTX 3090 our top pick in this tier. If you’re going to work with 1440p/4K video or perform complex rendering tasks on a regular basis, the 3090 is one of the best cards for the job.
High-End Professional GPU: Nvidia Quadro RTX 6000
Last but not least, let’s take a look at Quadro. In terms of raw performance, the Quadro RTX 6000 won’t be much better than the RTX 2080 Ti except in scenarios limited by video memory.
This is reflected in OctaneBench, where it scores just 308 compared to 302 for the 2080 Ti, an incredibly small difference. But if you’ve read the article this far, you probably already understood this part.
The main impetus for purchasing a Quadro RTX card is enhanced software support, stability, and ECC RAM support.
If you’re looking for something in this price/performance range, but the Quadro RTX 6000 doesn’t offer exactly what you’re looking for, consider the three alternatives below.
If this card looks a bit short on memory for your purposes, choose the Quadro RTX 8000 instead. Most of the specifications are the same, but the amount of video memory is doubled.
Performance differences in non-VRAM scenarios are incredibly small.
If ECC doesn’t matter for your workload, you can also save a little and get the Nvidia Titan RTX, which has pretty much the same specs.
If ECC doesn’t matter for your workload and you don’t mind paying about the same price, also consider the Titan V. It doesn’t have RT cores, but it has plenty of processing power (the best of any GPU, according to OctaneBench) and still has plenty of Tensor and CUDA cores to work with. However, it has less video memory.
That’s it! What GPU or other PC parts are you planning to buy?
GPU Rendering — Blender Manual
GPU rendering lets you use your graphics card for rendering instead of the CPU. This can greatly speed up rendering, since modern video cards are designed to perform massively parallel computations of the same type. On the other hand, they also have some limitations when rendering complex scenes, due to less available memory, as well as problems with interface responsiveness if the same video card is used both for normal work and for rendering.
Cycles supports two GPU rendering modes: CUDA, which is preferred for Nvidia graphics cards, and OpenCL, which supports rendering on AMD graphics cards.
To enable GPU rendering, open the Preferences window and, on the System tab, select the Compute Device to use. Then, for each scene, in the Render panel you will be able to choose between rendering on the CPU or on the GPU.
CUDA is supported for GPU rendering with Nvidia graphics cards.
Graphics cards are supported from the GTX 4xx series onwards (compute capability 2.0 to 6.1).
Cycles requires the latest Nvidia drivers for all operating systems.
List of CUDA cards with their shader model (in English).
OpenCL is supported for GPU rendering with AMD graphics cards.
(We only support graphics cards with GCN architecture 2.0 and above.)
To make sure your GPU is supported, check out this Wikipedia page.
Cycles requires the latest AMD drivers for all operating systems.
Supported features and limitations
For an overview of supported features and comparison of technologies, see the corresponding section.
- CUDA limits:
- The maximum number of individual textures is limited to 88 integer textures (JPEG and so on) and 5 floating-point textures (TIFF and others) on GTX 4xx/5xx series cards. Later cards do not have this limitation.
Why does Blender stop responding during rendering?
When the graphics card is busy rendering, it cannot redraw the user interface, causing Blender to become unresponsive. We’re trying to work around this issue by taking control of the GPU as often as possible, but we can’t guarantee a completely smooth experience, especially in heavy scenes. This is a limitation of graphics cards and there is no one hundred percent working solution for it, although we will try to improve this point in the future.
If you can, it’s better to install more than one GPU, using one of them for display and the rest for rendering.
Why does a scene that renders on the CPU not render on the video card?
There are many possible reasons, but the most common one is that your video card does not have enough memory. At the moment, we can only render scenes that fit into the memory of the video card, which is usually less than the memory available to the CPU. Note that, for example, 8k, 4k, 2k and 1k texture images take up 256 MB, 64 MB, 16 MB and 4 MB of memory, respectively.
We intend to add a system to support scenes larger than the graphics card’s available memory, but it won’t be soon.
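The texture figures above follow directly from storing uncompressed RGBA at 4 bytes per pixel; a quick sanity check:

```python
def texture_mb(side: int, bytes_per_pixel: int = 4) -> float:
    """Memory used by an uncompressed square RGBA texture, in MB."""
    return side * side * bytes_per_pixel / (1024 * 1024)

for side in (8192, 4096, 2048, 1024):
    print(f"{side // 1024}k texture: {texture_mb(side):.0f} MB")  # 256, 64, 16, 4
```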
Can multiple graphics cards be used for rendering?
Yes, go to User Preferences ‣ System ‣ Compute Device Panel and configure the devices to your liking.
Can multiple graphics cards increase available memory?
No, each graphics card only has access to its own memory.
Which renderer is faster: Nvidia or AMD, CUDA or OpenCL?
Currently, Nvidia with CUDA renders fastest, but this really depends on the hardware you buy.
Currently, CUDA and OpenCL are about the same on the newest mid-range GPUs.
However, CUDA is the fastest on high-end GPUs.
Unsupported GNU version! gcc 4.7 and up are not supported!
On Linux, depending on your GCC version you might get this error. There are two possible solutions:
- Use an alternate compiler
If you have an older GCC installed that is compatible with the installed CUDA toolkit version, then you can use it instead of the default compiler.
This is done by setting the CYCLES_CUDA_EXTRA_CFLAGS environment variable when starting Blender.
Launch Blender from the command line as follows:
CYCLES_CUDA_EXTRA_CFLAGS="-ccbin gcc-x.x" blender
(Substitute the name or path of the compatible GCC compiler.)
- Remove compatibility checks
If the above is unsuccessful, delete the following line in
#error -- unsupported GNU version! gcc 4.7 and up are not supported!
This will allow Cycles to successfully compile the CUDA rendering kernel the first time it
attempts to use your GPU for rendering. Once the kernel is built successfully, you can
launch Blender as you normally would and the CUDA kernel will still be used for rendering.
CUDA Error: Invalid kernel image
If you get this error on 64-bit MS-Windows, make sure you are using a 64-bit build of Blender and not a 32-bit one.
CUDA Error: Kernel compilation failed
This error can occur if you have a new Nvidia card that is not yet supported by your version of Blender and you have the CUDA toolset installed. In this case, Blender may try to dynamically build the kernel for your graphics card and not succeed.
In this case you can:
- Check if the latest Blender version
(official or experimental builds)
supports your graphics card.
- If you’ve built Blender yourself, try downloading and installing the latest CUDA developer toolkit.
Regular users do not need to install the CUDA toolkit because Blender already ships with precompiled kernels.
CUDA Error: Out of memory
This error usually means that there is not enough memory on the video card to store the scene. Currently, we can only render scenes that fit into the graphics card’s memory, which is usually smaller than the memory available to the CPU.