The processor is one of the most important components of any PC, so much so that it can create a significant bottleneck if it is not powerful enough, dragging down the rest of the system. One of the most frequent and best-known cases occurs in games, when the processor holds back the graphics card and prevents it from reaching its full potential.
However, we must be clear that a bottleneck at the processor level is not always bad, and that it is almost impossible to avoid entirely when we talk about games. The explanation is simple: today's video games are still developed with the previous console generation as their baseline. Those consoles use low-power AMD Jaguar processors, whose IPC is on the level of Intel Atom chips from 2013, and which also run at very low frequencies: 1.6 GHz on PS4 and 1.75 GHz on Xbox One.
When a game is developed for a console with such modest hardware, taking advantage of more advanced hardware becomes much harder, because the title has not been programmed to work optimally with today's high-performance processors and their high core and thread counts.
Click to enlarge. Control at 1440p, maximum quality, with ray tracing and DLSS enabled. As we can see, even though CPU usage is low, it does not create a bottleneck that hurts GPU performance: GPU usage hovers between 98% and 100%.
It is not difficult to understand. If you develop a game for a specific architecture and optimize it to run well on very limited hardware, in the end you have two options: create a different version, properly optimized to take advantage of more modern hardware, or introduce minor tweaks that dress up the final result a little.
The first option requires significant work from the developers and means changing numerous aspects, which could leave the console version looking worse than the PC version. The second requires less effort but, in return, introduces fewer improvements and results in clearly poorer optimization.
Think, for a moment, of Crysis. That game arrived on PC in 2007 and was a showcase of Crytek's technical skill. To run optimally, it required a dual-core processor, a generous amount of RAM, and a graphics card with a unified shader architecture.
Its hardware demands were so high that a direct adaptation to PS3 and Xbox 360 was simply impossible, and it took many years before a version fully adapted to both consoles could be developed. The attached video speaks for itself, and we must not forget that the console version had to give up the "Ascension" level, because the technical limitations of both machines made it impossible to keep.
A game's graphics engine, together with the optimizations carried out during development, determines how much use can be made of the processor, and this is tied to very important aspects of the game, such as animations, artificial intelligence, and physics.
As we have anticipated, if you have to start from a very modest base, like the AMD Jaguar CPU in PS4 and Xbox One, you cannot use advanced animations, physics, and artificial intelligence in the PC version unless you make very deep changes and effectively build a very different version. The problem is that developers have generally gotten used to making ports that cannot really take advantage of current PC hardware.
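To picture why such a port leaves a modern CPU half idle, it helps to look at a deliberately simplified game loop in which all simulation work runs on a single thread. This is only a toy sketch under that assumption, not the code of any real engine, and the timings are invented:

```python
# Toy sketch of a main loop that runs all simulation work on one thread,
# the pattern many console-first ports inherit. On an 8-core/16-thread CPU,
# a single fully busy thread reads as roughly 100 / 16 ≈ 6% aggregate usage,
# which is why "low CPU usage" and "CPU-bound" are not contradictory.
# Purely illustrative; the per-step costs are made up.
import time

def busy(ms):
    """Spin for roughly `ms` milliseconds to stand in for real CPU work."""
    end = time.perf_counter() + ms / 1000
    while time.perf_counter() < end:
        pass

def update_animations(dt): busy(4)
def update_ai(dt):         busy(4)
def update_physics(dt):    busy(4)

def frame(dt):
    # Everything runs back to back on the main thread, so a frame can never
    # finish faster than the sum of these steps, no matter how many idle
    # cores the PC has.
    update_animations(dt)
    update_ai(dt)
    update_physics(dt)

start = time.perf_counter()
for _ in range(100):
    frame(1 / 60)
elapsed = time.perf_counter() - start
print(f"~{100 / elapsed:.0f} FPS, limited by a single thread")
```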
However, this problem has never been exclusive to the PC. Remember, for example, what happened in the 128-bit console generation. Xbox was light years ahead of PS2, yet cross-platform titles looked practically identical on both consoles, due to the "laziness" of developers who limited themselves to programming the game for the less powerful base (PS2 in that case) and carrying it over to every platform with barely any changes, creating a false sense of parity that was later shattered by the Xbox exclusives.
On PC, a CPU-level bottleneck can occur in games for four main reasons:
Everything we have explained so far gives us the foundation we need to understand the bottleneck problem that has been occurring at the CPU level over the past few years. The first three points pose no particular difficulty and are very easy to understand. Currently, to play demanding triple-A games optimally, we need a processor that meets, at a minimum, these requirements:
Click to enlarge. Battlefield 2042 on a Ryzen 7 5800X and an RTX 3080 Ti, set to 1440p, with maximum quality, ray tracing, and DLSS enabled. The CPU is an obvious bottleneck as the GPU usage is too low.
We already know what a CPU bottleneck is and why it can occur, and we have the background needed to easily follow everything explained below. From the above we might deduce that, to overcome a bottleneck at the CPU level, it would be enough to change the processor, but unfortunately that is not a universal truth.
Imagine the following scenario. You check the requirements for Battlefield 2042, one of the biggest triple-A games of the moment, and discover that you need at least an Intel Core i7-4790 or a Ryzen 7 2700X. You notice that the supposed equivalence between those two chips makes little sense, but since you have a Ryzen 7 3800X and more than meet the requirements, you think nothing of it. You install the game, start playing, and discover, stunned, that your processor still puts a huge bottleneck on your graphics card. Why does this happen? It is very simple: as I told you in point 4, because of an optimization problem.
Click to enlarge. Another scene from Battlefield 2042, where we can see the same bottleneck. Same equipment and same settings as in the previous image.
Take a look at the image just above these lines: it is a screenshot of Battlefield 2042 running on my PC, which is equipped with a Ryzen 7 5800X, 32 GB of DDR4 RAM, and a GeForce RTX 3080 Ti. The graphics settings were 1440p resolution, ray tracing enabled, and maximum quality. Even so, CPU usage was very low, just 34-35%, and that kept GPU usage hovering around 50-70%.
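If you want to check this kind of behavior on your own machine, a few lines of scripting are enough to log CPU and GPU usage while a game runs. The sketch below is only an illustration and assumes the third-party Python packages psutil and pynvml (NVIDIA's NVML bindings) are installed; the 90% thresholds are arbitrary rules of thumb, not an official definition of a bottleneck.

```python
# Minimal sketch: sample per-core CPU usage and GPU usage once per second
# and flag the typical CPU-bound pattern (GPU well below full load while
# one or two cores are pegged, even though average CPU usage looks low).
import psutil
from pynvml import nvmlInit, nvmlDeviceGetHandleByIndex, nvmlDeviceGetUtilizationRates

nvmlInit()
gpu = nvmlDeviceGetHandleByIndex(0)  # first NVIDIA GPU in the system

for _ in range(30):  # sample for roughly 30 seconds
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    busiest_core = max(per_core)
    avg_cpu = sum(per_core) / len(per_core)
    gpu_util = nvmlDeviceGetUtilizationRates(gpu).gpu

    cpu_bound = gpu_util < 90 and busiest_core > 90  # illustrative thresholds
    print(f"CPU avg {avg_cpu:5.1f}% | busiest core {busiest_core:5.1f}% | "
          f"GPU {gpu_util:3d}% | CPU-bound? {cpu_bound}")
```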
Obviously, in that scenario, reducing the graphics quality makes no sense, since the GPU is already being held back by the bottleneck caused by low processor usage. When we raise the resolution to 4K things change, not because a miracle of optimization has occurred, but because the graphics card now has to work with a much larger number of pixels (roughly 2.25 times as many as at 1440p), while the CPU's work per frame barely grows.
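A toy model helps to see why raising the resolution shifts the limit onto the GPU: if the frame time is roughly the slower of the CPU's per-frame work (which barely depends on resolution) and the GPU's shading work (which grows with the pixel count), increasing the pixel count eventually makes the GPU the limiting factor. The per-frame costs below are invented purely for illustration:

```python
# Toy model of a frame: the CPU prepares the frame (game logic, draw calls)
# while the GPU shades pixels, so the frame time is roughly the slower of
# the two. The cost figures are made up for illustration only.
CPU_MS_PER_FRAME = 10.0       # game logic + draw submission, resolution-independent
GPU_MS_PER_MEGAPIXEL = 2.0    # shading cost, grows with the pixel count

RESOLUTIONS = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}

for name, (w, h) in RESOLUTIONS.items():
    megapixels = w * h / 1e6
    gpu_ms = GPU_MS_PER_MEGAPIXEL * megapixels
    frame_ms = max(CPU_MS_PER_FRAME, gpu_ms)
    limiter = "CPU" if CPU_MS_PER_FRAME > gpu_ms else "GPU"
    print(f"{name:>5}: {megapixels:4.1f} Mpx | GPU {gpu_ms:4.1f} ms | "
          f"{1000 / frame_ms:5.1f} FPS ({limiter}-bound)")
# With these numbers, 1080p and 1440p deliver the same CPU-limited frame
# rate, while 4K finally pushes the GPU to become the limiting factor.
```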
That low processor usage behind the bottleneck in Battlefield 2042 is, in short, down to poor optimization by the developers, who have not bothered to introduce the programming-level improvements needed for the game to scale on current multi-threaded CPUs, and yes, it is a consequence of this game having been developed as a cross-generation title (PS4 and Xbox One were the minimum hardware baseline). Other recent titles, such as Far Cry 6, suffer from the same problem.
Click to enlarge. Days Gone is another good example of the impact resolution can have on GPU usage. These two images demonstrate what I have told you on other occasions: raising the resolution can reduce or eliminate a bottleneck produced at the CPU level. The first image was captured with the game at 1440p on an RTX 3070 Ti and a Ryzen 7 5800X, and the second on the same equipment but at 1080p. Notice how GPU usage has dropped in the second.
Current games still depend far more on IPC and operating frequency than on the total number of cores and threads. This means that, for example, a Ryzen 7 2700X, with 8 cores and 16 threads, performs almost the same in games as a Core i3-10100, which has only 4 cores and 8 threads, and in some cases even performs worse. This is understandable: once we pass the threshold of 4 cores and 8 threads, or 6 physical cores, the processor's single-thread performance takes center stage.
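Amdahl's law captures this dynamic well: if only part of a frame's work can be spread across cores, extra cores quickly stop paying off and single-thread speed dominates. The 60% parallel fraction in the sketch below is an assumed figure chosen for illustration, not a measurement of any real game engine.

```python
# Quick Amdahl's law illustration of why extra cores stop helping once a
# game's parallelizable work is exhausted. The parallel fraction is assumed.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Theoretical speedup versus a single core."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

PARALLEL_FRACTION = 0.60  # assumed share of frame work that scales with cores

for cores in (2, 4, 6, 8, 12, 16):
    print(f"{cores:2d} cores -> {amdahl_speedup(PARALLEL_FRACTION, cores):.2f}x")
# With 60% parallel work, going from 6 to 16 cores adds only about 14% more
# speedup: beyond that point, IPC and clock speed matter far more.
```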
If you have a powerful processor with many cores and threads but a game does not run quite as it should and you confirm a bottleneck, do not worry: as you have seen in this article, it is not your fault. It is an optimization problem inherent to the game itself, which is unable to parallelize properly on a CPU with a high number of cores and threads.
This simple comparison shows the important differences between the FPU of the PS5 CPU and that of a Zen 2 processor for PC.
We have seen this trend for many years, and unfortunately it does not look like it will change in the short or medium term, since PS4 and Xbox One still have a lot of life ahead of them, and their combined installed base of more than 170 million consoles is more than enough of an incentive for developers to keep them in mind. For those still on a quad-core, eight-thread processor, this is good news, because they will be able to keep stretching their rig; for those of us with something more powerful, it is yet another example of software advancing far more slowly than hardware.
If you are wondering when this situation will be overcome, the answer is not complicated: when the transition to the new generation is complete and game development starts to use PS5 and Xbox Series X|S as the base platforms. Both consoles have an 8-core processor based on Zen 2, although it is a cut-down version integrated into an APU, one that not only runs at a lower frequency than a desktop Zen 2 processor but also has notable internal cuts, affecting aspects as important as the L3 cache and certain instructions (the absence of AVX-256, for example).
The table shows the performance of the AMD 4700S chip, which is equivalent to the PS5 CPU, against processors based on Zen 2 and Zen 3. There is a huge difference.
To understand this better, it is enough to compare the Xbox Series X CPU directly with a Ryzen 7 3700X. The former has 8 cores at 3.8 GHz, dropping to 3.6 GHz when running 16 threads, with 4 MB of L2 cache and 8 MB of L3 cache. The latter has 8 cores and 16 threads at 4.2 GHz (turbo mode with all cores and threads active), also has 4 MB of L2 cache, but packs 32 MB of L3 cache. In short, the processor in the new-generation consoles does not reach the level of a Ryzen 7 3700X.
The conclusion to draw from the last paragraph is very simple: although the processor bottleneck we have been suffering will improve with the transition to the new generation, we should not expect a miracle either, since anything above the CPU that PS5 and Xbox Series X carry, such as a Ryzen 9 3900X, will continue to be underused in games. If you have doubts about which processors are best for gaming, do not miss this guide.
Click to enlarge. Batman: Arkham Knight shows a very pronounced CPU bottleneck, as we can see.