In our previous article, we used an RTX 3080 to compare different PCIe standards and concluded that even with 4K ray tracing, a PCIe 3.0 ×16 slot is basically enough: the difference versus a PCIe 4.0 ×16 slot is only about 2%, and the gap is even smaller at lower resolutions. So... does that hold for every new-generation graphics card? Not quite, and the most sensitive model may not be the one you expect: an entry-level card with a small amount of video memory.

Don't believe it? TechPowerUp, publisher of the familiar GPU-Z utility, recently measured how AMD's RX 6400 performs under different PCIe standards as part of its benchmark ladder testing. Let's take a look.
The performance gap is obvious, ranging from 14% at the low end to as much as 23%. The reason is simple: its 4GB of video memory is genuinely not enough on its own. On top of that, the RX 6400 has only 4 PCIe lanes, so under PCIe 3.0 its bus bandwidth is only about 4GB/s, whereas a high-end ×16 card on the same PCIe 3.0 standard gets roughly 16GB/s. That is plainly inadequate for a GPU that, however cut down, still belongs to the new generation of architecture. The effect is even more pronounced in newer, more demanding "graphics-card killer" games.
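For reference, the rough bandwidth figures above follow directly from the PCIe spec arithmetic: transfer rate per lane, times the number of lanes, minus the 128b/130b encoding overhead used since PCIe 3.0. Here is a minimal sketch of that calculation (the numbers are theoretical per-direction maximums, not measured throughput):

```python
# Rough per-direction PCIe bandwidth: transfer rate (GT/s) per lane,
# scaled by 128b/130b encoding efficiency, times the number of lanes.
GT_PER_LANE = {"3.0": 8.0, "4.0": 16.0}   # gigatransfers per second per lane
ENCODING_EFFICIENCY = 128 / 130           # 128b/130b line coding (PCIe 3.0+)

def pcie_bandwidth_gb_s(gen: str, lanes: int) -> float:
    """Theoretical one-direction bandwidth in GB/s for a PCIe link."""
    return GT_PER_LANE[gen] * ENCODING_EFFICIENCY * lanes / 8  # bits -> bytes

for gen, lanes in [("3.0", 4), ("4.0", 4), ("3.0", 16), ("4.0", 16)]:
    print(f"PCIe {gen} x{lanes}: ~{pcie_bandwidth_gb_s(gen, lanes):.1f} GB/s")

# PCIe 3.0 x4:  ~3.9 GB/s   (the RX 6400 on an older platform)
# PCIe 4.0 x4:  ~7.9 GB/s   (the RX 6400 on a PCIe 4.0 platform)
# PCIe 3.0 x16: ~15.8 GB/s  (a typical high-end card even on PCIe 3.0)
# PCIe 4.0 x16: ~31.5 GB/s
```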
As you can see, the RX 6400 shows a significant gap even at 1080P, because in these games it has to use the PCIe bus far more often than a high-end card to exchange data with system memory, so the bandwidth limit of PCIe becomes much more visible. In older games, 4GB of video memory may be enough and PCIe 3.0 bandwidth may suffice as a backstop, so the gap versus PCIe 4.0 stays very small. That also matches the RX 6400's positioning: a power-efficient card for small-form-factor PCs, meant for light-entertainment users.