Finding the Best DDR4 RAM for AMD Ryzen 9 3900X CPU
Ryzen did not make a good impression regarding memory compatibility and RAM overclocking when it first launched in March 2017. Many memory kits could only run at low clock speeds, and there were relatively few BIOS options to get Ryzen working with fast memory. Since AGESA update 1.0.0.6, Ryzen works much better with many DDR4 RAM kits, and users have access to a very wide range of subtimings and higher RAM multipliers up to DDR4-4000, so Ryzen users with RAM problems should look for updated BIOS versions.
In 2017 we used a Ryzen system to analyze which settings work best for different RAM kits with Micron, Samsung and SK Hynix chips and how Ryzen performs when fully loaded and with 16-GiByte modules. With new 2018 hardware (Ryzen 2700X and X470), we have once again addressed the topic of RAM tuning in issue 09/2018: We show how well two and four sticks with different memory chips are suited for overclocking, which options help and what latency tuning including subtimings brings. In the PCGH-Plus article you will also get tricks for turbo-optimizing Ryzen CPUs.
With Ryzen 3000 (like the Ryzen 9 3900X), clock rates above DDR4-3600 can also be achieved relatively easily, but the memory then automatically switches to asynchronous mode, in which the RAM (controller) and the Infinity Fabric no longer operate in a 1:1 ratio. In practice this leads to performance losses that can hardly be compensated by RAM tuning, as we show in this PCGH-Plus article. It is therefore best to stick to DDR4-3600, or to make sure that synchronous operation is active when using higher RAM multipliers.
In our test we reviewed three RAM kits that offer great compatibility with the AMD Ryzen 9 3900X CPU.
Test Results: Best DDR4 RAM for AMD Ryzen 9 3900X CPU
Ranking First: Corsair Vengeance RGB Pro
64GB DDR4-3600-RAM, 2 Dual-Rank-DIMMs
- Very fast RAM
- 64GB is very future-proof
- RGB included
- Best performance
The first desktop kits with 32 GiByte per module are now available; among them, the hard-to-find models rated for DDR4-3600 are the spearhead. The Corsair kit can at least be pre-ordered for under 400 dollars and offers a lot in return: in addition to a whopping 64 GiByte, the Vengeance RGB Pro duo, rated for 18-22-22-42 at 1.35 volts, includes one of the brightest RGB designs on the market. The dual-rank design also offers higher performance in everyday use, which single-rank modules can only match with higher clocks or two modules per memory channel. In addition, the Corsair modules have considerable reserves, either for tighter timings or for overclocking to DDR4-3866 at the standard voltage of 1.35 volts. Do not expect a tuning monster, though: the kit cannot cope with very tight timings. And keep in mind: just for gaming, 64 GiByte is a bit oversized, but there is nothing you cannot do with 64GB of RAM.
Verdict: Best performing RAM for Ryzen 9 3900X
Ranking Second: Corsair Dominator Platinum RGB
32GB DDR4-4000-RAM, 2 Dual-Rank-DIMMs
- Fast DDR4-4000 RAM
- 32GB is great for any use case
- RGB lighting
- Best price-performance ratio
- Tunable timing settings
The two 16-GiByte modules are rated for DDR4-4000 at timings of 19-23-23-45 and 1.35 volts, which, combined with the dual-rank design, yields great performance in all benchmarks. With 10 percent undervolting headroom and good tuning characteristics across the entire frequency range we tested, it is no surprise that Samsung's K4A8G085WB-BCPB chips, known as B-die, sit under the hood; they are still the best choice for high-end RAM with great tuning characteristics. But Corsair offers more than just well-selected components: a superbly crafted, solid heatsink efficiently dissipates the waste heat of the chips on both sides of the board thanks to a screw connection and thin thermal pads. In addition, you get individually addressable RGB LEDs of above-average brightness based on Capellix technology, which essentially means a more compact design and more light output per watt. But where there is a lot of light, there are also shadows: Corsair charges accordingly for this powerful overall package.
Verdict: Best price-performance ratio RAM for Ryzen 9 3900X
Ranking Third: HyperX Predator RGB
16GB DDR4-3200-RAM, 2 Single-Rank-DIMMs
- 3200MHz RAM
- 16GB is great for gaming
- Fierce aluminum heat spreader
- XMP-ready profiles
- Can get a bit warmer
The two 8-GiByte sticks are specified for timings of 16-18-18-36 at DDR4-3200 and 1.35 volts. In the test, however, they surprised us with clock-friendly SK Hynix chips from 18-nm production (C-die), which also ran the highest tested clock rate of DDR4-3866 stably at 1.35 volts with timings of 18-20-20-60. The 15 percent undervolting potential we sounded out underlines the reserves of the installed chips, which are, however, only of limited use for latency tuning. Nevertheless, at the asking price, the complete package is particularly interesting for fans of RGB LEDs. The lighting is one of the more uniform solutions on the market and emits its light primarily upwards. One unpleasant detail: the thermal pads on both sticks of our kit were too short and did not cover the outermost chips, so these have no direct contact with the heat sink. The temperatures we measured were nevertheless comparatively low.
Verdict: Great performing 16GB RAM for Ryzen 9 3900X
What is RAM?
The abbreviation RAM stands for Random Access Memory and describes a type of memory used as main memory in computers. With this memory type, individual memory cells are addressed directly via addresses. Common RAM modules are volatile: stored data is lost without a permanent power supply.
All programs currently running on a computer, their associated data and at least parts of the operating system are stored in the main memory. The physical memory of a PC, server or notebook today consists of DRAM chips (Dynamic Random Access Memory), which are either located on exchangeable memory modules or soldered directly onto the computer's mainboard. "Dynamic" means that the content of the memory cells must be refreshed row by row at cyclic intervals. Logically, DRAM memories are interconnected in a matrix: each memory cell is addressed by a row and a column address. The time required to address a memory cell is, in addition to the clock frequency of the memory, a factor for performance.
Currently, DDR3-SDRAM or DDR4-SDRAM are used as memory types in notebooks or PCs. They are standardized by JEDEC. The abbreviation SDRAM stands for Synchronous Dynamic Random Access Memory, i.e. DRAM with a clock rate determined by an external memory controller. With DDR-SDRAM (Double Data Rate), data is transferred on the rising and falling edges of the clock signal, which accelerates communication between the memory chip and the memory controller in the processor. With DDR3 SDRAM, the I/O clock is also doubled, while with DDR4 SDRAM the clock frequency of the memory chips has been increased compared to DDR3 SDRAM.
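The naming scheme behind these standards follows directly from the double data rate described above. As a quick sketch (the function names are ours, not part of any standard API):

```python
# Sketch: relation between memory clock, DDR transfer rate and the
# JEDEC-style module labels used throughout this article.

def ddr4_label(io_clock_mhz):
    """DDR transfers data on both clock edges, so the transfer rate
    in MT/s is twice the I/O clock."""
    transfers = io_clock_mhz * 2
    return f"DDR4-{transfers}"

def peak_bandwidth_mb_s(transfers_mt_s, bus_width_bits=64):
    """Theoretical peak bandwidth per channel: transfers per second
    times the bus width in bytes."""
    return transfers_mt_s * bus_width_bits // 8

print(ddr4_label(1600))           # DDR4-3200 (1600 MHz I/O clock)
print(peak_bandwidth_mb_s(3200))  # 25600 MB/s, marketed as PC4-25600
```

The same arithmetic explains why a "DDR4-3200" kit actually runs at a 1600 MHz I/O clock in monitoring tools.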
Today, DDR3-SDRAM is primarily used in its low-power variant in notebooks. The memory types GDDR4 and GDDR5 used on graphics cards are both modifications of DDR3-SDRAM. In desktop PCs the main memory is usually mounted in the form of DIMMs (Dual Inline Memory Modules); notebooks use the more compact SO-DIMMs (Small Outline Dual Inline Memory Modules). In very flat devices, the main memory is sometimes soldered onto the mainboard and cannot be replaced or upgraded.
RAM Buyer’s Guide
We regularly update the test results listed here, including the RAM recommendations, in order to provide you with up-to-date purchasing advice. In addition, we naturally also cover price reductions of individual RAM kits and describe news that is important for the purchase decision.
GB versus GiB: What is the difference?
Many online shops and even manufacturers advertise, for example, "16 GB DDR4 RAM", but on closer inspection this is not quite correct. PCGH therefore uses "GiB" for "gibibyte" instead of the capacity specification "GB" for "gigabyte". The reason is quite simple: prefixes like "kilo", "mega" or "giga" denote multiples based on powers of ten (10 to the power of x). "Kilo" comes from the Greek and stands for "thousand" (10³); a kilobyte is accordingly 1,000 bytes. In data processing, however, powers of two are usually used, since a bit can only represent the two values 0 and 1 and is thus a binary unit. What is commonly called "1 GB RAM" is therefore not 1,000 (10³) but 1,024 "megabytes" (correctly: mebibytes). For RAM, the following applies: a gibibyte consists of 1,024 mebibytes, each of which is made up of 1,024 kibibytes. These odd-sounding names combine the first syllable of the prefixes "giga", "mega" and "kilo" with the syllable "bi", which indicates their binary meaning.
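The difference is easy to verify with a few lines of arithmetic, a minimal sketch of the decimal vs. binary prefixes just described:

```python
# Decimal (SI) vs. binary (IEC) capacity prefixes.
GB  = 10**9   # gigabyte:  10^9 bytes, powers of ten
GiB = 2**30   # gibibyte:  2^30 bytes, powers of two

print(GiB)            # 1073741824 bytes in one gibibyte
print(GiB / GB)       # 1.073741824 -> a gibibyte is ~7.4 % larger
print(16 * GiB / GB)  # 17.179869184 -> a "16 GiB" kit holds ~17.18 decimal GB
```

This ~7.4 percent gap per prefix step is also why a "1 TB" drive shows up as roughly 931 "GB" (really GiB) in Windows.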
DDR4 RAM: How much RAM (8 GiB / 16 GiB or more) do I need?
The mother of all questions is: how much RAM do I actually need? We recommend at least 16 GiB of RAM for gamers; less memory is increasingly a clear disadvantage. The differences are sometimes small in classic Fps benchmarks, but frametime benchmarks often show significantly fewer outliers with 16 GiB than with 8 GiB. More RAM thus ensures a more even image output – watch out for the outliers in the frametime benchmark below this text section. The minimum requirements of ARK Park, for example, already call for 16 GiByte – the trend is clear. According to our research, 32 GiByte compared to 16 GiByte brings partly measurable, but usually not decisive, advantages in games. If you use mods for open-world games or elaborate multimedia software for image/video editing, run virtualization, work with many memory-hungry programs at the same time or simply don't want to make any compromises, it is probably worthwhile to go for 32 GiByte (or more). Unused memory can also be turned into a RAM disk with some configuration effort.
There is another advantage to choosing a kit with a high memory capacity: you are more likely to get sticks with an internal dual-rank structure, which are a little faster than single-rank modules at the same clock rate. Tip: if chips are only mounted on one side of the module, it is normally single-rank RAM. DDR4 sticks with 4 GiByte are always single-rank, and 8-GiByte sticks now mostly are as well. DDR3 sticks with 8 GiByte and (currently still) DDR4 sticks with 16 GiByte are always dual-rank.
RAM with DDR4 technology: What are the benefits of high clock frequencies and tight timing?
“Faster memory than DDR3-1600/DDR4-2133 is a pure waste of money!” – unfortunately, statements of this kind are heard again and again, and such sweeping claims are simply wrong. What is correct: whether high RAM clock frequencies lead to more performance in practice differs from case to case. There are certain applications in which the RAM has an immense influence on the computing speed. Compressing and encrypting an archive with the 7-Zip software is one of these cases: processing a large archive with DDR4-2133 RAM can take 40 or even 50 percent longer than with DDR4-3600 – mind you, with the same amount of memory and the same CPU speed.
In games, it largely depends on whether the graphics card acts as the brake. Those who prefer to play with maximum details and image-quality options such as anti-aliasing at high resolutions depend on the performance of their graphics card. If the CPU and RAM are not fully utilized, accelerating them is of course of little or no use. The situation is different if you want to achieve the highest possible frame rate in order to drive a display with at least 120 Hz optimally. For three-digit Fps values, the graphics load usually has to be reduced by lowering the resolution or detail level, and the processor and RAM can then show their strength. For clarification, you will find test results from The Witcher 3: Wild Hunt once in 1,280 × 720 and once in 3,840 × 2,160.
If you have the choice between a higher clock frequency and lower main latencies, you should normally prefer the clock rate: even when the theoretical advantage is the same on paper, the higher clock frequency provides the better result in practice. Example: DDR4-2133/CL11 computes to a CAS latency of 10.3 nanoseconds, while DDR4-2666/CL16 is nominally inferior at 12.0 nanoseconds. But if you compare the results achieved in practice, you soon notice that the balance of power (not only in The Witcher 3) tips in favor of the DDR4-2666 solution.
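The nanosecond figures above come from simple arithmetic; here is a small sketch of that calculation (the helper function is ours, for illustration only):

```python
# Sketch: absolute CAS latency in nanoseconds for the examples above.
# DDR4 ratings are transfer rates in MT/s; the actual command clock is
# half of that, because DDR moves data on both clock edges.

def cas_latency_ns(transfer_rate_mt_s, cl):
    clock_mhz = transfer_rate_mt_s / 2   # real command clock in MHz
    cycle_ns = 1000 / clock_mhz          # duration of one clock in ns
    return cl * cycle_ns

print(round(cas_latency_ns(2133, 11), 1))  # 10.3 ns for DDR4-2133/CL11
print(round(cas_latency_ns(2666, 16), 1))  # 12.0 ns for DDR4-2666/CL16
```

As the article notes, the kit with the higher absolute latency can still win in practice thanks to its higher transfer rate.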
Only in extreme cases (e.g. DDR4-2133 CL10 vs. DDR4-2400 CL18) does it behave differently. But note: depending on the platform and processor, the maximum clock frequency is limited. If you reach this limit, you can get even more performance out of the system through latency tuning. By the way, in our issues 01/2018 and 01/2019 we examined how much Coffee Lake (Core i7-8700K/Core i9-9900K), Ryzen (Ryzen 7 1700X/Ryzen 7 2700X), Skylake X (Core i7-7820X) and Threadripper (Ryzen TR 2950X) benefit from channels, clock and timings, as well as from the number of modules and ranks, in applications and games.
Prices and tests 2020: Slight price increase for DDR4 RAM
DDR4 RAM is currently available from around 5 dollars per GiByte, so bargain hunters will still find something. For 2020, many forecasts originally assumed price increases, but the corona crisis makes the assessment more difficult. Chip manufacturers such as Micron report that memory chip production is hardly affected by the effects of the corona virus thanks to the high degree of automation, and they are taking precautions to maintain production. However, the crisis is affecting demand in various market segments such as smartphones (lower) and data centers (higher), so production capacities are currently being adjusted accordingly. It is conceivable that prices will rise somewhat in the short term (as demand has increased due to teleworking, etc.) but fall again in the medium term, as demand for DRAM is likely to drop in a recession.
RAM in detail: Upgrade or extend main memory, RAM cooler
There are many other things to consider when dealing with the topic of RAM, including the topic of upgrading or extending RAM. Many users also wonder what to think of a RAM cooler. If you would like to upgrade the RAM, the following checklist will help:
- How many (free) slots for DDR3 RAM / DDR4 RAM are there on the motherboard?
- Do any components protrude over the slots and limit the height of the RAM modules?
- What is the maximum amount of memory (GiByte/MiByte) allowed (total/per module)?
- Are there any other restrictions, for example regarding the memory clock rate and internal structure (memory ranks per module)?
On the subject of RAM coolers: neither DDR3 nor DDR4 RAM for desktop PCs needs active cooling. We recommend modules with a heat sink, or an aftermarket RAM cooler, only if you plan RAM overclocking with increased voltages and expect elevated temperatures in the case – for example due to overclocking of other components or the use of graphics cards with an open cooler and axial fans.
Cooperation of CPU and RAM
The functioning of the human brain is much more similar to that of the computer than it might first appear. Just as we have long-term memory and short-term memory, the computer has mass storage and working memory that perform the same tasks. The brain also has a control unit that can call up information when needed, which in the computer is equivalent to the processor. There are also veins and the heart, which correspond to the data paths and the chipset on the motherboard. It is often said that the CPU is the heart in a computer. But if you look closely and see how the machine works, it becomes clear that it is actually the chipset that makes the system pulsate.
On the motherboard there is a very special vein, the aorta. This is the Frontside Bus (FSB), the data path between the processor and the main memory. The memory controller hub is located on this data path. If you have ever heard the term “Northbridge”: This refers to the memory controller hub, that is, the chip on the frontside bus. The memory controller hub contains the memory controller. The memory controller manages all lines from the main memory that lead to the processor or the AGP/PCI interfaces. The lines to caches that are not in the CPU are also addressed by the memory controller, but not via the frontside bus, but via the backside bus. For information that the processor stores in the external cache, the memory controller does not need to connect separately to the working memory again, but can use the information from the cache directly.
Cache memory, or simply cache, is so to speak the clipboard for the clipboard, a further refinement of the memory organization. This also matches reality, because the cache is a relatively small memory bank located as close as possible to its PU (Parent Unit). For the processor there can be two, and theoretically even more, caches on the board, although more than one cache level on the board itself is not common. At the same time, it is common for a PC to have two processor caches. How can that be, when there is usually only one cache on the mainboard? The trick is to build a memory unit directly into the processor that also functions as a cache.
Every cache that belongs to the CPU is classified by its level. The closer a cache unit is to the CPU, the lower the level; the lowest level, namely 1, belongs to the cache in the CPU itself. This cache is called the L1 cache, where L stands for level. Accordingly, L2 cache stands for level-2 cache and L3 cache for level-3 cache. One example of information stored in the L1 cache is keyboard codes: since the codes are the same each time and are needed frequently, it seems logical to use a near-processor memory unit for storing them.
You might now ask yourself why not simply build all the memory as close to the CPU as possible. This is indeed attempted, but there is a very significant difference between main memory and cache: unlike memory modules, which use DRAM, the cache uses much faster SRAM. SRAM, however, takes up much more space than DRAM, which is problematic because the cache sits directly on the motherboard. Given the space SRAM requires, only a small cache fits there, and the memory banks for the DRAM modules are simply placed as close as possible to the CPU.
On second thought, however, it does not make sense to cache all the memory. The cache was designed to store highly frequented data. If the cache were to be increased, more data could be held, but it is questionable whether the frequency of use justifies storage in the cache at all. Exaggeration would cause the main advantage of speed to be reversed. The cache benefits precisely from its slimness, as hardly any addressing time is required due to its low capacity. Unlike DRAM, the cache does not address columns at all, but only reads out entire pages. The pages in the cache have their own name and are called cache lines.
Referring to the example with the desk, the comparison could look as follows: There are compartments on the desk that contain documents that are required on that day. There is also a single sheet of paper that is located directly next to the keyboard. This sheet of paper contains essential information that is required for almost every work step. This representation is therefore a refinement of the example we discussed at the beginning. The documents from the trays have to be removed each time for a work step, but the sheet of paper can be inspected without any effort. This example also shows why the cache should be kept small. If five sheets of paper were permanently on the desk, the main advantage would already be gone.
Before the system clock became the measure of all things, the speed of a component was given by its access time in the absolute unit ns (nanoseconds). Those days, when each component in the computer did its own thing, are long gone. With SDRAM, the memory was integrated into the system clock, which brought enormous progress. Since then, the speed of memory has been measured in MHz and no longer in ns. The same applies to the latency of the modules, which is given as the number of clock cycles that elapse during a particular action. This has the advantage that two components can be compared without conversions. The first SDRAM modules at 66 MHz did not yet reveal the full potential of SDRAM in practice, because even EDO RAM, also clocked at 66 MHz, could still keep up. But with the next faster modules, the PC100 modules, serious differences became apparent. The access times of 10 ns for these modules would not have been usable without a central control unit.
The system clock is not – as is often wrongly assumed – determined by the processor, but by the chipset, specifically by the memory controller. The clock cycles set by the chip are roughly comparable to the pendulum of a wall clock, except that the signal does not swing from left to right but between high and low. The reasoning behind synchronization is as follows: it may happen that a clocked access to the memory is in vain, or that the system clock is a little too slow for the actual capability of the modules. On the other hand, the waste of resources is much greater if the memory has to tell the chipset each time that new data is available. This trade-off is also often found in everyday life: a train stops at every station, regardless of whether someone wants to get on or off, and garbage cans are emptied regularly even when they are not full. Bulky waste, however, is only picked up on request, because the demand is much lower. Information from memory is needed constantly, so signalling each time that data is available is simply not practical.
Give me input …
The exact addressing of the main memory within the module is relevant only for the memory controller; all other components simply see a usable capacity. Accordingly, the processor cannot select memory cells itself, but must leave this task to the memory controller. For the processor this is literally a real killjoy, because a lot of time is lost through this indirection. On the other hand, this independent organization of the FSB is essential for a smooth distribution process and just as important as its synchronization. Moreover, the memory controller cannot answer a memory request immediately: it must itself wait a certain latency until the result arrives from the module, and only then does the processor receive its response.
You can picture the architecture of a memory module as a table. The rows of this table are called banks – not to be confused with the memory slots on the motherboard – and contain further sub-tables, namely the memory chips, which are called arrays. In a table, each cell can be referenced by an index of column and row numbers. All modules – with the exception of Rambus – work with a combination of multiplexing and fast page mode. Multiplexing means that the address bits are sent sequentially, not simultaneously. Because the RAM operates in fast page mode, the RAS signal is applied first, along with a sequence of address bits that determine the bank and row. The first two bits of the signal select the bank; grouping chips into banks is very useful because it allows two or more operations to run in parallel within the module. All following bits represent the row number within this bank.
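The bank-then-row decoding just described can be sketched in a few lines. Note that the exact bit layout here is a simplified assumption for illustration, not a specific JEDEC address format:

```python
# Sketch of the multiplexed row addressing described above: the first
# two bits of the RAS phase select one of four banks, the remaining
# bits select the row inside that bank (simplified illustration).

def decode_row_address(bits):
    """bits: string of address bits sent during the RAS phase."""
    bank = int(bits[:2], 2)   # first two bits -> bank 0..3
    row = int(bits[2:], 2)    # remaining bits -> row number in that bank
    return bank, row

print(decode_row_address("10" + "0000000101"))  # (2, 5): bank 2, row 5
```

The column address then follows in a separate CAS phase, which is exactly what "multiplexing" saves pins for.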
The addressed row is loaded into the sense amps (signal amplifiers) and is then available for selecting individual cells. How is the row loaded into the sense amps? For each column there are two parallel CAS lines, which are first charged to a defined level, the so-called reference level. This process is called RAS precharge, and the time it requires is the RAS precharge time. When both CAS lines of a column are fully charged, the memory cell is connected to one of them. The task of the sense amp of this column is now to measure the difference between the two CAS lines, store the value and write it back to the cell. The time that passes until all sense amps have stored their data is called the RAS-to-CAS delay. There is an upper limit for this time, which guarantees the controller that the data is available and that the cells of this bank row can now be referenced by CAS. When the CAS signal is sent, the corresponding value is shifted by the sense amps to the output latch, from where the memory controller can fetch it. The advantage of this procedure is obvious: on a page hit, the address logic no longer needs to access the array for the other cells of this bank row, but can read the index directly from the sense amps. Only on a page miss does RAS have to be applied again.
After this explanation you also know what the three parameters of the PC specification, noted after the operating frequency, stand for. For a DDR module with a 133 MHz operating frequency, timing settings of 2-2-2 mean: the CAS lines need 2 clocks to be precharged, and the sense amps need another 2 to provide the data. The CL of 2 means that 2 clocks pass before the data is available after a column request. You will certainly have noticed that these figures are not equivalent, because on a page hit only the CL matters – at least until a page miss occurs.
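The page-hit vs. page-miss asymmetry can be summed up in a tiny sketch using the 2-2-2 example above (names are ours, for illustration):

```python
# Sketch: clocks until read data arrives, for the 2-2-2 example above
# (tRP = RAS precharge, tRCD = RAS-to-CAS delay, CL = CAS latency).

T_RP, T_RCD, CL = 2, 2, 2

def access_clocks(page_hit):
    if page_hit:
        return CL                 # row already sits in the sense amps
    return T_RP + T_RCD + CL      # precharge, load row, then column access

print(access_clocks(True))   # 2 clocks on a page hit
print(access_clocks(False))  # 6 clocks on a page miss
```

Those 6 worst-case clocks are exactly the "6 latency cycles" used in the calculation that follows.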
However, if you want to measure realistically, you cannot simply assume a page hit, which in reality occurs often, but not always. Nevertheless: at an operating frequency of 133 MHz, 6 latency cycles plus the t_AC theoretically mean a query duration of 50.4 ns per access. That corresponds to a data throughput of about 152 MByte/s. So how do the claimed transfer rates of more than 2 GByte/s for such a module come about? Bandwidth specifications are not about what the module can actually deliver, but about how many bytes per second can theoretically pass the contact strip. Still, the yield is not as meager as it seems at first glance: with good settings, the real transfer rate can easily be 15 to 30 times higher than in this worst-case calculation. This, however, requires high-quality modules that support various optimization functions.
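The worst-case arithmetic above can be reproduced as follows. The t_AC value is our assumption, back-calculated from the article's 50.4 ns figure:

```python
# Sketch of the worst-case calculation above: a DDR module at a 133 MHz
# clock, 6 latency clocks plus t_AC per access, fetching 8 bytes over a
# 64-bit bus - compared with the theoretical peak over the contact strip.

CLOCK_MHZ = 133
CYCLE_NS = 1000 / CLOCK_MHZ   # ~7.52 ns per clock
LATENCY_CLOCKS = 6            # worst case: tRP + tRCD + CL = 2+2+2
T_AC_NS = 5.3                 # assumed chip output access time (t_AC)

query_ns = LATENCY_CLOCKS * CYCLE_NS + T_AC_NS  # ~50.4 ns per access
worst_case = 8 / (query_ns * 1e-9) / 2**20      # 8 bytes per access, in MiB/s
peak = 2 * CLOCK_MHZ * 8                        # DDR: 2 transfers/clock x 8 bytes

print(round(query_ns, 1))  # ~50.4 ns per access
print(round(worst_case))   # ~151 MiB/s worst case
print(peak)                # 2128 MB/s theoretical peak
```

The roughly 14:1 gap between these two figures is why the "15 to 30 times" improvement from page hits and burst transfers is plausible.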