Memory is a critical component of any computing system, and IBM Power Systems are no exception. Power Systems contain many forms of memory, from tiny CPU registers to buffered, modular DRAM. DRAM itself can be characterised by size and technology.
Memory technology is not limited to throughput generations such as DDR4 and DDR5; it also encompasses advances in reliability, such as Error Correcting Code (ECC), and improvements in data transfer to the CPU through buffer chips on the memory modules.
IBM Power10 systems originally shipped with DDR4-based memory, but DDR5-based differential dual inline memory modules (DDIMMs) now provide increased memory throughput and reduced latency in Power Systems such as the E1080.
The memory subsystem and the related terminology used by hardware engineers and pre-sales engineers are unfamiliar to many IBM Power System administrators. I suspect this is because many Power System shops never needed to increase memory after the initial installation of the machine. In other cases, memory was added as part of a hardware maintenance contract, and the system administrator had little role to play beyond requesting the additional memory and having it installed.
In this article, I will use the example of an IBM Power System to explain some of this terminology in simple terms and provide generic guidelines for installing or augmenting memory in Power Systems.
Before exploring the memory subsystem of IBM Power Systems, the diagram below gives an architectural overview of a computer system in which a memory controller services data transfer requests from the CPU.

Important terms to understand in the architectural diagram above are:
- Bandwidth: Good throughput from memory is achieved by using memory types that can fully utilise the bandwidth of the memory bus. The bandwidth, or width, of the memory bus is measured in bits (8, 16, 32, or 64 bits).
- Speed: If the memory speed is synchronised with the front-side bus speed, data transfer is more efficient because the full bandwidth of the memory bus is utilised.
- Memory latency: The number of clock cycles a CPU spends between sending a memory request and receiving the data. The lower the latency, the fewer the clock cycles and the faster the transfer.
- Memory ranks: The groups of memory on a module that can be independently accessed by a memory controller. Ranks are also called sides: a 1-rank (1R) module is a single-sided DIMM and a 2-rank (2R) module is a double-sided DIMM.
- Memory slots: The physical slots in which memory modules are installed. The number of supported slots depends on the processor chip itself; for example, each Power10 processor chip supports 16 DDIMMs, so a maximum of 64 DDIMMs can be supported in each Power E1080 CEC drawer.
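To make the bandwidth and latency terms above concrete, here is a back-of-the-envelope sketch. The DDR4-3200 figures and the 22-cycle CAS latency are illustrative examples of mine, not values taken from any Power System specification:

```python
# Rough calculations for the bandwidth and latency terms described above.
# All concrete numbers below are illustrative, not from an IBM spec sheet.

def peak_bandwidth_gbs(transfer_rate_mts: float, bus_width_bits: int = 64) -> float:
    """Theoretical peak bandwidth in GB/s: transfers per second x bytes per transfer."""
    return transfer_rate_mts * 1e6 * (bus_width_bits / 8) / 1e9

def latency_ns(clock_cycles: int, clock_mhz: float) -> float:
    """Memory latency in nanoseconds: cycles spent waiting divided by clock frequency."""
    return clock_cycles / clock_mhz * 1e3

# DDR4-3200 on a 64-bit bus: 3200 MT/s x 8 bytes = 25.6 GB/s peak
print(peak_bandwidth_gbs(3200))   # 25.6
# e.g. 22 cycles at a 1600 MHz I/O clock is 13.75 ns
print(latency_ns(22, 1600))       # 13.75
```

This shows why a wider bus and a faster, synchronised clock both matter: each multiplies directly into the peak transfer rate, while latency shrinks only as the clock rises or the cycle count falls.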
The most important terms to remember are memory groups and ranks. Now let us dive deeper into the IBM Power System memory subsystem, covering the MCU (Memory Controller Unit), processor, and channels, as shown in the architecture below.
During initial memory installation, memory slots are filled in pairs.
A particular system model and type may have rules requiring that, after the first "N" pairs of memory modules are installed, the remaining slots be populated four at a time instead of two.
During memory augmentation, if additional memory slots are available, they can be filled following the memory plugging rules specific to the system. If no additional slots are available, higher-capacity memory features must be used, again installed according to the plugging rules.
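The pair-then-quad plugging progression described above can be sketched as a small helper. The threshold of four pairs is an assumption of mine for illustration; the real value is model-specific and must come from the system's plugging rules:

```python
# Hypothetical sketch of the pair/quad plugging progression described above.
# The threshold (N = 4 pairs) is an assumed value, not an IBM rule.

def next_group_size(installed_modules: int, pair_threshold: int = 4) -> int:
    """Return how many modules the next memory group must contain."""
    if installed_modules < pair_threshold * 2:
        return 2   # early slots are populated two modules at a time
    return 4       # beyond the first N pairs, populate four at a time

print(next_group_size(2))   # 2 -- still within the initial pairs
print(next_group_size(8))   # 4 -- past the threshold, quads required
```

A helper like this is only useful as a planning aid; the authoritative plugging sequence for a given machine type must always be taken from IBM's model-specific documentation.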
An IBM Power System administrator should be aware of the following information when planning a successful memory installation or addition:
- Which memory feature codes are supported by the specific IBM Power System model.
- The minimum and maximum memory configurations supported by the specific model and processor feature code of the IBM Power System.
- A pair of memory modules must be of the same size, rank, and memory density.
- The memory modules in each memory group must be of the same size, rank, and memory density.
- Ensure that the memory feature code is supported by the Power System firmware. Be especially careful when adding newer or higher-capacity memory modules.
- If channel rules apply, check whether, for the specific system model, the memory slots in a channel pair must contain identical memory modules from the same manufacturer.
- When combining memory technologies (e.g. DDR4 and DDR5), ensure that the memory within each system node is of the same type. Failure to observe this rule may result in an IPL stall, because the service processor will detect a mixture of DIMMs of different speeds within a single node.
- Extra rules against mixing memory of different ranks (1R or 2R) apply when a system contains fewer memory modules than a threshold number.
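The uniformity rules in the list above lend themselves to a simple pre-installation check. The sketch below is a hypothetical validator of my own; the field names and example values are illustrative and do not come from an IBM specification:

```python
from dataclasses import dataclass

# Hypothetical validator for the pairing/grouping rules listed above.
# Field names and example values are illustrative, not from IBM documentation.

@dataclass(frozen=True)
class Module:
    size_gb: int     # module capacity in GB
    ranks: int       # 1 (1R) or 2 (2R)
    density: str     # DRAM die density, e.g. "16Gb"
    tech: str        # memory type, e.g. "DDR4" or "DDR5"

def group_is_uniform(group):
    """All modules in a pair or group must share size, rank, and density."""
    return len({(m.size_gb, m.ranks, m.density) for m in group}) == 1

def node_is_consistent(node_modules):
    """Mixing DDR4 and DDR5 within one system node risks an IPL stall."""
    return len({m.tech for m in node_modules}) == 1

a = Module(64, 2, "16Gb", "DDR4")
b = Module(64, 2, "16Gb", "DDR4")
c = Module(128, 2, "16Gb", "DDR5")
print(group_is_uniform([a, b]))        # True  -- a valid pair
print(node_is_consistent([a, b, c]))   # False -- DDR4 and DDR5 mixed in a node
```

Running a check like this against a planned configuration before ordering parts catches the most common mistakes (mismatched pairs, mixed technologies in a node) early, though it cannot replace the model-specific plugging rules.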
Keeping these generic terms in mind and referring to the IBM model-specific guidelines, I am optimistic that an IBM Power System administrator can easily plan a successful installation or augmentation of physical memory.