Building a Low Power Consumption Server - Part I

TL;DR: A low-power server with 128 GB DDR4 (non-ECC) memory, 6 cores and 12 threads, 2x SSDs, 1x NVMe, 7x fans, 10 Gigabit networking, and IPMI, all fitting in a 1U server case and idling at 15.5 watts - sounds too good to be true? Let's find out whether it is possible. Spoiler: it is! This blog post is part one of two and covers the hardware. The second post will focus on the software side of reaching those 15.5 watts in idle.

Where the journey starts

As I mentioned in my last post, I've decided to upgrade my homelab. My personal focus was on building a new server that is both efficient and frugal with power. The journey began when I first watched the YouTube video "Building a Power Efficient Home Server!" [1] by Wolfgang (the video is packed with information and I can highly recommend it!). With the information from the video in mind, I spent several hours looking for the "perfect" server. At first, AMD CPUs and therefore AMD motherboards were my focus, but this changed because several of the more interesting boards were too expensive, even used. Since I wanted support for at least 128 GB of RAM, most of the cheaper boards were out anyway, as they only supported a maximum of 64 GB. During one of my evening "research" sessions, I stumbled across the Supermicro X12SCZ-F [2] while browsing eBay. The lspci output below provides some insight into the hardware components on the motherboard.

➜  ~ lspci
00:00.0 Host bridge: Intel Corporation Comet Lake-S 6c Host Bridge/DRAM Controller (rev 03)
00:08.0 System peripheral: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model
00:12.0 Signal processing controller: Intel Corporation Comet Lake PCH Thermal Controller
00:14.0 USB controller: Intel Corporation Comet Lake USB 3.1 xHCI Host Controller
00:14.2 RAM memory: Intel Corporation Comet Lake PCH Shared SRAM
00:16.0 Communication controller: Intel Corporation Comet Lake HECI Controller
00:16.3 Serial controller: Intel Corporation Comet Lake Keyboard and Text (KT) Redirection
00:17.0 SATA controller: Intel Corporation Comet Lake SATA AHCI Controller
00:1b.0 PCI bridge: Intel Corporation Comet Lake PCI Express Root Port #17 (rev f0)
00:1c.0 PCI bridge: Intel Corporation Comet Lake PCIe Root Port #1 (rev f0)
00:1c.5 PCI bridge: Intel Corporation Comet Lake PCIe Port #6 (rev f0)
00:1c.6 PCI bridge: Intel Corporation Comet Lake PCIe Root Port #7 (rev f0)
00:1f.0 ISA bridge: Intel Corporation Device 0697
00:1f.3 Audio device: Intel Corporation Comet Lake PCH cAVS
00:1f.4 SMBus: Intel Corporation Comet Lake PCH SMBus Controller
00:1f.5 Serial bus controller: Intel Corporation Comet Lake PCH SPI Controller
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (11) I219-LM
01:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
02:00.0 PCI bridge: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge (rev 04)
03:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 41)
➜  ~

Let's pick up where we left off

The board uses the not-so-common LGA-1200 CPU socket, which could cut both ways: a fitting CPU might be cheap and easy to get because nobody wants it, or exactly the opposite. After some searching, I found a post on reddit [3] with a list of supported CPUs - this list was pure gold and later helped me pick the right one. The board itself supports up to 128 GB of DDR4-2933, both ECC and non-ECC, and comes with a dedicated IPMI LAN port, which is typical for a server motherboard like this one. Having a BMC/IPMI is nice for maintenance and debugging on the one hand, but on the other hand the board keeps drawing power even when the server is not running - a BMC typically consumes around four watts in idle. In the end, every decision has pros and cons. I decided to buy the board on eBay for around 90€.

➜  ~ lshw -c bus -quiet -sanitize
  *-core
       description: Motherboard
       product: X12SCZ-F
       vendor: Supermicro
       physical id: 0
       version: 1.01A
       serial: [REMOVED]
       slot: Default string
[...]
➜  ~
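
Speaking of the IPMI port: the main reason to have it is that the machine stays reachable for maintenance even when it is powered off. A minimal example with ipmitool - the address and credentials are placeholders for your own BMC settings:

➜  ~ ipmitool -I lanplus -H <bmc-address> -U <user> -P '<password>' chassis power status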

The CPU, where the real work happens

While waiting for the package with the board, I looked on several platforms like kleinanzeigen for an Intel i5-10400. This CPU supports 128 GB of DDR4-2666 memory, has 6 cores and 12 threads, which is more than enough for my purposes, and is relatively easy to get for around 60€. The CPU also comes with an integrated GPU; not interesting for me, but nice to have.

➜  ~ lshw -c cpu -quiet -sanitize
  *-cpu
       description: CPU
       product: Intel(R) Core(TM) i5-10400 CPU @ 2.90GHz
       vendor: Intel Corp.
       physical id: 30
       bus info: cpu@0
       version: 6.165.3
       serial: [REMOVED]
       slot: CPU
       size: 800MHz
       capacity: 4300MHz
       width: 64 bits
       clock: 100MHz
       capabilities: lm fpu fpu_exception wp vme de [...] arch_capabilities cpufreq
       configuration: cores=6 enabledcores=6 microcode=256 threads=12
➜  ~

I briefly considered looking for an Intel Xeon W-1250 because it supports ECC, but I decided against it due to the RAMpocalypse [4] and the resulting memory prices. Since I didn't want to build a NAS, ECC was not a hard requirement for me - even though Linus Torvalds [5] won't touch machines without ECC memory.

I don't want to start a discussion about the value of ECC memory here; this is just my opinion, and with memory prices at an all-time high, it was also a question of budget. Having ECC memory is nice, but for my use case it is not mandatory.

Memory, the art of forgetting

After I'd found a good deal on an i5-10400, the next item on my list was memory. Since the board supports up to DDR4-2933 but the CPU only supports DDR4-2666, I was looking for specific modules, for example the Samsung M378A4G43MB1-CTD, a 32 GB DDR4-2666 non-ECC DIMM. During this step I was able to get the newer DDR4-3200 model (Samsung M378A4G43AB2-CWE) for around 96€ per module. Although the CPU will clock the memory down, I couldn't resist.

➜  ~ lshw -c memory -quiet -sanitize
[...]
  *-memory
       description: System Memory
       physical id: 1d
       slot: System board or motherboard
       size: 128GiB
     *-bank:0
          description: DIMM DDR4 Synchronous Unbuffered (Unregistered) 3200 MHz (0.3 ns)
          product: M378A4G43AB2-CWE
          vendor: Samsung
          physical id: 0
          serial: [REMOVED]
          slot: DIMMA1
          size: 32GiB
          width: 64 bits
          clock: 3200MHz (0.3ns)
[...]
➜  ~
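
By the way, whether the DIMMs really get clocked down by the memory controller can be verified with dmidecode, which reports both the rated speed and the speed the modules are actually configured to run at:

➜  ~ sudo dmidecode --type memory | grep -i speed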

NVMe, or in other words, our boot device

In the past, I had good experiences with NVMe drives from Samsung, especially the PM981a [6] series. These drives can be found used for relatively little money. Furthermore, the firmware is not overly buggy and the series supports Active State Power Management (ASPM). This becomes important later, when we want to reduce the power consumption as much as possible - more on that in the upcoming part II of this blog series.

➜  ~ lshw -c storage -quiet -sanitize
  *-nvme
       description: NVMe device
       product: SAMSUNG MZVLB256HBHQ-00000
       vendor: Samsung Electronics Co Ltd
       physical id: 0
       bus info: pci@0000:01:00.0
       logical name: /dev/nvme0
       version: EXH7201Q
       serial: [REMOVED]
       width: 64 bits
       clock: 33MHz
       capabilities: nvme pm msi pciexpress msix nvm_express bus_master cap_list
       configuration: driver=nvme latency=0 [...] SAMSUNG MZVLB256HBHQ-00000 state=live
       resources: irq:16 memory:a1300000-a1303fff
➜  ~
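
Whether ASPM is actually advertised and enabled for a device can be checked with lspci; root privileges are needed, otherwise the capability registers stay hidden. The exact wording of the output differs per device and kernel, but it is a quick sanity check:

➜  ~ sudo lspci -vv | grep -i aspm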

SSDs, latency is the enemy

Collecting all the parts needed to build a server, I was once again browsing eBay and kleinanzeigen for components - in this case, SSDs. I had some lying around, but I wanted SSDs that are made for 24/7 operation. One SSD that came to mind was the Western Digital Red SA500 with a TBW (Terabytes Written) rating of 350 TB, which is more than enough for my purposes, as I only want to use the disks for VM/app container storage. Large files like images, etc. will be stored on my NAS.

➜  ~ smartctl -a /dev/sda
smartctl 7.5 2025-04-30 r5714 [x86_64-linux-6.18.7-0-lts] (local build)
Copyright (C) 2002-25, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     WD Blue / Red / Green SSDs
Device Model:     WDC  WDS500G1R0A-68A4W0
Serial Number:    [REMOVED]
LU WWN Device Id: 5 001b44 8ba5af196
Firmware Version: 411000WR
User Capacity:    500,107,862,016 bytes [500 GB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Form Factor:      2.5 inches
TRIM Command:     Available, deterministic, zeroed
Device is:        In smartctl database 7.5/5706
ATA Version is:   ACS-4 T13/BSR INCITS 529 revision 5
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Jan 31 22:35:21 2026 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
[...]
➜  ~
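
To keep an eye on how much of the 350 TB endurance budget has been used over time, the write-related SMART attributes can be filtered; attribute names differ between vendors, so the pattern below is deliberately loose:

➜  ~ sudo smartctl -A /dev/sda | grep -i -E 'writ|wear'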

Network - fast enough to disappear

As mentioned above, the idea was to store large files on my NAS, so a 10 Gigabit network card was needed. One card I read about several times during my "research" was the Intel X710-DA2. This card is quite popular within the homelab community, and for good reason: it is affordable and supports ASPM [7], which makes it a perfect candidate. I was able to get my first card for around 40€.

➜  ~ lshw -c network -quiet -sanitize
[...]
  *-network:1
       description: Ethernet interface
       product: Ethernet Controller X710 for 10GbE SFP+
       vendor: Intel Corporation
       physical id: 0.1
       bus info: pci@0000:02:00.1
       logical name: eth3
       version: 02
       serial: [REMOVED]
       size: 10Gbit/s
       capacity: 10Gbit/s
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi msix pciexpress vpd bus_master cap_list rom ethernet physical fibre 10000bt-fd
       configuration: autonegotiation=off broadcast=yes [...] multicast=yes speed=10Gbit/s
[...]
➜  ~

Build - putting it all together

After talking about the components, we have to clarify where all the hardware goes. Since I don't have much space left in my current rack, I wanted the smallest form factor possible - in this case 1U, or roughly 4.3 cm. Looking for cases in this form factor, I found the Supermicro C512L [8]; it's a short 1U server case (356 mm in depth). The case also comes with a front panel whose Supermicro connector fits my board perfectly, since both are from Supermicro. I was able to get the case in good condition for around 50€ on eBay. The case also came with a PSU, but I replaced it - more on that later.

Airflow

The motherboard shipped with a heatsink that fits nicely within a 1U case. The heatsink is designed for air being pushed through it from the front, so fans placed in front of it blow cool air through the fins and keep the temperature down. For this purpose, I looked for fans that are not too loud, as 1U server fans usually sound like a turbine. My first choice was the Noctua NF-A4x20 PWM, a super quiet 40x20 mm fan; the only problem is the price. Although the Noctua is quiet and the quality speaks for itself, I looked for an alternative and in the end landed on the ARCTIC S4028-6K. These fans move more air than the Noctua (ARCTIC: 12.06 m³/h (7.1 CFM), Noctua: 9.4 m³/h (5.53 CFM)), and the biggest advantage is the price: a pack of five costs only 30€. I planned to place the fans in front of the motherboard, so five of them were perfect.

To keep the fans lined up, the idea was to build a fan holder. I'm not the first one with this idea, so I looked for existing 3D prints and found the article by Clemens Leitner [9], who built something similar years ago. He published his files on GitHub [10], which allowed me to modify the model, as the original one only holds four fans. The result can be seen below:

A friend of mine was kind enough to print the fan holder for me, as I don't have a 3D printer yet. (I know, shame on me!) The model was originally built for Noctua fans, but it holds the ARCTIC fans perfectly as well. The modified version can be found here. During the mounting process I was a bit too rough and broke the bracket that mounts the fan holder to the case; however, double-sided tape now holds it in place. Some zip ties and cable holders later, the fan holder, including the fans, was mounted and looks very clean from my point of view.

Since the motherboard has five 4-pin fan headers plus one CPU fan header, there was no issue connecting all the fans. The headers FANA, FANB, and FAN1-3 are close to the fan array; FAN4 was used for two Noctua NF-A4x20 fans that pull air out of the case, placed where the original PSU usually sits.
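
With everything plugged in, the BMC should report a reading for each of those headers. A quick way to verify, assuming ipmitool is installed and the ipmi kernel modules are loaded (or query the BMC remotely via -H/-U/-P):

➜  ~ sudo ipmitool sensor | grep -i fan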

For an optimal cooling experience, I also built a custom air shroud, which keeps the CPU at around 55-56°C under high load with the fans at approximately 4500 rpm:

Without the air shroud, the temperatures are around 60-64°C with the fans at their maximum of ~6500 rpm (the official ARCTIC specification says 6000 rpm, but in my experience it's closer to 6500 rpm), which can be quite loud.

The only drawback is that the NVMe temperature goes up a bit, as the airflow is redirected towards the CPU heatsink; without the air shroud, the left fan also covers the NVMe and other parts of the motherboard. But the difference is only around 4°C, and having fans that spin slower and more efficiently makes it worth it.
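
The NVMe temperature itself is easy to keep an eye on - smartctl reads it for NVMe devices as well:

➜  ~ sudo smartctl -a /dev/nvme0 | grep -i temperature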

PSU - Hello PicoPSU!

As mentioned earlier, the case came with a PSU, but since the goal was a small, quiet, and energy-efficient server for my homelab, I replaced it with a 90-watt PicoPSU. That is enough power to run the system, including seven fans, two SSDs, a 10 Gigabit network card, and an NVMe. The advantages of the PicoPSU are its small form factor, the fact that it is fanless, and its high efficiency at low loads. I bought the PicoPSU-90W together with the external power brick for around 50€.

SSDs - hold my beer

Usually, the hard drives/SSDs sit in the upper left corner of this case, but that space is now "blocked" by the custom fan array. Fortunately, removing the original PSU and replacing it with a PicoPSU freed up a new spot for the SSDs. Looking for a bracket to hold them, I found the one created by Andreas Kirchlechner [11].

To connect the SATA drives, I used flexible SATA cables from Delock with a length of 70 cm. These cables were quite expensive for what are, in the end, just SATA cables. However, being both long and flexible makes them worth it.

As seen in the picture above, the motherboard comes with a 12V male MOLEX connector. I decided to draw power from this connector directly rather than from the PicoPSU, for better cable management. The only problem: I couldn't find any female 4-pin MOLEX to 15-pin SATA power adapters, only male MOLEX to SATA. Since the PicoPSU ships with a female 4-pin MOLEX adapter that feeds one HDD and one SATA drive, I used that one and modified it so that it can power an additional SATA drive. For this purpose, I bought some SATA power pins (type 3811) for around 2€.

Riser Card + Network

To add a network card to the system, a riser card was needed because of the 1U form factor. I found a good deal on a Supermicro RSC-RR1U-E16 1U riser card on eBay. Even though this is a x16 PCIe riser and the chosen network card (Intel X710-DA2) is only x8, it fits, of course.

If you take a closer look at the picture above, you might wonder: "Why is the card in the x8 PCIe slot even though the x16 slot is free?" The answer is simple: the x16 slot is wired directly to the CPU, which means the card could use its full bandwidth, but the package would never reach C-State 10 (C10) [12]. SLOT4 on the motherboard (x4 PCIe electrically, but with an open x8 connector) is connected to the chipset instead of the CPU, which allows the system to reach C10. More on this in the second part of this blog post.
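
Whether the package actually reaches C10 - or gets stuck in a shallower state because of a device hanging off the CPU's PCIe lanes - can be observed with turbostat, which ships with the kernel tools. A quick sketch; the package C10 residency column is called something like Pk%pc10, depending on the turbostat version:

➜  ~ sudo turbostat --quiet sleep 30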

The x8 PCIe slot is open at the end, so even cards (or risers) longer than the slot fit into it. And although SLOT4 only provides x4 PCIe lanes, the network card reaches approximately 10 Gbit/s without any issues:

➜  ~ iperf3 -c 10.10.0.235
Connecting to host 10.10.0.235, port 5201
[  5] local 10.10.0.237 port 56394 connected to 10.10.0.235 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.10 GBytes  9.42 Gbits/sec   20   1.52 MBytes
[  5]   1.00-2.00   sec  1.10 GBytes  9.42 Gbits/sec    0   1.81 MBytes
[  5]   2.00-3.00   sec  1.09 GBytes  9.41 Gbits/sec    0   1.84 MBytes
[  5]   3.00-4.00   sec  1.10 GBytes  9.42 Gbits/sec    3   1.86 MBytes
[  5]   4.00-5.00   sec  1.10 GBytes  9.41 Gbits/sec    0   1.87 MBytes
[  5]   5.00-6.00   sec  1.10 GBytes  9.41 Gbits/sec    0   1.87 MBytes
[  5]   6.00-7.00   sec  1.10 GBytes  9.42 Gbits/sec    0   1.88 MBytes
[  5]   7.00-8.00   sec  1.10 GBytes  9.42 Gbits/sec    0   1.89 MBytes
[  5]   8.00-9.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.93 MBytes
[  5]   9.00-10.00  sec  1.10 GBytes  9.41 Gbits/sec    3   1.93 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  11.0 GBytes  9.42 Gbits/sec   26            sender
[  5]   0.00-10.00  sec  11.0 GBytes  9.41 Gbits/sec                 receiver

iperf Done.
➜  ~
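
That SLOT4 really only negotiates four lanes - and why that is still plenty: PCIe 3.0 x4 offers roughly 32 Gbit/s of usable bandwidth, comfortably above 2x 10 GbE - can be verified by comparing LnkCap and LnkSta for the card (replace the address with the one lspci reports for the X710):

➜  ~ sudo lspci -vv -s 02:00.0 | grep -E 'LnkCap|LnkSta'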

On top of allowing the system to reach C10, placing the card in the x4 slot has the side effect that the NVMe is not covered by the network card. Since both the network card and the NVMe give off heat, stacking them on top of each other is not the best idea.

Power Consumption

After putting in so much effort, one question is still unanswered: how high is the power consumption? According to the motto:

Just to clarify: the picture above shows the power consumption of 15.5 watts in idle, with the fans at low rpm and with software-side improvements applied. These improvements will be discussed in part II, where I will go into the settings I made, including the fan settings.

The system pulls 15.5 watts from the wall, measured with a MECHEER WM07-DE power meter. Only a DAC cable was plugged into the Intel X710-DA2 during the measurement.
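
For a rough cross-check without a wall meter, the CPU package power can be read from the kernel's RAPL interface. Note that this only covers the CPU package - not the fans, drives, BMC, or the NIC - so it will always be lower than the wall reading. A small sketch, assuming the usual powercap path on Intel systems:

➜  ~ E1=$(sudo cat /sys/class/powercap/intel-rapl:0/energy_uj); sleep 10; \
     E2=$(sudo cat /sys/class/powercap/intel-rapl:0/energy_uj); \
     echo "package: $(( (E2 - E1) / 10 / 1000000 )) W"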

The final result

Now let's wrap up, and as always, a picture says more than a thousand words. Here are some impressions of the final build:

Both pictures show the build without the custom air shroud. The NVMe also has a heatsink installed. Future plans include a 3D-printed bracket for the network card as well as a dust filter. I'm also considering moving one Noctua NF-A4x20 fan in front of the network card, while the other stays where it currently sits.

Parts (or, as Scooter would say: How Much Is the Fish? [13])

Now that I've written about the hardware I chose during the build, it's time for the important question: "How much does the full build cost?" Since I have built three of these servers and didn't pay the same for every part, I'll show both the cheapest and the most expensive version of the configured build (the first number is always the cheapest version):

Adding all prices together, we end up at 834€ (excluding thermal paste, cable holders, screws, and zip ties) for the cheapest "version" and 1103€ for the most expensive one. It's crazy to see that a big part of the price comes from the memory, as the market is completely out of control these days.

In the second post of this mini-series, I will discuss the software/BIOS improvements I made, which allow me to reach these power consumption levels.

Resources

#EOF