User:FlyingBlackbird

From RCS Wiki

Revision as of 07:43, 26 January 2020

Usage

I am using a Blackbird desktop system with an IBM POWER9 v2 (= stepping DD2.3) 8-core CPU as testing infrastructure for open source projects (since January 2020).

Unstable hardware watchpoints in stepping DD2.2 (erratum #1: the DAWR [Data Access Watchpoint Register] feature is disabled on DD2.2) were the main reason for me to wait for DD2.3, which makes reliable low-level debugging available.

System configuration

{| class="wikitable"
! Component !! Brand !! Model !! Costs in EUR
|-
| Mainboard || Raptor || Blackbird Rev. 1.01 ||
|-
| CPU || IBM || 8-Core POWER9v2 with 3U HSF ||
|-
| Desktop Case || Fractal Design || Define R6 USB-C || 140
|-
| Power Supply || be quiet! || Straight Power 11 (650 W) || 120
|-
| RAM || Samsung (OEM by phs memory) || 2 x M393A4K40CB2-CTD7Q 32 GB DDR4 RDIMM || 344
|-
| Video Card || Aspeed || AST 2500 onboard VGA || 0
|-
| Optical Drive || Asus || BW-16D1HT Retail (Blu-ray writer) || 75
|-
| HDD || Seagate || IronWolf Pro 8 TB (ST8000NE0004) SATA III || 300
|-
| Operating System || || Ubuntu Server 19.10 || 0
|-
| Total || || ||
|}

Hardware currently being tested (not yet fully successful in the complete system, due to NVMe problems when installing Linux):

  • NVMe M.2 SSD together with a SATA HDD and Ubuntu 19.10 Server
    • RaidSonic ICY BOX IB-PCI214M2-HSL M.2 to PCIe adapter
    • Samsung 970 EVO Plus (M.2 with NVMe) 2 TB

Test-Status:

  • Generally (independent of the hardware): the Ubuntu Server 19.10 installation says:
<syntaxhighlight lang="text">
SQUASHFS error: xz decompression failed, data probably corrupt
SQUASHFS error: squashfs_read_data failed to read block 0x0
SQUASHFS error: Unable to read metadata cache entry [0]
</syntaxhighlight>
  • The NVMe SSD is recognized, but the installation seems to use the wrong device name later:
<syntaxhighlight lang="text">
Error: Could not stat device /dev/disk/by-id/wwn-eui.0025... - No such file or directory
...
An error occured handling 'disk-nvme0n1': OSError - [Errno Failed to find device at path: %s] /dev/disk/by-id/wwn-eui.0025...
...
Traceback:
...
File "/snap/subiquity/1286/lib/python3.6/site-packages/curtin/commands/block_meta.py", line 182, in devsync
</syntaxhighlight>
  TODO: Add link to the workaround (symlink from the by-id name to the nvme device)
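The workaround mentioned above can be sketched roughly as follows. The device name and the by-id name are placeholders (assumptions), not values from the actual system; the real by-id name must be taken from the installer's error message:

```shell
# Hypothetical sketch of the symlink workaround: curtin looks the disk up
# under /dev/disk/by-id/, so a missing link can be created by hand from the
# installer shell, e.g.  ln -s /dev/nvme0n1 /dev/disk/by-id/<wwn-eui-name>.
# "wwn-eui.PLACEHOLDER" stands in for the real id; demonstrated here in a
# scratch directory instead of /dev/disk/by-id/:
DEMO_DIR=$(mktemp -d)
ln -s /dev/nvme0n1 "$DEMO_DIR/wwn-eui.PLACEHOLDER"
readlink "$DEMO_DIR/wwn-eui.PLACEHOLDER"   # prints /dev/nvme0n1
```

Whether this is exactly the fix the TODO refers to is an assumption until the link is added.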

TODO
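The squashfs read errors above usually point to a corrupted installation medium, so verifying the downloaded image against Ubuntu's published SHA256SUMS file is a cheap first check. A minimal sketch (the iso filename is an assumption, shown only in the comment; the mechanics are demonstrated on a scratch file):

```shell
# First check for "data probably corrupt" squashfs errors: verify the image
# before writing it to USB, e.g.
#   sha256sum ubuntu-19.10-live-server-ppc64el.iso   # compare with SHA256SUMS
# (filename assumed). Shown here on a scratch file:
printf 'demo' > /tmp/squashfs-demo.img
DIGEST=$(sha256sum /tmp/squashfs-demo.img | cut -d' ' -f1)
echo "$DIGEST"   # 64 hex characters
```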

Measures

Power Consumption:

  • 2.2 W plugged in
  • 68 W Idle
  • 186 W with 100 % CPU load (tested with stress --cpu 32 -t 180s)

Temperatures:

  • CPU idle: 45 degrees Celsius
  • CPU 100 % load: 72 degrees Celsius
  • DIMM 0 (32 GB ECC RAM): 44 degrees Celsius
  • DIMM 1 (32 GB ECC RAM): 49 degrees Celsius

Noise Level:

  • Surprisingly quiet during idle and normal use (low CPU usage)
  • Fans spinning fast but not too loud during 100 % CPU load (slow RPM increase and decrease, no annoying sudden changes)

Boot-up durations

  • Cold boot (powered off)
    • After switching on the power supply switch: time until the case power switch reacts so that Hostboot can start: 2 minutes
    • After cold boot: Time until Petitboot boot menu appears: 2 minutes
    • Booting Ubuntu 19.10 Server on a NVMe SSD: Time from Petitboot boot menu to Ubuntu lightdm login screen: 34 s
  • Soft boot (powered on but switched off via the case power switch)
    • Time until the Petitboot boot menu appears: about 100 seconds
  • Reboot from Ubuntu until the Petitboot boot menu appears: about 4 to 10 seconds (after Ubuntu has shut down)

Summary:

  • A cold start takes quite long compared to the x86_64 architecture (about 4.5 minutes until the Ubuntu login screen appears)
  • A warm start also takes longer compared to the x86_64 architecture (about 2 minutes until the Ubuntu login screen appears)
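The ~4.5 minute cold-start figure can be cross-checked from the stage timings listed under "Boot-up durations" (2 minutes until the power switch reacts, 2 minutes until Petitboot, 34 seconds from Petitboot to the login screen):

```shell
# Cross-check of the cold-start summary; all values are taken from the
# "Boot-up durations" list above.
POWER_ON=120      # seconds until the case power switch reacts
PETITBOOT=120     # seconds until the Petitboot boot menu appears
OS_BOOT=34        # seconds from Petitboot to the Ubuntu login screen
TOTAL=$((POWER_ON + PETITBOOT + OS_BOOT))
echo "$TOTAL seconds"   # prints: 274 seconds, i.e. roughly 4.5 minutes
```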


Software

Ubuntu Server 19.10

  • GNOME installed via the tasksel command:
    • choose "Ubuntu desktop" to install GNOME