
HP Proliant “Hardware RAID support is disabled via NVRAM Configuration Setting”

I got my hands on a used HP ProLiant server with a P420i RAID controller, but I was unable to create a logical RAID drive using the ACU. A message shown while booting reads “Hardware RAID support is disabled via NVRAM Configuration Setting”. It turns out that RAID had been disabled by enabling HBA mode.

This can be solved but it is a bit tricky. I used information from the following sources:

https://systemausfall.org/wikis/howto/Disable%20HP%20Proliant%20Hardware-RAID

http://downloads.linux.hpe.com/SDR/project/mcp/

https://wiki.debian.org/HP/ProLiant#HP_Repository

https://cdimage.debian.org/debian-cd/current-live/amd64/iso-hybrid/

  1. Connect one of the network ports to a switch on a LAN where you have a DHCP server and an Internet connection (so you don’t have to fiddle with manual network configuration).
  2. From another computer, download Debian Live ISO. I used the “standard” version (link above).
  3. Boot the server into Debian Live, either by making a bootable USB stick or by mounting the ISO via iLO (I had to use Firefox; in Chrome the ISO was unmounted mid-process).
  4. When booted into Debian Live, do:
    sudo nano /etc/apt/sources.list
  5. Add the following line to the file and save it (CTRL-X, then Y):
    deb http://downloads.linux.hpe.com/SDR/repo/mcp jessie/current non-free
  6. Add the keys for the repository:
    curl http://downloads.linux.hpe.com/SDR/hpPublicKey1024.pub | sudo apt-key add -
    curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048.pub | sudo apt-key add -
    curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048_key1.pub | sudo apt-key add -
    curl http://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | sudo apt-key add -
  7. Update the package lists and install ssacli:
    sudo apt-get update
    sudo apt-get install ssacli
  8. Check the status (you will see HBA mode reported as enabled somewhere in the output):
    sudo ssacli controller slot=0 show
  9. Disable HBA mode:
    sudo ssacli controller slot=0 modify hbamode=off
  10. Now reboot the server, remove the USB stick or unmount the ISO via iLO, and press F5 during boot to start the ACU. You should now be able to create a logical RAID drive (or create it directly with ssacli; see the sketch below).
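
As an optional alternative to the ACU, the logical drive can also be created directly from ssacli while still booted into Debian Live. The following is only a sketch; the controller slot, drive bays, and RAID level are assumptions that must be adjusted to your own hardware:

    # List the physical drives attached to the controller (slot 0 assumed, as above)
    sudo ssacli controller slot=0 physicaldrive all show

    # Create a RAID 1 logical drive from two example bays (1I:1:1 and 1I:1:2 are placeholders)
    sudo ssacli controller slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1

    # Verify the new logical drive
    sudo ssacli controller slot=0 logicaldrive all show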

How can I use a PC to recover data when my Synology NAS malfunctions?

Purpose

This article will guide you through recovering data on your PC when your Synology NAS malfunctions.

Notes:

The drives may not be able to mount the volume again after being migrated to a new NAS.

Environment

  • Available on DSM version 6.2.x and above.
  • Only applicable to the ext4 or Btrfs file systems.
  • The Ubuntu version should be 18.04 or above.
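
To quickly confirm that your live environment meets the Ubuntu requirement above, you can run the following check (not part of the original article):

lsb_release -a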

Resolution

  1. Make sure your PC has sufficient drive slots for drive installation.
  2. Remove the drives from your Synology NAS and install them in your PC. For RAID or SHR configurations, you must install all the drives (excluding hot spare drives) in your PC at the same time.
  3. Prepare an Ubuntu environment by following the instructions in this tutorial.
  4. Go to Files on the left bar and select Home.
  5. Right-click, select New Folder, and create one or more folders as mount points for accessing data (see Note 1).
  6. Right-click the new folder(s) and click Properties; the parent folder combined with the folder name gives ${mount_point}.
    Example: If the parent folder is /home/ubuntu/ and the folder name is Test, the mount point will be /home/ubuntu/Test/.
  7. Go to Show Applications in the lower-left corner > Type to search….
  8. Enter Terminal in the search bar and select Terminal.
  9. Enter the following command to obtain root privileges.

sudo -i

  10. Enter the following commands to install mdadm and lvm2, the RAID and logical volume management tools. lvm2 must be installed or vgchange will not work.

apt-get update
apt-get install -y mdadm lvm2

  11. Enter the following command to assemble all the drives removed from your Synology NAS (mdadm -Asf scans for and force-assembles the arrays, and vgchange -ay activates any LVM volume groups). The results may differ according to the storage pool configurations on your Synology NAS.

mdadm -Asf && vgchange -ay

  12. Enter the following commands to get the information needed to determine ${device_path}.

cat /proc/mdstat
lvs

According to the output of pvs/vgs/lvs, ${device_path} is determined as follows:

  • No lvs output: ${device_path} is /dev/${md} (see Note 2)
  • With lvs output: ${device_path} is /dev/${VG}/${LV} (see Note 3)

Below are samples of the md status corresponding to each RAID and volume type:

Classic RAID with single volume

cat /proc/mdstat:
root@ubuntu:~# cat /proc/mdstat
Personalities : [raid1]
md4 : active raid1 sdc3[0]
      73328704 blocks super 1.2 [1/1] [U]
unused devices: <none>

lvs: no output

${device_path}: /dev/md4

SHR with single volume

cat /proc/mdstat:
root@ubuntu:~# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sda5[0]
      73319616 blocks super 1.2 [1/1] [U]
unused devices: <none>

lvs:
root@ubuntu:~# lvs
  LV VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv vg1000 -wi-a----- 69.92g

${device_path}: /dev/vg1000/lv

Classic RAID/SHR with multiple volumes

cat /proc/mdstat:
Personalities : [raid1]
md3 : active raid1 sdc3[0] sdd3[1]
      73328704 blocks super 1.2 [2/2] [UU]
unused devices: <none>

lvs:
root@ubuntu:~# lvs
  LV                    VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  syno_vg_reserved_area vg1 -wi-a----- 12.00m
  volume_1              vg1 -wi-a----- 30.00g
  volume_3              vg1 -wi-a----- 30.00g

${device_path}: /dev/vg1/volume_1 and /dev/vg1/volume_3
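
If you are unsure which device path holds the volume you want, an optional check (not part of the original article) is to inspect the filesystem on a candidate path before mounting, for example:

blkid /dev/vg1/volume_1
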
  13. Enter the following command to mount all the drives as read-only to access your data. Enter your device path (according to the RAID and volume type in Step 12) in ${device_path} and the mount point (created in Step 6) in ${mount_point}. Your data will be placed under the mount point.

mount ${device_path} ${mount_point} -o ro
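
For example, with the SHR single-volume layout shown above and the mount point from Step 6 (both purely illustrative), the command would be:

mount /dev/vg1000/lv /home/ubuntu/Test -o ro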

  14. Check the data in Files > Home > the folders you created in Step 5.

If you still cannot recover the data through the above steps, refrain from trying any other repair methods, as they may cause further damage to your data. As a last resort, please seek the help of a local data recovery company. Kindly understand that data retrieval is still not guaranteed.
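
Once you have finished copying your data off the volume, you may want to detach everything cleanly before shutting down the PC. This is only an optional sketch, not part of the original article; the mount point below is the illustrative one from Step 6:

# Unmount each mount point you created
umount /home/ubuntu/Test

# Deactivate the LVM volume groups and stop the md arrays
vgchange -an
mdadm --stop --scan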

Notes:

  1. One mount point corresponds to one volume. If you have multiple volumes to recover, create the same number of folders as volumes.
  2. The md (array) numbers will be listed in the output of cat /proc/mdstat.
  3. syno_vg_reserved_area can be ignored; the number of volume_x entries is equal to the number of volumes.