HP Proliant “Hardware RAID support is disabled via NVRAM Configuration Setting”

I got my hands on a used HP ProLiant server with a P420i RAID controller, but I was unable to create a logical RAID drive using the ACU. A message while booting showed “Hardware RAID support is disabled via NVRAM Configuration Setting“. It turned out RAID had been disabled by enabling HBA mode.

This can be solved but it is a bit tricky. I used information from the following sources:

https://systemausfall.org/wikis/howto/Disable%20HP%20Proliant%20Hardware-RAID

http://downloads.linux.hpe.com/SDR/project/mcp/

https://wiki.debian.org/HP/ProLiant#HP_Repository

https://cdimage.debian.org/debian-cd/current-live/amd64/iso-hybrid/

  1. Connect one of the network ports to a switch on a LAN where you have a DHCP server and an Internet connection (so you don’t have to fiddle with manual network configuration).
  2. From another computer, download the Debian Live ISO. I used the “standard” version (link above).
  3. Boot the server into Debian Live, either by making a bootable USB stick or by mounting the ISO via iLO (I had to use Firefox; in Chrome the ISO was unmounted mid-process).
  4. When booted into Debian Live, do:
    sudo nano /etc/apt/sources.list
  5. Add the following line to the file and save it (CTRL-X):
    deb http://downloads.linux.hpe.com/SDR/repo/mcp jessie/current non-free
  6. Add the keys for the repository (note that sudo must apply to apt-key, after the pipe):
    curl http://downloads.linux.hpe.com/SDR/hpPublicKey1024.pub | sudo apt-key add -
    curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048.pub | sudo apt-key add -
    curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048_key1.pub | sudo apt-key add -
    curl http://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | sudo apt-key add -
  7. Update the package lists and install ssacli:
    sudo apt-get update
    sudo apt-get install ssacli
  8. Check the status (you will see hbamode true somewhere in the output):
    sudo ssacli controller slot=0 show
  9. Disable hbamode:
    sudo ssacli controller slot=0 modify hbamode=off
  10. Now reboot the server, remove the USB stick or unmount the ISO via iLO, and press F5 during boot to start the ACU. You should now be able to create a logical RAID drive (or create one directly with ssacli, as sketched below).
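
If you prefer to stay on the command line, a logical drive can also be created with ssacli instead of the ACU. A minimal sketch, assuming slot 0 and a RAID 1 across all unassigned drives (adjust the RAID level and drive selection to your hardware):

sudo ssacli controller slot=0 create type=ld drives=allunassigned raid=1
sudo ssacli controller slot=0 logicaldrive all show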

Zoneminder 1.37 no image – capturing but not analyzing

Zoneminder version 1.37.28. The installation had been running for years without problems, but somewhere in the 1.37 series problems started to occur. Some cameras did not give visible streams in Zoneminder from time to time. Checking the cameras’ video directly, they were OK, so the problem was in Zoneminder. Some cameras were “capturing” but analyzing at 0 fps.

In the log there were lines like:

Restarting capture daemon for 6 CAMERA NAME, no image since startup. Startup time was 1671521356 – now 1671521366 > 5

And most interestingly:

You have set the max video packets in the queue to 100. The queue is full. Either Analysis is not keeping up or your camera’s keyframe interval 50 is larger than this setting.

The reason for this is that the Maximum Image Buffer Size (frames) setting is too low compared to the key frame interval set in the camera. The key frame interval is how often the camera sends a full image; between key frames it only sends the differences. A higher key frame interval means lower bandwidth, but it also means the Image Buffer Size must be higher, consuming more RAM on the server. So what you save in bandwidth, you pay in RAM.
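
As a worked example using the numbers from the warning above: with a key frame interval of 50, there are 50 packets between full images, so a queue of 100 packets holds only two key frame intervals. With a key frame interval of 5, the same queue would hold 20 intervals, which is why lowering the interval in the camera lets you shrink the buffer on the server.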

I set my cameras to deliver 5 fps with a key frame interval of 5, as they are at a remote location (not where the server is) and all video is streamed over the Internet to the server.

I changed the options for each camera: Console -> click on the source -> Buffers, then changed Image Buffer Size (frames) (in my case set to 3) and Maximum Image Buffer Size (frames) (in my case I could actually decrease it from 100 to just 55, because I had reduced the fps and decreased the key frame interval in the camera).

Observe how memory is used after this. You need to balance the buffer sizes against the cameras’ fps and key frame interval.

Still, this did not completely solve the problem. Only after changing Options -> System -> WATCH_MAX_DELAY from 5 to 45 and restarting Zoneminder did the system start to show images for all feeds. It seems like 5 seconds for the capture to start was too little, so it kept restarting the capture processes.

/tmp

As a part of the investigation I had also changed /tmp into tmpfs, according to some post I found online. It did not solve the problem, but it improved performance, so I let it be. Add the following line to /etc/fstab and reboot:

tmpfs /tmp tmpfs rw,nosuid,nodev
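
After the reboot you can verify that /tmp really is mounted as tmpfs (findmnt is part of util-linux on Debian/Ubuntu):

findmnt /tmp

Note that tmpfs by default may use up to half of your RAM; if that is a concern, you can cap it with a size option in the fstab line, for example size=2G.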

Disable unused cameras

Another important thing when it comes to Zoneminder performance is to disable unused cameras. You can leave the camera in the Zoneminder configuration for later use, but if it is going to be offline for a longer period, it is a good idea to disable it, because a camera that is offline puts unnecessary load on the Zoneminder server.

Go to Console -> click on the source -> Source and set Capturing to None.

Remember to put it back when the camera is online again 😉

Nagios check_vnc without authentication

If you need to monitor a VNC service without logging in, the following check_command can be used.

Edit your nagios configuration file and add:

# VNC
define command{
    command_name    check_vnc
    command_line    $USER1$/check_tcp -H $HOSTADDRESS$ -p $ARG1$ -w 5 -c 8 -e "RFB"
}

Then on the service you want to monitor use:

define service{
    use                     generic-service    ; Name of service template to use
    host_name               MYHOSTNAME
    service_description     VNC
    is_volatile             0
    check_period            24x7
    max_check_attempts      3
    normal_check_interval   5
    retry_check_interval    1
    contact_groups          MYCONTACTGROUPS
    notification_interval   240
    notification_period     24x7
    notification_options    c,r
    check_command           check_vnc!5910
}

If you need a port other than the one in the example, just replace “5910” in the check_command line (the standard VNC port is 5900).
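
To test the check by hand before reloading Nagios, run the plugin directly (the plugin path below is an assumption; it varies between distributions):

/usr/lib/nagios/plugins/check_tcp -H MYHOSTNAME -p 5910 -w 5 -c 8 -e "RFB"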

Ispconfig3 certbot is not renewing certificates (Ubuntu 20)

When creating a new site, a valid certificate was issued, but when the certificates expired they were never renewed. Investigating /etc/letsencrypt showed that the usual subdirectories, like live, were missing.

It turned out the server had both acme.sh and certbot installed. The solution was to remove certbot. Check whether the directory /root/.acme.sh exists and inspect its contents.

apt remove certbot
ispconfig_update.sh --force

Then, in Ispconfig, go to Tools -> Sync, select Web sites and the server you just removed certbot from.
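
To confirm that acme.sh is now handling the certificates, you can list what it manages (assuming the default installation under /root/.acme.sh):

/root/.acme.sh/acme.sh --list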

apt update error message “Could not execute ‘apt-key’ to verify signature”

It turned out the reason for this was changed permissions on the /tmp folder (caused by restoring a folder with BackupPC to /tmp instead of its original location).

Solution:

chown root:root /tmp
chmod 1777 /tmp

After this apt update worked as normal.
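
A quick way to verify the permissions afterwards (GNU stat; the expected output is 1777 root:root):

stat -c '%a %U:%G' /tmp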

How can I use a PC to recover data when my Synology NAS malfunctions?

Purpose

This article will guide you to recover data on your PC when Synology NAS malfunctions.

Notes:

The drives may not be able to mount the volume again after being migrated to a new NAS.

Environment

  • Available on DSM version 6.2.x and above.
  • Only applicable to ext4 or Btrfs file system.
  • Ubuntu version should be 18.04 and above.

Resolution

  1. Make sure your PC has sufficient drive slots for drive installation.
  2. Remove the drives from your Synology NAS and install them in your PC. For RAID or SHR configurations, you must install all the drives (excluding hot spare drives) in your PC at the same time.
  3. Prepare an Ubuntu environment by following the instructions in this tutorial.
  4. Go to Files on the left bar and select Home.
  5. Right-click, select New Folder, and create one or more folders as mount points for accessing data (see Note 1).
  6. Right-click on the new folder(s) and click Properties; the parent folder together with the folder name gives ${mount_point}.
    Example: If the parent folder is /home/ubuntu/ and the folder name is Test, the mount point will be /home/ubuntu/Test/.
  7. Go to Show Applications in the lower-left corner > Type to search….
  8. Enter Terminal in the search bar and select Terminal.
  9. Enter the following command to obtain root privileges.

sudo -i

  10. Enter the following commands to install mdadm and lvm2, both of which are RAID management tools. lvm2 must be installed or vgchange will not work.

apt-get update
apt-get install -y mdadm lvm2

  11. Enter the following command to assemble all the drives removed from your Synology NAS. The results may differ according to the storage pool configurations on your Synology NAS.

mdadm -Asf && vgchange -ay

  12. Enter the following commands to get the information for ${device_path}.

cat /proc/mdstat
lvs

According to the output of pvs/vgs/lvs, the device paths are as follows:

lvs output         ${device_path}
No lvs output      /dev/${md} (see Note 2)
With lvs output    /dev/${VG}/${LV} (see Note 3)

Below are samples of md status corresponding to each RAID and volume type:

Classic RAID with single volume:

root@ubuntu:~# cat /proc/mdstat
Personalities : [raid1]
md4 : active raid1 sdc3[0]
      73328704 blocks super 1.2 [1/1] [U]
unused devices: <none>

lvs: no output
${device_path}: /dev/md4

SHR with single volume:

root@ubuntu:~# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sda5[0]
      73319616 blocks super 1.2 [1/1] [U]
unused devices: <none>

root@ubuntu:~# lvs
  LV VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv vg1000 -wi-a----- 69.92g

${device_path}: /dev/vg1000/lv

Classic RAID/SHR with multiple volumes:

root@ubuntu:~# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sdc3[0] sdd3[1]
      73328704 blocks super 1.2 [2/2] [UU]
unused devices: <none>

root@ubuntu:~# lvs
  LV                    VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  syno_vg_reserved_area vg1 -wi-a----- 12.00m
  volume_1              vg1 -wi-a----- 30.00g
  volume_3              vg1 -wi-a----- 30.00g

${device_path}: /dev/vg1/volume_1 and /dev/vg1/volume_3
  13. Enter the following command to mount all the drives as read-only to access your data. Enter your device path (according to the RAID and volume type in Step 12) as ${device_path} and the mount point (created in Step 6) as ${mount_point}. Your data will be placed under the mount point. A concrete example follows below the list.

mount ${device_path} ${mount_point} -o ro

  14. Check the data in Files > Home > the folders you created in Step 5.
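
As a concrete example, mounting the SHR single-volume sample from Step 12 on the mount point from Step 6 would look like this (device path and folder taken from the samples above):

mount /dev/vg1000/lv /home/ubuntu/Test/ -o ro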

If you still cannot recover the data through the above steps, refrain from trying any other repair methods, because they may cause more damage to your data. As a last option, please seek the help of a local data rescue company. Kindly understand that data retrieval is still not guaranteed.

Notes:

  1. A mount point is equal to one volume. If you have multiple volumes that need to be recovered, please create the same number of folders as the number of volumes.
  2. The number of md (array) will be listed in the result of cat /proc/mdstat.
  3. syno_vg_reserved_area can be ignored; the number of volume_x is equal to the number of volumes.

ERROR 1067 (42000) at line xxx: Invalid default value for ‘field’

This is because MySQL server 5.7 changed the default SQL mode for date/time fields: the zero value 0000-00-00 00:00:00 is no longer allowed as a default (the NO_ZERO_DATE mode is now enabled by default). Therefore you have to change the default value to one of:

datetime NOT NULL DEFAULT '1000-01-01 00:00:00'
or
timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP

Example:
ALTER TABLE testdate CHANGE datestart datestart DATETIME NOT NULL DEFAULT '1000-01-01 00:00:00';
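
To see which SQL modes are active on your server (NO_ZERO_DATE and NO_ZERO_IN_DATE are the relevant ones here), you can inspect the server variable:

SELECT @@GLOBAL.sql_mode;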

How to change mysql table engine MyISAM to InnoDB

Log in to the MySQL shell as root and locate the database where the tables are situated. Generate the ALTER statements (replace name_of_your_db with the database whose table engine you want to change):

SET @DATABASE_NAME = 'name_of_your_db';
SELECT CONCAT('ALTER TABLE ', table_name, ' ENGINE=InnoDB;') AS sql_statements
FROM information_schema.tables AS tb
WHERE table_schema = @DATABASE_NAME
  AND ENGINE = 'MyISAM'
  AND TABLE_TYPE = 'BASE TABLE'
ORDER BY table_name DESC;

The result is a list of ALTER TABLE statements, one for each table that needs to be changed. Copy the list, switch to the database involved and run them:

USE name_of_your_db;
START TRANSACTION;
-- paste the copied ALTER TABLE statements here
COMMIT;

You have now changed the engine of the tables.
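
The generate-and-paste step can also be done in one go from the shell, piping the generated statements back into mysql. A sketch, assuming root credentials and the placeholder database name name_of_your_db (you will be prompted for the password twice, once per mysql invocation):

mysql -u root -p -N -B -e "SELECT CONCAT('ALTER TABLE ', table_name, ' ENGINE=InnoDB;') FROM information_schema.tables WHERE table_schema = 'name_of_your_db' AND ENGINE = 'MyISAM' AND TABLE_TYPE = 'BASE TABLE';" | mysql -u root -p name_of_your_db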

[ERROR] Fatal error: Can’t open and lock privilege tables: Table ‘mysql.user’ doesn’t exist

This error occurs when the database mysql is missing or corrupt.
Stop the MySQL server: “service mysql-server stop”
Make a backup of /var/db/mysql: “mv /var/db/mysql /var/db/mysql.old”
To rebuild the database, execute “/usr/local/libexec/mysqld --initialize”
You will get a temporary password. Remember the password for later use.
Start the MySQL server: “service mysql-server start”
To set up the new configuration, run “mysql_secure_installation”. Use the temporary password to start the configuration and step through the wizard.
Restore the mysql backup and the server is as good as new.
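
To verify that the privilege tables exist again, log in with the new root password and query them:

mysql -u root -p -e "SELECT User, Host FROM mysql.user;"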

Prevent Mac OS X ssh from disconnecting (also on any Linux/BSD/*nix system)

To prevent ssh from disconnecting while idle, add the following to ~/.ssh/config:

Host *
    ServerAliveInterval 30
    TCPKeepAlive no

This solution is also usable in any Linux/BSD/*nix environment. If, as a sysadmin, you want to implement this system-wide rather than just for your own user, add the above to /etc/ssh/ssh_config instead.
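
If you only need this for a single connection, the same options can also be passed on the command line (the host below is just a placeholder):

ssh -o ServerAliveInterval=30 -o TCPKeepAlive=no user@example.com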