Apache2 randomly uses an old self-signed certificate

On a site with a valid SSL certificate, a page load would sometimes result in a warning about the connection being insecure. On inspecting the certificate, it turned out to be an old self-signed certificate that shouldn't have been in use any more, since the site now had a valid "real" SSL certificate.

Reloading the page a couple of times after the problem occurred made the site load again with a green SSL "lock" symbol, but the problem kept occurring at random.

Even after deleting the old self-signed certificate files from the server and restarting apache2, the problem remained.

It turned out that neither systemctl restart apache2, service apache2 restart, nor apachectl stop followed by apachectl start killed all running processes.

To verify this I ran apachectl stop and then ps ax | grep apache2, which revealed that there were still processes running. After killing each of them manually, I started apache2 again and the problem was resolved.
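
For reference, the check looked something like this (the PIDs below are made up; use whatever ps reports on your system):

apachectl stop
ps ax | grep apache2    # note the PIDs of any apache2 processes still alive
kill 1234 1235          # example PIDs; kill each remaining process
ps ax | grep apache2    # verify that nothing is left
apachectl start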

How to convert a certificate PFX file to CRT/KEY using openssl

Your PFX certificate file is protected with a password. It can be converted to CRT and KEY files using OpenSSL:

openssl pkcs12 -in certfile.pfx -nocerts -out keyfile-encrypted.key

When you run this command you will be asked for the PFX file password in order to extract the key, and then for a passphrase to protect the exported key. The key will be stored in keyfile-encrypted.key.

The exported key file is encrypted, but you might need it in unencrypted form. To decrypt the key, do:

openssl rsa -in keyfile-encrypted.key -out keyfile.key

You will be asked for the passphrase you entered in the previous step. The unencrypted key will be stored in keyfile.key.
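
Since keyfile.key now contains the private key without any passphrase protection, it is wise to make sure only its owner can read it:

chmod 600 keyfile.key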

Then it is time to extract the certificate:

openssl pkcs12 -in certfile.pfx -clcerts -nokeys -out certfile.crt

Again, you will need to enter the PFX file password in order to extract the certificate. The certificate will be stored in certfile.crt.
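
If you want to verify that the extracted certificate and key actually belong together, one way is to compare their RSA moduli; the two checksums should be identical:

openssl x509 -noout -modulus -in certfile.crt | openssl md5
openssl rsa -noout -modulus -in keyfile.key | openssl md5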

High load on CPU and disk I/O every hour (Apache, MySQL and mod_pagespeed on Ubuntu)

On one of my Ubuntu servers I noticed a significant peak in CPU load (load average, LA) and disk I/O about every hour. At first I suspected that MySQL was the cause, doing some housekeeping or garbage cleaning.

However, it turned out to be caused by the Apache module mod_pagespeed. The high load occurred when pagespeed was cleaning out its cache.

The solution was to locate the cache on tmpfs instead. This was done by editing the file /etc/apache2/mods-available/pagespeed.conf and changing the location of the cache with the following line (/run is located on tmpfs, i.e. in RAM):

ModPagespeedFileCachePath "/run/cache/mod_pagespeed/"

Then restart Apache:

service apache2 restart
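
One thing to keep in mind with this setup is that /run is emptied on every reboot, so the cache directory disappears. If mod_pagespeed cannot recreate it by itself, something like the following at boot time should do (assuming Apache runs as www-data, the Ubuntu default):

mkdir -p /run/cache/mod_pagespeed
chown www-data:www-data /run/cache/mod_pagespeed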

Delete old PHP5 session files automatically

If nothing else is specified, PHP5 session files are located in a directory like /var/lib/php5, and the built-in garbage collection will delete them, normally after 24 hours.

But systems like CMSes will often put session files somewhere else, and if the system doesn't have its own garbage collection, those session files will be kept forever. The reason can be to let a website visitor click "keep me logged in" or to remember a visitor's preferences on the website. This normally doesn't pose a problem, as the session files are very small and they number in the hundreds or possibly a couple of thousand.

However, I encountered a site that created a very large number of session files and kept them forever. At some point the session file count was in the millions, causing the system to run out of inodes. One solution could have been to investigate how the system handled session files, but the internal workings of the site were outside my responsibility. Another was to increase the number of inodes, but that would only have been a temporary fix.

The solution chosen was to create a garbage collection routine for the site in question that deleted session files older than x days. The oldest session files were over 4 years old. The decision was to delete all session files older than a month (30 days), meaning that visitors who had logged in or set their preferences more than a month earlier would have to log in or set their preferences again on their next visit. This was accomplished by the following command (run by cron every night):

find /var/www/somedomain.com/web/var/session/ -type f -mtime +30 -exec rm {} \;
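
The crontab entry could look something like this (the time of day is arbitrary; using find's -delete instead of -exec rm {} \; avoids starting one rm process per file, which matters with this many files):

0 3 * * * find /var/www/somedomain.com/web/var/session/ -type f -mtime +30 -delete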

Error 404 when trying to password protect administrator folder in Joomla using Apache htaccess

When protecting the administrator folder in Joomla using Apache htaccess to increase the security of the website (which, by the way, is a good thing to do, as it prevents overly curious individuals from peeking where they shouldn't), you might get a 404 error when trying to access administrator, and the expected password prompt never appears.
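
For reference, the protection itself is a standard basic auth setup in the administrator folder's .htaccess; the paths and names below are just examples:

AuthType Basic
AuthName "Restricted area"
AuthUserFile /var/www/example.com/.htpasswd
Require valid-user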

The cause can be that the server is configured to use custom error pages stored in, for example, an error folder. If some of the error files are missing, the problem described above can occur. Make sure you have the correct folder name and file names for your error files.
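
For example, if the configuration points error responses to custom pages like the lines below, a missing /error/401.html can break the authentication response (in my case showing a 404 instead of the password prompt). The file names here are only illustrative:

ErrorDocument 401 /error/401.html
ErrorDocument 403 /error/403.html
ErrorDocument 404 /error/404.html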

This particular problem occurred for me when I moved a Joomla site from one hosting provider to another and by mistake also replaced the error folder with the one from the old hosting provider, which had a completely different file name structure for the error message files.

Error message in Apache log file: “Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden”

The error message "Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden" appeared in my Apache log file, and the visitor was shown a 403 Forbidden error page when trying to access a CGI script written in Perl. The solution was to add the following for the directory in question to my Apache configuration file:

<Directory /var/www/cgi-bin>
  <Files ~ "\.pl$">
    Options ExecCGI FollowSymLinks
  </Files>
  Options +FollowSymLinks +SymLinksIfOwnerMatch
</Directory>
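
After editing the configuration it is a good idea to check the syntax before restarting Apache:

apachectl -t
service apache2 restart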

Err 310 ERR_TOO_MANY_REDIRECTS in Chrome and Firefox using Apache’s mod_rewrite

I used Apache's mod_rewrite to redirect a couple of domains pointing to the same website, using a 301 Moved Permanently (to avoid duplicate content), i.e. something like this in .htaccess:

RewriteEngine On
RewriteCond %{HTTP_HOST} !^example\.com$
RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]

I randomly got the error message 310 ERR_TOO_MANY_REDIRECTS when trying the different domains in both Chrome and Firefox (other browsers seemed to work). I had decided my site should be reached without "www", so http://www.example.com was also redirected to http://example.com.

After a while of head scratching I found a couple of forgotten DNS entries, two of which were outdated. My domain setup for example.com in DNS looked like this (sample IP addresses):

example.com zone:

@  IN  A  192.168.100.1
@  IN  A  192.168.100.2
@  IN  A  192.168.100.3

www  IN  A  192.168.100.1

But only 192.168.100.1 was valid and had a running web server on it.

I guess what happens is this: when Chrome/Firefox tries to talk to 192.168.100.2 or .3 and gets no response, it adds "www" in front of the domain name, i.e. http://www.example.com. That request gets a response from the web server: a 301 redirect to http://example.com. But example.com may again resolve to 192.168.100.2 or .3, giving no response, "www" is added once more, and there we have our loop.

Fixing the DNS entries (removing the invalid 192.168.100.2 and .3) solved the problem. A bit odd and hard to find, and most of all it was stupid to still have the outdated records in the zone.
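
A quick way to spot stale records like these is to list all A records for both host names and compare them with the servers that are actually supposed to answer:

dig +short example.com A
dig +short www.example.com A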

Remove or add www to URL using Apache mod_rewrite

To avoid a website being considered "duplicate content" by Google (i.e. the same website appearing under different URLs), it is a good idea to make sure the website doesn't appear as both http://www.example.com and http://example.com. This can be achieved with Apache's mod_rewrite in the .htaccess file.

To remove "www" from the URL, i.e. so that the website's URL is http://example.com, your .htaccess should look like this:

RewriteEngine On
RewriteCond %{HTTP_HOST} !^example\.com$
RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]

If you instead want to make sure "www" is added, i.e. so that the website's URL is http://www.example.com, your .htaccess should look like this:

RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.example\.com$
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
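
A common refinement, in case the site also receives requests without a Host header (for example from old HTTP/1.0 clients, where %{HTTP_HOST} is empty), is an extra condition so those requests are left alone. This is a widely used variant, not something the rules above strictly require:

RewriteEngine On
RewriteCond %{HTTP_HOST} !^$
RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]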