How to export Mailman mailing list subscribers

There is no built-in function in the Mailman web interface to export a mailing list's subscribers. If you want to migrate to other software or move the list to another Mailman installation, you need to export the subscribers. I found this to be the easiest way:

  1. Log in to your Mailman mailing list administration panel, like http://hostname/cgi-bin/mailman/admin/mylist
  2. Replace "admin" with "roster" in the URL in the address bar and press Enter, like http://hostname/cgi-bin/mailman/roster/mylist
  3. The subscribers are listed in a bulleted list with the @ sign replaced by " at ". Select the list in the browser, right-click the selection and choose "Copy"
  4. Open a plain text editor, such as Notepad if you are running Windows. Avoid Word, WordPad, LibreOffice Writer etc., since pasting into those editors also brings along a lot of text formatting; use a plain text editor
  5. Right-click in the editor and select "Paste"
  6. Search and replace, normally by pressing CTRL-H: search for " at " and replace it with "@" (without the quotes), then click "Replace all". (If you prefer the command line, a sed one-liner is sketched after this list.)
  7. Save the file
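
If you would rather do the replacement from the command line than in a text editor, a single sed invocation does the same thing. This is just a small sketch; roster.txt and subscribers.txt are assumed file names for the pasted roster and the cleaned result:

# Replace every " at " with "@" in the saved roster
sed 's/ at /@/g' roster.txt > subscribers.txt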

If you are moving the subscriber list into another Mailman installation:

  1. In the text editor above, right-click and "Select all" to select all text (the list of subscribers), then right-click and "Copy"
  2. In the administration panel of the new Mailman installation, click "Membership Management"
  3. Click "Add new members to the list"
  4. In the top box, where you can type email addresses one per line, right-click and "Paste"
  5. Set "Send welcome message to new members" to "No" (unless you really want Mailman to send an email welcoming all the subscribers you are adding)
  6. Click "Save" at the bottom of the page
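
If you have shell access to both servers, Mailman 2 also ships command-line tools that can do the same job without the browser. The paths below are the usual Debian/Ubuntu locations and may differ on your system; mylist is a placeholder for the list name:

# On the old server: dump all subscriber addresses, one per line
/usr/lib/mailman/bin/list_members mylist > subscribers.txt

# On the new server: mass-subscribe them as regular members, without sending
# welcome messages (-w n) and without notifying the list owner (-a n)
/usr/lib/mailman/bin/add_members -r subscribers.txt -w n -a n mylist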

Delete old PHP5 session files automatically

If not otherwise specified, PHP5 session files are located in a directory like /var/lib/php5, and the built-in garbage collection (on Debian/Ubuntu, a packaged cron job) deletes them once they are older than session.gc_maxlifetime, which is 24 minutes in a stock configuration, although many setups raise it considerably.
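
To see what applies on a particular server, you can check the relevant directives. The php.ini path below is the typical one for Apache with PHP5 on Ubuntu and may differ on your system:

# Show where sessions are stored and how long they are kept (value in seconds)
grep -E 'session\.(save_path|gc_maxlifetime)' /etc/php5/apache2/php.ini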

But systems like CMSes will often put the session files somewhere else, and if the system doesn't have its own garbage collection those session files are kept forever. The reason can be to let a website visitor click "keep me logged in" or to remember a visitor's preferences on the website. Normally this is not a problem, since session files are very small and there are only hundreds, or perhaps a couple of thousand, of them.

However, I encountered a site that created a very large number of session files and kept them forever. At some point the number of session files was in the millions, causing the system to run out of inodes. One option would have been to investigate how the site handled its session files, but the internal workings of the site were outside my responsibility. Another was to increase the number of inodes, but that would only have been a temporary fix.

The solution chosen was to create a garbage collection routine for the site in question that deleted session files older than x days. The oldest session files were over four years old. The decision was to delete all session files older than a month (30 days), meaning that visitors who had logged in, or set their preferences, more than a month earlier would have to log in or set their preferences again on their next visit. This was accomplished with the following command, run by cron every night:

find /var/www/somedomain.com/web/var/session/ -type f -mtime +30 -exec rm {} \;
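
For reference, the nightly cron entry could look roughly like the line below. The file name cleanup-sessions and the 03:30 schedule are my own examples, not taken from the original setup; find's -delete is equivalent to -exec rm {} \; here and avoids spawning one rm process per file:

# /etc/cron.d/cleanup-sessions: delete stale session files every night at 03:30
30 3 * * * root find /var/www/somedomain.com/web/var/session/ -type f -mtime +30 -delete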

PHP Parse error: syntax error, unexpected end of file in XXXXX.php

  • Check that you have spaces around curly braces: for example, don't use <?php}?>, use <?php } ?> instead
  • If your script ran without problems on earlier versions of PHP but you are now on PHP 5.4 or later, replace all occurrences of the short open tag <? with <?php (or enable short_open_tag in php.ini). A rough way to locate remaining short tags is sketched after this list.
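
One way to find the remaining short open tags before replacing them is a recursive grep. This is only a heuristic (it can flag things like <?xml and can miss a tag at the very end of a line), and the path is a placeholder:

# List lines that use "<?" not directly followed by "php" or "="
grep -rn --include='*.php' '<?[^p=]' /var/www/example-site/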

Cron scripts in /etc/cron.daily not running

If you put scripts to be run by cron in /etc/cron.hourly, cron.daily, cron.weekly or cron.monthly but they won't run, make sure that they:

  • Are executable (chmod +x)
  • Are owned by the correct user (e.g. root:root)
  • Start with #!/bin/sh or the corresponding shell used to execute them
  • Have filenames without any dots; a script name ending in .sh will not be run (see the example after this list)
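
As an illustration, suppose the script was saved as /etc/cron.daily/backup.sh (a made-up name); fixing it could look like this:

mv /etc/cron.daily/backup.sh /etc/cron.daily/backup   # remove the dot from the name
chown root:root /etc/cron.daily/backup                # owned by root
chmod +x /etc/cron.daily/backup                       # executable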

You can also run the following command to verify that your scripts will be run:

run-parts --test /etc/cron.daily

FormMail.pl keeps reporting Bad referrer

I used FormMail.pl from Matt's Script Archive on one of my web sites and called the script from several other web sites. That way I only needed to maintain and update one copy of the script.

The script began complaining about "Bad referrer" when called from my other sites, even though I could positively verify that the other sites' domain names were present in FormMail.pl's list of allowed referers (@referers).

Not finding the problem in the script itself, I thought about what I had recently changed on the site hosting FormMail.pl. One change was that I had recently added an HTTP redirect for any incoming request that did not use https on the site URL, and I also redirected www.sitename to just sitename. For example, a call to http://www.sitename was redirected to https://sitename using Apache's rewrite module.
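
For reference, the redirect was of the kind sketched below. This is a generic example, with sitename.com as a placeholder, not the exact configuration from my server:

<VirtualHost *:80>
    ServerName sitename.com
    ServerAlias www.sitename.com
    # Requires mod_rewrite (a2enmod rewrite)
    RewriteEngine On
    # Send every plain-http request, with or without www, to https://sitename.com
    RewriteRule ^(.*)$ https://sitename.com$1 [R=301,L]
</VirtualHost>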

On my other sites, I was calling FormMail.pl with a URL beginning with http://www, so the call was redirected by those rules. After changing the URL used to call FormMail.pl to https://sitename (i.e. using SSL and no www), the call was no longer touched by the redirect rules and, voilà, everything was working again.

Ispconfig3 on Ubuntu 12.04 upgrade to 14.04

Upgrading a system running Ispconfig3 on Ubuntu 12.04 (LTS) to 14.04 is quite straightforward. However, there are some issues to consider before doing so, since the upgrade might affect some of the hosted sites.

  • Ubuntu 14.04 will move you from Apache 2.2 to 2.4 (a common configuration change this brings is shown after this list)
  • PHP will be upgraded from 5.3 to 5.5. Most modern CMSes like Joomla and WordPress run fine on PHP 5.5, but clients may be using other software or third-party extensions that are not yet ready for PHP 5.5
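
One of the most commonly hit Apache 2.2 to 2.4 breakages in site and vhost configuration is the changed access control syntax. This is a general example, not something specific to Ispconfig3:

# Apache 2.2 style
Order allow,deny
Allow from all

# Apache 2.4 equivalent
Require all granted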

Upgrade procedure:

  • Back up, back up and back up. And then back up again.
  • Prepare your users for some downtime. The upgrade can take up to several hours depending on your server.
  • Upgrade all installed packages so you have the latest versions:
    apt-get update
    apt-get upgrade
  • Then run do-release-upgrade
  • During the upgrade process you will probably be prompted several times about configuration files that have been locally modified. I usually examine the differences using the D option, and in most cases I choose to install the new configuration file using the Y option.
  • After Ubuntu has been upgraded and the system has been rebooted, you must reconfigure Ispconfig3. I did it by using the update procedure, even though I was running the latest Ispconfig before I upgraded Ubuntu:
    cd /tmp
    wget http://sourceforge.net/projects/ispconfig/files/ISPConfig%203/ISPConfig-3.0.5.4p5/ISPConfig-3.0.5.4p5.tar.gz
    tar xvzf ISPConfig-3.0.5.4p5.tar.gz
    cd ispconfig3_install/install
    php -q update.php
  • Allow Ispconfig3 to reconfigure your services
  • In my case Apache2 wouldn't start after the upgrade. This was caused by the Ruby module, and since I don't use it, my simple solution for the moment was just to disable it:
    a2dismod ruby
    service apache2 restart
  • The PHP imap extension had been disabled, so to fix it:
    php5enmod imap
    service apache2 restart
  • If you are hosting PrestaShop sites, you may need to disable the PHP OPcache, or disable encryption by issuing the SQL command: UPDATE `ps_configuration` SET `value` = '0' WHERE `name` = 'PS_CIPHER_ALGORITHM';
  • Apache2 configuration files have been moved from /etc/apache2/conf.d to the /etc/apache2/conf-available directory. To enable them, you need to symlink the configuration file from /etc/apache2/conf-available to /etc/apache2/conf-enabled and issue the command: service apache2 restart (see the example after this list)
  • You might experience problems with Postfix after the upgrade, with log entries like "fatal: no SASL authentication mechanisms" and mail not being sent from the mail queue. In that case:
    apt-get install sasl2-bin
    edit /etc/default/saslauthd and set START=yes
    /etc/init.d/saslauthd start
    service amavis restart
    service postfix restart

    I also had to comment out two lines in /etc/postfix/main.cf:
    #smtpd_sasl_path = private/auth
    #smtpd_sasl_type = dovecot

    And then do:
    service postfix restart
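
Regarding the conf.d to conf-available move mentioned above: re-enabling a leftover configuration file can be done with a symlink by hand or with the helper script that ships with Apache 2.4 on Ubuntu. The file name extra-settings.conf below is just an illustration:

# Either create the symlink manually...
ln -s /etc/apache2/conf-available/extra-settings.conf /etc/apache2/conf-enabled/extra-settings.conf
# ...or use the Apache 2.4 helper (expects a .conf file in conf-available)
a2enconf extra-settings
service apache2 restart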


Ispconfig3 site cron not executing

When setting up a website in Ispconfig3, I wanted to run a cron job (a shell script) for the site (set up under Sites -> Cron jobs), but the job did not execute. When examining /var/log/auth.log I found lines like these:

Mar 22 10:31:01 servername jk_chrootsh[28726]: abort, homedir '/var/www/clients/client6/web284' for user web284 (5015) does not contain the jail separator <jail>/./<home>

Mar 22 10:31:01 servername jk_chrootsh[28725]: abort, homedir '/var/www/clients/client1/web283' for user web283 (5014) does not contain the jail separator <jail>/./<home>

The solution was simply to add a dummy SSH user (with Jailkit as the chroot shell) in Ispconfig3 for the website.
I haven't verified it, but I suspect the issue was caused by the fact that the system was originally set up under Ubuntu 12.04 (LTS) and I recently did a do-release-upgrade to Ubuntu 14.04.
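
For reference, a working Jailkit setup gives the user a home directory in /etc/passwd that contains the /./ separator between the jail root and the in-jail home. The entry below is illustrative (the uid/gid and exact paths will differ on your system):

getent passwd web284
# Expected form:
# web284:x:5015:5015::/var/www/clients/client6/web284/./home/web284:/usr/sbin/jk_chrootsh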

FileZilla FTP client fails to retrieve directory listing (MLSD command) using TLS

A week or so ago FileZilla released a new version, 3.10.0.1. After installing this version, some clients had problems connecting to their web hosting servers using FTP. One of the changes in FileZilla is that it now defaults to TLS-encrypted connections if the server supports them, which many web hosting providers do.

However, if the FTP server is not properly configured together with its firewall, the client will connect but fail to retrieve the directory listing (timeout).

A workaround on the client side is to connect with plain old FTP using no encryption. To do this in later versions of FileZilla, you must create a connection in the Site Manager and select plain FTP (insecure) in the encryption field. (This option is not available in Quick Connect.)

A better solution is to fix the problem on the server side. To do this, the FTP server must be configured to use a specific range of ports for passive mode, and traffic to those ports must be allowed through the server firewall. The example below shows how to do this with pure-ftpd and iptables: we set up pure-ftpd to use ports 50000-55000 for passive transfers and then allow the same range in iptables.

echo "50000 55000" > /etc/pure-ftpd/conf/PassivePortRange
/etc/init.d/pure-ftpd restart

Then add the following to your iptables rules and reload them:

iptables -I INPUT -p tcp -m tcp --dport 50000:55000 -j ACCEPT
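
Note that a rule inserted with iptables -I is not persistent across reboots by itself. One way to keep it, assuming you are not already managing the firewall with some other tool, is the iptables-persistent package:

apt-get install iptables-persistent
# Save the currently loaded rules so they are restored at boot
iptables-save > /etc/iptables/rules.v4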