CNK's blog

Logging in Chef

There are a couple of different techniques for logging during a chef client run. The simplest option for debugging in any programming language is adding print statements - or in the case of Ruby, puts statements (print with a newline added). However, for print statements to work, they need to be executed in a context where stdout is available AND where you, the user, can see stdout. When running chef manually (either using chef-client or via test kitchen’s ‘kitchen converge’ command), you are watching output go by on the console. So you can do things like:

puts "This is normal Ruby code inside a recipe file."

And in a client run, you will see that output - in the compile phase.

$ chef-client --once --why-run --local-mode \
              --config /Users/cnk/Code/sandbox/customizing_chef/part3_examples/solo.rb \
              --override-runlist testcookbook::default

Starting Chef Client, version 12.3.0
[2015-07-09T16:25:06-07:00] WARN: Run List override has been provided.
[2015-07-09T16:25:06-07:00] WARN: Original Run List: []
[2015-07-09T16:25:06-07:00] WARN: Overridden Run List: [recipe[testcookbook::default]]
resolving cookbooks for run list: ["testcookbook::default"]
Synchronizing Cookbooks:
  - testcookbook
  Compiling Cookbooks...
  This is normal Ruby code inside a recipe file.  ########### this is the message ##########
  Converging 0 resources

Running handlers:
  Running handlers complete
  Chef Client finished, 0/0 resources would have been updated

You can get nearly the same functionality - but with a timestamp and some terminal coloring - if you use Chef::Log in the same context:

puts "This is a puts from the top of the default recipe; node info: #{node['foo']}"
Chef::Log.warn("You can log node info #{node['foo']} from a recipe using 'Chef::Log'")

Gives:

 $ chef-client --once --why-run --local-mode \
               --config /Users/cnk/Code/sandbox/customizing_chef/part3_examples/solo.rb \
               --override-runlist testcookbook::default

 Starting Chef Client, version 12.3.0
 [2015-07-09T16:33:44-07:00] WARN: Run List override has been provided.
 [2015-07-09T16:33:44-07:00] WARN: Original Run List: []
 [2015-07-09T16:33:44-07:00] WARN: Overridden Run List: [recipe[testcookbook::default]]
 resolving cookbooks for run list: ["testcookbook::default"]
 Synchronizing Cookbooks:
   - testcookbook
   Compiling Cookbooks...
   This is a puts from the top of the default recipe; node info: bar
   [2015-07-09T16:33:44-07:00] WARN: You can log node info bar from a recipe using 'Chef::Log'
   Converging 0 resources
 Running handlers:
   Running handlers complete
   Chef Client finished, 0/0 resources would have been updated

NB: the default log level for chef-client messages written to the terminal is warn or higher. So if you try to use Chef::Log.debug('something'), you won’t see your message unless you have turned up the verbosity. This unexpected feature caused me a bit of grief initially, as I couldn’t find my log messages anywhere. Now what I do is use Chef::Log.warn while debugging locally and then take the messages out before I commit the code.

From my experiments, just about anywhere you might use puts, you can use Chef::Log. I think the latter is probably better because it will put information into actual log files in contexts, like test kitchen, that write log files for examining later.

If you need something logged at converge time instead of compile time, you have two options: use the log resource, or wrap Chef::Log inside a ruby_block. In either case, during the compile phase, a new resource gets created and added to the resource collection. Then during the converge phase, that resource gets executed. Creating a Chef::Log statement inside a ruby_block probably isn’t too useful on its own, though it may be useful if you have created a ruby_block for some other reason. This gist has some example code and the output: https://gist.github.com/cnk/e5fa8cafea8c2953cf91
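
For reference, a minimal sketch of both converge-time options (the attribute and resource names are just for illustration):

# Option 1: the log resource - its message is emitted when the
# resource is converged, not when the recipe is compiled
log 'converge time message' do
  message "Node info at converge time: #{node['foo']}"
  level :warn
end

# Option 2: Chef::Log wrapped in a ruby_block
ruby_block 'log node info' do
  block do
    Chef::Log.warn("Node info from inside a ruby_block: #{node['foo']}")
  end
end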

Anatomy of a Chef Run

Each chef run has two phases - the compile phase and the converge phase.

Compile phase

In the compile phase, the chef client loads libraries, cookbooks, and recipes. Then it takes the run list, reads the listed recipes, and builds a collection of the resources that need to be executed in this run. Ruby code within the recipe may alter which resources are added to the resource collection based on information about the node. For example, if the node’s OS family is ‘debian’, package commands need to use ‘apt’ to install packages. So if you are installing emacs, the resource collection on an Ubuntu box will have an ‘apt’ resource for installing that package - but the resource collection on a RHEL box will have a ‘yum’ resource instead.
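
As a sketch of that compile-time flexibility (the recipe code below is illustrative, not from a real cookbook):

# Plain Ruby in a recipe runs at compile time and can shape
# the resource collection based on node data
if node['platform_family'] == 'debian'
  execute 'apt-get update'
end

# The generic package resource is resolved per platform at compile
# time - apt_package on Ubuntu/Debian, yum_package on RHEL
package 'emacs'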

The compile phase also has logic for creating a minimal, ordered collection of resources to run. Part of this process is deduplication. If multiple recipes include apt’s default recipe (which calls ‘apt-get update’), the compile phase adds it to the resource collection once. Any other calls to the same resource are reported in the run output as duplicates:

[2015-07-09T22:34:01+00:00] WARN: Previous bash[pip install to VE]:
  /tmp/kitchen/cookbooks/dev-django-skeleton/recipes/django_project.rb:75:in `from_file'
[2015-07-09T22:34:01+00:00] WARN: Current  bash[pip install to VE]:
  /tmp/kitchen/cookbooks/dev-django-skeleton/recipes/django_project.rb:86:in `from_file'

Converge phase

The converge phase is the phase in which the resource code actually gets run. As each resource runs, information is added to the run status object - some of which can later be written back to the chef server as the node status at the end of the run.

Run status information

The Customizing Chef book has some useful information about what chef collects in the run status object. For example, the run status object has a reference to the node object at the start of each run (basically node information from the chef server combined with the data collected by ohai). It also has a reference to the run context object:

This object contains a variety of useful data about the overall
Chef run, such as the cookbook files needed to perform the run,
the list of all resources to be applied during the run, and the
list of all notifications triggered by resources during the run.

Excerpt From: "Customizing Chef" chapter 5 by Jon Cowie

Two very useful methods are ‘all_resources’ and ‘updated_resources’. One of the examples in the book is a reporting handler that logs both of those lists to a log file (see Handler Example 2: Report Handler).
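
A minimal sketch of that kind of report handler (the log file path is illustrative; all_resources and updated_resources come from the run status object):

require 'chef/handler'

class ResourceReportHandler < Chef::Handler
  def report
    ::File.open('/var/log/chef/resource_report.log', 'a') do |f|
      f.puts "#{Time.now}: #{run_status.all_resources.length} resources in collection"
      run_status.updated_resources.each do |resource|
        f.puts "updated: #{resource}"
      end
    end
  end
end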

Testing in Django

Test Runner

First, the good part: Django, by default, uses Python’s built-in unittest library - and as of Python 2.7 that has a reasonable set of built-in assertion types. (And for versions of Django before 1.8, Django backported the Python 2.7 unittest library.) Django has a pretty good test discovery system (apparently from the upgraded Python 2.7 unittest library) and will run any code in files matching test*.py. So to run all your tests, you just type ./manage.py test at the top of your project. You can also run the tests in individual modules or classes by doing something like ./manage.py test animals.tests - without having to put the if __name__ == '__main__' stuff in each file. You can even run individual tests - though you have to know the method name (more or less like you have to in Ruby’s minitest framework): ./manage.py test animals.tests.AnimalTestCase.test_animals_can_speak

The best part of Django’s test runner is the setup it does for you - creating a new test database for each run, running your migrations, and, if you ask it to, importing any fixtures you request. Then, after collecting all the discovered tests, it runs each test inside a transaction to provide isolation.
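
A minimal sketch of a test that relies on that setup (the model and fixture names are illustrative, borrowed from the naming in Django’s docs):

# animals/tests.py
from django.test import TestCase
from animals.models import Animal

class AnimalTestCase(TestCase):
    fixtures = ['animals.json']  # loaded into the test database for each test

    def test_animals_can_speak(self):
        # runs inside a transaction that is rolled back after the test
        lion = Animal.objects.create(name='lion', sound='roar')
        self.assertEqual(lion.sound, 'roar')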

Test Output

But coming from the Ruby and Ruby on Rails worlds, the testing tools in Python and Django are not as elegant as I am used to. At one point I thought the Ruby community’s emphasis on creating testing tools that display English-like output for the running tests bordered on obsessive. But having spent some time in Python/Django, which doesn’t encourage tools like that, I have come to appreciate the Rubyists’ efforts. Both RSpec and Minitest have multiple built-in output format options - and lots of people have created their own reporter add-ons - so you can see your test output exactly the way you want it with very little effort. The django test command allows four verbosity levels, but for the most part they only change how much detail you get about the setup process for the tests. The only difference in the test output reporting is that you get dots at verbosity levels 0 and 1, and the test names and file locations at levels 2 and 3:

$ python ./manage.py test -v 2

..... setup info omitted .......

test_debugging (accounts.tests.test_models.UserModelTests) ... ok
test_login_link_available_when_not_logged_in (accounts.tests.test_templates.LoginLinkTests) ... ok
test_logout_link_available_when_logged_in (accounts.tests.test_templates.LoginLinkTests) ... ok
test_signup_link_available_when_not_logged_in (accounts.tests.test_templates.LoginLinkTests) ... ok
test_user_account_link_available_when_logged_in (accounts.tests.test_templates.LoginLinkTests) ... ok
test_profile_url (accounts.tests.test_urls.AccuntsURLTests) ... ok
test_signup_url (accounts.tests.test_urls.AccuntsURLTests) ... ok
test_url_for_home_page (mk_web_core.tests.GeneralTests) ... ok

----------------------------------------------------------------------
    Ran 8 tests in 0.069s

So increasing the verbosity level is useful for debugging your tests but disappointing if you are trying to use the tests to document your intentions.

Behave and django-behave

This is the main reason why, despite being unenthusiastic about Cucumber in Ruby, I am supporting using Python’s behave with django-behave for our new project. One of the things I don’t like about Cucumber is that it all too frequently becomes an exercise in writing regular expressions (for the step matching). I also don’t like that if you need to pass state between steps, you set instance variables; this is effective, but it looks kind of like magic.

With ‘behave’, you need to do the same things but in more explicit ways. The step matching involves literal text with placeholders. If you want full regular expression matching you can have it, but you need to set the step matcher for that step to ‘re’ - regular expression matching isn’t the default. For sharing state, there is a global context variable. When you are running features and scenarios, additional namespaces get added to the root context object - and then removed as they go out of scope. Adding information to the context variable seems more explicit - but with the namespace adding and removing, I am not sure that this isn’t more magical than the instance variables in Cucumber.
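
A short sketch of both points (the step text and attribute names are made up):

# features/steps/login_steps.py
from behave import given, when, use_step_matcher

# default 'parse' style matching: literal text with {placeholders}
@given('a user named {name}')
def step_create_user(context, name):
    context.user_name = name  # state shared via the global context

# opt in to regular expressions for the steps defined below
use_step_matcher('re')

@when(r'the user logs in with password "(?P<password>.+)"')
def step_login(context, password):
    context.password = password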

Django’s TestCase Encourages Integration Tests

The main testing tool Django encourages you to use is its TestCase class, which tests a bunch of concerns at once - request options, routing, the view’s context dictionary, response status, and template rendering.

It’s odd to me that Django’s docs only discuss integration tests and not proper unit tests. With Django’s Pythonic explicitness, it is fairly easy to set up isolated pieces of the Django stack by just importing the pieces you care about into your test. For example, you can test your template’s rendering by creating a dictionary for the context information and then rendering the template with that context. Harry Percival’s book “Test-Driven Development with Python” does a very nice job of showing you how to unit test the various sections of the Django stack - routing, views, templates, etc.
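
For example, a sketch of testing a template in isolation (the template name and context key are illustrative):

from django.template.loader import render_to_string
from django.test import SimpleTestCase

class HomeTemplateTests(SimpleTestCase):
    def test_home_template_includes_title(self):
        # render the template directly - no URL routing, no view,
        # no request/response cycle
        html = render_to_string('home.html', {'title': 'My Site'})
        self.assertIn('My Site', html)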

More than just not discussing isolated unit tests, at least some of Django’s built-in assertions actively require you to write a functional / integration test. I tried rendering my template to HTML and then calling assertContains to test for some specific HTML, but I got an error about the status code! In order to use assertContains on the template, I have to make a view request.

Coming from the Rails world, I don’t really want the simple assertContains matching anyway. What I really want is Rails’ built-in HTML testing method, assert_select. I found a Django library that is somewhat similar, django-with-asserts. But like assertContains, django-with-asserts’ test mixin class uses the Django TestCase as its base and so also wants you to make a view request so it can test the status code. I really wanted django-with-asserts’ functionality, but I want to use it in isolation when I can, so I forked it and removed the dependency on the request / response cycle.

A Send-Only Email Server

Our ZenPhoto install wants to be able to notify us when there are new comments. I also may eventually want to set up exception notifications for some of my dynamic sites. At least for now, I don’t want to run a full-blown mail server for our domains; I don’t want to deal with spam detection, restricting who can use the mail server to relay mail, etc. But I know that many of the common Unix email servers can be configured so that they don’t receive mail and only send mail if it originates on one or more specific servers. I don’t have a lot of experience setting up mail servers. The ones I am most familiar with are qmail (which is what ArsDigita used everywhere) and Postfix. I am betting that it will be easier to set up Postfix on Ubuntu, so let’s look for some instructions.

Installing Postfix

There are some promising-looking instructions on the Digital Ocean site for Postfix on Ubuntu 14.04. Postfix is apparently the default mail server for Ubuntu, because sudo apt-get install mailutils installs postfix as one of the “additional packages”. The install process asked me two questions: what kind of mail server configuration I needed (I chose ‘Internet Site’), and what the domain name for the mail server is. I debated whether I should leave this set to the hostname for the server, which is a subdomain of one of our domains, or set it to just the domain. Tim may have our domain name registrar set up for email forwarding for the domain, so it may be slightly safer to configure this mail server with the subdomain. And it will make it a lot clearer to me where the email is coming from.

$ sudo apt-get install mailutils
...
... Lots of install info....
...
Setting up postfix (2.11.0-1ubuntu1) ...
Adding group `postfix' (GID 114) ...
Done.
Adding system user `postfix' (UID 106) ...
Adding new user `postfix' (UID 106) with group `postfix' ...
Not creating home directory `/var/spool/postfix'.
Creating /etc/postfix/dynamicmaps.cf
Adding tcp map entry to /etc/postfix/dynamicmaps.cf
Adding sqlite map entry to /etc/postfix/dynamicmaps.cf
Adding group `postdrop' (GID 115) ...
Done.
setting myhostname: trickster.ictinike.org
setting alias maps
setting alias database
changing /etc/mailname to trickster.ictinike.org
setting myorigin
setting destinations: trickster.ictinike.org, localhost.ictinike.org,
, localhost
setting relayhost:
setting mynetworks: 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
setting mailbox_size_limit: 0
setting recipient_delimiter: +
setting inet_interfaces: all
setting inet_protocols: all
/etc/aliases does not exist, creating it.
WARNING: /etc/aliases exists, but does not have a root alias.

Postfix is now set up with a default configuration.  If you need to
make changes, edit /etc/postfix/main.cf (and others) as needed.
To view Postfix configuration values, see postconf(1).

After modifying main.cf, be sure to run '/etc/init.d/postfix reload'.

Running newaliases
 * Stopping Postfix Mail Transport Agent postfix
    ...done.
 * Starting Postfix Mail Transport Agent postfix
    ...done.
Processing triggers for ufw (0.34~rc-0ubuntu2) ...
Processing triggers for ureadahead (0.100.0-16) ...
Setting up mailutils (1:2.99.98-1.1) ...
update-alternatives: using /usr/bin/frm.mailutils to provide /usr/bin/frm (frm) in auto mode
update-alternatives: using /usr/bin/from.mailutils to provide /usr/bin/from (from) in auto mode
update-alternatives: using /usr/bin/messages.mailutils to provide /usr/bin/messages (messages) in auto mode
update-alternatives: using /usr/bin/movemail.mailutils to provide /usr/bin/movemail (movemail) in auto mode
update-alternatives: using /usr/bin/readmsg.mailutils to provide /usr/bin/readmsg (readmsg) in auto mode
update-alternatives: using /usr/bin/dotlock.mailutils to provide /usr/bin/dotlock (dotlock) in auto mode
update-alternatives: using /usr/bin/mail.mailutils to provide /usr/bin/mailx (mailx) in auto mode
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...

Configuring Postfix to only accept mail from localhost

The installer had set up Postfix to listen on all available interfaces, so netstat -ltpn shows:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      2028/mysqld
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      11341/sshd
tcp        0      0 0.0.0.0:25              0.0.0.0:*               LISTEN      15201/master
tcp6       0      0 :::80                   :::*                    LISTEN      2176/apache2
tcp6       0      0 :::22                   :::*                    LISTEN      11341/sshd
tcp6       0      0 :::25                   :::*                    LISTEN      15201/master

So, following the instructions, I edited /etc/postfix/main.cf, changed inet_interfaces = all to inet_interfaces = localhost, and restarted the postfix service. Now I see postfix only on the local interface (IPv4 and IPv6):

tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      15405/master
tcp6       0      0 ::1:25                  :::*                    LISTEN      15405/master
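
For reference, the relevant change and restart (a sketch of the steps just described):

# /etc/postfix/main.cf
inet_interfaces = localhost

$ sudo service postfix restart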

I tested email sending with: echo "test email body" | mail -s "Test email" cnk@<destination> and it went through just fine. YEAH!

Now I need to forward system mail (e.g. root mail) to me. To do this, I added a line to /etc/aliases mapping root to the destination emails. Then I got the new entries in /etc/aliases into /etc/aliases.db by running the newaliases command. I tested that the new root alias works by sending a second test email: echo "test email body" | mail -s "Test email for root" root. And this one also got to me.
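
The alias line and rebuild step look like this (the destination address is a placeholder):

# /etc/aliases
root: cnk@example.com

$ sudo newaliases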

There was an additional section about how to protect my domain from being used for spam - especially, in this case, from being impersonated. The article on setting up an SPF record doesn’t look too hard - if the service we are using for DNS lets us set that up. I’ll have to look into it when we switch DNS.

Configuring Email in ZenPhoto

Having the ability to get root mail is good - but the main reason I wanted email on this server was for ZenPhoto’s comment functionality. So now, on the plugin page of the ZenPhoto admin site, there is a Mail tab with two options. For now I chose zenphoto_sendmail, which just uses the PHP mail facility to send mail via the local mail server.

Upgrading ZenPhoto

I have shared a web site with a photographer friend of mine for several years. I would like to do some other, more modern things with my web server, so it’s time to upgrade. I have been using RedHat as my Linux distribution since… well, since RedHat version 4 or 5. But when I started using Vagrant VMs and looked for RH VirtualBox images, the CentOS or Fedora images were generally much larger than the Ubuntu images. So I started playing with Ubuntu. And at work we are planning to use Ubuntu for our new project because we want something that supports AppArmor - which means Arch or something in the Debian family. Ubuntu is widely used in the Rails and Django communities, so it seems like a good choice. The latest long-term support version is Ubuntu 14.04, aka Trusty.

Having chosen my Linux distribution, I need to choose a hosting service. The two current contenders for low-cost VPSes are DigitalOcean and Linode. A couple of years ago Linode sponsored RailsRumble and provided the hosting for all the contestants. It seemed fairly decent and had some nice options like their StackScripts. So I think I’ll use Linode.

New Linode Server

I spun up a Linode 2G instance on Ubuntu 14.04 LTS with a 512 MB swap disk (the max it would allow me to set). That same form asked me to set a root password for the new server.

Using that password, I logged in and immediately did:

apt-get update
apt-get upgrade
apt-get install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev \
        sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev

useradd -g staff -m -s /bin/bash cnk
useradd -g staff -m -s /bin/bash tim
# and set passwords for both of us
apt-get install emacs24-nox
# added both tim and cnk to the sudoer's file

From the Getting Started Guide

hostname

One of the first things the Getting Started guide asks you to do is to set the hostname.

root@localhost:/# echo "trickster" > /etc/hostname
root@localhost:/# hostname -F /etc/hostname

I edited /etc/hosts to add one line mapping the public IP assigned to me to the hostname I just configured. The final file is:

127.0.0.1       localhost
127.0.1.1       ubuntu
45.79.nnn.nnn   trickster.<domain>.org trickster

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Time zone

Ubuntu servers default to UTC until that is changed to something else:

root@localhost:~# date
Wed Jun 17 06:04:31 UTC 2015

root@localhost:~# dpkg-reconfigure tzdata
..... I get a 'GUI' that lets me choose my timezone
Current default time zone: 'US/Pacific'
Local time is now:      Tue Jun 16 23:05:07 PDT 2015.
Universal Time is now:  Wed Jun 17 06:05:07 UTC 2015.

Securing the server

Linode also has an excellent security guide so let’s work through that.

sshd Configuration

It suggests disabling password authentication and only allowing keys. OK, so I copied my public key into the ~/.ssh/authorized_keys file on the Linode box, and I can now ssh without a password.

Then I edited /etc/ssh/sshd_config to disable password authentication and root login and restarted sshd.
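
The relevant sshd_config settings (a sketch of what those edits look like on Ubuntu 14.04):

# /etc/ssh/sshd_config
PasswordAuthentication no
PermitRootLogin no

$ sudo service ssh restart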

Setting Up iptables

The guide has a fairly sensible-looking set of firewall rules for iptables and good instructions for creating them in a file and then loading them into iptables. The defaults look fine to me for now, so I just followed the instructions. Similarly, I followed the instructions for loading the firewall rules during the boot sequence, before the network interface is enabled. I would need to reboot the server to see if that really worked, but I don’t feel like it just now.
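
The boot-time loading boils down to a small pre-up hook (a sketch; the rules file name follows the guide’s convention, but check the guide for the exact paths):

#!/bin/sh
# /etc/network/if-pre-up.d/firewall (must be executable)
/sbin/iptables-restore < /etc/iptables.firewall.rules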

The guide also suggested setting up Fail2Ban, but since we are not allowing logins with passwords, I am not really sure how helpful that would be. We do have ZenPhoto set up to password-protect most of our albums - mostly because we were getting comment spam on the old version of the site. So perhaps I will want to set that up at some point - but not for now.

Installing MySQL, Apache, and PHP

Linode also has a guide for setting up a LAMP stack on their servers (including some tuning for their smallest (1 GB RAM) offering). But I had found this other guide for setting up Rails on Ubuntu 14.04, so I mostly used it.

Installing and Configuring MySQL

So first, install some packages:

sudo apt-get install mysql-server mysql-client libmysqlclient-dev

For reference, this gives me mysqld Ver 5.5.43-0ubuntu0.14.04.1 for debian-linux-gnu on x86_64 ((Ubuntu))

I was prompted to set a root password for the database, which I did. The default install bound mysqld to 127.0.0.1:3306, which is good, and the default_storage_engine is InnoDB - also good. But the default server character set is latin1. It is the 21st century, and I think we should all be using UTF-8 all the time. So I created a file at /etc/mysql/conf.d/default_charset_utf8.cnf and used what I had used yesterday when I set up MySQL 5.5 on a CentOS system:

[mysqld]
default-character-set = utf8

But the server would not restart. In the error log I see:

/usr/sbin/mysqld: unknown variable 'default-character-set=utf8'

Huh? Searching for that error message turned up this blog post, which claims that option was deprecated in MySQL 5.0. Searching the MySQL docs, I only see default-character-set as a command line option. Apparently the more correct way to do this now is:

[mysqld]
character_set_server = utf8

That works:

mysql> show variables like 'char%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | utf8                       |
| character_set_connection | utf8                       |
| character_set_database   | utf8                       |
| character_set_filesystem | binary                     |
| character_set_results    | utf8                       |
| character_set_server     | utf8                       |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0.00 sec)

Checking the security of my MySQL server setup, there do not appear to be any test databases or anonymous access to the database, and root can only log in locally. While I was in the mysql console, I created the zenphoto user and the zenphoto_prod database.
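
Roughly, the statements for that last step (the password is a placeholder):

CREATE DATABASE zenphoto_prod CHARACTER SET utf8;
CREATE USER 'zenphoto'@'localhost' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON zenphoto_prod.* TO 'zenphoto'@'localhost';
FLUSH PRIVILEGES;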

Apache, PHP, and ZenPhoto

A week or so ago, I had installed LAMP plus the new version of ZenPhoto on a VM (using Vagrant) so I could see how feasible it would be to migrate directly from our super old ZenPhoto to the latest version by creating database migration files. So now I need to do the same thing on the new server. First I installed Apache and PHP:

apt-get install apache2 libapache2-mod-php5 php5-mysqlnd php5-gd

I was able to use the defaults for most of the Apache configuration. I may want to use the status command, though. In the /etc/apache2/envvars file there is the following comment:

## The command to get the status for 'apache2ctl status'.
## Some packages providing 'www-browser' need '--dump' instead of '-dump'.
# export APACHE_LYNX='www-browser -dump'

It looks like w3m satisfies that requirement, so I changed the line above to export APACHE_LYNX='w3m -dump'.

I now see the default Apache page for Ubuntu when I go to the IP address of my server. I didn’t make any changes to the php parameters. Later, I may need to tweak the parameters in /etc/php5/apache2/php.ini but for now I am just going to move on to installing ZenPhoto.

ZenPhoto

I copied zenphoto-zenphoto-1.4.7.tar.gz up to the server, untarred it into /var/www/html/, and renamed the folder to just ‘zenphoto’.

chown -R www-data:www-data zenphoto

ZenPhoto supplies an example config file: zp-core/zenphoto_cfg.txt. You copy that to zp-data/zenphoto.cfg.php and then edit it to provide the database connection information, etc.

chmod 600 zp-data/zenphoto.cfg.php

Normally what one would do next is navigate to the zenphoto/zp-core/setup.php URL and let the setup script install everything. But I want to use the data from our current site. Fortunately, ZenPhoto is set up so people can share a database by setting a table name prefix. So by setting the table prefix for the new site to something different from our original install, I was able to have complete sets of tables for the old and new installations in the same database. I went through the tables field by field and found that, although there were some new columns, I was able to write SQL statements to copy info from the original site into the new tables. The admin / configuration information was very different, so I did not attempt to transfer that. But the image and album information was very similar, so it was pretty straightforward to transfer most of the data into the new schema.
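
The copies were mostly insert-selects between the two prefixed table sets; a sketch of the pattern (the prefixes and column list are illustrative, not the actual schema):

INSERT INTO new_albums (id, folder, title, `show`)
SELECT id, folder, title, `show`
FROM old_albums;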

Once I had the original album and image data in the new tables, I navigated to http://<ipaddress>/zenphoto/ to run the setup script on top of the mostly-set-up database. With the database in place and the software set up, I used rsync to transfer the photos into the albums directory on the new site.
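
Something along these lines (the paths and host are illustrative):

rsync -avz /path/to/old/zenphoto/albums/ cnk@trickster:/var/www/html/zenphoto/albums/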

Rewrite rules for pretty urls

Now I can navigate to the photo albums - but the URLs are different. The old site was using rewrite rules to create pretty URLs like http://ictinike.org/zenphoto/tim/misc/ted_09/ but the new site is serving that album page as http://45.79.100.71/zenphoto/index.php?album=tim/misc/ted_09

The install script had asked if I wanted it to create a .htaccess file, and I had said yes. mod_rewrite was not enabled by default in my Apache install, but I had enabled it using sudo a2enmod rewrite - and still no dice. The .htaccess files from the old and new versions of ZenPhoto are very different, so it was hard to tell if that was the issue or something else. In fact, the new file says:

# htaccess file version 1.4.5;
#       Rewrite rules are now handled by PHP code
# See the file "zenphoto-rewrite.txt" for the actual rules
#
# These rules redirect everything not directly accessing a file to the
#       Zenphoto index.php script
...

In the ZenPhoto admin interface, there is a checkbox under URL options for mod_rewrite. When I check that, the links in the pages are now the ‘pretty URLs’ that I expect. But clicking on them gives me a 404 error. Unchecking the box gives me back the index.php + query args URLs - which work. Hmmmmm. It took me a while to figure out that the issue was that my main Apache configuration was set to ignore .htaccess files. In my /etc/apache2/apache2.conf:

<Directory /var/www/>
    Options Indexes FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>

I could have allowed .htaccess files to be used by changing the AllowOverride directive. But since I have full control over the configs, it seemed more sensible to just copy the rules from the .htaccess file into the virtual host config file for my ZenPhoto site. Since ZenPhoto does a lot of magical setup, I didn’t remove the now-unused .htaccess file from the zenphoto directory - in case something in the ZenPhoto admin checks to see if it is available.

<Directory />
    # CNK copied this from the .htaccess file in /zenphoto
    # rather than change the AllowOverride settings globally
    #
    # htaccess file version 1.4.5;
    #   Rewrite rules are now handled by PHP code
    # See the file "zenphoto-rewrite.txt" for the actual rules
    #
    # These rules redirect everything not directly accessing a file to the Zenphoto index.php script
    #
    <IfModule mod_autoindex.c>
        IndexIgnore *
    </IfModule>
    <IfModule mod_rewrite.c>
        RewriteEngine On

        RewriteBase /zenphoto

        RewriteCond %{REQUEST_FILENAME} -f [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [L]

        RewriteRule ^.*/?$   index.php [L,QSA]
    </IfModule>
</Directory>

Comment configuration

OK, almost everything is up and running. But I don’t see the comments I migrated - nor do I see a comment form on each page. I didn’t transfer the configuration stuff via the database, so I need to enable comments from the admin interface. There is a configuration page for the comments plugin - asking if we need people’s names and email addresses, and whether or not to allow anonymous or private commenting. I am going to defer to Tim on how he wants that set up. I did go ahead and enable one of the captcha plugins, so at least there is a small barrier to comment spam. That, and the comment system will email us about new comments - once I get email set up. See the next post for how I set up an outgoing mail server on the new machine. With that working, all I had to do was enable the zenphoto_sendmail plugin to get new comments emailed to us. I think that just sends mail as if it were using /usr/bin/sendmail (which Postfix will pick up as a backwards-compatibility nicety). If we want something more configurable, we may want to switch to the PHPMailer plugin, which allows you to set more configuration options in the ZenPhoto admin interface.

Search

There is an archive page that shows when all the photos were taken (based on the EXIF data, I think). The listings were there, but the pages they linked to said “no images found”. When I clicked the “Refresh Metadata” button on the main admin page, it rebuilt whatever index was needed to make the images show up on the by-date listings.