Improve your multi-homed servers with policy routing

This article was first published in the Summer 2016 issue of Usenix ;login:.

Traditional IP routing systems route packets by comparing the destination address against a predefined list of routes to each available subnet; but when multiple potential routes exist between two hosts on a network, the preferred route may depend on context that cannot be inferred from the destination alone. The Linux kernel, together with the iproute2 suite,1 supports the definition of multiple routing tables2 and a routing policy database3 to select the preferred routing table dynamically. This additional expressiveness can be used to avoid several routing pitfalls, including asymmetric routes and performance bottlenecks caused by suboptimal route selection.

Background

The CU-Boulder Research Computing environment spans three datacenters, each with its own set of special-purpose networks. A traditionally-routed host simultaneously connected to two or more of these networks compounds network complexity by making only one interface (the default gateway) generally available across network routes. Some cases can be addressed by defining static routes; but even this leads to asymmetric routing that is at best confusing and at worst a performance bottleneck.

Over the past few months we've been transitioning our hosts from a single-table routing configuration to a policy-driven, multi-table routing configuration. The end result is full bidirectional connectivity between any two interfaces in the network, irrespective of underlying topology or a host's default route. This has reduced the apparent complexity in our network by allowing the host and network to Do the Right Thing™ automatically, unconstrained by an otherwise static route map.

Linux policy routing has become an essential addition to host configuration in the University of Colorado Boulder "Science Network." It's so useful, in fact, that I'm surprised a basic routing policy isn't provided by default for multi-homed servers.

The problem with traditional routing

The simplest Linux host routing scenario is a system with a single network interface.

# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:88:56:1f brd ff:ff:ff:ff:ff:ff
    inet 10.225.160.38/24 brd 10.225.160.255 scope global dynamic ens192
       valid_lft 60184sec preferred_lft 60184sec

Such a typically-configured host with a single uplink has a single default route in addition to its link-local route.

# ip route list
default via 10.225.160.1 dev ens192
10.225.160.0/24 dev ens192  proto kernel  scope link  src 10.225.160.38

Traffic to hosts on 10.225.160.0/24 is delivered directly, while traffic to any other network is forwarded to 10.225.160.1.
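
A quick way to see which route the kernel will select for a given destination is ip route get. (The destination address here is arbitrary; output is trimmed and may vary slightly between iproute2 versions.)

# ip route get 10.225.176.10
10.225.176.10 via 10.225.160.1 dev ens192  src 10.225.160.38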

A dual-homed host adds a second network interface and a second link-local route; but the original default route remains. (Figure 1.)

# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:88:56:1f brd ff:ff:ff:ff:ff:ff
    inet 10.225.160.38/24 brd 10.225.160.255 scope global dynamic ens192
       valid_lft 86174sec preferred_lft 86174sec
3: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:88:44:18 brd ff:ff:ff:ff:ff:ff
    inet 10.225.176.38/24 brd 10.225.176.255 scope global dynamic ens224
       valid_lft 69193sec preferred_lft 69193sec

# ip route list
default via 10.225.160.1 dev ens192
10.225.160.0/24 dev ens192  proto kernel  scope link  src 10.225.160.38
10.225.176.0/24 dev ens224  proto kernel  scope link  src 10.225.176.38

The new link-local route provides access to hosts on 10.225.176.0/24, and is sufficient for a private network connecting a small cluster of hosts. In fact, this is the configuration that we started with in our Research Computing environment: .160.0/24 is a low-performance "management" network, while .176.0/24 is a high-performance "data" network.

Figure 1 - A simple dual-homed server with a traditional default route

In a more complex network, however, link-local routes quickly become insufficient. In the CU Science Network, for example, each datacenter is considered a discrete network zone with its own set of "management" and "data" networks. For hosts in different network zones to communicate, a static route must be defined in each direction to direct performance-sensitive traffic across the high-performance network route. (Figure 2.)

server # ip route add 10.225.144.0/24 via 10.225.176.1
client # ip route add 10.225.176.0/24 via 10.225.144.1

Though managing these static routes can be tedious, they do sufficiently define connectivity between the relevant network pairs: "data" interfaces route traffic to each other via high-performance networks, while "management" interfaces route traffic to each other via low-performance networks. Other networks (e.g., the Internet) can only communicate with the hosts on their default routes; but this limitation may be acceptable for some scenarios.

Figure 2 - A server and a client, with static routes between their data interfaces

Even this approach is insufficient, however, to allow traffic between "management" and "data" interfaces. This is particularly problematic when a client host is not equipped with a symmetric set of network interfaces. (Figure 3.) Such a client may only have a "management" interface, but should still communicate with the server's high-performance interface for certain types of traffic. (For example, a dual-homed NFS server should direct all NFS traffic over its high-performance "data" network, even when being accessed by a client that itself only has a low-performance "management" interface.) By default, the Linux rp_filter4 blocks this traffic, as the server's response to the client targets a different route than the incoming request; but even if rp_filter is disabled, this asymmetric route limits the server's aggregate network bandwidth to that of its lower-performing interface.
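
(The routing policy described below is the fix we prefer; but, for reference, the reverse-path filter can also be relaxed to "loose" mode with the standard sysctls. The interface name here is just an example from the configuration above.)

# sysctl -w net.ipv4.conf.all.rp_filter=2
# sysctl -w net.ipv4.conf.ens224.rp_filter=2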

The server's default route could be moved to the "data" interface--in some scenarios, this may even be preferable--but this only displaces the issue: clients may then be unable to communicate with the server on its "management" interface, which may be preferred for certain types of traffic. (In Research Computing, for example, we prefer that administrative access and monitoring not compete with IPC and file system traffic.)

Figure 3 - In a traditional routing configuration, the server would try to respond to the client via its default route, even if the request arrived on its data interface

Routing policy rules

Traditional IP routing systems route incoming packets based solely on the intended destination; but the Linux iproute2 stack supports route selection based on additional packet metadata, including the packet source. Multiple discrete routing tables, similar to the virtual routing and forwarding (VRF) support found in dedicated routing appliances,5 define contextual routes, and a routing policy selects the appropriate routing table dynamically based on a list of rules.

In this example there are three different routing contexts to consider. The first of these--the "main" routing table--defines the routes to use when the server initiates communication.

server # ip route list table main
10.225.144.0/24 via 10.225.176.1 dev ens224
default via 10.225.160.1 dev ens192
10.225.160.0/24 dev ens192  proto kernel  scope link  src 10.225.160.38
10.225.176.0/24 dev ens224  proto kernel  scope link  src 10.225.176.38

A separate routing table defines routes to use when responding to traffic on the "management" interface. Since this table is concerned only with the default route's interface in isolation, it simply reiterates the default route.

server # ip route add default via 10.225.160.1 table 1
server # ip route list table 1
default via 10.225.160.1 dev ens192

Similarly, the last routing table defines routes to use when responding to traffic on the "data" interface. This table defines a different default route: all such traffic should route via the "data" interface.

server # ip route add default via 10.225.176.1 table 2
server # ip route list table 2
default via 10.225.176.1 dev ens224

With these three routing tables defined, the last step is to define routing policy to select the correct routing table based on the packet to be routed. Responses from the "management" address should use table 1, and responses from the "data" address should use table 2. All other traffic, including server-initiated traffic that has no outbound address assigned yet, uses the "main" table automatically.

server # ip rule add from 10.225.160.38 table 1
server # ip rule add from 10.225.176.38 table 2
server # ip rule list
0:  from all lookup local
32764:  from 10.225.176.38 lookup 2
32765:  from 10.225.160.38 lookup 1
32766:  from all lookup main
32767:  from all lookup default

With this routing policy in place, a single-homed client (or, in fact, any client on the network) may communicate with both the server's "data" and "management" interfaces independently and successfully, and the bidirectional traffic routes consistently via the appropriate network. (Figure 4.)

Figure 4 - Routing policy allows the server to respond using its data interface for any request that arrived on its data interface, even if it has a different default route
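
You can verify the table selection without generating real traffic by asking the kernel for a route from a specific source address. (The client address here is hypothetical; output is trimmed.)

server # ip route get 10.225.144.39 from 10.225.176.38
10.225.144.39 from 10.225.176.38 via 10.225.176.1 dev ens224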

Persisting the configuration

This custom routing policy can be persisted in the Red Hat "ifcfg" network configuration system by creating interface-specific route- and rule- files.

# cat /etc/sysconfig/network-scripts/route-ens192
default via 10.225.160.1 dev ens192
default via 10.225.160.1 dev ens192 table mgt

# cat /etc/sysconfig/network-scripts/route-ens224
10.225.144.0/24 via 10.225.176.1 dev ens224
default via 10.225.176.1 dev ens224 table data

# cat /etc/sysconfig/network-scripts/rule-ens192
from 10.225.160.38 table mgt

# cat /etc/sysconfig/network-scripts/rule-ens224
from 10.225.176.38 table data

The symbolic names mgt and data used in these examples are translated to routing table numbers as defined in the /etc/iproute2/rt_tables file.

# echo "1 mgt" >>/etc/iproute2/rt_tables
# echo "2 data" >>/etc/iproute2/rt_tables

Once the configuration is in place, activate it by restarting the network service (e.g., systemctl restart network). You may also be able to achieve the same effect using ifdown and ifup on individual interfaces.
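
For example:

# systemctl restart network

...or, for a single interface:

# ifdown ens224 ; ifup ens224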

Red Hat's support for routing rule configuration has a confusing regression that merits specific mention. Red Hat (and its derivatives) has historically used a "network" initscript and subscripts to configure and manage network interfaces, and these scripts support the aforementioned rule- configuration files. Red Hat Enterprise Linux 6 introduced NetworkManager, a persistent daemon with additional functionality; however, NetworkManager did not support rule- files until version 1.0, released as part of RHEL 7.1. If you're currently using NetworkManager, but wish to define routing policy in rule- files, you'll need to either disable NetworkManager entirely or exempt specific interfaces from NetworkManager by specifying NM_CONTROLLED=no in the relevant ifcfg- files.
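
For example, adding the following line to the relevant ifcfg- file (here, the "data" interface from the examples above) exempts it from NetworkManager:

# grep NM_CONTROLLED /etc/sysconfig/network-scripts/ifcfg-ens224
NM_CONTROLLED=no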

In a Debian-based distribution, these routes and rules can be persisted using post-up directives in /etc/network/interfaces.
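
As a sketch (untested here), the equivalent persistent configuration with the same addresses and table names might look like this:

# /etc/network/interfaces (excerpt)
auto ens224
iface ens224 inet static
    address 10.225.176.38
    netmask 255.255.255.0
    post-up ip route add default via 10.225.176.1 dev ens224 table data
    post-up ip rule add from 10.225.176.38 table data
    pre-down ip rule del from 10.225.176.38 table data

The data table name would still need to be defined in /etc/iproute2/rt_tables, as above.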

Further improvements

We're still in the process of deploying this policy-based routing configuration in our Research Computing environment; and, as we do, we discover more cases where previously complex network requirements and special cases are abstracted away by this relatively uniform configuration. We're simultaneously evaluating other potential changes, including the possibility of running a dynamic routing protocol (such as OSPF) on our multi-homed hosts, or of configuring every network connection as a simultaneous default route for fail-over. In any case, this experience has encouraged us to take a second look at our network configuration to re-evaluate what we had previously thought were inherent limitations of the stack itself.

Jonathon Anderson Posted Wed Jun 1 00:00:00 2016

User-selectable authentication methods using pam_authtok

Research Computing is in the process of migrating and expanding our authentication system to support additional authentication methods. Historically we've supported VASCO IDENTIKEY time-based one-time passwords, combined with a PIN, to provide two-factor authentication.

$ ssh joan5896@login.rc.colorado.edu
joan5896@login.rc.colorado.edu's password: <pin><otp>

[joan5896@login04 ~]$

But the VASCO tokens are expensive, get lost or left at home, have a battery that runs out, and have an internal clock that sometimes falls out of sync with the rest of the authentication system. For these and other reasons we're provisioning most new accounts with Duo, which provides iOS and Android apps but also supports SMS and voice calls.

Unlike VASCO, Duo is only a single authentication factor; so we've also added support for upstream CU-Boulder campus password authentication to be used in tandem.

This means that we have to support both authentication mechanisms--VASCO and password+Duo--simultaneously. A naïve implementation might just stack these methods together.

auth sufficient pam_radius_auth.so try_first_pass # VASCO authenticates over RADIUS
auth requisite  pam_krb5.so try_first_pass # CU-Boulder campus password
auth required   pam_duo.so

This generally works: VASCO authentication is attempted first over RADIUS. If that fails, authentication is attempted against the campus password and, if that succeeds, against Duo.

Unfortunately, this generates spurious authentication failures in VASCO when using Duo to authenticate: the VASCO method fails, then Duo authentication is attempted. Users who have both VASCO and Duo accounts (e.g., all administrators) may generate enough failures to trigger the break-in mitigation security system, and the VASCO account may be disabled. This same issue exists if we reverse the authentication order to try Duo first, then VASCO: VASCO users might then cause their campus passwords to become disabled.

Instead, we need to enable users to explicitly specify which authentication method they're using.

Separate sssd domains

Our first attempt to provide explicit access to different authentication methods was to provide multiple redundant sssd domains.

[domain/rc]
description = Research Computing
proxy_pam_target = curc-twofactor-vasco


[domain/duo]
description = Research Computing (identikey+duo authentication)
enumerate = false
proxy_pam_target = curc-twofactor-duo

This allows users to log in normally using VASCO, while password+Duo authentication can be requested explicitly by logging in as ${user}@duo.

$ ssh -l joan5896@duo login.rc.colorado.edu

This works well enough for the common case of shell access over SSH: login is permitted and, since the default rc domain and the duo alias domain are both backed by the same LDAP directory, NSS sees no important difference once a user is logged in using either method.

This works because POSIX systems store the uid number returned by PAM and NSS, and generally resolve the uid number back to a username on demand. Not all systems work this way, however. For example, when we attempted to use this authentication mechanism to authenticate to our prototype JupyterHub (web) service, jobs dispatched to Slurm retained the ${user}@duo username format. Slurm also tracks users by username internally, and only the base ${user} username--not ${user}@duo--exists there.

Expecting that we would continue to find more unexpected side-effects of this implementation, we started to look for an alternative mechanism that doesn't modify the specified username.

pam_authtok

In general, a user provides two pieces of information during authentication: a username (which we've already determined we shouldn't modify) and an authentication token or password. We should be able to detect, for example, a prefix to that authentication token to determine what authentication method to use.

$ ssh joan5896@login.rc.colorado.edu
joan5896@login.rc.colorado.edu's password: duo:<password>

[joan5896@login04 ~]$

But we found no such PAM module that would allow us to manipulate the authentication token... so we wrote one.

auth [success=1 default=ignore] pam_authtok.so prefix=duo: strip prompt=password:

auth [success=done new_authtok_reqd=done default=die] pam_radius_auth.so try_first_pass

auth requisite pam_krb5.so try_first_pass
auth [success=done new_authtok_reqd=done default=die] pam_duo.so

Now our PAM stack authenticates against VASCO by default; but, if the user provides a password with a duo: prefix, authentication skips VASCO and authenticates the supplied password, followed by a Duo push. Our actual production PAM stack is a bit more complicated, supporting a redundant vasco: prefix as well, for forward-compatibility should we ever change the default authentication mechanism. We can also extend this mechanism to add arbitrary additional authentication methods later.
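
As a sketch (not our exact production stack), the redundant vasco: prefix can be handled with one more pam_authtok line that strips the prefix and simply falls through to the RADIUS module:

auth [success=ok default=ignore] pam_authtok.so prefix=vasco: strip prompt=password:

auth [success=1 default=ignore] pam_authtok.so prefix=duo: strip prompt=password:

auth [success=done new_authtok_reqd=done default=die] pam_radius_auth.so try_first_pass

auth requisite pam_krb5.so try_first_pass
auth [success=done new_authtok_reqd=done default=die] pam_duo.so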

Two software design methods

There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies.

--C.A.R. Hoare, The 1980 ACM Turing Award Lecture

Jonathon Anderson Posted Sun Mar 13 02:07:49 2016

Oceans (Where my feet may fall)

Spirit lead me where my trust is without borders. Let me walk upon the waters wherever You would call me.

Take me deeper than my feet could ever wander, and my faith will be made stronger in the presence of my Savior.

iTunes store

Jonathon Anderson Posted Sun Jul 26 20:22:41 2015

Two Compasses

A captain and his first mate were sailing on the open ocean. Each possessed a compass, with a third affixed to the helm. The first mate approached the captain, saying, "Sir: in my clumsiness this morning, I have dropped my compass into the sea! What should I do?" After a moment, the captain wordlessly turned and threw his own compass into the sea. The first mate watched and became enlightened.

Jonathon Anderson Posted Thu May 14 00:13:13 2015

The Psalms and Me

Every time I read the Psalms:

Hear my prayer, O LORD, Give ear to my supplications!

"Oh, maybe this will be a nice verse to uplift my friend!"

For the enemy has persecuted my soul; He has crushed my life to the ground; He has made me dwell in dark places, like those who have long been dead. Therefore my spirit is overwhelmed within me; My heart is appalled within me.

"Yes! I'll bet this message will really resonate with my friend! Sometimes we all feel downtrodden."

Answer me quickly, O LORD, my spirit fails; Do not hide Your face from me, Or I will become like those who go down to the pit.

"Yes! When we are at our lowest, we should run to God!"

And in Your lovingkindness, cut off my enemies And destroy all those who afflict my soul

"Um... David? We cool, bro?"

Awake to punish all the nations; Do not be gracious to any who are treacherous in iniquity.

"Hey... I don't know if I meant all that..."

Scatter them by Your power, and bring them down, O Lord, our shield.
Destroy them in wrath, destroy them that they may be no more
Deal with them as You did with Midian, [...] they became manure for the ground

"Hold on, there, man! Let's not get too crazy..."

How blessed will be the one who seizes and dashes your little ones Against the rock.

sigh

"Come on, David. Things were going so well. I mean, sure: you murdered a man to cover up the fact that you impregnated his wife... but you were sorry, right?"

Wash me thoroughly from my iniquity And cleanse me from my sin.

"See? There. That's more like it..."

O God, shatter their teeth in their mouth; Break out the fangs of the young lions, O LORD.

"No! Bad David! No biscuit!"

...

"Whatcha got for me, Jesus?"

You have heard that it was said, "You shall love your neighbor and hate your enemy. But I say to you, love your enemies and pray for those who persecute you, so that you may be sons of your Father who is in heaven; for He causes His sun to rise on the evil and the good, and sends rain on the righteous and the unrighteous. For if you love those who love you, what reward do you have?

"Aw, yeah. That's the stuff."

"David, have you met this guy? I think you should probably meet this guy."

Jonathon Anderson Posted Tue May 12 05:27:21 2015

Understanding OpenStack networking with Neutron and Open vSwitch

I couldn't figure out OpenStack's networking system well enough to get my instances' floating IPs to work, even from the packstack --allinone host itself. I read the RDO document "Networking in too much detail", but even that seemed to assume more knowledge about how things fit together than I had.

I eventually got some help from the #rdo irc channel; but I think the best documentation ended up being Visualizing OpenStack Networking Service Traffic in the Cloud from the OpenStack Operations Guide.

In the end, most of my problem was that I was trying to assign an IP address to my br-ex interface that conflicted with the l3-agent that was already connected to the br-ex bridge. Literally any other address in the subnet that wasn't also used by an instance gave me the behavior I was looking for: being able to ping the floating addresses from the host.

ip addr add 172.24.4.225/28 dev br-ex

Once that was done, I was able to configure NAT on the same host. This is described at the end of the "Networking in too much detail" document, and was echoed by the individual who helped me in #rdo; but I modified the POSTROUTING rule to identify the external network interface, p4p1. If the external interface is left unspecified, then even internal traffic from the host to the guests will be rewritten to the external address, which isn't valid on the floating-IP subnet.

iptables -A FORWARD -d 172.24.4.224/28 -j ACCEPT 
iptables -A FORWARD -s 172.24.4.224/28 -j ACCEPT
iptables -t nat -I POSTROUTING 1 -s 172.24.4.224/28 -o p4p1 -j MASQUERADE

Migrating from Apache to Nginx

I've had this hanging out as a draft for a while, and never got it polished up as well as I'd like; but in case it's useful to anyone, here are some notes on my recent migration to nginx.


I run civilfritz.net on a VPS, but I do my best to keep as low a monthly payment as possible. That means running on the smallest (lowest-memory) VM available from my provider: a 1GB Linode.

1GB used to seem like a lot of memory; but when I'm trying to run a Minecraft server alongside a preforking Apache server alongside a Salt master, it fills up quickly.

I've wanted to try moving to a lighter-weight webserver for a while; so today I'm porting my Apache config to Nginx.

sites-available/civilfritz.net

civilfritz.net runs as a pair of Apache virtual hosts to support http and https. I want the majority of the configuration between the vhosts to be identical, so I include a separate common configuration file in each.

The http vhost includes the common config, as well as a rewrite for the ikiwiki /auth section. (Authentication should only happen over https, but attempts to authenticate over http should be redirected there.)

# apache http vhost

<VirtualHost *:80>
    RewriteEngine on
    RewriteRule ^/auth(|/.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R,L]

    Include sites-available/civilfritz.net-common
</VirtualHost>

The transition to nginx was pretty simple. The ikiwiki /auth redirect becomes a virtually equivalent rewrite rule, and the include directive is also similar.

# nginx http vhost

server
{
        listen 80;

        rewrite ^/auth(|/.*)$ https://$server_name:443$request_uri? permanent;

        include sites-available/civilfritz.net-common;
}

The https vhost also includes the common config, as well as the requisite ssl config. To support http basic authentication, an instance of pwauth is configured as an external authentication module, which proxies to PAM.

# apache https vhost

<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/civilfritz.net.pem
    SSLCertificateKeyFile /etc/ssl/private/civilfritz.net.key

    AddExternalAuth pwauth /usr/sbin/pwauth
    SetExternalAuthMethod pwauth pipe

    <Location />
        AuthType Basic
        AuthBasicProvider external
        AuthExternal pwauth
        AuthName "civilfritz.net"
    </Location>

    Include sites-available/civilfritz.net-common

    <Location /auth>
        Require valid-user
    </Location>
</VirtualHost>

Again, the nginx vhost starts out similarly. Listen on tcp 443, initialize the requisite certificate and key, and include the common config.

pwauth is an Apache-specific interface, so I wasn't able to use it to proxy to PAM in nginx; but the auth_pam module works well enough and, since I'm not trying to use PAM to authenticate directly against local Unix files (I'm using sssd to access Kerberos), I still don't have to run the server as root.

# nginx ssl vhost

server
{
        listen 443 ssl;

        ssl_certificate /etc/ssl/certs/civilfritz.net.pem;
        ssl_certificate_key /etc/ssl/private/civilfritz.net.key;

        include sites-available/civilfritz.net-common;

        location /auth
        {
                auth_pam "civilfritz.net";
                include fastcgi_params;
                fastcgi_pass unix:/var/run/fcgiwrap.socket;
                fastcgi_index ikiwiki.cgi;
                fastcgi_param REMOTE_USER $remote_user;
        }
}

The semantics of Nginx basic authentication differ from Apache's. In Apache I was able to set AuthName globally (at /) and then require authentication arbitrarily at lower points in the tree. Here, the presence of the auth_pam directive itself implies an authentication requirement; so I'll have to repeat the authentication realm ("civilfritz.net") anywhere I want to authenticate.
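
For example, protecting another path (like the /users/janderson/private location from the old Apache config) means repeating the realm in a block of its own. Note that, unlike Apache's Require user, this hypothetical sketch only requires a valid login, not a specific user:

location /users/janderson/private
{
        auth_pam "civilfritz.net";
}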

The biggest difference, though, is how Nginx handles CGI. Whereas Apache builds in CGI execution for nominated files or directories, Nginx proxies all CGI execution through an external interface: here, FastCGI. A packaged fastcgi_params file contains some useful default CGI environment variables, but omits REMOTE_USER. I set it here so that ikiwiki can determine which user has authenticated.

sites-available/civilfritz.net-common

The vast majority of my local config is in the common file included by both vhosts.

# Apache initial config

ServerAdmin anderbubble@gmail.com
DirectoryIndex index.html
ServerName civilfritz.net
ServerAlias www.civilfritz.net

LogLevel warn
ErrorLog /var/log/apache2/error.log
CustomLog /var/log/apache2/access.log combined

DocumentRoot /srv/www/wiki

RewriteEngine on

Alias /robots.txt /srv/www/robots.txt

Alias /minecraft/overview /srv/www/minecraft-overviewer

<Location /users/janderson/private>
    Require user janderson
</Location>

<Directory />
    Options FollowSymLinks
    AllowOverride None
    Order deny,allow
    Deny from all
</Directory>

<Directory /srv/www>
    Order allow,deny
    Allow from all
</Directory>

<Directory /srv/www/wiki>
    AddHandler cgi-script .cgi
    Order allow,deny
    Allow from all
    Options +ExecCGI
    ErrorDocument 404 /ikiwiki.cgi
    ExpiresActive on
    ExpiresDefault "access plus 0 seconds"
    Header set Cache-Control "no-store, no-cache, must-revalidate, max-age=0"
    Header set Pragma "no-cache"
</Directory>

<Location /gitweb>
    Order allow,deny
    Allow from all
    DirectoryIndex index.cgi
</Location>

<Directory /home/*/public_html/>
    AllowOverride FileInfo AuthConfig Limit Indexes Options=ExecCGI
</Directory>

WSGIApplicationGroup %{GLOBAL}

New Nginx config

sites-available/civilfritz.net-common

index index.html;
server_name civilfritz.net www.civilfritz.net;

root /srv/www/wiki/;

location /
{
        error_page 404 /ikiwiki-404.cgi;
        expires -1;
}

location /robots.txt
{
        alias /srv/www/robots.txt;
}

location /minecraft/overview
{
        alias /srv/www/minecraft-overviewer;
}

location /ikiwiki.cgi
{
        include fastcgi_params;
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
        fastcgi_index ikiwiki.cgi;
}

location /ikiwiki-404.cgi
{
        internal;
        include fastcgi_params;
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
        fastcgi_param REDIRECT_URL $request_uri;
        # also needed to remove explicit 200
        fastcgi_param REDIRECT_STATUS 404;
}

location ~ /gitweb/(index|gitweb).cgi
{
        root /usr/share/;
        gzip off;
        include fastcgi_params;
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
}

location /gitweb/
{
        root /usr/share/;
        gzip off;
        index index.cgi;
}

location ~ ^/~(.+?)(/.*)?$
{
        alias /home/$1/public_html$2;
        autoindex on;
}

Salt state

nginx.sls

nginx:

  pkg:
    - installed

  service:
    - running
    - enable: True
    - reload: True
    - watch:
      - pkg: nginx

civilfritz/www.sls

include:
  - nginx

[...]

/etc/nginx/sites-enabled/default:
  file:
    - absent
    - watch_in:
      - service: nginx

/etc/nginx/sites-enabled/civilfritz.net:
  file:
    - symlink
    - target: /etc/nginx/sites-available/civilfritz.net
    - require:
      - file: /etc/nginx/sites-available/civilfritz.net
    - watch_in:
      - service: nginx

/etc/nginx/sites-available/civilfritz.net:
  file:
    - managed
    - source: salt://civilfritz/nginx-sites/civilfritz.net
    - user: root
    - group: root
    - mode: 0644
    - require:
      - file: /etc/nginx/sites-available/civilfritz.net-common
    - watch_in:
      - service: nginx

/etc/nginx/sites-available/civilfritz.net-common:
  file:
    - managed
    - source: salt://civilfritz/nginx-sites/civilfritz.net-common
    - user: root
    - group: root
    - mode: 0644
    - watch_in:
      - service: nginx

/srv/www/wiki/ikiwiki-404.cgi:
  file:
    - symlink
    - target: /srv/www/wiki/ikiwiki.cgi

Jonathon Anderson Posted Thu Apr 24 19:30:49 2014

Christmas 2013: Raspberry Pi and Arduino

I spent a fair amount of time this Christmas hacking around on the Raspberry Pi with my electrical-engineer brother-in-law. On top of that, he gave me an Arduino Uno Starter Pack for Christmas. So now we have a great foundation of common ground between our neighboring disciplines.

His original project goal for the Raspberry Pi was, "A webpage I can use from my phone to click a button and make lights turn on and off." With the help of the Pyramid web framework (from the Pylons Project) and an existing GPIO Python module, I was able to make a rudimentary GPIO web interface. It allows you to designate pins as input or output, and read or set the state of each pin. You can even (thanks to Twitter's Bootstrap) visit it from your phone, and it won't look all that terrible.

The Arduino is my first real foray into embedded programming (unless you count the rudimentary CPU I built in my undergraduate Computer Organization and Design course). At the suggestion of my brother-in-law and the Internet I set out to create the "Hello, world!" of electronics: a blinking LED. That said, the Arduino IDE on Debian was so easy to use (it's literally an apt-get install arduino away) that I wasn't terribly satisfied with the relatively minimal work it took to achieve my blinking light. It had been a while since I'd written much C, so I took the opportunity to make a more full-featured "Hello, world!" with the addition of Morse code, serial console logging, and analog encoding control.

When he returns from a visit with his own respective in-laws, we're going to brush up on my soldering skills by building the Adafruit proto shield. Here's hoping that projects like these keep us collaborating, despite the physical distance between us after the holidays are over.

Documentation as a habit

People seemed to like this comment that I made at LISA 2013:

It's important to see documentation as a habit, rather than as a task to be completed. Don't ask how you can catch up on the backlog of documentation that isn't written; just document what you do and answers to questions you've had to ask from here on out.