Posts about technology

The SSH agent

This is one part in a series on OpenSSH client configuration. Also read Elegant OpenSSH Configuration and Secure OpenSSH Defaults.

As part of another SSH client article, we may have generated a new ssh key for use in ssh public-key authentication.

$ ssh-keygen -t rsa -b 4096 # if you don't already have a key

SSH public-key authentication has intrinsic benefits; but many see it as a mechanism for non-interactive login: you don't have to remember, or type, a password.

This behavior depends, however, on having a non-encrypted private key. This is a security risk, because the non-encrypted private key may be compromised, either by accidental mishandling of the file or by unauthorized intrusion into the client system. In almost all cases, ssh private keys should be encrypted with a passphrase.

$ ssh-keygen -t rsa -b 4096 -f test
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again:

If you already have a key that is not encrypted, use the -p argument to ssh-keygen to set a passphrase.

$ ssh-keygen -p -f ~/.ssh/id_rsa

Now the private key is protected by a passphrase, which you'll be prompted for each time you use it. This is better than a password, because the passphrase is not transmitted to the server; but we've lost the ability to authenticate without having to type anything.


OpenSSH provides a dedicated agent process for the sole purpose of handling decrypted ssh private keys in-memory. Most Unix and Linux desktop operating systems (including OS X) start and maintain a per-user SSH agent process automatically.

$ pgrep -lfu $USER ssh-agent
815 /usr/bin/ssh-agent -l

Using the ssh-add command, you can decrypt your ssh private key by entering your passphrase once, adding the decrypted key to the running agent.

$ ssh-add ~/.ssh/id_rsa # the path to the private key may be omitted for default paths
Enter passphrase for /Users/user1234/.ssh/id_rsa: 
Identity added: /Users/user1234/.ssh/id_rsa (/Users/user1234/.ssh/id_rsa)

The decrypted private key remains resident in the ssh-agent process.

$ ssh-add -L
ssh-rsa [redacted] /Users/user1234/.ssh/id_rsa

This is better than a non-encrypted on-disk private key for two reasons: first, the decrypted private key exists only in memory, not on disk. This makes it more difficult to mishandle; notably, it cannot be recovered without re-entering the passphrase once the workstation is powered off. Second, client applications (like OpenSSH itself) no longer require direct access to the private key, encrypted or otherwise, nor must you provide your (secret) key passphrase to client applications: the agent mediates all use of the key itself.

The OpenSSH client uses the agent process identified by the SSH_AUTH_SOCK environment variable by default; but you generally don't have to worry about it: your workstation environment should configure it for you.


At this point, there's nothing more to do. With your ssh key added to the agent process, you're back to not needing to type in a password (or passphrase), but without the risk of a non-encrypted private key stored permanently on disk.

Secure OpenSSH defaults

This is one part in a series on OpenSSH client configuration. Also read Elegant OpenSSH configuration and The SSH agent.

It's good practice to harden our ssh client with some secure "defaults". Starting your configuration file with the following directives will apply them to all (*) hosts.

(These are listed as multiple Host * stanzas, but they can be combined into a single stanza in your actual configuration file.)

If you prefer, follow along with an example of a complete ~/.ssh/config file.

Require secure algorithms

OpenSSH supports many encryption and authentication algorithms, but some of those algorithms are known to be weak to cryptographic attack. The Mozilla project publishes a list of recommended algorithms that excludes those known to be insecure.

Host *

(More information on the available encryption and authentication algorithms, and how a recommended set is derived, is available in this fantastic blog post, "Secure secure shell.")
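Such a stanza might look like the following sketch. The algorithm names are real OpenSSH identifiers, but treat the exact set as illustrative rather than authoritative: the Mozilla recommendations change over time, so consult the current guidelines.

```
Host *
    KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
    Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512,hmac-sha2-256
```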

Hash your known_hosts file

Every time you connect to an SSH server, your client caches a copy of the remote server's host key in a ~/.ssh/known_hosts file. If your ssh client is ever compromised, this list can expose the remote servers to attack using your compromised credentials. Be a good citizen and hash your known hosts file.

Host *
HashKnownHosts yes

(Hash any existing entries in your ~/.ssh/known_hosts file by running ssh-keygen -H. Don't forget to remove the backup ~/.ssh/known_hosts.old.)

$ ssh-keygen -H
$ rm -i ~/.ssh/known_hosts.old

No roaming

Finally, disable the experimental "roaming" feature to mitigate exposure to a pair of potential vulnerabilities, CVE-2016-0777 and CVE-2016-0778.

Host *
UseRoaming no

Dealing with insecure servers

Some servers are old enough that they may not support the newer, more secure algorithms listed above. In the RC environment, for example, the login and other Internet-accessible systems provide relatively modern ssh algorithms; but hosts in the internal domain may not.

To support connection to older hosts while requiring newer algorithms by default, override these settings earlier in the configuration file.

# Internal RC hosts are running an old version of OpenSSH
Match host=*
MACs hmac-sha1,hmac-ripemd160,hmac-sha1-96

Elegant OpenSSH configuration

This is one part in a series on OpenSSH client configuration. Also read Secure OpenSSH defaults and The SSH agent.

The OpenSSH client is very robust, very flexible, and very configurable. Many times I see people struggling to remember server-specific ssh flags or arcane, manual multi-hop procedures. I even see entire scripts written to automate the process.

But the vast majority of what you might want ssh to do can be abstracted away with some configuration in your ~/.ssh/config file.

All (or, at least, most) of these configuration directives are fully documented in the ssh_config manpage.

If you prefer, follow along with an example of a complete ~/.ssh/config file.


One of the first annoyances people have--and one of the first things people try to fix--when using a command-line ssh client is having to type in long hostnames. For example, the Research Computing login service is available at

$ ssh

This particular name isn't too bad; but coupled with usernames and especially when used as part of an scp, these fully-qualified domain names can become cumbersome.

$ scp -r /path/to/src/

OpenSSH supports host aliases through pattern-matching in Host directives.

Host login*.rc

Host *.rc

In this example, %h is substituted with the name specified on the command-line. With a configuration like this in place, connections to login.rc are directed to the full name

$ scp -r /path/to/src/ user1234@login.rc:dest/

Failing that, other references to hosts with a .rc suffix are directed to the internal Research Computing domain. (We'll use these later.)

(The .rc domain segment could be moved from the Host pattern to the HostName value; but leaving it in the alias helps to distinguish the Research Computing login nodes from other login nodes that you may have access to. You can use arbitrary aliases in the Host directive, too; but then the %h substitution isn't useful: you have to enumerate each targeted host.)
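To make the %h substitution concrete, here's a sketch of such a configuration, using a hypothetical example.edu domain in place of the real one:

```
Host login*.rc
    HostName %h.example.edu

Host *.rc
    HostName %h.int.example.edu
```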


Unless you happen to use the same username on your local workstation as you have on the remote server, you likely specify a username using either the @ syntax or the -l argument to the ssh command.

$ ssh user1234@login.rc

As with specifying a fully-qualified domain name, tracking and specifying a different username for each remote host can become burdensome, especially during an scp operation. Record the correct username in your ~/.ssh/config file instead.

Match host=*,*
User user1234

Now all connections to Research Computing hosts use the specified username by default, without it having to be specified on the command-line.

$ scp -r /path/to/src/ login.rc:dest/

Note that we're using a Match directive here, rather than a Host directive. The host= argument to Match matches against the derived hostname, so it reflects the real hostname as determined using the previous Host directives. (Make sure the correct HostName is established earlier in the configuration, though.)


Even if the actual command is simple to type, authenticating to the host may require manual intervention. The Research Computing login nodes, for example, require two-factor authentication using a password or PIN coupled with a one-time VASCO password or Duo credential. If you want to open multiple connections--or, again, copy files using scp--having to authenticate with multiple factors quickly becomes tedious. (Even having to type in a password at all may be unnecessary; but we'll assume, as is the case with the Research Computing login example, that you can't use public-key authentication.)

OpenSSH supports sharing a single network connection for multiple ssh sessions.

ControlMaster auto
ControlPath ~/.ssh/.socket_%h_%p_%r
ControlPersist 4h

With ControlMaster and ControlPath defined, the first ssh connection authenticates and establishes a session normally; but future connections join the active connection, bypassing the need to re-authenticate. The optional ControlPersist option causes this connection to remain active for a period of time even after the last session has been closed.

$ ssh login.rc
Password: 
[user1234@login01 ~]$ logout

$ ssh login.rc
[user1234@login01 ~]$

(Note that many arguments to the ssh command are effectively ignored after the initial connection is established. Notably, if X11 was not forwarded with -X or -Y during the first session, you cannot use the shared connection to forward X11 in a later session. In this case, use the -S none argument to ssh to ignore the existing connection and explicitly establish a new connection.)


But what if you want to get to a host that isn't directly available from your local workstation? The hosts in the domain referenced above may be accessible from a local network connection; but if you are connecting from elsewhere on the Internet, you won't be able to access them directly.

Except that OpenSSH provides the ProxyCommand option which, when coupled with the OpenSSH client presumed to be available on the intermediate server, supports arbitrary proxy connections through to remotely-accessible servers.

Match host=*
ProxyCommand ssh -W %h:%p

Even though you can't connect directly to Janus compute nodes from the Internet, for example, you can connect to them from a Research Computing login node; so this ProxyCommand configuration allows transparent access to hosts in the internal Research Computing domain.
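A sketch of such a configuration, assuming the login.rc alias defined earlier is the intermediate host, and using a hypothetical internal domain pattern:

```
Match host=*.int.example.edu
    ProxyCommand ssh login.rc -W %h:%p
```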

$ ssh janus-compile1.rc
[user1234@janus-compile1 ~]$

And it even works with scp.

$ echo 'Hello, world!' >/tmp/hello.txt
$ scp /tmp/hello.txt janus-compile1.rc:/tmp
hello.txt                                     100%   14     0.0KB/s   00:00

$ ssh janus-compile1.rc cat /tmp/hello.txt
Hello, world!

Public-key authentication

If you tried the example above, chances are that you were met with an unexpected password prompt that didn't accept any password that you used. That's because most internal Research Computing hosts don't actually support interactive authentication, two-factor or otherwise. Connections from a CURC login node are authorized by the login node; but a proxied connection must authenticate from your local client.

The best way to authenticate your local workstation to an internal CURC host is using public-key authentication.

If you don't already have an SSH key, generate one now.

$ ssh-keygen -t rsa -b 4096 # if you don't already have a key

Now we have to copy the (new?) public key to the remote CURC ~/.ssh/authorized_keys file. RC provides a global home directory, so copying to any login node will do. Targeting a specific login node is useful, though: the ControlMaster configuration tends to confuse ssh-copy-id.

$ ssh-copy-id login01.rc

(The ssh-copy-id command doesn't come with OS X, but there's a third-party port available on GitHub. It's usually available on a Linux system, too. Alternatively, you can just edit ~/.ssh/authorized_keys manually.)
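The manual edit amounts to appending your public key to the remote authorized_keys file; a sketch, assuming the login01.rc alias from above:

```
# append the local public key to the remote authorized_keys
cat ~/.ssh/id_rsa.pub | ssh login01.rc 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
```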

Two software design methods

There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies.

--C.A.R. Hoare, The 1980 ACM Turing Award Lecture

Linux policy-based routing

How could Linux policy routing be so poorly documented? It's so useful, so essential in a multi-homed environment... I'd almost advocate for its inclusion as default behavior.

What is this, you ask? To understand, we have to start with what Linux does by default in a multi-homed environment. So let's look at one.

$ ip addr
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 78:2b:cb:66:75:c0 brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth2
    inet6 fe80::7a2b:cbff:fe66:75c0/64 scope link 
       valid_lft forever preferred_lft forever
6: eth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
    link/ether e4:1d:2d:14:93:60 brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth5
    inet6 fe80::e61d:2dff:fe14:9360/64 scope link 
       valid_lft forever preferred_lft forever

So we have two interfaces, eth2 and eth5. They're on separate subnets, and respectively. In our environment, we refer to these as "spsc-mgt" and "spsc-data." The practical circumstance is that one of these networks is faster than the other, and we would like bulk data transfer to use the faster "spsc-data" network.

If the client system also has an "spsc-data" network, everything is fine. The client addresses the system using its data address, and the link-local route prefers the data network.

$ ip route list
dev eth5  proto kernel  scope link  src

Our network environment covers a number of networks, however. So let's say our client lives in another data network--"comp-data." Infrastructure routing directs the traffic to the -data interface of our server correctly, but the default route on the server prefers the -mgt interface.

$ ip route list | grep ^default
default via dev eth2

For this simple case we have two options. We can either change our default route to prefer the -data interface, or we can enumerate intended -data client networks with static routes using the data interface. Since changing the default route simply leaves us in the same situation for the -mgt network, let's define some static routes.

$ ip route add via dev eth5
$ ip route add via dev eth5

So long as we can enumerate the networks that should always use the -data interface of our server to communicate, this basically works. But what if we want to support clients that don't themselves have separate -mgt and -data networks? What if we have a single client--perhaps with only a -mgt network connection--that should be able to communicate individually with the server's -mgt interface and its -data interface? In the most pathological case, what if we have a host that is only connected to the spsc-mgt interface, but we want that client to be able to communicate with the server's -data interface? In this case, the link-local route will always prefer the -mgt network for the return path.

Policy-based routing

The best case would be to have the server select an outbound route based not on a static configuration, but in response to the incoming path of the traffic. This is the feature enabled by policy-based routing.

Linux policy routing allows us to define distinct and isolated routing tables, and then select the appropriate routing table based on the traffic context. In this situation, we have three different routing contexts to consider. The first of these are the routes to use when the server initiates communication.

$ ip route list table main
dev eth2  proto kernel  scope link  src
dev eth5  proto kernel  scope link  src
via dev eth5
via dev eth5
default via dev eth2

A separate routing table defines routes to use when responding to traffic from the -mgt interface.

$ ip route list table 1
default via dev eth2

The last routing table defines routes to use when responding to traffic from the -data interface.

$ ip route list table 2
default via dev eth5

With these separate routing tables defined, the last step is to define the rules that select the correct routing table.

$ ip rule list
0:  from all lookup local 
32762:  from lookup 2 
32763:  from all iif eth5 lookup 2 
32764:  from lookup 1 
32765:  from all iif eth2 lookup 1 
32766:  from all lookup main 
32767:  from all lookup default
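By way of illustration, one such table and its selection rules could be created by hand like this, using RFC 5737 documentation addresses as stand-ins for the real subnets:

```
# routes for the -data context live in table 2 (example addresses)
ip route add 198.51.100.0/24 dev eth5 table 2
ip route add default via 198.51.100.1 dev eth5 table 2

# select table 2 for traffic arriving on, or sourced from, the -data interface
ip rule add iif eth5 lookup 2
ip rule add from 198.51.100.0/24 lookup 2
```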

Despite a lack of documentation, all of these rules may be codified in Red Hat "sysconfig"-style "network-scripts" using interface-specific route-* and rule-* files.

$ cat /etc/sysconfig/network-scripts/route-eth2
default via dev eth2
default via dev eth2 table 1

$ cat /etc/sysconfig/network-scripts/route-eth5
via dev eth5
via dev eth5
default via dev eth5 table 2

$ cat /etc/sysconfig/network-scripts/rule-eth2
iif eth2 table 1
from table 1

$ cat /etc/sysconfig/network-scripts/rule-eth5
iif eth5 table 2
from table 2

Changes to the RPDB made with these commands do not become active immediately. It is assumed that after a script finishes a batch of updates, it flushes the routing cache with ip route flush cache.


5 February 2015

Personal stuff


I finally got curtain brackets mounted in the family room, though no curtains are hung yet. Last night my drill battery died, and I ended up with a drill bit stuck in the wall; but now it's out, anchors in, and brackets hung. Tonight, curtains.


My piratebox still won't install the software from the usb stick, claiming that the storage device is full. Since there's plenty of space left on the usb stick, I can only assume it's talking about the internal storage. Maybe I flashed it incorrectly? Or maybe a previous failed install has left it without some space it expects to have? Or maybe I need to do some kind of reset?

In any case, I was able to telnet in, re-flashed the firmware manually, hit the reset button for good measure, and then tried installing again. This time it worked, and I have a "PirateBox" ssid once again following me around. I must admit, though: I'm a bit disappointed by the "new, responsive" layout. I think it's really only the media browser that has improved.

I don't know that I'm going to do anything about it today, but I'd really like to install OpenProject on civilfritz. I think it would help Andi and me work together to get projects done at home.

CU stuff

We had a meeting with Peak and IBM storage about a possible replacement for the existing home and projects storage N-series system. They are proposing a black-box GPFS system "IBM StoreWise v7000 Unified" which I should look into further. They have also mentioned ESS nee GSS (hopefully still a grey-box solution), though they didn't have many details on what that would look like. They weren't even sure if it's running Linux or AIX. (I'm hoping that their first instinct was wrong, and that it's Linux on Power.)

We have a talk scheduled about upgrading the PetaLibrary. I specifically am of the opinion that we should be using the PetaLibrary to house home and projects. The way I see it, we should have two independent filesets, /pl/home/ and /pl/projects/, and just NFS export and snapshot each of them. We could even still have each home and project directory be a dependent fileset underneath, if we want.

I'm going to keep working on the user guide today. Hopefully I'll have a batch queueing / slurm guide to give to Aaron by tomorrow.

I also need to work on a local passwords pam config to tick off the last of my performance plan post-its.

Before I do anything, though, I should look through org-mode and calendar to see what else I might be forgetting.

Understanding OpenStack networking with Neutron and Open vSwitch

I couldn't figure out OpenStack's networking system well enough to get my instances' floating IPs to work, even from the packstack --allinone host itself. I read the RDO document "Networking in too much detail", but even that seemed to assume more knowledge about how things fit together than I had.

I eventually got some help from the #rdo irc channel; but I think the best documentation ended up being Visualizing OpenStack Networking Service Traffic in the Cloud from the OpenStack Operations Guide.

In the end, most of my problem was that I was trying to assign an IP address to my br-ex interface that conflicted with the l3-agent that was already connected to the br-ex bridge. Literally any other address in the subnet that wasn't also used by an instance gave me the behavior I was looking for: being able to ping the floating addresses from the host.

ip addr add dev br-ex

Once that was done, I was able to configure NAT on the same host. This is described at the end of the "Networking in too much detail" document, and was echoed by the individual who helped me in #rdo; but I modified the POSTROUTING rule to identify the external network interface, p4p1. If the external interface is left unspecified, then even internal traffic from the host to the guests will be rewritten to the external address, which isn't valid on the floating-IP subnet.

iptables -A FORWARD -d -j ACCEPT 
iptables -A FORWARD -s -j ACCEPT
iptables -t nat -I POSTROUTING 1 -s -o p4p1 -j MASQUERADE

Rebuilding civilfritz TLS

Some random notes on when I rebuilt TLS for civilfritz using gnutls and cacert.

  • https
  • ldap start_tls

certtool --generate-privkey --outfile

certtool --generate-request --load-privkey --outfile


certtool --certificate-info <

Subject Alternative Name (not critical):
   XMPP Address:
   XMPP Address:

$ cat /etc/ssl/certs/ | certtool --verify-chain
    Issued by: O=Root CA,OU=,CN=CA Cert Signing Authority,
    Verifying against certificate[1].
Error: Issuer's name: O=CAcert Inc.,OU=,CN=CAcert Class 3 Root
certtool: issuer name does not match the next certificate

$ cat | certtool --verify-chain
    Issued by: O=Root CA,OU=,CN=CA Cert Signing Authority,
    Verifying against certificate[1].
    Verification output: Verified.

Certificate[1]: O=Root CA,OU=,CN=CA Cert Signing Authority,
    Issued by: O=Root CA,OU=,CN=CA Cert Signing Authority,
    Verification output: Verified.

Chain verification output: Verified.

Migrating from Apache to Nginx

I've had this hanging out as a draft for a while, and never got it polished up as well as I'd like; but in case it's useful to anyone, here's some notes on my recent migration to nginx.

I run on a VPS, but I do my best to keep as low a monthly payment as possible. That means running on the smallest (lowest-memory) VM available from my provider: a 1GB Linode.

1GB used to seem like a lot of memory; but when I'm trying to run a Minecraft server alongside a preforking Apache server alongside a Salt master, it fills up quickly.

I've wanted to try moving to a lighter-weight webserver for a while; so today I'm porting my Apache config to Nginx.

sites-available/ runs as a pair of Apache virtual hosts to support http and https. I want the majority of the configuration between the vhosts to be identical, so I include a separate common configuration file in each.

The http vhost includes the common config, as well as a rewrite for the ikiwiki /auth section. (Authentication should only happen over https, but attempts to authenticate over http should be redirected there.)

# apache http vhost

<VirtualHost *:80>
    RewriteEngine on
    RewriteRule ^/auth(|/.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R,L]

    Include sites-available/
</VirtualHost>

The transition to nginx was pretty simple. The ikiwiki /auth section is a virtually equivalent rewrite rule, and the include directive is also similar.

# nginx http vhost

server {
        listen 80;

        rewrite ^/auth(|/.*)$ https://$server_name:443$request_uri? permanent;

        include sites-available/;
}

The https vhost also includes the common config, as well as the requisite ssl config. To support http basic authentication, an instance of pwauth is configured as an external authentication module, which proxies to PAM.

# apache https vhost

<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/
    SSLCertificateKeyFile /etc/ssl/private/

    AddExternalAuth pwauth /usr/sbin/pwauth
    SetExternalAuthMethod pwauth pipe

    <Location />
        AuthType Basic
        AuthBasicProvider external
        AuthExternal pwauth
        AuthName ""
    </Location>

    Include sites-available/

    <Location /auth>
        Require valid-user
    </Location>
</VirtualHost>
Again, the nginx vhost starts out similarly. Listen on tcp 443, initialize the requisite certificate and key, and include the common config.

pwauth is an Apache-specific interface, so I wasn't able to use it to proxy to pam in nginx; but the auth_pam module works well enough and, since I'm not trying to use PAM to auth directly against local unix files (I'm using sssd to access kerberos), I still don't have to run the server as root.

# nginx ssl vhost

server {
        listen 443 ssl;

        ssl_certificate /etc/ssl/certs/;
        ssl_certificate_key /etc/ssl/private/;

        include sites-available/;

        location /auth {
                auth_pam "";
                include fastcgi_params;
                fastcgi_pass unix:/var/run/fcgiwrap.socket;
                fastcgi_index ikiwiki.cgi;
                fastcgi_param REMOTE_USER $remote_user;
        }
}

The semantics of Nginx basic authentication differ from Apache. In Apache I was able to set AuthName globally (at /) and then require authentication arbitrarily at lower points in the tree. Here, the inclusion of the auth_pam directive implies an auth requirement; so I'll have to repeat the authentication realm ("") anywhere I want to authenticate.
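For example, protecting a second location means repeating the auth_pam directive (and its realm) there; a sketch, mirroring the /users/janderson/private location from the Apache config (the empty realm string stands in for the real one):

```
# hypothetical second protected location; the realm must be repeated here
location /users/janderson/private {
        auth_pam "";
}
```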

The biggest difference, though, is how Nginx handles cgi. Whereas Apache builds in cgi execution for nominated files or directories, Nginx proxies all cgi execution through an external interface: here, fastcgi. A packed-in fastcgi_params file contains some useful default cgi environment variables, but omits REMOTE_USER. I set REMOTE_USER here so that ikiwiki can determine what user has authenticated.


The vast majority of my local config is in the common file included by both vhosts.

# Apache initial config

DirectoryIndex index.html

LogLevel warn
ErrorLog /var/log/apache2/error.log
CustomLog /var/log/apache2/access.log combined

DocumentRoot /srv/www/wiki

RewriteEngine on

Alias /robots.txt /srv/www/robots.txt

Alias /minecraft/overview /srv/www/minecraft-overviewer

<Location /users/janderson/private>
    Require user janderson
</Location>

<Directory />
    Options FollowSymLinks
    AllowOverride None
    Order deny,allow
    Deny from all
</Directory>

<Directory /srv/www>
    Order allow,deny
    Allow from all
</Directory>

<Directory /srv/www/wiki>
    AddHandler cgi-script .cgi
    Order allow,deny
    Allow from all
    Options +ExecCGI
    ErrorDocument 404 /ikiwiki.cgi
    ExpiresActive on
    ExpiresDefault "access plus 0 seconds"
    Header set Cache-Control "no-store, no-cache, must-revalidate, max-age=0"
    Header set Pragma "no-cache"
</Directory>

<Location /gitweb>
    Order allow,deny
    Allow from all
    DirectoryIndex index.cgi
</Location>

<Directory /home/*/public_html/>
    AllowOverride FileInfo AuthConfig Limit Indexes Options=ExecCGI
</Directory>

WSGIApplicationGroup %{GLOBAL}

New Nginx config


index index.html;

root /srv/www/wiki/;

location / {
        error_page 404 /ikiwiki-404.cgi;
        expires -1;
}

location /robots.txt {
        alias /srv/www/robots.txt;
}

location /minecraft/overview {
        alias /srv/www/minecraft-overviewer;
}

location /ikiwiki.cgi {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
        fastcgi_index ikiwiki.cgi;
}

location /ikiwiki-404.cgi {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
        fastcgi_param REDIRECT_URL $request_uri;
        # also needed to remove explicit 200
        fastcgi_param REDIRECT_STATUS 404;
}

location ~ /gitweb/(index|gitweb).cgi {
        root /usr/share/;
        gzip off;
        include fastcgi_params;
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
}

location /gitweb/ {
        root /usr/share/;
        gzip off;
        index index.cgi;
}

location ~ ^/~(.+?)(/.*)?$ {
        alias /home/$1/public_html$2;
        autoindex on;
}

Salt state



nginx:
  pkg:
    - installed
  service:
    - running
    - enable: True
    - reload: True
    - watch:
      - pkg: nginx


  - nginx


    - absent
    - watch_in:
      - service: nginx

    - symlink
    - target: /etc/nginx/sites-available/
    - require:
      - file: /etc/nginx/sites-available/
    - watch_in:
      - service: nginx

    - managed
    - source: salt://civilfritz/nginx-sites/
    - user: root
    - group: root
    - mode: 0644
    - require:
      - file: /etc/nginx/sites-available/
    - watch_in:
      - service: nginx

    - managed
    - source: salt://civilfritz/nginx-sites/
    - user: root
    - group: root
    - mode: 0644
    - watch_in:
      - service: nginx

    - symlink
    - target: /srv/www/wiki/ikiwiki.cgi