Posts about sysadmin

The SSH agent

This is one part in a series on OpenSSH client configuration. Also read Elegant OpenSSH Configuration and Secure OpenSSH Defaults.

In another SSH client article, we may have generated a new ssh key for use with ssh public-key authentication.

$ ssh-keygen -t rsa -b 4096 # if you don't already have a key

SSH public-key authentication has intrinsic benefits; but many see it as a mechanism for non-interactive login: you don't have to remember, or type, a password.

This behavior depends, however, on having a non-encrypted private key. That is a security risk, because the non-encrypted private key may be compromised, either by accidental mishandling of the file or by unauthorized intrusion into the client system. In almost all cases, ssh private keys should be encrypted with a passphrase.

$ ssh-keygen -t rsa -b 4096 -f test
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again:

If you already have a key that is not encrypted, use the -p argument to ssh-keygen to set a passphrase.

$ ssh-keygen -p -f ~/.ssh/id_rsa

Now the private key is protected by a passphrase, which you'll be prompted for each time you use it. This is better than a password, because the passphrase is not transmitted to the server; but we've lost the ability to authenticate without having to type anything.

ssh-agent

OpenSSH provides a dedicated agent process for the sole purpose of handling decrypted ssh private keys in-memory. Most Unix and Linux desktop operating systems (including OS X) start and maintain a per-user SSH agent process automatically.

$ pgrep -lfu $USER ssh-agent
815 /usr/bin/ssh-agent -l

Using the ssh-add command, you can decrypt your ssh private key by entering your passphrase once, adding the decrypted key to the running agent.

$ ssh-add ~/.ssh/id_rsa # the path to the private key may be omitted for default paths
Enter passphrase for /Users/joan5896/.ssh/id_rsa: 
Identity added: /Users/joan5896/.ssh/id_rsa (/Users/joan5896/.ssh/id_rsa)

The decrypted private key remains resident in the ssh-agent process.

$ ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAgEAuuq61MFxmPDHf806s65O6pljn1hM/BB6XpDPu52YjOSriTtN2cgo1CpDDXNJffiwjvodr7Sq0UBTKcFvZzu41N5oGy5ob3H4axY+SRjs7giSYJna3OB+ABO3PV4LduEpt9IOFue7Q8Q0xvtSeZdNqPq2jPGXuxviFzBCVebkFE6BDdL6NI3Y591DIs2LZwm+v/wHk9cLjo//beWyMl7Gku5jnvCNozo7VBS6nfPcFW9SScAH8ow/2XP0sF4+YCOIbikQ6R69Dl18il8MumQUt+Cxc4cL39zjIuzpuGjhSTEb3TY3crHU8TH7cdhY7+WW1Ab8A19gec2vwAbAEOtdx24L7nXA2SFiImuO0FtQBS6gYp9BMKlotm0xBsNzvGsutIz6nb3iA/4OuVV9wUcE3SpicY0nE7Xn1LKfZdeSc+Jjs7K3Sry9SeTJ7awwaqQDEeAhw6GM5PUaZStwtheQe5tAe/OnwRR8J8lCXI+VUOnoTogOG7md99beygNuF1VEPHp044sjLzOf0ubcZYdA0jXqddkxG4S9TNFPNZ7dxGwdLQqWoqnwBDat4xHQ7g7ifdy1F3WKj2V+BVl4owCzlWYCRtgN5f6O18MLAjBLG0Mdd6jZoMGyBHG8jlJkMFNNiuwRqT53mR+hTFMFOHEe+DJXDPgBisLs19HBbK4MM9k= /Users/joan5896/.ssh/id_rsa

This is better than a non-encrypted on-disk private key for two reasons: first, the decrypted private key exists only in memory, not on disk. This makes it more difficult to mishandle; notably, it cannot be recovered without re-entering the passphrase once the workstation is powered off. Second, client applications (like OpenSSH itself) no longer require direct access to the private key, encrypted or otherwise, nor must you provide your (secret) key passphrase to client applications: the agent moderates all use of the key itself.

The OpenSSH client uses the agent process identified by the SSH_AUTH_SOCK environment variable by default; but you generally don't have to worry about it: your workstation environment should configure it for you.

$ echo $SSH_AUTH_SOCK
/private/tmp/com.apple.launchd.L311i5Nw5J/Listeners

At this point, there's nothing more to do. With your ssh key added to the agent process, you're back to not needing to type in a password (or passphrase), but without the risk of a non-encrypted private key stored permanently on disk.

Secure OpenSSH defaults

This is one part in a series on OpenSSH client configuration. Also read Elegant OpenSSH configuration and The SSH agent.

It's good practice to harden your ssh client with some secure "defaults". Starting your configuration file with the following directives will apply them to all (*) hosts.

(These are listed as multiple Host * stanzas, but they can be combined into a single stanza in your actual configuration file.)

If you prefer, follow along with an example of a complete ~/.ssh/config file.

Require secure algorithms

OpenSSH supports many encryption and authentication algorithms, but some of those algorithms are known to be weak to cryptographic attack. The Mozilla project publishes a list of recommended algorithms that excludes those known to be insecure.

Host *
HostKeyAlgorithms ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-ed25519,ssh-rsa,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp521,ecdsa-sha2-nistp384,ecdsa-sha2-nistp256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1

(More information on the available encryption and authentication algorithms, and how a recommended set is derived, is available in this fantastic blog post, "Secure secure shell.")

Hash your known_hosts file

Every time you connect to an SSH server, your client caches a copy of the remote server's host key in a ~/.ssh/known_hosts file. If your ssh client is ever compromised, this list can expose the remote servers to attack using your compromised credentials. Be a good citizen and hash your known hosts file.

Host *
HashKnownHosts yes

(Hash any existing entries in your ~/.ssh/known_hosts file by running ssh-keygen -H. Don't forget to remove the backup ~/.ssh/known_hosts.old.)

$ ssh-keygen -H
$ rm -i ~/.ssh/known_hosts.old

No roaming

Finally, disable the experimental "roaming" feature to mitigate exposure to a pair of potential vulnerabilities, CVE-2016-0777 and CVE-2016-0778.

Host *
UseRoaming no

Dealing with insecure servers

Some servers are old enough that they may not support the newer, more secure algorithms listed above. In the RC environment, for example, the login and other Internet-accessible systems provide relatively modern ssh algorithms; but the hosts in the rc.int.colorado.edu domain may not.

To support connection to older hosts while requiring newer algorithms by default, override these settings earlier in the configuration file.

# Internal RC hosts are running an old version of OpenSSH
Match host=*.rc.int.colorado.edu
MACs hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96

Elegant OpenSSH configuration

This is one part in a series on OpenSSH client configuration. Also read Secure OpenSSH defaults and The SSH agent.

The OpenSSH client is very robust, very flexible, and very configurable. Many times I see people struggling to remember server-specific ssh flags or arcane, manual multi-hop procedures. I even see entire scripts written to automate the process.

But the vast majority of what you might want ssh to do can be abstracted away with some configuration in your ~/.ssh/config file.

All (or, at least, most) of these configuration directives are fully documented in the ssh_config manpage.

If you prefer, follow along with an example of a complete ~/.ssh/config file.

HostName

One of the first annoyances people have--and one of the first things people try to fix--when using a command-line ssh client is having to type in long hostnames. For example, the Research Computing login service is available at login.rc.colorado.edu.

$ ssh login.rc.colorado.edu

This particular name isn't too bad; but coupled with usernames and especially when used as part of an scp, these fully-qualified domain names can become cumbersome.

$ scp -r /path/to/src/ joan5896@login.rc.colorado.edu:dest/

OpenSSH supports host aliases through pattern-matching in Host directives.

Host login*.rc
HostName %h.colorado.edu

Host *.rc
HostName %h.int.colorado.edu

In this example, %h is substituted with the name specified on the command-line. With a configuration like this in place, connections to login.rc are directed to the full name login.rc.colorado.edu.

$ scp -r /path/to/src/ joan5896@login.rc:dest/

Failing that, other references to hosts with a .rc suffix are directed to the internal Research Computing domain. (We'll use these later.)

(The .rc domain segment could be moved from the Host pattern to the HostName value; but leaving it in the alias helps to distinguish the Research Computing login nodes from other login nodes that you may have access to. You can use arbitrary aliases in the Host directive, too; but then the %h substitution isn't useful: you have to enumerate each targeted host.)
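For example, an arbitrary alias with an explicit HostName might look like this ("rclogin" is just an illustrative name, not part of my actual configuration):

Host rclogin
HostName login.rc.colorado.edu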

User

Unless you happen to use the same username on your local workstation as you have on the remote server, you likely specify a username using either the @ syntax or the -l argument to the ssh command.

$ ssh joan5896@login.rc

As with specifying a fully-qualified domain name, tracking and specifying a different username for each remote host can become burdensome, especially during an scp operation. Record the correct username in your ~/.ssh/config file instead.

Match host=*.rc.colorado.edu,*.rc.int.colorado.edu
User joan5896

Now all connections to Research Computing hosts use the specified username by default, without it having to be specified on the command-line.

$ scp -r /path/to/src/ login.rc:dest/

Note that we're using a Match directive here, rather than a Host directive. The host= argument to Match matches against the derived hostname, so it reflects the real hostname as determined using the previous Host directives. (Make sure the correct HostName is established earlier in the configuration, though.)

ControlMaster

Even if the actual command is simple to type, authenticating to the host may require manual intervention. The Research Computing login nodes, for example, require two-factor authentication using a password or pin coupled with a one-time VASCO password or Duo credential. If you want to open multiple connections--or, again, copy files using scp--having to authenticate with multiple factors quickly becomes tedious. (Even having to type in a password at all may be unnecessary; but we'll assume, as is the case with the Research Computing login example, that you can't use public-key authentication.)

OpenSSH supports sharing a single network connection for multiple ssh sessions.

Match host=login.rc.colorado.edu
ControlMaster auto
ControlPath ~/.ssh/.socket_%h_%p_%r
ControlPersist 4h

With ControlMaster and ControlPath defined, the first ssh connection authenticates and establishes a session normally; but future connections join the active connection, bypassing the need to re-authenticate. The optional ControlPersist option causes this connection to remain active for a period of time even after the last session has been closed.

$ ssh login.rc
joan5896@login.rc.colorado.edu's password: 
[joan5896@login01 ~]$ logout

$ ssh login.rc
[joan5896@login01 ~]$

(Note that many arguments to the ssh command are effectively ignored after the initial connection is established. Notably, if X11 was not forwarded with -X or -Y during the first session, you cannot use the shared connection to forward X11 in a later session. In this case, use the -S none argument to ssh to ignore the existing connection and explicitly establish a new connection.)
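In that case, a fresh, X11-forwarded connection can be made with something like the following (a quick sketch; adjust flags to taste):

$ ssh -S none -Y login.rc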

ProxyCommand

But what if you want to get to a host that isn't directly available from your local workstation? The hosts in the rc.int.colorado.edu domain referenced above may be accessible from a local network connection; but if you are connecting from elsewhere on the Internet, you won't be able to access them directly.

Except that OpenSSH provides the ProxyCommand option which, when coupled with the OpenSSH client presumed to be available on the intermediate server, supports arbitrary proxy connections through to remotely-accessible servers.

Match host=*.rc.int.colorado.edu
ProxyCommand ssh -W %h:%p login.rc.colorado.edu

Even though you can't connect directly to Janus compute nodes from the Internet, for example, you can connect to them from a Research Computing login node; so this ProxyCommand configuration allows transparent access to hosts in the internal Research Computing domain.

$ ssh janus-compile1.rc
[joan5896@janus-compile1 ~]$

And it even works with scp.

$ echo 'Hello, world!' >/tmp/hello.txt
$ scp /tmp/hello.txt janus-compile1.rc:/tmp
hello.txt                                     100%   14     0.0KB/s   00:00

$ ssh janus-compile1.rc cat /tmp/hello.txt
Hello, world!

Public-key authentication

If you tried the example above, chances are that you were met with an unexpected password prompt that didn't accept any password that you used. That's because most internal Research Computing hosts don't actually support interactive authentication, two-factor or otherwise. Connections from a CURC login node are authorized by the login node; but a proxied connection must authenticate from your local client.

The best way to authenticate your local workstation to an internal CURC host is using public-key authentication.

If you don't already have an SSH key, generate one now.

$ ssh-keygen -t rsa -b 4096 # if you don't already have a key

Now we have to copy the (new?) public key to the remote CURC ~/.ssh/authorized_keys file. RC provides a global home directory, so copying to any login node will do. Targeting a specific login node is useful, though: the ControlMaster configuration for login.rc.colorado.edu tends to confuse ssh-copy-id.

$ ssh-copy-id login01.rc

(The ssh-copy-id command doesn't come with OS X, but there's a third-party port available on GitHub. It's usually available on a Linux system, too. Alternatively, you can just edit ~/.ssh/authorized_keys manually.)
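If you do edit it by hand, a one-liner along these lines usually does the trick (a sketch, assuming the remote ~/.ssh directory might not exist yet):

$ cat ~/.ssh/id_rsa.pub | ssh login01.rc 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'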

Why hasn't my (Slurm) job started?

A job can be blocked from being scheduled for the following reasons:

  • There are insufficient resources available to start the job, either due to active reservations, other running jobs, component status, or system/partition size.

  • Other higher-priority jobs are waiting to run, and the job's time limit prevents it from being backfilled.

  • The job's time limit would extend into an upcoming reservation (e.g., scheduled preventative maintenance).

  • The job is associated with an account that has reached or exceeded its GrpCPUMins.

Display a list of queued jobs sorted in the order considered by the scheduler using squeue.

squeue --sort=-p,i --priority --format '%7T %7A %10a %5D %.12L %10P %10S %20r'

Reason codes

A list of reason codes is available as part of the squeue manpage.

Common reason codes:

  • ReqNodeNotAvail
  • AssocGrpJobsLimit
  • AssocGrpCPUMinsLimit
  • resources
  • QOSResourceLimit
  • Priority
  • AssociationJobLimit
  • JobHeldAdmin

How are jobs prioritized?

PriorityType=priority/multifactor

Slurm prioritizes jobs using the multifactor plugin based on a weighted summation of age, size, QOS, and fair-share factors.

Use the sprio command to inspect each weighted priority value separately.

sprio [-j jobid]

Age Factor

PriorityWeightAge=1000
PriorityMaxAge=14-0

The age factor represents the length of time a job has been sitting in the queue and eligible to run. In general, the longer a job waits in the queue, the larger its age factor grows. However, the age factor for a dependent job will not change while it waits for the job it depends on to complete. Also, the age factor will not change when scheduling is withheld for a job whose node or time limits exceed the cluster's current limits.

The weighted age priority is calculated as PriorityWeightAge[1000]*[0..1] as the job age approaches PriorityMaxAge[14-0], or 14 days. As such, an hour of wait-time is equivalent to ~2.976 priority.

Job Size Factor

PriorityWeightJobSize=2000

The job size factor correlates to the number of nodes or CPUs the job has requested. The weighted job size priority is calculated as PriorityWeightJobSize[2000]*[0..1] as the job size approaches the entire size of the system. A job that requests all the nodes on the machine will get a job size factor of 1.0, with an effective weighted job size priority of 28 wait-days (except that job age priority is capped at 14 days).
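To make the scaling concrete, consider a hypothetical job requesting half of the system's nodes:

PriorityWeightJobSize[2000]*0.5 = 1000, or roughly 14 wait-days' worth of priority.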

Quality of Service (QOS) Factor

PriorityWeightQOS=1500

Each QOS can be assigned a priority: the larger the number, the greater the job priority will be for jobs that request this QOS. This priority value is then normalized to the highest priority of all the QOS's to become the QOS factor. As such, the weighted QOS priority is calculated as PriorityWeightQOS[1500]*QosPriority[0..1000]/MAX(QOSPriority[1000]).

QOS          Priority  Weighted priority  Wait-days equivalent
-----------  --------  -----------------  --------------------
admin            1000               1500                  21.0
janus               0                  0                   0.0
janus-debug       400                600                   8.4
janus-long        200                300                   4.2

Fair-share factor

PriorityWeightFairshare=2000
PriorityDecayHalfLife=14-0

The fair-share factor serves to prioritize queued jobs such that those jobs charging accounts that are under-serviced are scheduled first, while jobs charging accounts that are over-serviced are scheduled when the machine would otherwise go idle.

The simplified formula for calculating the fair-share factor for usage that spans multiple time periods and is subject to a half-life decay is:

F = 2**(-NormalizedUsage/NormalizedShares)
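For example, an account whose normalized usage exactly matches its normalized share lands on the neutral value:

F = 2**(-1.0/1.0) = 0.5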

Each account is granted an equal share, and historic records of use decay with a half-life of 14 days. As such, the weighted fair-share priority is calculated as PriorityWeightFairshare[2000]*[0..1] depending on the account's historic use of the system relative to its allocated share.

A fair-share factor of 0.5 indicates that the account's jobs have used exactly the portion of the machine that they have been allocated, and assigns the job an additional 1000 priority (the equivalent of about 336 wait-hours, or 14 wait-days). A fair-share factor above 0.5 indicates that the account's jobs have consumed less than their allocated share, and assigns the job up to 2000 additional priority, for an effective relative 14 wait-day priority boost. A fair-share factor below 0.5 indicates that the account's jobs have consumed more than their allocated share of the computing resources, and the added priority will approach 0 depending on the account's history relative to its equal share of the system, for an effective relative 14 wait-day priority penalty.

Documentation as a habit

People seemed to like this comment that I made at LISA 2013:

It's important to see documentation as a habit, rather than as a task to be completed. Don't ask how you can catch up on the backlog of documentation that isn't written; just document what you do, and the answers to questions you've had to ask, from here on out.

Securing Salt file_roots

My only real problem with Salt vs Puppet is its security model for files stored on the master. Puppet's fileserver supports per-node export configuration, allowing for node-private file distribution. Salt, on the other hand, exposes all files to all nodes at all times.

How Puppet does it

# fileserver.conf
[mount_point]
path /path/to/files
allow *.example.com
deny *.wireless.example.com

[private]
path /data/private/%h
allow *

How Salt could do it

# file_roots.sls
file_roots:
  base:
    - /srv/salt
    - /srv/salt-example.com:
      - allow: *.example.com
      - deny: *.wireless.example.com
  dev:
    - /srv/salt/dev/services
    - /srv/salt/dev/states
  prod:
    - /srv/salt/prod/services
    - /srv/salt/prod/states

Proposal

file_roots:
  base:
    - /srv/salt
  *.example.com:
    - /srv/salt-example.com

A New Firewall Salt State

My evaluation of Salt Stack is going pretty well. I've moved my main vps over to it with no ill effect, and was able to transcribe its Puppet manifest almost in its entirety. In many instances, I think the Salt version is more readable, and feels lighter than the Puppet version.

One glaring hole, though, is Salt's support for firewall configuration. I was using the Puppet Labs firewall module to maintain iptables rules for my vps. That worked pretty well; but all Salt has right now is the ability to append new rules to a chain. The existing iptables state is documented as at risk of deprecation, too, so it's a bad place to start.

It is expected that this state module, and other system-specific firewall states, may at some point be deprecated in favor of a more generic firewall state.

(Salt does have good support for iptables at the functional layer; it's just the configuration management part that's lacking.)

Since the firewall module I used before worked well enough, and I have a bunch of config based on it already, I've started reimplementing its interface in a Salt state module.

"100 salt-master":
  firewall_rule:
    - managed
    - protocol: tcp
    - ports: 4505:4506
    - action: accept

I've found developing a Salt state to be a pretty simple process so far. I really like how Salt's effective layers cleanly separate between functionality, state management, and configuration. (My firewall state makes liberal use of the existing iptables module, for example.)

I've published what I have so far on GitHub. The module at least recognizes that my existing config exists, and would be able to rebuild it in the proper order (sorted lexically by comment) if necessary. There's a lot of functionality missing, but it's a place to start. If anyone else uses it, that will just be an excuse to make it better!

Discovering Salt Stack

I've been a pretty stalwart Puppet user since I first discovered it in 2009. At that time, my choices were, as I saw them, between the brand-new cfengine3, the I've-seen-how-the-sausage-is-made bcfg2, and Puppet. Of those, Puppet seemed like the best option.

In particular, I liked Puppet's "defined state" style of configuration management, and how simple it was to describe dependencies between the various packages, files, and services to be configured.

Like I said, I've been using Puppet happily for the past 4 years; but now, I think I've been swayed by Salt Stack.

I know I looked at Salt Stack before; but, at the time, I think I dismissed it as just "remote execution." Salt does, after all, start from a very different place than Puppet. At its most simple, it is a mechanism for shipping Python functions to remote nodes and executing them. It seemed the very opposite of the idempotent state management that I was looking for.

But now that I've taken the time to look deeper into the documentation (or, perhaps, now that the project has grown further) I've found Salt Stack States: the state enforcement configuration management system I was looking for; and with a trivial-to-setup remote execution layer underneath it.
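To give a flavor of what a state looks like, here's a minimal sketch in the classic SLS syntax (the package and service names are just an example, not taken from my actual configuration):

openssh-server:
  pkg:
    - installed
  service:
    - running
    - name: ssh
    - require:
      - pkg: openssh-server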

Salt is based on 0MQ. I don't know much about message queues; but I do know that I could never get ActiveMQ working for use with Puppet's MCollective. After only 30 minutes of hacking, I had Salt, with 0MQ, running on two OS X machines and two Debian machines, all talking to the same master, each from behind its own form of inconveniently private network.

$ sudo salt '*' test.ping
ln1.civilfritz.net:
    True
Jonathons-MacBook-Pro.local:
    True
numfar.civilfritz.net:
    True
dabade.civilfritz.net:
    True

Glorious.

Some other things that I like about Salt:

  • States are defined in YAML, so there's no proprietary (cough poorly defined cough) language to maintain.
  • The remote execution layer and state module layer help keep executable code separate from state definitions.
  • Key management is a bit less foolish. (It shows you what you're about to sign before you sign it.)

Of course, no new technology arrives without the pain of a legacy conversion. I have a lot of time and effort invested into the Puppet manifests that drive ln1.civilfritz.net; but converting them to Salt Stack States is serving as a pretty good exercise for evaluating whether I really prefer Salt to Puppet.

I've already discovered a few things I don't like, of course:

  • The abstraction of the underlying Python implementation is a bit thin. This is sometimes a good thing, as it's easier to see how a state definition maps to individual function calls; but it also means that error messages sometimes require an understanding of Python. Sometimes you even get full tracebacks.
  • Defined states don't seem to understand the correlation between uid and uidNumber. In Puppet I started specifying group ownership as 0 when I discovered that AIX names gid 0 "system" rather than "root". In Salt, this appears to try to reassign the group ownership every time.
  • All hosts in a Salt config have access to all of the files in the master.
  • YAML formatting can be a bit wonky. (Why are arguments lists of dictionaries? Why is the function being called in the same list as its arguments?)
  • No good firewall (iptables) configuration support. The iptables module isn't even present in the version of Salt I have; but the documentation warns that even it is likely to be deprecated in the future.

That said, I can't ignore the fact that, since Salt happens to be written in Python, I might actually be able to contribute to this project. I've already done some grepping around in the source code, and it seems immediately approachable. Enhancing the roots fileserver, for example, to provide node-restricted access to files, shouldn't be too bad. I might even be able to port Puppet Labs' firewall module from Ruby to Python for use as a set of Salt modules.

Time will tell, I suppose. For now, the migration continues.

Introducing civilfritz Minecraft

I started playing Minecraft with my brother and old college roommate a few weeks ago. My expectations have been proven correct, as I've found it much more compelling to play on a persistent server with a group of real-life friends. In fact, in the context of my personal dedicated server instance, I'm finding the game strikes a compelling chord between my gamer side and my sysadmin side.

There's already some documentation for running a Minecraft server on the Minecraft wiki, but none of it is really in keeping with how I like to administer a server. I don't want to run services in a screen session, even if an init script sets it up for me.

I wrote my own Debian init script that uses start-stop-daemon and named pipes to allow server commands. Beyond that, I made a Puppet module that can install and configure the server. You can clone it from Git at git://civilfritz.net/puppet-minecraft.git.
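The core of the named-pipe trick looks something like this minimal sketch (paths and the Java invocation are illustrative, not the actual init script):

mkfifo /var/run/minecraft/console.fifo
# hold the pipe open read-write so the server's stdin never sees EOF
exec 3<>/var/run/minecraft/console.fifo
java -jar minecraft_server.jar nogui <&3 &
# any process can now inject a console command, e.g.:
echo 'save-all' >/var/run/minecraft/console.fifo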

I also really like maps, so I started looking for software that would let me generate maps of the world. (I was almost pacified when I learned how to craft maps. Almost.) I eventually settled on Minecraft Overviewer, mostly because it seems to be the most polished implementation. They even provide a Debian repository, so I didn't have to do anything special to install it.

I've configured Minecraft Overviewer to update the render once a day (at 04:00 EST, which hopefully won't conflict with actual Minecraft server use), with annotations updated once an hour. You can see it at http://civilfritz.net/minecraft/overview.
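A crontab for that schedule might look something like this (paths and options are placeholders rather than my actual configuration):

# full render once a day at 04:00
0 4 * * * overviewer.py --config=/etc/overviewer/config.py
# refresh annotations (POI markers) hourly
0 * * * * overviewer.py --config=/etc/overviewer/config.py --genpoi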

I couldn't get Overviewer to display over https for some reason I don't understand yet; so all access is redirected back to http for now.

Why I'm abandoning strict Allman style in Puppet manifests

I pretty much always use Allman style in languages that have braces. I like the symmetry, and the visible separation of identifier from value.

Though Allman style has its roots in C, the only brace language I use these days is Puppet. (Python end-runs around this whole issue by omitting braces altogether, which I ultimately prefer.) Pedantic as I am, my choice of brace style has extended (as closely as I could) to writing Puppet manifests.

class motd

(
  $content = undef
)

{
  file
  { '/etc/motd':
    content => $content,
    owner   => '0',
    group   => '0',
    mode    => '0644',
  }
}

This isn't what most people do, and it's certainly not what the examples in the Puppet style guide do; but it's also not in violation of any of the recommendations in the style guide.

I've been doing this for years, now; but today, I had one of those "aha" moments where I pleasantly realized that I've been doing it wrong.

Allman style works just fine for Puppet class definitions; but Puppet resources provide their titles within the braces, rather than outside. This supports the compression of multiple resources into a single declaration.

file {
  '/tmp/a':
    content => 'a';
  '/tmp/b':
    content => 'b';
}

This syntax is explicitly discouraged in the style guide, but it's part of the language's legacy.

The problem with Allman style in this context is that it separates the resource title from the resource type. In most braced languages, the title of an element is written outside of the braces, after the type.

#! /bin/bash

function main
{
    # ...
}

In this example, it would be easy to grep a pile of Bash source files for scripts that declare a main function.

$ grep 'function main' *.sh

Not so with Allman style. I can grep for /etc/motd; but that would match against any reference to the file. Finding the declaration itself becomes a manual exercise with a contextual grep (grep --before-context 1).
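Something like this, for example (the path glob is hypothetical):

$ grep --before-context=1 "'/etc/motd':" manifests/*.pp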

All of this becomes much simpler, however, if resource declarations include the resource title (and the interstitial brace) on the same line as the resource type.

class motd

(
  $content = undef
)

{
  file { '/etc/motd':
    content => $content,
    owner   => '0',
    group   => '0',
    mode    => '0644',
  }
}

Even I have to admit that grep "file { '/etc/motd':" *.pp is much simpler.

This is immaterial for class declarations, since the class name is located before the brace.

class motd
{
  ...
}

I'd argue that Puppet should at least support a similar syntax for resources; one that puts the title directly after the type.

file '/etc/motd'
{
  ...
}

That could get a bit confusing, though, when using parameterized classes, as the syntax for declaring a parameterized class is somewhat close to the regular class definition syntax.

# definition
class motd
{
  # ...
}

# declaration
class motd
{
  content => 'Hello, world!',
}