Philip Potter

  • London emacs meetup, 20th May 2014

    Posted on 21 May 2014

    Last night was the second London emacs meetup. Here are my rough-and-ready notes, taken in org-mode, like all my notes.

    emacspeak: emacs for visually impaired people

    intro, @bodil & @dotemacs

    • welcome!
    • format:
      • one or two short talks
      • about things that are useful for all emacs users
      • then afterwards, gather around particular fields of interest

    talk: writing major modes (by @dotemacs)


    • last time, @bodil gave a talk
    • definition:
      • “encapsulating a set of editing behaviours”
    • book: GNU Emacs Extensions, O’Reilly
    • Yukihiro Matsumoto (matz)
      • “I started Ruby development with influence from emacs implementation”
      • “but as an emacs addict I needed a language mode”
      • “auto-indent was a must”
      • “back in 1993, there was no auto-indenting language mode for a language with such syntax”
      • “If I couldn’t make ruby-mode work, the syntax of ruby would have become more C-like”
    • Bob Glickstein & Scott Andrew Borton


    • two types: major & minor
    • (define-minor-mode ...)
      • keymap (specific for the minor mode)
      • variable <name>-mode
      • command called <name>-mode
        • toggles the mode
      • add it to a hook
      • (add-hook 'some-mode-hook 'your-mode)
    • major mode:
      • memorable name :)
      • hooks (<name>-mode-hook)
      • syntax table
        • defines how things should appear (highlighting)
      • entry function (<name>-mode)
      • syntax highlighting; two options:
        • optimized regexes
          • Q: what does “optimized” mean here?
            • eg matching keywords would mash together into a mega-regexp
        • use regexp-opt
          • takes a list of strings, and outputs a regexp which matches all of those things
          • aside: rx module compiles sexps to regexes
      • indentation
      • syntax table “behaviour” & movement
        • how you jump between functions, classes, & other constructs the language has
      • remove all buffer-local variables
      • set the variables
        • major-mode to <name>-mode
        • mode-name to string “name”
      • keymaps; two options:
        • sparse-keymap
          • suitable for up to half a dozen bindings
        • define your own, otherwise
      • bind mode to files
        • (add-to-list 'auto-mode-alist '("\\.bar" . foo-mode))
      • run user defined hooks for the mode
        • defined with <name>-mode-hook
      • provide mode
        • (provide '<name>)
    • cookies
      • ;;;###autoload
    • “this sounds like a lot of work, why reinvent the wheel”?
      • cookie cutters
        • sample-mode (available on emacswiki)
        • derived-mode (part of emacs)
    • checkdoc
      • good tool to run once you have defined a mode
      • kind of a lint tool
    • ecukes
      • framework that allows you to write tests in:
    • espuds
      • step definitions
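
    The pieces above can be pulled together into a minimal sketch of a major mode. This is a hedged example, not anything shown at the talk: "foo" is a hypothetical language, and it uses define-derived-mode (the "cookie cutter" route) so that most of the boilerplate from the list comes for free.

```elisp
;; A minimal derived major mode for a hypothetical "foo" language.
(defconst foo-mode-keywords
  ;; regexp-opt mashes the keyword list into one optimized regexp
  `((,(regexp-opt '("if" "else" "while" "return") 'words)
     . font-lock-keyword-face)))

(define-derived-mode foo-mode prog-mode "Foo"
  "Major mode for editing Foo files."
  (setq font-lock-defaults '(foo-mode-keywords)))

;; bind the mode to files, and provide it
(add-to-list 'auto-mode-alist '("\\.foo\\'" . foo-mode))
(provide 'foo-mode)
```

    Because this derives from prog-mode rather than hand-rolling everything, the mode variable, entry function, keymap, buffer-local variable reset, major-mode/mode-name settings and foo-mode-hook are all generated automatically.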

    questions

    • keybinding conventions?
      • Who gets to claim the C-c space?
      • it’s really complicated
        • there’s seven or eight layers
        • minor modes override major
      • C-x normally reserved by emacs
      • C-c C-<something> major mode generally
      • C-c <something> own use (shouldn’t be modes)
      • super and hyper of course
        • windows key & os x command
    • why do I get so many compilation warnings when installing modes through package-install?

    tangent: eshell

    • written by someone working on Windows who didn’t have a decent shell
    • redirecting straight into a buffer is nifty
    • all interactive commands are bound as shell commands
    • Plan9 features built in

    tangent: testing

    • why use ecukes and espuds?
      • why can’t I just do my testing in a repl?
        • you’ll have to restart your emacs a lot because your tests will be messing with global mutable state
        • automating a test suite would make things more convenient than manually using C-x C-e
    • is there a good mode for editing these tests?
      • you could use cucumber-mode
      • is there a way to jump to the step-definitions?
    • is there something between cucumber and C-x C-e for testing?
    • is there a way to test keybindings without using ecukes?
      • ecukes has the most momentum (seemingly)
    • dash.el does tests nicely
      • the README has some examples which are the unit tests
      • see magnars/dash.el

    tangent2: magnars emacsrocks talk

    ideas for birds-of-a-feather groups

    • highlighting (first)
    • multiple emacs woes (OS-supplied vs emacs 24)
    • eshell (dotemacs)
    • bulletproof emacs for the lightweight users (mickey)
    • sql in emacs (mickey)
    • org-mode (everyone later)
      • org-reveal?
    • haskell (later)
    • flymake/flycheck
      • jfdi (apart from java)
      • need to be aware of the lang-specific linter you need

    making the most of paredit/smartparens (bodil)

    from earlier discussion

    • “I made paredit work for python!”
    • comparison
      • paredit works out of the box
      • smartparens configurable to work in any mode
    • paredit for haskell!

    bodil’s config

    • bodil-smartparens (find it on )
      • make smartparens behave as much like paredit as possible
      • turn on smartparens-strict-mode in your lisp mode hook
    • paredit’s M-<up> and M-r (splice-sexp-killing-backward and raise)
      • both useful for pulling expressions out of let bindings
    • bodil: I’d actually recommend paredit for lisps, but smartparens is useful for other langs
      • I’m actually using it for haskell
    • html tagedit – like paredit for html
      • syntax-aware killing
      • slurp and barf
      • magnar again :)
    • autoindent in curly-brace languages
      • ie when pressing RET in function(){|}
      • want to create new line indented, with curly brace on line following that
    • structural-haskell mode
      • sort of like paredit for haskell syntax

    eshell (dotemacs)

    • interactive commands available as regular shell commands (eg find-file)
    • lots of commands replaced with emacs-friendly modes (eg man)
    • can work with tramp (eg cd /sudo::/etc; find-file passwd)
    • configuration goes in emacs.d/eshell
      • in particular aliases in emacs.d/eshell/aliases
    • buffer redirection
      • use C-c M-b to select and insert a reference to a buffer
      • for example:
        • echo foo >> #<buffer *scratch*>
    • plan 9 features

    tangent: webkit.el

    • takes an external webkit window and puts it on top of the appropriate emacs window

    tangent: fish

    • why fish?
    • sick of bash
    • completion is really cool
  • Refactoring systems

    Posted on 21 February 2014

    “Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure.” – Martin Fowler, in Refactoring

    As a developer who fell into operations work by accident, I often relate what I see in production systems to my experiences manipulating code in an editor. One thing I’ve been thinking about recently is refactoring. When I was working on Java day-to-day, I would routinely use IntelliJ IDEA’s refactoring support: extract method, extract class, rename method, move method. Refactoring was an intrinsic part of good design: you never hit the perfect design first try, so you rely on refactoring to gradually improve your code from the first slap-dash implementation.

    It actually took some time before I read the original refactoring book. If you haven’t read it, go read it now! Even 14 years after it was written, it’s still relevant. It describes how to do refactoring when you don’t have fancy refactoring tools: through a series of small, incremental changes, which at no point cause the code to break, you can evolve from a poor design towards a better design.

    If I have one criticism of the book, it’s that it doesn’t reach far enough. The book’s subtitle is “Improving the Design of Existing Code”; but I think it should have been “Improving the Design of Existing Systems”. The approach of taking small, incremental changes to an existing system, to improve the overall design while at no point breaking anything, is by no means limited to code. This is acknowledged by the 2007 book Refactoring Databases; but it also applies to other aspects of system design such as DNS naming, network topology and mapping of functions to particular applications.

    The importance of maintaining a working system throughout a refactoring is high when developing, but it’s critical when modifying a production system. After all, when you’re developing code, the worst that can happen is a failing test, but the consequences in production are much more severe. And you don’t get any of IntelliJ’s automagical refactoring tools to help you.

    Sometimes you will need to make a deployment to production in between steps of a refactoring: perhaps to ensure you have introduced a new service name before configuring any consumers of that service to use it. As a result, refactoring whole systems can be slower and more laborious than refactoring code. Nevertheless, it’s the only way to improve the design of an existing system, short of creating an entirely new production system in parallel and switching traffic over to it.

    Here are some recent refactorings I have performed on production systems:

    Rename service

    Problem: the name of a service, as it appears in DNS, does not describe its purpose.

    Solution: introduce a new name for the service. Gradually find each of the service’s consumers, and reconfigure them to use the new name. When the old name is no longer needed, remove it.

    Rename entire domain

    Problem: the name of a domain does not describe its purpose.

    Solution: introduce a new domain as a copy of the old. Modify the search domain of resolvers in that domain to search the new domain as well as the old. Find any references in configuration to fully-qualified domain names mentioning the domain, and change them to use the new domain. When no more consumers reference the old domain name, remove it from the search path, and finally remove the old domain entirely.

    I recently performed this dance to rename an entire environment from “test” to “perf”, to give it a more intention-revealing name. Using shortnames as much as possible, rather than fully-qualified names, made the job much easier. (There are other reasons for using shortnames: by standardizing on particular shortnames for services, you reduce the amount of environment-specific configuration needed.)

    Change IP address of service

    Problem: you want to shift a service from one public IP address to another.

    Solution: introduce a route from the new public IP to the service. Test that the service works at the new IP – the curl --resolve option is very useful for testing this without having to modify DNS or /etc/hosts entries. Ensure any firewall rules which applied to the old IP address are copied to apply to the new address. When certain that the new IP address works, change your public DNS record from the old IP address to the new. Wait for the TTL to expire before using the old IP address for any other purpose. Finally, remove any stale firewall rules referring to the old address.
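
    The --resolve trick looks like this; the host name, port and addresses below are hypothetical stand-ins, not taken from any real system:

```shell
# Force curl to contact the *new* address for this name, without
# touching DNS or /etc/hosts. Host and IPs are hypothetical.
curl --resolve -sSf

# Repeat against the old address to compare behaviour:
curl --resolve -sSf
```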

    This might seem like an odd thing to want to do, so here’s the context: in one of our environments, we had three public IP addresses listening on port 443. I wanted to expose a new service on port 443, but restricted at the firewall to only certain source addresses, so it couldn’t be a shared virtual host on an existing IP. One of our IPs was used by a reverse proxy with several virtual hosts proxying various management services – graphite, sensu, kibana. Our CI server, however, was occupying a public IP all to itself. If I created a new virtual host on our reverse proxy to front the CI server, I could serve it from the same IP address as our other management services, freeing up a public IP for a new service with separate access control restrictions.

    Merge sensu servers

    Problem: you have two sensu servers monitoring separate groups of clients, when you would rather have one place to monitor everything.

    Solution: choose one of the servers to keep. Reconfigure the clients of the other server to point at the original server. Remove the now-unused sensu server.

    Note: this refactoring assumes that the two servers have equivalent configuration in terms of checks and handlers, which in our situation was true. If not, you may need to do some initial work bringing the two sensu servers into alignment, before reconfiguring the clients.

    Remove intermediate http proxy

    Problem: On your application servers, you are using nginx to terminate TLS and reverse proxy a jetty application server. You wish to remove this extra layer by having jetty terminate TLS itself.

    Solution: Add a second connector to jetty, listening for https connections. Reconfigure consumers to talk to jetty directly, rather than talking to nginx. Once nothing is configured to talk to nginx, remove it.

    In our specific situation, we were using DropWizard 0.7, which makes it easy to add a second https connector. DropWizard 0.6 assumes that you have exactly one application connector, and it’s either http or https, but not both. We have some apps that are running DropWizard 0.6; our refactoring plan for them involves first upgrading to DropWizard 0.7, followed by repeating the steps above.

    It’s not a catalog, it’s a way of thinking

    The original refactoring book presented refactoring as a catalog of recipes for achieving certain code transformations. This is a fantastic pedagogical device: by showing you example after example, you can immediately see the real-world benefit. Then once you’ve seen a few examples of refactoring, it starts to become natural to come up with more. The same tricks turn up again and again: introduce new way of doing things, migrate consumers from old way to new way, remove old way. This general scheme applies to all sorts of refactoring from simple method renames to system-wide reconfiguration.

    Data makes everything harder

    The easiest part of the system to refactor is the application server. Given a good load balancer, a properly stateless application with a good healthcheck, and a good database, you can create a completely new design of application server, add it to the load balancer pool, test everything still works correctly, and remove the old design.

    Refactoring the system at the data layer is much harder. Migrating data from one DBMS to another is painful. Splitting a single database server shared between two applications into two database servers is painful. For certain types of migration, the ability to put the system into read-only mode might be necessary (or at least incredibly helpful).

    You won’t get the design right first time

    The reason that refactoring is important is that you won’t get the design right first time. You will inevitably need to make changes to accommodate new information. And so when you wistfully imagine the system as you’d like it to be, you have to discover the small steps which will get you there from the system you currently have. The way to reach a truly great design is to start with an okay design and evolve it.

    I don’t claim any of this is new

    I’m sure that these strategies have been used by operations people for years, without calling it “refactoring”. My main point is that the activities of code refactoring and system refactoring are based on the same underlying principles, of identifying small changes which do not change external behaviour in order to improve internal structure.

  • Keeping a record

    Posted on 15 February 2014

    People who work with me soon get to know that I keep meticulous records of meetings, conferences, user groups, and so on. If you haven’t seen examples, here are my notes from FOSDEM 2013 and devopsdays Paris: as you can see, I write down a lot of material. I do exactly the same at meetings in my workplace, at user groups, and suchlike.

    I do this because, like many people, I think that meetings are an incredibly unproductive way to spend time. I therefore want to ensure that the meetings that we do have count for something. I have attended too many meetings at which a decision was made but not written down, only for the team to later forget what the decision was, and have to call another meeting to discuss the whole issue again. I have also seen people attend meetings where they had nothing to contribute; they only wanted to listen to the discussion and understand the outcome.

    Taking comprehensive notes solves problems like these. Decisions made are recorded, action items are recorded along with who has taken responsibility for them, and to ensure everyone has a common understanding of what took place, I email a copy of my notes to all attendees. Those who were interested in the outcome rather than the discussion no longer have to attend the meeting and can instead skim my notes afterwards, saving them time.

    Having said that, I rarely read the notes that I take. I find that the act of writing things down has benefits in itself: it forces me to engage more – I can recognise when I’m drifting off because I stop taking notes; it forces me to restate the ideas I’m hearing in an abridged form, which means I’m not just passively listening but actively encoding. The combination of these effects mean that even if I were to save all my notes to /dev/null, it would still have been a massively beneficial activity, increasing my understanding and recall of what was said.

    How I take notes

    When taking notes, speed is everything. You can’t ask those present to slow down so you can capture all the detail. It’s not possible (or not possible for me, anyway) to take a full transcript of who said what, so some editorial judgement is necessary. I’m constantly trying to digest what is being said into key points to write down, filtering out fluff, rhetoric, repetition, and extraneous detail.

    I am a touch-typist. I don’t think it’s possible to take comprehensive notes without this. At a conference, I will be looking at the speaker and the slides while typing; having to look at the keyboard to find keys would slow me down far too much. Sadly, I don’t think there’s any silver bullet here; learning to touch-type is a long and difficult process, but it’s necessary to be able to take comprehensive notes at a live event.

    I take notes using org-mode for emacs. I find org-mode a great fit for note-taking, because it is fast, intensely keyboard-focussed, and provides sufficient structure to be able to manage my notes. Here’s an example of the notes I might take in a meeting:

    * meeting to discuss design of widget processing
    * present
      - me
      - fred
      - jim
      - sheila
    * introduction by fred
    ** we need to perform widget processing
       - required by user journey x
       - enables users to deal with widgets but export as doohickeys
       - some prior art, but none seems to exactly match what we need
    * sheila
      - what about a third party that does widget processing as a service
        - expensive, but probably cheaper than development effort to
          reimplement it
        - jim: not sure we can justify sending data about sensitive
          widgets to third parties
        - sheila: we could probably anonymise user data before processing
    * fred
      - I've previously used pydgets, a python widget-processing library
      - sheila: all our code is ruby/rails/sinatra at the moment
        - fred: we could create a separate python service for it
          - communicate using HTTP and json
          - sheila: I'm skeptical that our ruby folks would be happy
            writing python
          - me: we should investigate anyway, see if it's the right tool
            for the job
    * Actions
    ** spike: investigate pydgets and python RESTful services - fred
    ** spike: investigate anonymisation - sheila

    Org-mode is great because the source format is human-readable. I don’t need to tell the recipients that my notes are in org-mode; I just paste them into an email and send them verbatim.

    The headings and bullets are my bread-and-butter of note-taking. Org-mode provides shortcuts for easy and fast manipulation of headings and bullets: C-RET new heading, M-RET new bullet, M-up/M-down move heading or bullet up/down, M-left/right promote/demote heading, C-c - convert heading to bullet, C-c C-w refile heading under different toplevel heading. These manipulation functions mean that I don’t have to stick to taking notes in chronological order; I can easily move notes around to other parts of the file.

    The first heading I make is always a list of people present; the last heading is always a list of actions. It’s worth remembering that most meetings are called to make decisions about what actions to take; by taking notes on actions, I am focussed on ensuring that the meeting isn’t drifting into endless discussion and is actually making decisions. If someone says they will do something, I capture that as a new heading and refile at the bottom, keeping all actions together for easy review.

    Why I take notes

    I take notes primarily for my own benefit. By taking notes, I force myself to listen actively, not just hearing the words that are being spoken but grappling with the concepts and ideas being talked about, trying to reword them into a concise form by getting at the essence of what’s being said. This can confuse people: once, as I started taking notes, a speaker told me “you don’t need to take notes, I’ll send you my slides”; I responded “it’s just what I do”. The psychology literature talks about note-taking having the complementary functions of “encoding” and “storage” – I primarily use notes for encoding, and treat storage as secondary.

    I also take notes so that we have a record of decisions made. If there’s any confusion later on, I can return to my record and consult it. There should also be a record of the constraints that were considered when making the decision, so that we can later determine if they are still valid or if the decision should be revisited.

    Finally, I take notes and send them out to those present because my colleagues keep giving me good feedback about them. This feedback is invaluable because I almost never read my own notes, so the only sense I get of how useful they are to read is from other people’s reports.

    The way I take notes affects the way I participate in meetings. If I can see from my notes that we haven’t agreed on an action, I will push for a decision so that my “actions” heading starts to fill up. Sometimes I create an “agenda” heading near the top as a scratch space for notes I want to talk about but haven’t yet had the opportunity. My note-taking habit has got to the point that I can’t imagine not taking notes in a meeting anymore; it just has so many benefits that it would seem ludicrous not to.

    Incidentally, this post is the first in my blog written using org-mode. Previously I have been writing in markdown, because that’s the default for jekyll, but now that I’ve got org-mode working I’m thoroughly converted. Click on the source link to the left to see the source on github.

  • The git pickaxe

    Posted on 09 February 2014

    I care a lot about commit messages. I try to write them following Tim Pope's example, using a short summary line, followed by one or more paragraphs of explanation. It's not unusual for my commit message to be longer than the diff. Why do I do this? Is it just some form of OCD? After all, who really reads commit messages?

    The reason I care about commit messages is because I'm an avid user of the git pickaxe. If I'm ever confused about a line of code, and I want to know what was going through the mind of the developer when they were writing it, the pickaxe is the first tool I'll reach for. For example, let's say I was looking at this line from our puppet-graphite module:

    exec <%= @root_dir %>/bin/ --debug start

    That --debug option looks suspect. I might think to myself: "Why are we running carbon-cache in --debug mode? Isn't that wasteful? Do we capture the output? Why was it added in the first place?" In order to answer these questions, I'd like to find the commit that added the switch. I could run git blame on the file, to find the last commit that touched the line. However that leads to a totally unrelated commit that had nothing to do with my --debug flag issue.

    So I still want to find the commit that added that --debug switch, but git blame has got me nowhere. What next? It turns out there's an option to git log which will find any commit which introduces or removes a string from anywhere in its commit:

    git log -p -S --debug

    This will show me every commit that either introduced or removed the string --debug. (It's a slightly confusing example, because --debug is not being used as a command-line switch to git, but as a string argument to the -S switch instead. Nevertheless, git does the right thing.) The -p switch shows the commit diff as well. There are in fact a few matches for this search, but the third commit that comes up is the winner:

    commit 5288d5804a3fc20dae4f3b2deeaa7f687595aff1
    Author: Philip Potter <>
    Date:   Tue Dec 17 09:33:59 2013 +0000
        Re-add --debug option (reverts #11)
        The --debug option is somewhat badly named -- it *both* adds debug
        output, *and* causes carbon-cache to run in the foreground. Removing the
        option in #11 caused the upstart script to lose track of the process as
        carbon-cache started unexpectedly daemonizing.
        Ideally we want to have a way of running through upstart without the
        debug output, but this will fix the immediate problem.
    diff --git a/templates/upstart/carbon-cache.conf b/templates/upstart/carbon-cache.conf
    old mode 100644
    new mode 100755
    index 43a16ee..2322b2d
    --- a/templates/upstart/carbon-cache.conf
    +++ b/templates/upstart/carbon-cache.conf
    @@ -12,4 +12,4 @@ pre-start exec rm -f '<%= @root_dir %>'/storage/
     chdir '<%= @root_dir %>'
     env GRAPHITE_STORAGE_DIR='<%= @root_dir %>/storage'
     env GRAPHITE_CONF_DIR='<%= @root_dir %>/conf'
    -exec python '<%= @root_dir %>/bin/' start
    +exec python '<%= @root_dir %>/bin/' --debug start

    Now I know exactly why --debug is there, and I know that I certainly don't want to remove it. But what if my commit message had just been "Re-add --debug option"? I'd be none the wiser. This is why I care so much about commit messages: because I have the tools to quickly get from a piece of code to the commit that introduced it, I spend much more time reading commit messages.

    This example is also interesting because it raises another question: should this explanation have been in a code comment instead? The --debug flag is inherently confusing, and a comment could have answered my questions even quicker by being right there in the file.

    However, a 6-line comment in the file would be quite a bit of noise whenever you weren't interested in the --debug switch, whereas a commit message can be as big as it needs to be to make the explanation clear. Comments and commit messages can be complementary: there could be a one-line comment saying that --debug causes carbon-cache to stay in the foreground, and a more detailed explanation in the commit message. In some ways I see commit messages as a type of expanded commenting system which is available at your fingertips whenever you need it but automatically hides when you just want to read the code.

    A couple of small postscripts: I could have even narrowed down my search further by adding a path filter to my log command:

    git log -p -S --debug templates/upstart/carbon-cache.conf

    This search finds the commit in question instantly: it's the first result. But unlike the original git log command, it is not resilient against the file being renamed in an intervening commit. I tend not to use path filters for pickaxe searches, because I can normally find what I want easily enough anyway.

    The -S switch takes a string match only. If you want to match a regex, you can use the -G switch instead.
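
    The whole pickaxe workflow can be tried out in a throwaway repository. This is a made-up sketch (file names, commit messages and the flag are all hypothetical), not the real puppet-graphite history:

```shell
# Build a tiny throwaway repo, then use -S to find the commit that
# introduced a flag. All names here are hypothetical.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo 'exec carbon start' > upstart.conf
git add upstart.conf
git commit -qm 'Add upstart script'

echo 'exec carbon --debug start' > upstart.conf
git commit -aqm 'Re-add --debug option'

# -S matches commits that change the *number of occurrences* of the string
git log --format=%s -S'--debug' -- upstart.conf
```

    Only the second commit changes the number of occurrences of --debug, so only it shows up in the log output.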

  • Automating dnsmasq and resolvconf

    Posted on 07 November 2013

    I've been working a lot with dnsmasq for DNS forwarding recently, and have hit enough problems that I thought it would be worth writing about them.

    On my current project, we're using Ubuntu 12.04, which uses dnsmasq as a local DNS cacher and forwarder, and resolvconf (the service as opposed to the resolv.conf file) to manage DNS server configuration.

    dnsmasq

    Dnsmasq is a simple DNS forwarder. It proxies multiple upstream DNS servers, adds caching, and can even serve up A records from an /etc/hosts-style configuration file.

    Dnsmasq is configured by giving it an /etc/resolv.conf-style file with a list of nameservers. It will regularly poll this file for changes, and change its forwarding behaviour accordingly.

    Dnsmasq can also be configured to direct requests for particular domains to particular servers; for example, if you want everything in a particular domain to go to your internal office server, but everything else to go to public DNS servers, dnsmasq can do that for you.
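
    A hypothetical dnsmasq configuration fragment for that split setup (the domain and all addresses are made up for illustration):

```
# queries for the office domain go to the internal server
# everything else is forwarded to public resolvers
# never forward plain (unqualified) names upstream
```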

    Dnsmasq does NOT perform recursive DNS lookups; you will still need some form of recursive DNS server in order to achieve full DNS functionality.

    resolvconf

    resolvconf is part of the ubuntu-minimal install, which means that it's considered a pretty core part of the distribution these days. It's an evolution from the traditional /etc/resolv.conf file, which lists nameservers and search domains to use when resolving DNS names to IP addresses.

    You associate a nameserver with a particular network interface with a line such as:

    echo nameserver <ip address> | resolvconf -a IFACE.PROGNAME

    where IFACE is an interface, and PROGNAME is the name of an associated program. For example, dnsmasq registers itself with resolvconf under the lo.dnsmasq entry. You can remove entries with resolvconf -d. Generally, you don't call resolvconf directly; instead, it is called automatically as part of bringing up a network interface, or starting a DNS service, or similar.

    Each time an interface is added or removed, resolvconf updates associated configuration files by running scripts in the /etc/resolvconf/update.d directory; one of these, libc, updates the traditional /etc/resolv.conf file.

    The problem

    This is where I get to the problem I was facing. I was trying to install and configure dnsmasq in a puppet run. However, immediately after dnsmasq was installed, I would start getting name resolution errors, and the rest of the puppet run would fail. But by the time I had logged onto the box to investigate, name resolution was working again! What was going on?

    It turns out there's a bit of a race condition when starting dnsmasq, particularly for the first time. What happens is this:

    1. /etc/init.d/dnsmasq starts the dnsmasq daemon. Dnsmasq, in its default configuration on ubuntu, looks for upstream nameservers in /var/run/dnsmasq/resolv.conf. Dnsmasq checks for the file, finds it missing, and gives up for the moment. It will poll again later.
    2. Once dnsmasq has started and returned, the init.d script registers with lo.dnsmasq in resolvconf.
    3. resolvconf runs its updates, generating configuration for dnsmasq in /var/run/dnsmasq/resolv.conf and also changing the standard libc resolver file /etc/resolv.conf to refer only to, the local dnsmasq process
    4. At this point, the dnsmasq service is the sole DNS server that the local resolver can see, but dnsmasq itself hasn't yet seen any upstream nameservers. Therefore it can't give any useful answers. At this point, my puppet run starts failing.
    5. After a few seconds, dnsmasq polls the /var/run/dnsmasq/resolv.conf file again and finally finds the upstream nameservers left for it by resolvconf in step #3 above.
    6. I log into the machine, try to resolve a name, and everything works.
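
    Until the bug is fixed, one workaround is to have the provisioning run wait for name resolution to recover before proceeding. A minimal sketch, assuming a POSIX shell; the helper name and the example domain are made up:

```shell
# Poll until DNS resolution for a name works, up to a limited number of
# attempts, before letting the provisioning run continue.
wait_for_dns() {
  name=$1
  tries=${2:-30}
  i=0
  until getent hosts "$name" >/dev/null 2>&1; do
    i=$((i+1))
    if [ "$i" -ge "$tries" ]; then
      echo "DNS for $name still failing after $tries attempts" >&2
      return 1
    fi
    sleep 1
  done
}
```

    A wrapper script could then call something like `wait_for_dns puppet.example.com || exit 1` immediately after installing dnsmasq, bridging the window in which dnsmasq has not yet polled its upstream nameserver file.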

    I have filed a bug at launchpad to raise this issue.