Pairing in History

I was reminded of a post I made after visiting Bletchley Park and the National Museum of Computing some years ago and realised it needed moving to my own blog. So here it is:

While visiting Bletchley Park and the National Museum of Computing I discovered some examples in history when pairing really worked.

The first example was in the “tunny room”. Messages from the German high command were transcribed here during World War II for processing in the decryption machines, after being intercepted at Knockholt. When a transmission was received on the incoming telegraph, it was transcribed onto tape by two operatives so that it could be fed into the machine. As the tour guide pointed out:

“two girls were used for this, with the assumption being no two girls will make the same mistake”.

Reducing mistakes is one of the key benefits of pairing in software development teams, as it prevents defects appearing further downstream. In the case of the Bletchley workers, the letters they were typing would have been Lorenz code, a raw ciphertext received via headphones, so a misplaced letter could have thrown the decryption out completely, causing delays and misinformation.

The second pairing example was in the post-war section of the museum, home to a restoration project for a 1950s punch card archiving system. The room, which does the work that a database might do today, is full of oily, complicated-looking machinery. The first machine was the puncher. This was used to mark the cards with the data that needed storing – in the example at the museum, these were orders for a sales team. The pairing occurred here, at the point of data entry. Two operatives would have slightly different machines; one would punch a hole slightly above the number mark on the card, and one slightly below. A valid mark was therefore represented by an oval shape where the two punches intersected. A second machine, the verifier, was then used to ensure there were no errors on the cards. It detected single holes in the punch cards and flagged these as errors by outputting the values on a pink card. Cards that only had oval holes went on to the next stage. We still use pink cards on our Kanban wall to represent defects.

In this example the pairing was inherently built into the system through the mechanics of the machines involved and must have prevented a great deal of erroneous data getting through.
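The verifier's check can be sketched in code. This is a hedged, illustrative model in Python – the card representation, function name and sample data are all invented for the sketch, not taken from the real machines:

```python
# A simplified model of the dual-punch verification described above.
# Each operative independently transcribes the same data; a position is
# valid only where both transcriptions agree (the intersecting "oval").

def verify(punches_above, punches_below):
    """Return the positions where the two transcriptions disagree -
    the 'single holes' the verifier would flag on a pink card."""
    errors = []
    for position, (a, b) in enumerate(zip(punches_above, punches_below)):
        if a != b:
            errors.append(position)
    return errors

# Two independent transcriptions of the same sales order.
operator_one = [3, 7, 1, 9, 4]
operator_two = [3, 7, 2, 9, 4]  # a slip at position 2

print(verify(operator_one, operator_two))  # [2]
```

Only positions where both operatives made the same mistake would slip through – exactly the failure mode the “no two girls will make the same mistake” assumption was betting against.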

So, if pairing was so useful during the war, and built into machines in the 50s, why isn’t it the de facto standard for software development today? Why hasn’t it simply grown in popularity since the 50s? I’d not heard of pair programming until I’d been working in the industry for two years, even having been at university for three. There are many benefits to pairing in addition to the reduced defect rate, yet the practice doesn’t seem to have stuck around.

There does seem to be a resurgence of interest in pairing as agile software development gathers pace and I’m sure most of the luminaries of the agile world have been doing it for years. But why hasn’t it always been widespread?

How Monkeys Can Help Software Development

When researching my talk on F1 and Paediatric Intensive Care I began to appreciate the influence that published papers in different fields of expertise could have on software development. It was then that I was reminded of a lightning talk that I saw at XP2010. This used research in behavioural economics to hypothesise that certain innate human characteristics are at odds when estimating and prioritising feature development. As we develop and deploy on a feature level at 7digital it made sense to investigate this further, so I read up on the source material and produced a short 15-minute talk which I’ve documented on the 7digital developer blog:

http://blogs.7digital.com/dev/2012/02/28/how-monkeys-can-help-software-development/

Tests Aren’t Just For Testing: Stop Using The Debugger

We had a successful final hour of the day today. Our task for the past few weeks has been to read extra data from the XML metadata files we receive from the major music labels. One of the major labels provides their data with slightly askew logic, different from all of the others. They prefer to omit an element to show that a given release no longer has the value previously in that element – this requires us to know the previous state of a release to compare with the changes. This is quite a common computing problem but, when working with the type of legacy code project that we do, it can become confusing.
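The “omitted element means the value was cleared” rule can be sketched as a diff against the previous state. This is an illustrative Python sketch – the element names and dict-based model are assumptions, not the real metadata schema or our VB.NET code:

```python
# Sketch: when a label omits an element to signal removal, we can only
# detect it by comparing the current delivery against the stored
# previous state of the release.

def cleared_elements(previous, current):
    """Elements present in the previous delivery but omitted now are
    treated as 'value cleared', not 'unchanged'."""
    return {name: value for name, value in previous.items()
            if name not in current}

previous = {"Title": "Album A", "ParentalWarning": "Explicit"}
current = {"Title": "Album A"}  # ParentalWarning omitted to clear it

print(cleared_elements(previous, current))  # {'ParentalWarning': 'Explicit'}
```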

We’d been trying to solve this new problem for most of the day and almost had it complete, except two tests still needed to pass. We couldn’t get the logic right so that both would pass; it was either one or the other – and as these were acceptance tests, the amount of code they covered was quite large. Add to this that we’re working on a VB.NET project that hasn’t been updated for around four years and you’ll see why frustration started to build.

Debugging

My teammates Tony and Will had been trying to find out what was going on using the debugger, and hadn’t managed to sort it out when they asked for an additional pair of eyes. I heard about their efforts, and briefly looked at the code, but immediately thought back to my days spent with Agile Coach Kevin Rutherford. Kevin used to tut disapprovingly every time we started the debugger, suggesting that we were just missing a test instead. I remember thinking that the fact we were missing a test was quite obvious, but the steps to take to find that missing test were the hard part.

The Saff Squeeze

Today we used techniques from Kent Beck’s writings on the “Saff Squeeze”. Tony posted a quick blog about the resources we found after we’d solved the problem – it turns out we’d followed the process reasonably closely. We identified in the controller method two loops that could be extracted into pure functions (J. B. Rainsberger has helped us a lot in our legacy code adventure). Once extracted we made these public and static (so they used no internal state) and we could then write a test directly against this functional part of the code. Passing in a couple of simple int arrays proved the code worked as expected, so we moved on to the next loop. We followed the same process again, and a couple of simple tests asserted that these functions were doing what we wanted. This meant the problem must be with the caller, and more specifically, the value of the parameters we were passing in.
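The extraction step above can be illustrated in miniature. This is a hedged Python sketch, not our VB.NET code – the function name and the int-array logic are invented to show the shape of the technique:

```python
# Sketch of the Saff Squeeze extraction: a loop buried in a controller
# method becomes a pure function (no internal state, output depends
# only on its inputs) that can be tested directly.

def matching_indexes(previous_values, current_values):
    """Extracted loop: return the indexes where both int arrays hold
    the same value."""
    return [i for i, (p, c) in enumerate(zip(previous_values, current_values))
            if p == c]

# A direct test of the extracted logic with simple int arrays,
# bypassing the rest of the controller entirely.
assert matching_indexes([1, 2, 3], [1, 9, 3]) == [0, 2]
print("extracted function behaves as expected")
```

Once a test like this passes against the extracted function, the fault has been squeezed out of that code and must live in the caller – which is exactly what we found.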

A quick look at how the variables used as parameters were built up, and another quick change to the test inputs, proved that the problem lay with those parameters. We’d filtered our input list incorrectly, forgetting to coalesce nulls to false – an edge case our existing tests for the class had missed.

So, with that in mind, we wrote a new test to ensure the nulls were replaced with a value of false; the parameters to the extracted functions were now correct and our two acceptance tests passed. After a bit of refactoring and exploratory testing tomorrow this MMF will be done.
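The missed edge case looks roughly like this. A hedged Python sketch – field names and data are invented, and in Python `None` is already falsy, so the explicit coalesce mainly documents intent (in VB.NET a nullable Boolean needs the coalesce explicitly):

```python
# Sketch of the bug: filtering a list on a flag that can be null/None.
# The fix is to coalesce None to False so the edge case is handled
# deliberately rather than by accident.

def active_releases(releases):
    # (r["active"] or False) coalesces None to False - the fix above.
    return [r for r in releases if (r["active"] or False)]

releases = [{"id": 1, "active": True},
            {"id": 2, "active": None},   # the edge case we'd missed
            {"id": 3, "active": False}]

print([r["id"] for r in active_releases(releases)])  # [1]
```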

Test Rather Than Debug

If I’d just strolled over to my teammates and said “stop using the debugger and write more tests” they’d have just ignored me. By breaking down the code and isolating the logic into pure functions, which we could then test, we’ve seen directly the value we got by adding these tests. The best part of it all, too, is that these tests will now last for the lifetime of this code – the debugging session, once over, can add no further value.

Kevin also used to say “if you think a private method should be public, you’re probably missing a class”, and he’d be right about the two methods we extracted here. The methods we made public so that we could test them were already static, so following Fowler’s “Move Method” refactoring we could safely move them to their own respective classes. These two classes now have their own behaviour, helping us meet the Single Responsibility Principle and use Dependency Injection.
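The end state of that refactoring can be sketched as follows. This is an illustrative Python sketch under assumed names (`ReleaseComparer`, `ReleaseController`, etc.) – not our actual classes:

```python
# Sketch: two formerly-private static helpers moved into their own
# classes, each with a single responsibility, then injected into the
# controller rather than buried inside it.

class ReleaseComparer:
    @staticmethod
    def changed(previous, current):
        """Keys whose values differ between the two release states."""
        return [k for k in previous if previous.get(k) != current.get(k)]

class ReleaseFilter:
    @staticmethod
    def active(releases):
        return [r for r in releases if r.get("active")]

class ReleaseController:
    # Dependencies injected, so each collaborator can be tested
    # (or stubbed) on its own.
    def __init__(self, comparer, release_filter):
        self.comparer = comparer
        self.release_filter = release_filter

    def process(self, previous, current):
        return self.comparer.changed(previous, current)

controller = ReleaseController(ReleaseComparer(), ReleaseFilter())
print(controller.process({"a": 1}, {"a": 2}))  # ['a']
```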

Hopefully we’ll see this quick episode today as a good reason to use this technique again, and I thought I’d blog about it as evidence for others too.

F1 and Paediatric Intensive Care

A joint presentation between Paul Shannon (7digital, Agile Staffordshire) and Dr. Harriet Shannon (Great Ormond Street Children’s Hospital Institute of Child Health) on findings from research by Ken R. Catchpole PhD, Marc R. de Leval MD, Angus McEwan FRCA, Nick Pigott FRCPCH, Martin J. Elliott MD FRCS, Annette McQuillan BSc, Carol Macdonald BSc and Allan J. Goldman.

The research focussed on reducing the amount of time to transport patients from the operating theatre to intensive care, and how this relates to Agile teams. The intensive care team wanted to improve patient recovery and manage risk by removing bottlenecks and defining responsibilities for emergency situations.

Intensive care teams attended sessions with an F1 pit crew to understand how their roles, responsibilities, communication techniques and safeguards could help meet their goals. Parallels can be drawn between these aspects of the F1 team, the intensive care team and agile teams; similar roles (Chief Engineer, Head Surgeon, Product Owner) and safeguards (replacement wheel guns, spare heart monitors, automated acceptance tests) are two of the key successes. Conversely, some of the more successful practices adopted by the intensive care team (implementing more detailed processes rather than relying on communication and collaboration) conflict with current agile thinking.

This has been presented at both Agile Staffordshire in June 2011 and XP Day in November 2011.

Paper

It appears Wiley Online are publishing the article that our presentation is based on for free so it is available for viewing as a PDF or HTML without the need to sign up:

Patient handover from surgery to intensive care: using Formula 1 pit-stop and aviation models to improve safety and quality

Presentation

I used Prezi to create a simple presentation for the session – split into two distinct parts with the presentation of research first followed by some key questions and talking points for a discussion. The prezi is available via the prezi web site.

Content

Harriet had already written up a large amount of her part of the presentation so I thought I’d convert it into a blog post so that those that missed either of the sessions don’t have to miss out.

Introduction

Great Ormond Street Children’s Hospital (GOSH) is an international centre of excellence in child healthcare and treats 175,000 children per year, in over 50 different specialities. Together with its research partner, the UCL Institute of Child Health, it forms the only academic biomedical research centre specialising in paediatrics in the UK.

One of these specialities is heart surgery. GOSH sees about 500 cases per year, whether it’s a heart transplant, re-wiring blood vessels, patching up holes or other forms of cardiac support.

Patients undergo heart surgery in the operating theatre in the north wing of the hospital. They are then transferred onto a trolley and taken along some corridors and up in some lifts, before reaching the intensive care unit to start their recovery.

What Are We Transferring?

The patient, all the technology and support (ventilators, monitoring lines, infusions of medicines). Also, knowledge about the patient, about any complications found during the operation, any specific instructions for the medical staff on the ICU – it is this combination of tasks that makes the process susceptible to error, at a time when the patient is most vulnerable.

Aims of the Study

The primary aim of the study was to improve safety and quality of care by observing the Formula 1 pitstop crew.

Why F1?

The pit crews are a multi-professional team coming together to perform a complex task (change tyres and refuel). There are huge time pressures involved (a pitstop should take less than 7 seconds). Errors cannot be tolerated and often result in disastrous consequences.

This fits in with the handover team as they are also multi-professional: surgeon, medics, nurses. The time pressure is of paramount importance, with a maximum of 15 minutes for each handover. Even small errors can have larger consequences as the patient’s health is at risk.

The F1 Team Task

  • The “Lollipop man” has overall control of the pitstop
  • The car goes up on the jack
  • Wheel nuts come off, followed by the old wheel
  • New wheel goes on and secured with the new wheel nut
  • Driver’s visor is cleaned
  • The car is fuelled
  • Car is lowered from the jack
  • Lollipop man gives the all clear to go

Changes Adopted

  • LEADERSHIP – previously it was unclear who was in charge; now the anaesthetist took overall charge while the surgeon moved around to gain an awareness and overview of the situation.
  • TASK SEQUENCE – a clear rhythm and order of events was adopted whereas previously tasks were inconsistent and non-sequential. The main tasks were broken into three distinct areas:
    • Equipment and Technology
    • Information handover
    • Discussion and plan
  • TASK ALLOCATION – everybody now knows what they are doing whereas it was previously informal and erratic. Everyone is responsible for a single, well-defined task.
  • COMMUNICATION – only one person speaks at a time, and during information handover, this is in a specific order. The group is given equality so that nurses can easily communicate with consultants or surgeons, spotting mistakes sooner – previously imposed hierarchies meant people in different roles did not communicate.
  • TRAINING – the clinical team ensured that their process could be easily taught so that the high turnover of staff could be combated with efficient and thorough training. The training is now done in 30 minutes and a laminated check-list is kept with every patient.

Results of Adopting Changes

  • Technical errors were found to be down by a third
  • Handover omissions were halved
  • Duration of handover decreased
  • Teamwork was perceived as the single most significant factor

Discussion

How does this apply to agile software development teams? Following the presentation session at both Agile Staffordshire and XP Day we had a short discussion. It appears that there are similar roles, situations and practices in software development that align with the findings. In the interests of brevity I thought I’d write up the findings from the discussion in a separate post – I’ll create a link here once it’s published.

A Sea of Blue: Spotting Abstraction with a Theme

The other week I inadvertently coined a new phrase – it may actually be two phrases – which comes from our recent concentration on abstraction in our controllers and main calculation engines. I just thought it was one of the silly things I say to help turn programming concepts into easier-to-use language – much like we use design patterns to communicate intent when pairing or discussing architecture. I was encouraged to blog about it by my colleague Shaun, who thinks this might go global.

Phrase 1: “A Sea Of Blue”

This is what we see when we open up a controller or engine that has a number of service objects injected in the constructor. The name came from the fact that we work in C#, in Visual Studio, and our theme colours interfaces in light blue. It is generally regarded as a good thing as the class in question will be more testable: the services you don’t need can be easily mocked. We can return specific messages from some services, verify that others are called in a specific way and provide a mock with no setup/verification to skip over unwanted behaviour.

We’ve found that sometimes the sea of blue means that you have a long parameter list. We usually combat this by defining DDD style aggregates for service objects with related behaviour and injecting an aggregate service instead – maybe these are more a lake of blue – or maybe I’m getting carried away.
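The aggregate-service idea can be sketched briefly. This is an illustrative Python sketch – the service names (`PricingService` and friends) and the aggregate are invented examples, not our real C# classes:

```python
# Sketch: three related injected services grouped into one aggregate,
# shortening the controller's constructor parameter list from a
# "sea of blue" to a single dependency.

class PricingService: ...
class CatalogueService: ...
class TerritoryService: ...

class ReleasePricingContext:
    """Aggregate of related services, injected as one dependency."""
    def __init__(self, pricing, catalogue, territory):
        self.pricing = pricing
        self.catalogue = catalogue
        self.territory = territory

class ReleaseController:
    # One aggregate parameter instead of three separate services.
    def __init__(self, pricing_context):
        self.pricing_context = pricing_context

context = ReleasePricingContext(PricingService(), CatalogueService(),
                                TerritoryService())
controller = ReleaseController(context)
```

The controller still reaches the same collaborators, but its constructor now advertises one cohesive concept rather than a long list of loosely related ones.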

Phrase 2: “Making a Bluey”

This one is probably less likely to catch on, especially as most people think of something completely different when you say “bluey”. In other words, extracting an interface, or Ctrl+Shift+R and “Extract Interface” if you want to be fancy. We’ve had discussions recently about whether we sometimes do this too soon – do you need an abstraction unless you need two concrete implementations? I often argue that the second implementation you have is the mock implementation you use in tests. My colleague (Shaun again) blogged about the way we should mock roles rather than types, and one of the things we’ve discussed before is that you should understand why you use a tool like Moq by actually implementing a mock/stub of the class rather than using Moq. This makes it more obvious that you now have two versions of the abstraction in existence, and therefore the previous argument no longer stands.
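Hand-rolling the test double makes the “second implementation” argument concrete. A hedged Python sketch – the interface and names (`ReleaseRepository` etc.) are hypothetical, standing in for a C# interface and a Moq mock:

```python
# Sketch: writing the stub by hand instead of using a mocking library.
# The abstraction immediately has two implementations in existence:
# the production class (not shown) and this test double.

from abc import ABC, abstractmethod

class ReleaseRepository(ABC):
    """The extracted interface - the 'bluey'."""
    @abstractmethod
    def find(self, release_id): ...

class StubReleaseRepository(ReleaseRepository):
    """The second concrete implementation: the one the tests use."""
    def __init__(self, canned):
        self.canned = canned
        self.requested = []   # records calls, like a mock's Verify

    def find(self, release_id):
        self.requested.append(release_id)
        return self.canned

stub = StubReleaseRepository({"id": 42, "title": "Test"})
print(stub.find(42)["title"])   # Test
print(stub.requested)           # [42]
```

Writing it out like this once or twice shows exactly what Moq generates for you, and why the extracted interface was never really implementation-free.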

I like the concept of emergent design and the use of test-driven design to define the API of a class. I think this extraction of an interface, so that it can be mocked effectively and used without worrying about the concrete implementation, is an effect of the tests driving the code. The resultant API is cleaner, the classes can be well tested and your code can follow the SOLID principles.

The Theme

I’ll get our very latest version of the theme available soon on the company blog, but for now you can try the old version, which works well in Visual Studio 2008. Because of the OpenType font restrictions in WPF, and therefore Visual Studio 2010, you can no longer use Inconsolata, and some of the colours need updating:

http://codeweavers.wordpress.com/2010/05/27/a-better-colour-scheme-for-visual-studio/

Our new theme includes updates for 2010 and ReSharper 5 too.