
A Day with James Bach

I recently attended a one-day workshop on how to coach testers. The workshop was organised by Anne-Marie Charrett and run by her and James Bach.

James is a fascinating teacher and coach. He has a very forward style and will not shrink from challenging what you say, so you have to pay attention to what you say.

I have been an instructor for over a decade, teaching at Object Training. I regard coaching as the next step after teaching. Teaching is a structured activity with prepared lesson plans, presentations, exercises and a defined scope for each lesson. While teaching does involve a degree of spontaneous interaction with students, it is comparatively ‘tame’ compared to coaching. In coaching the student tends to set the parameters of the interaction by defining the problem domain they want to be coached about. It is in the nature of the situation that the student may not be able to present that problem domain concisely, so the coach has to work with the student to clarify it and guide them through the thought process towards a solution.

There are a number of techniques in coaching, but central to them is what James and Anne-Marie call the Energy. To the best of my understanding this refers to what the student cares about, what gets them interested. When you, as coach, explore the potential problem space you will see the student respond: when you get to the actual problem, the student's responses should exhibit a positive energy, while veering away from it results in negative energy.

James showed us an interesting picture, called the coaching space. It reflects that both the student and the coach hold a mental image of the problem space, which is communicated between the two using whatever media are available (face to face, Skype, or anything else). Given that communication is never comprehensive and human languages are by nature imprecise, the mental images of the two parties tend to differ. The coach needs to be aware of this and may have to explore specific parts of the student's mental image to gain a better understanding.

The session included a 30-minute Skype session with testers who had agreed with James to be coached as part of this course. Afterwards the Skype transcripts were forwarded to Anne-Marie and James for review and returned with comments. Needless to say they are tough critics, and it showed that I still have quite a way to go to become a good coach.

I particularly liked what James calls ‘heirloom’ exercises. These are exercises that he has had around for a long time and which have been refined through their use. Heirloom exercises are often simple, specific to a particular purpose, and effective. In my time as an instructor I have accumulated a few of these exercises myself, but had never thought of them in that way.

James’ and Anne-Marie’s coaching course reminded me of Stephanie Burns’ book ‘Artistry in Training’, which was formative for me when I became an instructor. Both introduced me to their respective subject matter (teaching and coaching) and gave me guidance that I expect will help me to develop and improve myself. Hopefully one day I will be a coach as well as an instructor.

Iqnite Australia 2011

I’m currently at the Iqnite conference in Darling Harbour. It has evolved from the software quality conferences of the past and is hosted by KJ Ross.
Listening to the keynote by Google and to speakers from Atlassian and eBay in the morning, and then to speakers from the financial services industry and some of the well-established outsourcing agencies, I was struck by how two radically different worlds exist out there. Without wanting to resort to the A word, it is obvious that radically new approaches are evolving.
New world: talking about open source, recognizing that test automation is software development, giving people the space to innovate.
Old world: test management maturity metrics, process definitions, ROI, commercial tools over open source.

Open source is a frequent topic at this conference. The spectrum of what is available is very large and the quality is reported to be very good. This is interesting as it starts to put pressure on commercial tool vendors, whose ability to charge large licence fees for their tools is increasingly being questioned.

Open source is highlighting a paradigm shift that is happening. The traditional test tool industry has characterized its tool users as non-developers, whereas the open source community clearly sees test automators as developers. Given that test automation has for so long been a non-starter, I see this change as a potential tipping point that will make test automation a mainstream practice in software development.

You have requirements

A while ago I had an email discussion floating through the company. One of the questions was: “When the designer gets the requirements what questions should they ask to make sure the project is a success?”

Here are some of the comments that came back:

General

This step of any development process is all about gaining knowledge without detail-level information about the solution. The key things we’re looking for are pitfalls: things that will bite us later on in the process. Every project is going to be subject to the classic forces: scope/requirements, budget, time. What can be learned early in the process about the volatility of the requirements will partially dictate the choice of methodology; the budget and allowed time are usually better understood in terms of constraints.

State of the Requirements

How were the requirements gathered? Was a structured approach used? Could requirements change during the design phase?

Are the requirements reviewed and signed off (an indication of quality, but no guarantee)?

Are they clearly written, concise and consistent? Is there a structure to the requirements that allows the reader to find their way through and get an understanding of the scope and completeness? Or are they written in a dense or vague way that makes it hard to understand what is there?

Check that all types of requirements are included:

  • Functional (maybe use cases)
  • Data/object Models
  • UI Models
  • Business Rules
  • Non Functional Requirements
  • External Systems

Are the assumptions and outstanding questions documented (this is a good thing)? Are they show stoppers (this is a bad thing)?

Has an architect been involved while the requirements were created?

Stakeholders – Who has say and sign-off, who has design input. Finding these people early is important, especially those that are of a combative nature.

Hard limits vs. Soft limits – Knowing which restrictions placed on a solution are set in stone and which are a bit flexible can dramatically alter the nature of a project.

Environment – On a technological level, providing the correct development and build environments can make a major difference to the pace of a project. Having a well-defined and implemented development (and deployment) environment is definitely worth the investment. In political terms it is important to maintain two key aspects of the work environment: people must be able to offer up new ideas and improvements to existing practice to be socialized by the team(s), and people must not be made scapegoats – there should be an environment which allows and rewards experimentation.

Pending changes to other projects (substitution, impacts): if any of the systems you interface with has pending changes / work in progress that will impact on your project, you need to know up front so you don’t have to repeat work. The problem you then face is coordinating with all the other teams you work with, but if they don’t plan sufficiently ahead this may be impossible to avoid.

Available resources (staff, cash)

Time for research and to do POC work / spikes – We know from project experience that doing proof of concept work and trying out design ideas on a small-scale to validate them ahead of implementation can dramatically improve the quality of estimates and prevent a team from pursuing a design that later proves to be flawed.

Pre-existing commercial arrangements / alliances that will impact your project

A most social day

Last Saturday I met with two friends, Dee and Peter to go for a paddle at Watson’s Bay to try out my new kayak: a Valley Aquanaut RM. I’d chosen it because I wanted a boat that I can use for instruction and be able to play in and around rocks and rebound.  Its hull is such that you can turn it on a dime (which you do lots when instructing on water) and it is easy to demonstrate the effect of various strokes. And being rotomoulded I don’t have to wince when I scrape over a bunch of rocks.

By the time we met at Watson’s Bay a bunch of friends had already shown up with their own plans for paddling on the day. We pushed off and played a bit in the rebound outside South Head. On the way back we ran into Mike, Jacqui and Claus and decided to join them to go over to North Head. There Mike continued on to Little Manly and the rest of us decided to turn back and go to the old Quarantine Station, which has great views and good coffee. On the way I managed to squeeze in a few gauntlets around the rocks, which sent the adrenaline up.

When we paddled back Mike rejoined us, and with him came Matt, who had just caught a tailor on his handline, updating his dinner menu. Back to Watson’s Bay for an hour of playing around, rolling and trying out kayaks. Afterwards, a picnic with seafood and coffee from Doyle’s.

All in all a most social day.

Restrict yourself

One of the most important assets of a BA is the ability to be thorough. However, there are times when this ability can be more of a liability.

Here is an example of a pattern I run into quite regularly:

A while back at a forum someone asked what features a requirements management tool should have. The person asking had in mind starting an open source project for such a tool. The responses started slowly but quickly became a veritable torrent of ideas. All good ideas, but they had the effect of extinguishing that little flame of initiative, as the poor person who asked the question was buried under too many features and requests. In fact, after the initial enthusiasm, no one came back to pick the idea up and progress it further. The initiative was well and truly dead.

There is always a delicate phase at the beginning of any initiative when motivation is fickle and commitment has not yet been given by potential participants. At this point it pays not to be thorough in delving into all that could be done. Instead you need to restrict yourself and focus only on the essentials:

  • think of only 3-5 features the new product shall have
  • what is the one item that will make it worthwhile?
  • and, most importantly, what can you leave out without damaging the integrity of the idea?

Overall it is about restricting yourself at this stage of the project. There is an interesting post over at Better Projects that talks about the A3 technique, which records the idea and all issues on a single A3 sheet. A simple concept embodying an important idea.

Requirements and your Business Case

Dealing with clients, I see a particular pattern in the early stages of a project. Having been stung in the past by cost blow-outs, many organizations now demand a business case to justify the funding of a project. A key part of that business case is a reliable cost estimate. The reliability of a cost estimate depends on the quality of the requirements model: the more detailed the requirements, the better the estimate. Or vice versa: the less detailed the requirements, the less reliable the cost estimate becomes.

The project sponsors who want to start their projects find themselves in a quandary: they know that to achieve a reliable cost estimate they need the requirements – but producing those would consume a significant part of the very project budget they are trying to get approved.

This situation leads to the practice of putting up a strawman budget. Because everybody knows the estimated budget is quite arbitrary, the steering committee that approves funding for the project is often led to make equally arbitrary cuts to it. In the end the project goes ahead and the project manager is faced with the need to actually make ends meet, quite often by negotiating feature reductions with the users.

An approach I have used in the past splits the project into two parts. Part one is the System Definition Phase, where we gather the requirements. This phase is sold on a time-boxed/effort-capped basis. Once the requirements have been produced, the customer can then estimate the development part quite reliably. This first phase not only produces requirements but also a high-level architecture. As a bonus you end up knowing your assumptions and risks very well.

This is all good, but how, you may ask, can you know what effort is required to define all the requirements? The short answer is: I don’t. I won’t know how long it will take to do all of the requirements. I do know, however, with quite reasonable reliability, how long it takes to gather enough requirements to do a reliable estimate. There is a big difference between the two.

The former (all requirements) is basically an open-ended affair. The more detail you create, the more questions appear. The more questions appear, the longer it takes. If you were to chart productivity in business analysis over time you would get a sharp bulge at the start and then a steady decline. Add to this the fact that requirements have a natural tendency to change over time: after a sufficient amount of time your speed of requirements definition becomes less than the rate of change of the requirements, and you have entered the domain of Analysis Paralysis.
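That crossover can be sketched numerically. This is an illustrative model only, with made-up rates, not measured data: definition productivity decays while the rate of requirements change stays roughly constant, and paralysis begins where the curves cross.

```ruby
# Illustrative numbers only -- not measured data. Definition productivity
# starts high and decays each week; requirements change at a steady rate.
definition_rate = ->(week) { 40.0 * 0.8**week } # requirements defined per week
change_rate     = 5.0                           # requirements changing per week

# Analysis paralysis begins once you define slower than things change.
paralysis_week = (1..52).find { |w| definition_rate.call(w) < change_rate }
puts "paralysis sets in around week #{paralysis_week}"
```

With these particular numbers the crossover lands around week 10; the point of the sketch is only that an open-ended requirements effort has such a crossover at all.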

What we do instead is focus on producing ‘just enough requirements’. In this context ‘just enough’ means the amount of requirements we need in order to determine a build approach and to be able to size it for costs. Doing ‘just enough’ requirements means we must cover the whole scope of the problem domain but dive in only so deep. The ‘how deep’ is a key question and part of the guidance I give to business analysts: when to stop refining a use case and move on to the next one. This also allows the BA to work iteratively: identify several use cases, then refine them later, and repeat for the next set of use cases.

On top of this sits a maturity matrix. It tracks how far individual use cases have matured, so even with larger numbers of use cases in a project (hundreds) we do not lose track of the big picture. A team of business analysts can regularly review its efforts and focus them so that the overall target of ‘just enough requirements’ is met.
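A minimal sketch of such a matrix might look like the following. The maturity levels and use case names here are invented for illustration; a real project would define its own.

```ruby
# Hypothetical maturity levels, from least to most mature.
LEVELS = [:identified, :outlined, :refined, :estimable]

# The matrix: each use case mapped to its current maturity level.
matrix = {
  "Register customer" => :estimable,
  "Process payment"   => :refined,
  "Generate report"   => :identified,
}

# Summarise how far the overall set has matured -- the big-picture view
# a team of BAs can review regularly to keep 'just enough' on track.
summary = matrix.values.tally
LEVELS.each { |level| puts "#{level}: #{summary.fetch(level, 0)}" }

# Use cases mature enough to be sized for costing.
estimable = matrix.select { |_, level| level == :estimable }.keys
```

The value of the matrix is not the data structure but the regular review it enables: the summary shows at a glance where depth is still missing across the whole scope.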

This approach allows you to quote a reliable time box for creating ‘just enough’ requirements. The size of the time box is driven by the scope description and the targeted reliability of the estimate.

For a fixed-cost project this phase can still be a major chunk of the overall project budget. So what happens when the business is not willing to fund it without a definitive business case?

Find out in my next blog.

Build Testability in

Something that really bugs me in software design is the fact that testability is rarely a requirement. Hence automated testing tends to use the GUI, which is akin to trying to drive a car entirely by looking through the rear-view mirror. As a result, most organisations find the value of automated testing not commensurate with the investment.
I’ve seen one application where testability was a requirement. You could drive the UI via a REST interface, and when you completed or ran a transaction you could intercept the data being sent to the backend, inspect it and manipulate it. Ditto for the data returned to the UI.
Because the test interface could be accessed with simple scripting languages such as Ruby, it was easy to build quick test cases.
The result was twofold: a well-designed test library for functional testing, and a slew of quick-and-dirty test cases put together by anyone in the team to see if something worked the way it was intended.

In usability research there is a concept of affordability, where the perceived effort to do something is weighed against the perceived value. Testing has a very abstract value and lots of negative connotations. There is very little room to improve the value perception, so it is better to reduce the perceived effort. Showing a tester or a developer a simple Ruby script to automate a test can be very powerful.
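A quick script of that kind might look like the sketch below. The endpoint, field names and port are invented for illustration, not taken from the application described above; the shape, though, is the point: a few lines anyone on the team could throw together.

```ruby
require "net/http"
require "json"
require "uri"

# Hypothetical REST test interface of the application under test.
BASE = "http://localhost:8080/test-api"

# Build the JSON body the UI would send when completing a transaction.
def transaction_body(amount:)
  JSON.generate(amount: amount)
end

# Drive the UI through the test interface by posting a transaction.
def submit_transaction(amount:)
  res = Net::HTTP.post(URI("#{BASE}/transactions"),
                       transaction_body(amount: amount),
                       "Content-Type" => "application/json")
  JSON.parse(res.body)
end

# Intercept the data that was sent on to the backend, for inspection.
def backend_payload(id)
  JSON.parse(Net::HTTP.get(URI("#{BASE}/transactions/#{id}/backend-payload")))
end

# Usage (with the application's test interface running):
#   txn     = submit_transaction(amount: 42.0)
#   payload = backend_payload(txn["id"])
#   raise "amount lost en route" unless payload["amount"] == 42.0
```

A script like this bypasses the GUI entirely, which is exactly what building testability in makes possible.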