Archive for April, 2011
Here is Part 2 of my notes on Lisa Crispin’s talk on Agile Testing. If you haven’t already, go catch up on Part One.
Lisa noted that her team stopped committing at the sprint level. They just work hard, avoid wasting time, and focus on delivering the software. This works because they stay transparent, so the customer can see that the work is happening.
Teams need slack: time to learn and experiment, time to innovate, and time to catch up on and move to the latest technology.
Automated tests need as much care and feeding as the code.
She noted that learning the business helped cut down the time spent on production support. They found scenarios where they could automate support tasks, or where they were even solving the wrong business problem. Lisa gave an example where a user kept requesting a report, and it was being delivered as the team understood it, but it took sitting down with the end user to understand the report as the user actually intended it.
Lisa made reference to look at: Daniel Pink and intrinsic motivators, The Agile Samurai by Jonathan Rasmusson, Jim Highsmith and Israel Gat and their research into measuring technical debt, and her article Selling Agile to the CFO.
The quote of the evening seemed to be: “If it doesn’t have to work, you don’t have to test it.”
Emphasized that QA shouldn’t be treated as separate from development; QA time is part of development time.
Lisa pointed out that the most value was not in the actual integration tests, but was in the communication between the developers and testers that resulted from the interaction.
If you have too many things going on at the same time, you task switch too much, and the result is that you have a hard time predicting when you will be done.
She encouraged us to get away from labels and just try to deliver the best value and highest quality software that we can.
Encourage cross-pollination across different teams in the area; you never know where new ideas will come from. She talked about how she brought back the idea of an impediment backlog from a visit to the UK. When she took this idea back to her team, she noticed that just making the impediments visible helped the team address those issues. (This reminded me of the Craftsman Swap that both Obtiva and 8th Light encourage, as well as Corey Haines’ journeyman tour.)
This past Wednesday, Apr-20-2011, I attended the DFW Scrum meetup with guest Lisa Crispin, @lisacrispin, presenting over Skype, and I managed to take a wonderful 7 pages of notes in my composition book on her presentation. Because of this, I will be breaking my notes up into a number of posts to make them more digestible. I hope I didn’t butcher her talk too much, as I was busy trying to keep up with all of the gems she was throwing out to us. Apologies to Lisa if I did.
The big thing she started with was that before a team tries to go off and make any decisions, or do anything, they need to answer the question: “What does a commitment to quality mean?” Only once that is answered can they proceed to improve the quality of their product.
On Reducing Show-Stoppers
Here are the steps Lisa’s team took to reduce the number of show-stoppers in their product.
- First, they set up the basics: Continuous Integration and a dedicated test environment.
- Once they had those in place, they set up a police light for show-stoppers: anytime someone reported a show-stopper, that person had to turn on the light. This had a twofold effect; it made the business person look silly if the bug was really trivial, and it got annoying for the team if the light was constantly on.
- Development started TDDing their code. She made a quick side note that TDD is hard to learn (really, any test automation is hard to learn), and pointed out that it took the developers 8 months to get over the hump of TDD.
- In the meantime, they wrote manual test scripts over the critical parts of the application. It was painful, and a great motivation for automating tests.
- Got UI based automated tests running.
- Worked to replace the UI tests with automated functional tests below the UI. Lisa mentioned that her team used FitNesse.
- They started with a happy path case; after they had that passing, they would then add tests around the boundary and error-condition cases.
- She noted that it took lots of baby steps over 8 years with a commitment to testing.
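The happy-path-first progression above can be sketched with a small example. The function below is purely hypothetical (not from the talk); it shows one happy-path check written first, with boundary and error-condition checks layered on afterward:

```python
def discounted_price(price, percent):
    """Apply a percentage discount (hypothetical example function)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

# First automated test: the happy path.
assert discounted_price(200.0, 25) == 150.0

# Added later: boundary cases at the edges of the valid range...
assert discounted_price(200.0, 0) == 200.0
assert discounted_price(200.0, 100) == 0.0

# ...and finally an error condition.
try:
    discounted_price(200.0, 150)
except ValueError:
    pass  # expected: invalid percentages are rejected
else:
    raise AssertionError("expected ValueError for percent > 100")
```

Starting with the happy path gets a thin, working safety net in place quickly; the edge cases then accumulate around it over time.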
Testing is Not a Phase
The goal is a short feedback loop, as it is easier to recall the code an hour later as opposed to a month or two later. She noted that testers may be against this at first since it means testing the same thing multiple times, but that is important to shortening the feedback loop and improving the quality. I would also personally venture that it would help emphasize the importance of getting tests automated against a baseline set of expected functionality.
Lisa advised against calling a story done until all of the exploratory testing is complete.
She then pointed out some things to watch out for when planning. Watch out for overcommitting, since commitments usually don’t take into account the testing activities and anything they uncover. Also watch out for testing estimates that are not in line with the development effort/estimate: if the testing effort is 2X the development effort, development might be missing something.
I will be posting part two soon as this was only two-and-a-half pages of the seven pages of notes.
I have about a thirty-minute commute in each direction, so last summer, after I got my iPhone, I started listening to podcasts on my way to and from work. Here is the list of podcasts I have been listening to over the last 8 months or so. I am always looking for good technical podcasts to fill the commute time, so if anybody has other recommendations, throw them in the comments and let me know.
- Deep Fried Bytes
- Herding Code
- ElegantCode cast – No longer active in its same form, but has been replaced by the next podcast.
- .NET Rocks
- Polymorphic Podcast – Hasn’t been updated recently but was interesting nonetheless.
- Teach Me to Code – Combination of guest interviews and personal musings.
- coderpath – Infrequent posting schedule, now that one of the co-hosts hosts the Pragmatic Podcasts.
- Rubiverse Podcast – Hasn’t been updated in a while, but had some interesting guests.
- RailsCoach Podcast – Same host as the Teach Me to Code podcast above.
- Pragmatic Podcasts – Podcasts by the Pragmatic Publishers.
- Distributed Podcast – Focused on CQRS
- Improving Podcasts – Discussions about Agile
I started thinking about some of the bigger names in the developer community and how polarizing they can be due to their hard-line positions on topics. One of the topics that came to mind was Test Driven Development, and how its advocates almost always have a strict stance on the correct way to approach it: outside-in or inside-out; only one assertion per test, or assert one logical concept; mocks vs. stubs; state vs. behavior; TDD or BDD, or whether there is really even a difference.
My wife pet-sits, and while she cooks she likes to watch shows about pet training, similar to how one might listen to podcasts on the drive to work. I will occasionally overhear or catch parts of these shows myself, usually while helping her in the kitchen. I also recently read Switch: How to Change When Change is Hard, by Chip and Dan Heath, which has a section on the importance of reinforcing positive behavior when trying to encourage change and establish habits. Thinking about the hard-liners and TDD, I realized Kent Beck created the perfect “training clicker” for developers. Whether this was intentional at the time is something I will leave for him to answer.
To train by positive reinforcement, one has to capture the desired behavior and reward it immediately. When you test drive your code, you are encouraged to run your tests after each change to see if the change works. Since unit tests are supposed to be fast, this gives you immediate feedback on whether what you did worked.
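As a minimal sketch of that loop (the `add` function is a hypothetical stand-in for real code under test), Python’s built-in `unittest` gives exactly this kind of instant verdict:

```python
import unittest

# Hypothetical code under test, written to make the test below pass.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# In a real workflow you would run `python -m unittest` after every small
# change; here the suite is run programmatically so the feedback is inline.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
)
# The runner prints "OK" when everything passes, or "FAILED (failures=1)"
# the moment a change breaks the behavior.
```

That near-instant "OK" after each tiny change is the reward being delivered at the moment the behavior occurs.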
The majority of test runners use one of two words, depending on the result of the run: success or failure. These two words are emotionally charged. Combine that with the fact that they are usually printed in all caps and followed by a number of exclamation points, and you get output like:
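A hypothetical sketch of that kind of summary line (real runners each word this differently; this toy reporter is purely illustrative):

```python
# Toy reporter mimicking the charged vocabulary of test-runner summaries.
# Not quoting any real tool's output.
def summarize(passed, failed):
    if failed == 0:
        return "SUCCESS!!! {} tests passed.".format(passed)
    return "FAILURE!!! {} of {} tests failed.".format(failed, passed + failed)

print(summarize(10, 0))  # SUCCESS!!! 10 tests passed.
print(summarize(9, 1))   # FAILURE!!! 1 of 10 tests failed.
```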
Can’t you just see the emotions getting charged?
Add to this a graphical user interface for the test runner, or console add-ons that color the text green for success and red for failure depending on the end state of the run, and the words above gain even more emotional resonance.
How is that for evoking an emotional response? I believe I have even heard Kent Beck talk about the thrill of seeing the status bar turn green.
Also, proponents of TDD encourage small units of work: when you work in minimal units of change, you know almost exactly what caused a test to fail. There is a hidden effect to this, though. When you work in small units between test runs, the behavior is reinforced even more frequently, ingraining it more deeply. Do this enough and it will eventually turn into a habit. And our self-serving egos love to rationalize why our habits are the right thing to be doing, lest we allow ourselves to realize we might be acting wrongly.
And I do not think this just applies to those that are strong proponents for TDD. Do we ever consider that someone who is a strong opponent may have been negatively reinforced by TDD? Might they have tried on their own with no guidance and gotten frequent feedback of failures? Maybe they tried at the wrong level of abstraction, or on a codebase that was not designed with testability in mind. Maybe the test runner just kept giving them negative reinforcement on what they were doing until they decided that TDD is a waste of time.
I am putting this idea out there not to cast judgment against TDD, as it is a practice that I believe has a large amount of value and one I would love to get good at, but as something to think about. Maybe it will help each side see why the other might feel the way it does about TDD.
I would love to know your thoughts on this.