I posed a question at Slashdot about software development methodologies, and got a very interesting, extensive reply.
My question sprang from my own experience with software development, involving “lots of conversations between developers and actual users, with notecards and pens”:
Nothing beats discussion for the kind of small-scale projects I’ve worked on. “It takes a whole village to write an application.”
However, I have to wonder how poorly it scales … I wouldn’t trust space shuttle development if it lacked extreme process control.
When does “takes a whole village” (development team) become “takes a city planner with hundreds of subordinates”?
I received this extensive, thoughtful reply from a Slashdot user I have since identified as Mike Charlton:
Scalability and process control are two different subjects, but I’ll try to answer what I think your question is.
I’m mostly familiar with XP. There are a lot of other so-called “Agile” methodologies, but XP has the most well-defined set of development practices. Other methodologies don’t specify what you should be doing, so a lot is left up to the individuals (which can be good or bad).
But just because it is “Agile” doesn’t mean it isn’t well controlled. For example, doing full XP you should be writing acceptance tests (both manual and automated) for all functionality (though I have rarely seen anyone actually do this). The story cards along with the acceptance tests form the requirements. The “Customer” (I call it the customer proxy, since they are rarely the actual customer) should be reviewing the application against the acceptance tests at the end of every iteration. Stories are marked off as they are completed. Regardless of what you use to write the story on (I often used a wiki), handing the card in, or setting the status to DONE, means it is finished. The iteration plan (or backlog in Scrum) gives your commitment for the iteration, and so on. From an ISO/CMMI perspective I can’t think of a single thing that is missing.
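To make those practices concrete, here is a rough sketch of the data model they imply: stories carry their acceptance tests and an estimate, the iteration plan is simply the set of committed stories, and DONE means finished. (This is my own illustration, not any real tool; all class and field names are invented.)

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    TODO = "to do"
    IN_PROGRESS = "in progress"
    DONE = "done"  # handing the card in / setting DONE means the story is finished

@dataclass
class Story:
    title: str
    acceptance_tests: list   # names of manual scripts and/or automated tests
    estimate_days: float
    status: Status = Status.TODO

@dataclass
class IterationPlan:
    """The iteration plan (backlog in Scrum) is the team's commitment."""
    stories: list = field(default_factory=list)

    def committed_days(self):
        # Total estimated effort the team has committed to this iteration.
        return sum(s.estimate_days for s in self.stories)

    def remaining(self):
        # Stories not yet marked off; reviewed against acceptance tests each iteration.
        return [s for s in self.stories if s.status is not Status.DONE]
```

The point of the sketch is how little machinery is involved: the “requirements” are just the stories plus their acceptance tests, and the commitment is just the sum of estimates.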
Of course many people view “Agile” as a synonym for “Ad Hoc” and proceed accordingly. This is unfortunate.
One of the knocks on XP is that it isn’t scalable. This is both true and false … if you are asking about scalability, I might be tempted to say that you have too many programmers and would be better off with fewer.
The problem isn’t in the amount of code you can generate. The problem is that it is very, very difficult to specify a lot of requirements at once and have it not turn into a [mess].
Your bottleneck is not in your programmers; it is in your customer proxy. Traditionally we ignore true specification (or user-centered design) and let the programmers make decisions. If the decisions are questioned at all, it is by the QA people (which results in a lot of pointless back-and-forth arguing).
Nobody has the big picture, because the millions of tiny decisions made every day are impossible for a single person to absorb. The cure for this is time.
Let the program take longer in development in order to understand what it is you truly need. Have someone play with the system and understand the implications of the changes before making the next step.
So, scalability is not really more of a problem for XP than it is for other methods. It’s just that the problem is more obvious.
If you really need more programmers working, then an issue tracking system is a decent replacement for index cards (although most issue tracking systems have difficulty assigning ordinal priority). Your requirements will still [prove unsatisfactory], though, just like other methods.
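His aside about issue trackers is worth making concrete. Priority buckets tend to collapse (everything lands in “high”), while the planning game needs a strict ordinal ranking in which exactly one item is next. A minimal sketch of the difference, with invented issue names:

```python
# Bucket priorities collapse: with three buckets, everything lands in "high"
# and you still don't know what to build first.
bucketed = {
    "high": ["login", "search", "export", "billing"],
    "medium": [],
    "low": [],
}

# An ordinal ranking forces exactly one answer to "what comes next?"
ordinal = ["login", "billing", "search", "export"]

def insert_at_rank(ranking, issue, rank):
    """Insert an issue at a specific position; everything below it shifts down."""
    ranking.insert(rank, issue)
    return ranking

# A new urgent item doesn't get a vague "high" label; it gets an exact position.
insert_at_rank(ordinal, "crash-fix", 1)
```

This is exactly what a stack of index cards gives you for free, and what most trackers make awkward.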
If you can partition your problem and find a group of application designers (i.e., subject matter experts who can tell you what your application should be doing) that can communicate with each other well, then you can easily scale XP. I think you can probably keep one of those designers (customer proxy) busy with 8 programmers (or 6 good programmers).
Depending on the problem, I suppose that would allow you to scale to 50 programmers or so. More than that will probably raise the risk unacceptably no matter what method you choose. But 50 programmers should be able to maintain a rate of about 30K lines of production code per week (fully test-driven). I have to say that I can’t imagine wanting to go faster than that … Finding 50 programmers who can do XP well might be rather challenging, though.
— wrook (134116) @ Slashdot
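Before moving on, a quick sanity check on wrook’s numbers (my arithmetic, not his): one customer proxy per 8 programmers and 30K lines per week across 50 programmers imply roughly six proxies and about 600 lines of tested production code per programmer per week.

```python
# Working through the figures in the reply above.
programmers = 50
proxies_needed = programmers / 8            # one customer proxy per ~8 programmers
lines_per_week_total = 30_000               # "about 30K lines of production code per week"
lines_per_programmer = lines_per_week_total / programmers
# roughly 6 proxies and 600 tested lines per programmer per week
```

The internal consistency of the figures is part of what made the reply convincing to me.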
I was so taken by the depth of thought, and the implicit personal concern, that I tracked down the author, sent a personal email, and received in reply an email that amounts to another generous essay, which I present here in full by permission:
Hi Karl. Sorry for the late response. I’ve been travelling over the holiday season. Thanks for reading my post. I often wonder if anyone gets any value out of the long rants I make. I’m “retired” from the software biz, so I’m not doing anything except my own stuff right now. But at one point I had been coaching a team of 40 non-colocated people on a project. It was rather interesting. We had about 15 programmers split into 3 teams; we also had a team of about 10 QA people, some documentation people, and the rest were UI/customer-proxy people.
It wasn’t exactly optimal, but it’s what I had to work with. People were used to their old way of working, so I had to keep their roles similar to what they had done before. The biggest change I made was to the QA team.
I had the customer proxies (ex program managers and marketing people) write stories. They were reasonably good at this since the initial stories didn’t have to be complicated (which would have been beyond them). We took them to a planning game every 2 weeks (every week would probably be better). QA came to the planning game too. Each story was explained, questions were asked, and the devs did estimates. QA also did estimates on how long it would take to write a manual test script (i.e., manual acceptance tests) for each story. Any story that needed to be split up again (which happened often) was punted to the next meeting (which is why one-week intervals are better).
My ex program managers were horrible at prioritizing work. They couldn’t put the stories in ordinal order (only high/medium/low priority, with everything at high). So I would prioritize the stories myself, tell them where the cutoff line was for the iteration (2 weeks), and let them freak out. After that they could understand what the most important stuff was. Don’t sweat it too much: if they see the iteration plan and are OK with it, then everything will be fine. Just try to have 1- or 2-week iterations so that they can change their minds frequently. Never add things to an iteration mid-stream; they only have a few days to wait until the next iteration anyway!
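That cutoff-line trick is simple enough to sketch: rank the stories, walk down the list accumulating estimates, and draw the line where capacity runs out. (A hypothetical illustration; the story names and capacity figure are invented.)

```python
def iteration_cutoff(ranked_stories, capacity_days):
    """Stories arrive in strict priority order as (name, estimate_days) pairs.
    Everything above the line fits the iteration; everything below waits."""
    used = 0.0
    for i, (name, estimate) in enumerate(ranked_stories):
        used += estimate
        if used > capacity_days:
            # The cutoff line falls before story i; the rest wait their turn.
            return ranked_stories[:i], ranked_stories[i:]
    return ranked_stories, []

ranked = [("login", 3), ("billing", 5), ("search", 4), ("export", 2)]
above, below = iteration_cutoff(ranked, capacity_days=10)
```

Showing the proxies the line, rather than asking them for priorities in the abstract, is what made the trade-offs real to them.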
Real requirements were written by QA and UI people. These were done *concurrently* with the code being written. The requirements were written in terms of manual acceptance tests. These are easy to write, so they are always finished before the code. But often the programmer would have to go to the QA/UI person and communicate what they were hoping to do or ask what they should do. This worked very easily and I didn’t have to do anything special to solve problems.
The programmers were not allowed to move on to the next problem until they had *personally* run the acceptance tests from UI and QA. QA didn’t run them at this point. If QA hadn’t finished the tests, the programmers were expected to help with them (it never happened).
At the end of the iteration when everything was integrated, QA ran all the tests once more. If anything failed, then the story was either fixed or removed for the next iteration (usually the latter). It very rarely happened.
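That discipline, a written script, the programmer running it personally, and QA re-running the same script at integration, can be sketched as a tiny checklist runner. (My illustration; the script steps are invented.)

```python
# A manual acceptance test is an ordered script of steps with expected results.
script = [
    ("Open the export dialog", "dialog shows current project name"),
    ("Click Export with no file selected", "Save button stays disabled"),
    ("Export to a writable folder", "file appears and a confirmation is shown"),
]

def run_script(script, observe):
    """Walk the script; `observe` reports what actually happened at each step.
    The story counts as done only when every step matches its expected result."""
    results = [(step, expected, observe(step) == expected)
               for step, expected in script]
    passed = all(ok for _, _, ok in results)
    return passed, results

# The developer runs this personally before picking up the next story;
# QA runs the identical script once more at end-of-iteration integration.
passed, results = run_script(script, observe=lambda step: dict(script)[step])
```

The same script serving as both the requirement and the test is what keeps the two from drifting apart.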
My intent was that the customer proxies would take the build at the end of the iteration and play with it, looking for problems, etc. This was very naive. As ex program managers, they felt that they shouldn’t have to use the app at all. And on the other hand, the QA manager was worried that QA was only testing functionality that we were pretty sure worked, not doing overall system testing. This was my biggest mistake.
I recommend having an entirely separate team of QA people whose job is to take each iteration build and find out what [needs work]. This is what they are used to doing anyway. Don’t call these bugs!!! They are just ordinary stories. If you call them bugs, then people get all weird about assigning them higher priorities than new work. In reality, we can live with some bugs, we are only trying to maximize useful functionality. UI specialists should also review each iteration looking for work flow improvement opportunities.
Documentation also worked concurrently with other development. The doc people have to talk to the UI and QA people to get the details of the stories. As the stories change during the iteration, they are expected to change their documentation. While QA is doing the integration testing, they check their documentation against the actual implementation and make changes if necessary. Getting them to do this was very difficult. They had real difficulty imagining what a feature would look like without seeing it for real. In the end I encouraged them to make outlines of the documentation and then gave them access to the nightly builds so that they could see the stories as they were being checked in. With practice they could get everything done on time, but you need to give them a lot of support at the beginning. Try not to put too much pressure on them.
They also bristled at having to rewrite documentation when things changed from iteration to iteration. But by showing them how the programmers did the same thing, they realized this was just an opportunity to improve their writing. In the end, I highly recommend doing documentation this way as it gets the doc people integrated into the process and makes them feel a part of what’s going on.
Finally, one of the best things about doing things this way was that our release planning was trivial. Every iteration build was a release candidate. Everything was done including the documentation. We could demo every iteration build (every two weeks in our case — but try to get it down to 1 week).
We could send “betas” to our customers as well for feedback. When the customers were at the point of “We want to buy this”, we could ship it in a matter of weeks.
Anyway, I’ve ranted enough given that you didn’t ask for this. But if you have any questions, please give me a shout.
— MikeC [1/4/2011 11:13 PM]
Thanks again, Mike. I find your experiences full of insight, and I’m pleased to share them on the web.