[bip] agile software development

Andrew Dalke dalke at dalkescientific.com
Fri Aug 3 19:39:34 PDT 2007


On Aug 3, 2007, at 11:10 PM, Titus Brown wrote:
> -> The Wikipedia page claims that most people use "modified waterfall
> -> models."
>
> That's more or less what I meant; that, or BDUF,
>
> 	http://en.wikipedia.org/wiki/Big_Design_Up_Front

BDUF is not the same as waterfall.  The wiki page says it is
"often associated with waterfall".  It does depend on what you
mean by BDUF.  I assert that in real life it occurs about as often
as people actually use waterfall models.  Meaning, it doesn't.


McConnell talks about a "point of departure" spec.  This
is one I like.  It's the BDUF done to get an idea of what the problem
space and the solution space are like, and to get enough planned out
to know if something is feasible, identify some potential trouble
spots, and start long-range planning.  It changes as the project
progresses.

A-ha, that wiki page includes a quote which would imply
this is a "small design up front."  This fits in with the
idea that "waterfall" and perhaps "BDUF" are more like
straw-man arguments.  "Ahh, you weren't using *real* waterfall,
you were doing iterative development and just called it waterfall."
"Oh, you weren't doing *real* BDUF because you allowed changes
downstream to change the initial design."

> Here's one pointer to a very much BDUF project:
>
> http://www.fastcompany.com/online/06/writestuff.html

That's an article on space shuttle avionics software.  Perhaps
the most pointed-to example for this class of software.

My claim is that "no real project uses the waterfall design",
based on what I read from other people with the experience
to make that assertion.  Assume that BDUF is the same as
waterfall.  Is space shuttle avionics done with a waterfall
design?  Does it have BDUF?

That FC article specifically mentions that change requests
are part of the process.  Indeed, quoting from
   http://www.cert.fr/francais/deri/wiels/Publi/RE-99.pdf

> As an operational vehicle, the Space Shuttle regularly needs  
> updates to its flight software to support new  capabilities (such  
> as docking with the space station), replace obsolete technology  
> (such as the move to GPS for navigation), or to correct anomalies.  
> Software updates are known as Operational Increments (OIs), and are  
> typically completed approximately every twelve to eighteen months.  
> An OI will implement any number of change requests (CRs). Each  
> change request goes through a rigorous analysis and review process  
> before it can be approved for inclusion in an OI. A change request  
> typically consists of a selection of pages from current Functional  
> Subsystem Software Requirements (FSSR) specifications, with  
> handwritten annotations showing new and changed requirements.  
> Change requests vary in length from a few pages to several hundred  
> pages. Each change request is reviewed by a number of requirements  
> analysts, along with members of the IV&V team, culminating in a  
> formal requirements inspection. Following the inspection, the  
> change request may be rejected, revised for re-inspection, or  
> forwarded to the review board for inclusion in the current OI. The  
> Shuttle Avionics Software Control Board (SASCB) makes the final  
> decision whether to include each CR in the OI. Their decision takes  
> into account various factors, including total size of the changes,  
> relative priorities of the change requests, and interaction between  
> change requests. [...] documentation very well. The use of tables to  
> represent control functions and the ability to include real-valued  
> input variables directly in the model help to reduce the conceptual  
> distance between the existing documentation and the formal model.

Is that waterfall?  It's definitely iterative, which means it's
not pure waterfall.  Indeed, each change in "the trunk" appears to
be the result of many iterations on each "branch".  Though those
iterations might not be (are never?) written in the programming
language used in the trunk.

Is it BDUF?  If so, where's the front? There are lots of fronts.

It's more like "big design" than specifically "BDUF", no?

Quoting Titus:
> I believe (anecdotally) that software design for embedded auto systems
> use BDUF methodology.

I really don't know enough about that to comment.  I find it hard
to believe that there aren't highly iterative implementations there
as well.

Here's one relevant page:
   http://www.automotivedesignline.com/howto/showArticle.jhtml?articleID=166404279
> As shown below, automotive manufacturers today employ a variety of  
> tools and techniques to validate embedded software throughout the  
> design process. During the early stages of control development,  
> designers can use a Model-Based Verification environment to create  
> a model that represents the dynamic behavior of the plant and  
> simulate its response (in non-real-time) to an algorithmic  
> representation of the embedded control strategy. Rapid Prototyping  
> techniques allow designers to evaluate the real-time performance of  
> control strategies when they are connected to the actual plant, but  
> before anything is committed to code.
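The "Model-Based Verification" step in that quote can be sketched in a few lines: simulate a model of the plant's dynamic behavior responding (in non-real-time) to an algorithmic form of the control strategy, before anything is committed to embedded code.  The first-order plant and proportional controller below are hypothetical stand-ins for illustration, not anything from an actual automotive toolchain:

```python
# Toy non-real-time simulation of a plant model driven by a control
# algorithm, in the spirit of the model-based verification described above.
# Plant: dx/dt = (u - x) / tau.  Controller: u = gain * (setpoint - x).
# All names and parameters here are made up for the sketch.

def simulate(setpoint, steps=200, dt=0.05, gain=2.0, tau=1.0):
    """Return the plant state trajectory under the control strategy."""
    x = 0.0
    history = []
    for _ in range(steps):
        u = gain * (setpoint - x)    # algorithmic form of the control strategy
        x += dt * (u - x) / tau      # plant dynamics, Euler integration
        history.append(x)
    return history

trace = simulate(setpoint=1.0)
# A proportional-only controller has steady-state error; the loop should
# settle near gain/(gain+1) * setpoint = 2/3 here.
print(round(trace[-1], 3))  # -> 0.667
```

The point of the exercise is the same as in the article: you can learn that your control strategy never reaches the setpoint (or goes unstable) while it is still a model, long before it is code on a controller attached to real hardware.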


Freeman Dyson's "Disturbing the Universe", I think it was, makes
an analogy between motorcycles and nuclear power plants.  Motorcycles
are small, cheap to build, and not that dangerous.  As a result,
anyone could build one, and so there were many explorations of
motorcycle phase space.

Nuclear reactors, however, are big, expensive, and dangerous.
Most commercial power plants were second-generation designs,
so they are complicated beasts.  One exception was reactors for
subs.  The first was (so I'm told) a bad design for a reactor.  But
there was strong incentive to make better and better designs,
and more iterations.


OpenBSD has a very strong review process in place.  They also
develop tools to inspect their software, so when they find a bug
in one place they add to their inspection suite to see if that
category of bug exists elsewhere.
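That OpenBSD practice can be sketched as a tiny inspection suite that grows over time: each time a bug of some category is found, a detector for that category is added and run over the whole tree.  The regex patterns below are illustrative stand-ins, not OpenBSD's actual tools:

```python
# Sketch of a growing inspection suite: find a bug once, add a detector,
# then scan everywhere for the same category.  Patterns are hypothetical
# examples of "known bad" C idioms.
import re

# Each entry: (category name, regex that flags a suspicious line of C).
INSPECTION_SUITE = [
    ("unbounded-copy", re.compile(r"\bstrcpy\s*\(")),
    ("format-string",  re.compile(r"\bprintf\s*\(\s*[a-zA-Z_]\w*\s*\)")),
]

def inspect(source_text):
    """Return (line_number, category) for every line matching a detector."""
    findings = []
    for lineno, line in enumerate(source_text.splitlines(), start=1):
        for category, pattern in INSPECTION_SUITE:
            if pattern.search(line):
                findings.append((lineno, category))
    return findings

sample = "strcpy(dst, src);\nprintf(user_input);\n"
print(inspect(sample))  # flags both lines, by category
```

The interesting property is the feedback loop, not any one pattern: the suite encodes the project's accumulated bug history, so every past mistake becomes a permanent check against its whole class of relatives.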

Does anyone in this field do that?

I have an uncle who is a pilot.  I visited him and leafed through
the magazines he has.  One contained a summary of problems
that had occurred in flight over the previous month or so, with
details of what happened, what went wrong, the solution, and what
could have been done to prevent it.  That's a field with a strong
safety culture.  Can we have that in scientific software development?

How many here read RISKS?  How many have done post-mortems (or
post-project reviews, or whatever other name you use)?  How many
have read post-mortems from other projects?


Oh! Here's an interesting read:
   http://www.stsc.hill.af.mil/crosstalk/1995/01/Comparis.asp

It was written in 1995 and makes references to designs and
methodologies in use earlier.  Some interesting quotes:
> A methodology is composed of one of the software development models  
> used in conjunction with one or more techniques, i.e., methodology  
> = model + technique(s). The techniques of prototyping, cleanroom,  
> and object-oriented are ways to implement the waterfall,  
> incremental, and spiral models. These techniques may be mixed and  
> matched on a single project. Also, portions of a technique may be  
> used without using all aspects of that technique. This means that a  
> project using the spiral model may combine prototyping with object-  
> oriented analysis and design and also use cleanroom testing  
> techniques. Using the METHODOLOGY = MODEL + TECHNIQUE(S)  
> definition, there are more methodologies used than we have time to  
> identify. Therefore, the remainder of the discussion will deal with  
> models and techniques.

Continuing to quote that military reference:

> Because of the weaknesses shown above, the application of the  
> waterfall model should be limited to situations where the  
> requirements and the implementation of those requirements are very  
> well understood.
> "For example, if a company has experience in building accounting  
> systems, I/O controllers, or compilers, then building another such  
> product based on the existing designs is best managed with the  
> waterfall model ... ." [2, p. 30](3)
That reference is
[2] Blum, Bruce I., Software Engineering: A Holistic View, 1992.


That footnote is
(3) "The waterfall method is not recommended for major software- 
intensive Air Force acquisition programs."[14]

where
[14] Guidelines for Successful Acquisition and Management of Software  
Intensive Systems: Weapons Systems, Command and Control Systems,  
Management Information Systems, September 1994


Quoting that military reference:
> "Boehm's model has become quite popular among ADE (Aerospace,  
> Defense and Engineering) specialists, and is not so familiar among  
> business developers. ... It is particularly useful in ADE projects,  
> because they are risky in nature. Business projects are more  
> conservative. They tend to use mature technology and to work well- 
> known problems."
>
> "I [DeGrace] believe the spiral model actually is applicable to  
> many business applications, especially those for which success is  
> not guaranteed or the applications require much computation, such  
> as in decision support systems."[10, pp. 116-117]


Quoting that military reference:
> The first software development standard, DOD-STD-2167, was released  
> in June 1985 (see Table 6) despite the recognition that it was less  
> than perfect. DOD-STD-2167A followed closely behind in February  
> 1988 and was intended to be independent of any specific software  
> development methodology. The Foreword from DOD-STD-2167A states:  
> "This standard is not intended to specify or discourage the use of  
> any particular software development method. The contractor is  
> responsible for selecting software development methods (for  
> example, rapid prototyping) that best support the achievement of  
> contract requirements." Despite this intent, the standard was  
> interpreted by many to contain an implied preference for the  
> waterfall model (see Figure 1). Waterfall was the state of the  
> practice at the time.


Yes, you see here that waterfall was the "state of the practice."
I'm more inclined to believe that people thought that's what
they should do, but none actually did it.  I'm even going to assert
that the successful large projects that started with waterfall
went on to say "and we improved on it by" doing what that
military reference calls the "incremental model."


Quoting Titus:
> And I can definitely tell you that rapid prototyping is NOT taught in
> computer science departments... but that's a whole 'nother discussion
> that we can have over whisky (beer isn't strong enough) down the road.

And I claim that universities do a poor job at teaching
software engineering.  Just like English departments do a poor
job at teaching linguistics.  (See my previous post for why
I draw that parallel.)  People who become college professors in
CS rarely have the background (either theoretical or practical)
to teach that topic.

Schools also mostly do a poor job of teaching how to work on
teams.  They have a special term for students who get help
on homework assignments from other students: cheating.

(There are exceptions to all of these claims.)


> I passed your comments on to Chris Lee; I'm sure he'd welcome  
> patches or
> suggestions.

Thanks.  It's a lot of work on a code base I know little
about and won't be using for any of my work, so that's about
58th on my TO-DO list.  Unless there's funding?  Money does
tend to rearrange my list a bit.

Higher on that list is to give an example of my point that
most code in biology needs review.  I used pygr because it
came recommended by a couple of people on this list.  I'm
sure most here could do a code review and send in patches,
and that the time saved by using existing code outweighs
the time needed to do that review.

So this is my challenge.  Take a biology package you use
(like pygr, or biopython, or corebio) but which you didn't
write, review a module, and see how you might improve it.
You might even send in patches, but for now I'm more
interested in getting people to look at other code bases.


				Andrew
				dalke at dalkescientific.com





More information about the biology-in-python mailing list