Tuesday, January 29, 2008

SOX Compliance and the Waterfall Method

So here I am, working away at my job, coding in a vacuum as always, a development team of one. Some things never change. But other things, inevitably, do.

Our company was recently purchased. The new company has grand plans to eventually go public. With its eyes set on that prize, it has hired a consulting firm to help it achieve SOX compliance. This firm (which shall remain nameless) is busily churning out reams of process drafts to help us in that endeavor and submitting them to us for approval. It was only a matter of time before the SDLC for software development arrived on my desk for review.

Now, for those not familiar with how things run at our company, I'll simply refer you to this post, which rather succinctly sums it up. While some things have changed, most things, by and large, remain status quo. I have managed to convince them of the value of hiring temp testers prior to releasing builds, so we've shown moderate improvement there. But I'm still wearing tons of hats, and driving the entire development process single-handedly. And the company adamantly refuses to hire any other developers to help out. It's also worth noting that since the acquisition, the number of new software projects piling up on my to-do list is rapidly approaching double digits. So any process that these guys throw at me is going to affect me, and it's going to affect me pretty damned profoundly.

You can imagine my utter shock and amazement when the plan that was presented to me for review and acceptance clearly stated that we were to implement, in excruciating detail, the Waterfall Method.

This presented several problems to me right off the bat:

  1. Whoever presented this plan is clearly unaware that the waterfall method doesn't work. The very man who first described it, Winston Royce, pointed out that it doesn't work, and suggested an iterative model as a superior alternative. See the actual article for proof.
  2. There is no way that we'd be able to implement that process with a development staff of one person. The process outlined requires that the roles are separately defined and filled by distinct individuals. We don't have individuals to fill those separate roles, and the company refuses to hire them.
  3. Even if we did implement the process, the timeline to implement software solutions for our customers would become so bloated that the customers would drop us like a rock. Our biggest customer demands a release every three months. If we adopted the Waterfall Model as it's spelled out in the SOX-compliant process they submitted, it would take three months just to spec out the iteration. Not that the model, as written, permits iteration at all.

Consider this quote, from 2004, for crying out loud:

Asked for the chief reasons project success rates have improved, Standish Chairman Jim Johnson says, “The primary reason is the projects have gotten a lot smaller. Doing projects with iterative processing as opposed to the waterfall method, which called for all project requirements to be defined up front, is a major step forward.”

In his blog entry, Waterfall Method: A Colossal Blunder, Jeff Sutherland points out the following interesting tidbits in his comments:

The Waterfall process is a "colossal" blunder because it has cost 100s of billions of dollars of failed projects in the U.S. alone. Capers Jones noted 63% failure rates in projects over 1M lines of code in 1993. By the late 1990's, military analysts were documenting a 75% failure rate on billions of dollars worth of projects. In the U.K. the failure rate was 87%.

...

Let me reiterate, for projects over $3M-$5M, the Waterfall has an 85% failure rate. For those projects that are successful, an average of 65% of the software is never used. The Waterfall is a colossal blunder. The most successful Waterfall company I have worked with had a 100% Waterfall project success rate with on time, on features, and on budget. This led to a 100% failure rate in customer acceptance because the customer's business had changed or because the customer did not understand the requirements.

In his article, Improve Your Odds of Project Success, hosted at SAP NetWeaver Magazine, David Bromlow provides a chart showing how the Waterfall Method makes it difficult to start effectively managing project risk until much later in the project than more agile methodologies do.

In their article, From Waterfall to Evolutionary Development (EVO), Trond Johansen and Tom Gilb had this to say:

After a few years with the Waterfall model, we experienced aspects of the model that we didn’t like:

  • Risk mitigation was postponed until late stages;
  • Document-based verification was postponed until late stages;
  • Attempts to stipulate unstable requirements too early: change of requirements is perceived as a bad thing in waterfall;
  • Operational problems discovered too late in the process (acceptance testing);
  • Lengthy modification cycles, and much rework;
  • Most importantly, the requirements were nearly entirely focused on functionality, not on quality attributes.

Others have reported similar experiences:

  • In a study of failure factors in 1027 IT projects in the UK, scope management related to Waterfall practices was cited as the largest problem in 82% of the projects. Only approximately 13% of the projects surveyed didn’t fail (Taylor 2000);
  • A large project study, Chaos 2000 by The Standish Group, showed that 45% of requirements in early specifications were never used (Johnson 2002).

Finally, I'll offer this, from the article Proof Positive by Scott Ambler in Dr. Dobb's Journal:

Agility’s been around long enough now that a significant amount of proof is emerging. Craig Larman, in his new book Agile and Iterative Development: A Manager’s Guide (Addison-Wesley, 2003), summarizes a vast array of writings pertaining to both iterative and incremental (I&I) development, two of agility’s most crucial tenets, noting the positive I&I experiences of software thought leaders (including Harlan Mills, Barry Boehm, Tom Gilb, Tom DeMarco, Ed Yourdon, Fred Brooks and James Martin). More importantly, he discusses extensive studies that examine the success factors of software development. For example, he quotes a 2003 study conducted by Alan MacCormack and colleagues, to be published in IEEE Software, which looked at a collection of project teams of a median size of nine developers and 14 months’ duration. Seventy-five percent of the project teams took an iterative and incremental approach, and 25 percent used the waterfall method. The study found that releasing an iteration’s result earlier in the lifecycle seems to contribute to a lower defect rate and higher productivity, and also revealed a weak relationship between the completeness of a detailed design specification and a lower defect rate. Larman also cites a 2003 Australian study of agile methods, in which 88 percent of organizations found improved productivity, 84 percent experienced improved quality, 46 percent had no change to the cost of development, and 49 percent lowered costs. He also cites evidence that serial approaches to development, larger projects and longer release cycles lead to a greater incidence of project failure. A 2001 British study of 1,027 projects, for example, revealed that scope management related to waterfall practices, including detailed design up-front, was the single largest factor contributing to failure, cited by 82 percent of project teams.

So, with all this overwhelming information at our disposal (which is just the little bit I could scrape up with Google in about an hour), and years of historical evidence that proves empirically that Waterfall doesn't work, why on earth would you impose it as the one and only process to be used for all projects, regardless of size or complexity, across your entire organization?

It's like voluntarily picking up a cursed +4 Vorpal Sword of Mighty Cleaving: it chops your own head off the moment you touch it.

It's sheer, absolute lunacy. Particularly in our case, where we lack the time, the resources, or the desire to acquire the resources to properly implement it as it's written. We'll be bogged down in a bureaucratic quagmire of Dagoban proportions.

You'll have to forgive me if this seems like a rant. But that's exactly what it is.

It might be time to brush off that resume. Sometimes, enough lunacy just piles up that you start to realize that there's no one behind the wheel who has any firing synapses in the brain.

Friday, January 25, 2008

Is VB.NET vs. C# Really Just Syntactic Sugar?

I recently read somewhere, as I have read before, that there aren’t any really compelling differences between C# and VB.NET. As has often been repeated, the differences all really boil down to “syntactic sugar.” C# is nice and terse, deriving its tight syntax from C and C++, while Visual Basic uses verbose language in an attempt to achieve greater clarity. Once the compilers get the code, though, it’s all supposed to be the same MSIL that gets generated, because you’re targeting the same .NET Framework.

So, it’s been asked, why would you choose one over the other?

That’s a fairly intriguing question. I’ve been working with VB and VB.NET for a really long time, and I’ve also had the opportunity to work with C, C++, Java, and C#. I like them all. I’d have to say that you can’t really beat VB for getting something up and running really damn fast.

But I’ve started to get this really deep-seated gnawing in the pit of my gut about what kinds of bad habits I’ve picked up over the years. VB has a reputation for doing things “automagically” to ease your life for you. Implicit type casting, dynamic variable allocation, case insensitivity, and a host of other little time-savers are designed to shield you from the nitty-gritty details of brain-cramping compiler complexities.

As a thought experiment, I took a large code base here at the office and ran it through C-Sharpener, a utility that converts VB.NET code to C#. Now, as a rule, I figure I write fairly safe code. I try to avoid reliance on the Microsoft.VisualBasic namespace, and use the Framework code instead. I always use Option Strict On (except for one particular class that used Reflection), and always explicitly define my variables. I’m a huge fan of type safety, so that wasn’t a concern to me.

What I didn’t expect to find were the things C# complained about: things I’d been doing that it told me, in no uncertain terms, were foolishness.

For instance, in Visual Basic, this is perfectly acceptable:

Imports System.Diagnostics
Dim log As New EventLog("Application", Environment.MachineName, "MyApp")
log.WriteEntry("MyApp", "Message Text", EventLogEntryType.Information)

(Ignore the crappy code. Just focus on the point I’m making.)

This will get you a wrist-slap from the C# compiler. Why? Because that particular overload of the WriteEntry method is static. You can’t invoke static methods from an instance variable in C#. The compiler flatly refuses to let you do so. Visual Basic, on the other hand, thinks that’s just fine and dandy; it resolves the issue on the fly for you.
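
For comparison, here's the shape C# insists on; a minimal sketch, assuming the "MyApp" event source is already registered:

using System.Diagnostics;

// The static overload must be called through the type, not through an instance.
EventLog.WriteEntry("MyApp", "Message Text", EventLogEntryType.Information);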

Does that sound like syntactic sugar to you?

In Visual Basic, this is just fine and dandy:

If CInt(txtQuantity.Text) Then
   ' Do something spectacular
End If

Visual Basic helpfully converts the result of CInt to a Boolean to evaluate the If...Then statement. If it’s nonzero, you get True and something spectacular happens. In C#, you get a lovely compiler error about not being able to cast an int to a bool. Why? Because an int isn’t a bool, stupid!

"Yeah, yeah. So what? But I always want to do that." Good. So prove it. Explicitly cast, and for Pete’s sake work with the right data type. It shows me that you’ve thought about that when you wrote it. Visual Basic doesn’t force you to make your intent clear. C# does.

if ( 0 != int.Parse(txtQuantity.Text) ) {
  // do something spectacular
}

Again, does that sound like syntactic sugar to you? Remember, intent != syntax.

In Visual Basic, you can do this for days and the compiler will pat you on the back while you do it:

Public Function FooBar() As Integer
   Dim result As Integer
   Return result
   ' Do some more real work--that's unreachable
   Return result
End Function

Does the compiler care? Nope. Not a peep. C#, on the other hand, gives you this nifty warning: “Unreachable code detected.” Then it gives you the file name and line number where it’s at. It’s like your best friend saying, “Hey man, you really don’t want to do that.”
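
Here's a rough C# equivalent of that function, just to see the compiler's reaction (the = 0 is only there to satisfy C#'s definite-assignment rules):

public int FooBar()
{
  int result = 0;
  return result;
  // Do some more real work--the compiler flags this as unreachable
  return result;
}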

There’s no way that’s just syntactic sugar.

So here I am, looking at this project that I’ve converted, and I’m both pleased and shocked. Pleased because the number of conversion issues and errors is relatively minor. Shocked because I found myself doing things that I didn’t think I was doing. They just crept up on me and seeped in, like bad habits all too often do.

I’ve been wanting to make the switch from VB to C# for some time now. Doing this conversion turned out to be a good thing for one very compelling reason: it opened my eyes to the mistakes I’ve been making, the bad habits I’ve adopted. I’m sold on C# now as my full time language. I’ll miss the speed of development of VB, but if slowing down means I write higher quality code that contains fewer bugs that have to be squashed later at higher cost, isn’t that worth it?

In the end, the point of this post is this: C# and VB are not distinguished simply by syntactic sugar. The power of their compilers and the strictness of their adherence to OO principles also separate them. I’d hazard a guess that it’s C#’s strictness that makes its compiler so much more powerful than VB’s. I certainly don’t see messages about unreachable code, expressions never being of the provided type, static method invocation, and so forth from VB.

So please. Don’t over-simplify the issue. View the languages for what they are, and use the one that’s appropriate for what you’re doing and how you work. For me, I’m making the switch. It makes sense for me. It won’t for everyone. But I am, by definition, an obsessive-compulsive control freak. I demand to know what I’m doing wrong, and then I want to ruthlessly correct it. And I can’t reasonably ask for a harsher taskmaster at this point than an unforgiving, absolutist object-oriented compiler.

Thursday, January 24, 2008

Certified at Last!

Okay, so it's not my Microsoft certification yet. But it's something:

NerdTests.com says I'm a High Nerd.  What are you?  Click here!

 

Being a big fan of measurable, quantifiable scores, I'd say it's nice that when someone asks, I can give them hard numbers to justify my claims when I say "Yes, I am, in fact, a nerd. Thank you very much for asking."

This ought to look really good in my email signature:

Michael Hofer
High Nerd
One More Pointless Blog

I can't wait to use it!

Tuesday, January 22, 2008

Heath Ledger Has Died

I was cruising the City of Heroes forums when I was stunned to read that Heath Ledger has passed away.

As many know, Heath had just wrapped up his portrayal of the Joker in The Dark Knight. His passing will make it very eerie to watch him on the screen when the movie comes out.

For those who don't know much about him, check out his IMDB profile. Heath wasn't just another pretty face around Hollywood. He picked his projects carefully to avoid being pigeonholed. He'll be missed.

Monday, January 14, 2008

Curious Perversions in Usability

We've all seen them, and we've all used them: applications foisted upon us by the well-meaning management masses who wanted us to conform to the standard in order to boost our productivity, or the latest whiz-bang website promising to revolutionize its niche market. While the product itself might actually solve a unique problem, or offer a plethora of enticing, well designed features under the hood, its user interface frustrates, confuses, obscures, and clutters.

When it comes to user interfaces, drill this one simple idea into your mind: simple, clean interfaces will win out over flash, pomp and circumstance every single time. Why? Because a user interface should get out of the user's way: it should not impede them, confuse them, obscure the information they need to find, or be cluttered with crap they're really not interested in.

We all have different views about what makes any given piece of software more usable. But certain things tend to peeve users fairly consistently. Here are mine when it comes to Web pages:

  • Don't make me wait. Don't make the mistake of thinking that performance isn't a part of your user interface. There are always numerous ways to display the same piece of information, and some are faster than others. Given a choice between having to wait for a Flash or PDF download of a pretty picture, or a flat GIF/JPEG, which do you think most users would prefer? (And if you think Flash hasn't been proposed for this, think again.) Use the smallest, most compact presentation format that will get the job done right.
  • Don't make me scroll. Especially not horizontally. Smaller pages work better. Horizontally scrolling pages are counter-intuitive, and people tend to have a hard time shifting into a mode where they're comfortable scrolling in that direction. Sometimes you simply can't get around it, but that's the exception rather than the rule. In those cases, under no circumstances should you move the main navigational controls off the screen. In Web applications, embed the scrolling content in scrolling DIVs (or other suitable controls) to ensure that users can still reach your navigational controls without having to page through the document.
  • Don't make me squint. Use a reasonable font size that even the visually impaired can comfortably read. Better yet, use a font size that scales when the user chooses a different size in the browser. Don't force your font size on the user; not everyone has 20/20 vision.
  • Don't make me guess what language you've written that document in. Use a clear, legible font. You may like your decorative fonts, but they're not suitable for body text, forms, or general deployment on Web pages. Most users won't have them, and the page won't look the same without them. Use standard fonts.
  • Don't make me wonder if there's something written in any area of the page or screen. Don't use dark text on a dark background. Don't use light text on a light background. Strong contrast enhances legibility.
  • Don't hide important information from me. Place important information at eye level. Use font weights, color, and styles to emphasize important information. Place this information prominently on the page, where I can easily see it. Don't obscure it in the page.
  • Don't hyperlink everything on the page. A hyperlink should indicate that there's something worth investigating. If everything on the page is hyperlinked, hyperlinks lose their value, and I'll tend to ignore them. Hyperlink the important topics. If you need to hyperlink lots of topics, provide a section at the bottom of the page called See Also or References and include those links there.
  • Don't obscure or complicate hyperlinks. If you change the style of a hyperlink so that I don't know it's a hyperlink, I won't know what to look for. Don't overly complicate them. Hyperlinks are an established navigational paradigm for the Web (and even desktop software) and everyone knows what they are and how they work. Leave them alone. Users already know how to use them.
  • Don't make me jump through hoops to find the commands or features I need to use. Don't invent an entirely new way of navigating your web site or application. There are a number of existing navigational paradigms that are well established and with which users are very familiar: drop down menus, tree views, bread crumbs, tabs and commands, and so forth. Don't confuse users by making them learn something completely new.
  • Don't surprise me by reconfiguring the user interface when I do something. If I click a button or a menu command and the entire user interface changes, or entire menus disappear, we've got a problem. The user interface, and the navigational system in particular, needs to be consistent and predictable. If it's not, users will be playing a constant guessing game about what they can and should do next. Users playing a guessing game are dangerous users.
  • Don't baffle me with technical jargon or confusing messages.  When something goes wrong, or when I've done something wrong, communicate it clearly and concisely. Tell me what I can do about it. Recover gracefully. Don't just throw up some message box that announces "An error occurred. Press OK to continue." Duh. What should I do next? Should I tell someone? If so, whom? Is my data safe? Do I need to start over?
  • Use consistent language. Don't call it Cancel on one screen and Abort on another. Don't use Logon Name on one screen and Sign In on another. Be consistent. Establish a vocabulary and stick to it.
  • Don't waste my time prompting me in an intrusive way to take part in your survey. I'm not interested in taking part in your survey. Put the offer to take part in a prominent place in your site or program that isn't intrusive. If I'm interested, I'll take you up on the offer. Otherwise, I'm going to close the DIV because you were rude enough to cover up the content that I was looking for with your intrusive popup. The same goes for popup ads. (But we all know how well that's going to go over.)
  • Don't play sound or streaming video as soon as the page loads. If I want to see it, I'll start it myself. You're chewing up my bandwidth, thank you very much. If I'm from an area where that's a precious commodity, that's the height of rudeness. Give me the opportunity to start the sound or video when I want to and if I choose to do so. This includes all forms of linked and embedded media, including Flash.
  • Don't order me to get a better browser. You don't know what browser is best for me. I may like Firefox, IE, Safari, Opera, Navigator, or some as-yet-unnamed browser still emerging. You may be able to say that your site doesn't support browsers outside a certain set, but it is gauche to insist that your browser of choice is the one and only true browser, whichever browser that may be. Competition is actually good for the industry.
  • Don't assume that my monitor is as big as your monitor. Just because your company has a standard video configuration that supports 1024×768 doesn't mean that's what your users are configured for. A vast number of users are still set at 800×600. This resolution isn't a matter of laziness, but of simple visual acuity: they can't see anything at a higher resolution. Design for 800×600. Ensure your pages fit on a monitor at that resolution. Doing so means users don't have to scroll horizontally. It also means that your pages will print properly if the user hits the Print button from the browser and is printing in landscape mode.

Yes, this is surely opinionated. Yes, I'm sure I'll take heat for it. But here's where I'm coming from: I've used lots of Web sites, and I've had to design lots of Web pages for The Average Computer User(tm). For those users, all of these things have turned out to be true. Ask yourself why Google's search engine is insanely popular. It's not just that their index covers the vast majority of the Internet; after all, most people don't know how to use a search engine effectively to get the results they really want. It's because their search page is so simple that it's almost pristine. It's foolproof. Type what you want in the box and click Search. The results pages come up and show you what matches your search criteria. It's simplicity defined.

Apple's computers have always been lauded as a breathtaking departure from the technical complexity inherent in Windows. Their user interfaces are simple, clean, and easy to use. They're the hallmark of Apple's software. One could argue that the simplicity of Apple's user interfaces is what defines them more than their hardware. Again, simplicity prevails, because the user interface gets out of the user's way, and lets the user get her job done.

This is what we should be striving for. Design a Web page that is simple, clean, and gets out of the user's way. Don't confuse them. Be predictable in the way you behave. Be forthright and clear in the way you communicate. Use strong contrasting colors, legible fonts and sizes, don't reinvent the Web navigation paradigm, keep the navigation system where users can reach it, and avoid technologies that will degrade the user's experience.

A Web site can do all that and still be beautiful. CSS allows us to do that. There's no reason you can't be clean, predictable, communicative, unobtrusive, and beautiful all at the same time. You just have to choose to do so. And you have to put your users' needs above your own desire to use the latest flashy, slow, whiz-bang technologies that don't really get you anything more than older, stable, less impressive technologies that accomplish the same thing.

Who's Testing Your Software?

There's a common mistake in software development: trusting the developers to test the software. Historically speaking, developers are the worst kind of testers, because we tend to use the software only as we designed it to be used. It takes a special kind of developer to be able to think outside the box and think like a user with little or no computer savvy.

In the comment thread to the article, Microsoft Admits Vista Update Glitch, one poster made this point:

Beta testing is not getting the bugs out of software because they got the wrong people doing it. Don't use computer savy [sic] people to beta test, use people like my wife who don't have a clue what makes the computer work. She can discover any glitch in software code, guaranteed. Her gift also applies to use of TV remote controls, etc.

To which came this reply (edited for brevity):

This is the best answer I have read for several years. Beta testers are people who do not do things that cause problems, rather, they look for features and bugs that are sometimes not there...The best Beta testers are people who are not knowledgeable and those who don't know the difference of double or single click.

These folks are referring to Microsoft's beta tests for its operating systems (specifically, Windows Vista). But the general sentiment is true and universal: users who have never been exposed to your software, and who have had little exposure to technology in general, are frequently the best ones to determine whether or not it actually works. They have a disturbingly accurate ability to ferret out bugs that borders on the psychic.

As developers, we like to believe that our software is rock solid, easy to use, painfully obvious, and bulletproof. A user who can't tell the difference between clicking and double clicking, or why it's a bad idea to keep lots of applications open at once on a machine with limited resources, is the prime candidate for testing your software. If it's a Web application, find someone who's rarely used the Web or who only uses it for the basics: IM and email. One thing that they'll be able to tell you right away is whether or not the user interface is actually usable. And if you think for one minute that you shouldn't be designing clean, minimalist interfaces for the lowest common denominator of user, you've probably never met the average computer user. There are far more of them than there are of us.

We have some pretty interesting users for our Web applications. Some of them are fond of ignoring on-screen instructions. Tooltips, online help, field prompts, clearly written button text, user training...not much of that seems to make a difference. When all of that fails, what does the application do? How robust is it? How gracefully does it handle bad user behavior? For that matter, how gracefully does it recover from bad application, network, or hardware behavior? And does it alert the user to that kind of thing in a clear, friendly, and meaningful way?

You can't determine that sort of behavior by trusting your developers or your unit tests to find it. Inexperienced users will find far more bugs than your tech-savvy users will. That's not to say that your testing team shouldn't include tech-savvy users; it absolutely should. But make sure that you include novice computer and Web users in your testing team.

Sunday, January 13, 2008

Software Release Engineering

In his Coding Horror post titled How Should We Teach Computer Science?, Jeff Atwood blogs about the lack of coverage of release engineering in computer science courses. At best, he points out, it's given cursory coverage in these courses.

Now, I'm a self-taught developer. I started programming computers in 1985 or so, and I've taught myself everything I know. So I can't really comment about what the courses in a college or university are like. But I can say this: experience has taught me that a few things he says are absolutely, undeniably true. So in this article, I'm going to enumerate those things I think are really important, and how I built a software release process at the company I worked for.

The Ugly Truth About Release Engineering

  1. Release Engineering is not simply deploying your product. There's a reason it's called engineering. It involves getting the latest version of the build from source code control, building the software, executing unit tests, building installers, and labeling the build if it's correctly built. It may involve pinning or branching the build. It requires a daily "good code" check-in time policy. It requires daily builds to ensure that you have software that compiles every day, a means of notifying folks when the build is broken, and a commitment to fixing the build-breaking code right away. It's NOT simple.
  2. Consistent, disciplined use of source-code control is the bedrock of release engineering. At any given time, you might need to fix a bug in the prior release. That's hard to do if you've already started changing the code for the new release. Branched builds allow you to do that. Also, versioned files in the repository allow you to view the history of changes to a file to recover from unintentional changes. You can also develop for multiple platforms while using many of the same files, sharing them across projects without having to worry whether they're out of sync. Labels on files and projects tell you exactly which version of a file was used to create any given build so that you can recreate a project from the repository if you need to.
  3. Building for your environment is not enough. You need a test environment that mimics your client's environment as closely as you can make it, down to the OS, the browser, the applications and the add-ons. Just because it runs on your machine when you press F5 from Visual Studio does not mean it's going to run on the client's machine. If you're developing for multiple browsers, install those browsers and test for them.
    (Ugly true story: our company accidentally allowed IE7 through the group policies. We had IE7 deployed everywhere. Our clients don't plan to upgrade to IE7 for another year, at least. Our product must run on IE6. I had to create a separate machine that was safe from IE7 downloads and strictly ran IE6 to be certain the product ran correctly.)
  4. F5 is not enough. Every build should be a clean build. Every build. Don't ship files to the customer that aren't required to run the software. Create a build script that does the job. Excluding a file from a Visual Studio project doesn't delete the file from the folder, but does leave it in source code control (a good thing for versioning). To ensure a clean release, have your release script remove files you aren't using prior to shipment.
  5. You need a check-in time policy. All good source code must be checked in by a certain time every day. Code that isn't checked in does not make it into the daily build. This check-in time should be early enough that the release manager can start the daily build (if it's a manual process), or make the rounds and make sure that all code is checked in prior to it. I favor end-of-day check-ins (around 4 PM) for nightly builds, but each organization is different.
  6. The software must successfully build every day. A successful build is a good sign of project health. An automated build tool can be set up to execute the build in the off hours after everyone has gone home, and after all files are checked in. Once the build is complete, the pass/fail report is sent to your release manager. However, just because it compiles doesn't mean that it's entirely healthy or bug-free. Therefore...
  7. Automated unit tests should be executed on every build. If you aren't using automated unit tests, you should be. They're not hard to learn, the tools to create them are freely available, and they can improve the stability and quality of your code immeasurably. Incorporate the unit tests into your build script so that they're executed every time you build the software. Correctly written unit tests alert you to build-breaking defects immediately. (There's a minimal example after this list.)
  8. Build-breaking defects must be resolved before anything else. This includes any defect that causes the software to fail to compile or any defect that causes a unit test to fail. The team must adopt a "drop everything and fix the build" mentality. In my own personal experience, this view is not easily accepted in the early stages of a project, but during the later stages, when there's typically a "crunch" mode, and the build isn't riddled with build-breaking defects, developers are thankful that those defects simply aren't there.
  9. You need a release manager. While you might have many people who contribute to the build, checking in changes and adding new content, you need one person whose primary responsibility is to ensure that the software builds properly every day. That individual is also responsible for your installer, and for identifying the code that breaks the build and ensuring that it gets resolved. The release manager doesn't resolve the defects himself unless he checked in the build-breaking defect (since he doesn't know anything about the defect); rather, he must play the role of the hard-nosed drill sergeant ensuring that the coder who checked it in drops everything to fix the build right now. If you can't build the product, you can't ship the product, and anything else that developer might be working on is a moot point. It's an ugly, painful job, but it's crucial.
  10. You need a dedicated build server. This machine is clean, and does nothing but build your software. This guarantees that it injects no artifacts into your final product. It runs the daily build, executes the unit tests, and sends out the notifications when the build passes or fails. It might also house archived copies of each build's source code and binaries. It must be on the network, and should be backed up regularly. The Release Manager should have access to it, but no one else on the development team.
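
To make item 7 concrete, here's about the smallest NUnit-style test I can write. The Order class is a made-up stand-in for your own code, and all the names here are mine, not from any particular project; the point is just that a build script can run tests like this automatically on every build:

using NUnit.Framework;

// A trivial class under test; a stand-in for your real code.
public class Order
{
  public decimal Total { get; private set; }
  public void AddLine(decimal price, int quantity) { Total += price * quantity; }
}

[TestFixture]
public class OrderTests
{
  [Test]
  public void Total_SumsPriceTimesQuantity()
  {
    var order = new Order();
    order.AddLine(2.50m, 4);
    Assert.AreEqual(10m, order.Total);
  }
}

Point the NUnit console runner at the compiled test assembly from your build script; it returns a nonzero exit code when any test fails, which is exactly the hook the daily build needs.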

My Own Personal Release Process

It bears noting here that I've done the release process for two different companies. At one, it was for a full team of developers (about twenty of them), and the release process there was a nightmare. At that time, we couldn't get a release out in a month if we tried. So I volunteered to take on the job, and redesigned the process. It took about a week to get the process reengineered and everyone on board, but after two weeks we had daily builds working and everything was going much more smoothly.

I took many of those same principles and applied them to my new job. Clearly, some of them don't apply in a single-developer shop. But the basic principles are the same.

Source Code Control
  1. All developers must use source code control.
  2. All working, compilable code that does not break the build must be checked in by 3 PM every day. Code that is not checked in at this time does not make it into the daily build.
  3. User names and passwords are required for accessing source code control.
  4. The admin password is written on a piece of paper, sealed in an envelope, and stored in the CIO's desk. No one else has it.
  5. Repository access rights are granted minimally, based on need.
  6. The main tree has the following subprojects: Build, Dev. The two trees' subprojects mirror each other. Build is where the branched and pinned copies of the successful builds are. Mainline development takes place under the Dev tree.
  7. Every file that is required to create or ship the project is included in source code control: source files, SQL scripts, Web pages, images, build scripts, unit tests, test plans, requirements documentation, etc.
  8. Because we use SourceSafe, regularly scheduled maintenance is performed on the repository every weekend, during off-peak hours, to keep it in tip-top shape.
  9. The repository is stored on the network. This folder is backed up incrementally every night, and fully every week.
Build Process
  1. Every afternoon, at 3 PM, all developers must have code they want included in the build checked into the repository.
  2. The release manager does a final verification at 3:15 to ensure that all code is checked in.
  3. An automated script fires off the build at 3:30 PM. It does the following (there's a rough sketch of such a script after this list):
    1. Cleans the build folders on the build server. This involves deleting all files and folders from the project's build folder, ensuring a clean build.
    2. Gets the latest version of the software from the DEV tree in the repository.
    3. Compiles the software and all of its dependencies. If the compilation fails, an email with high importance is sent to the release manager, notifying him of the failure, and the script aborts.
    4. Executes the unit tests. The unit test results are output to a text log file, which is then sent to the release manager in an email.
    5. Executes a cleanup batch file that ensures that any files that should not be shipped with the product are removed.
    6. Creates the installer or archives the build into a ZIP file.
    7. Labels the build in the repository.
    8. If the build was successful, sends a "Build success" message to the release manager.
  4. Note that step 3 may execute multiple times depending on whether you are targeting multiple platforms or releases (such as Debug and Release, or various browsers, or various OSes).
  5. Upon receipt of a build failure email, the Release Manager reviews its contents, and identifies the offending source code. He then determines who checked that code in, and contacts that developer and asks them to resolve the defect as soon as possible.

    Important: Except in the direst of circumstances, the release manager should not attempt to fix someone else's defects. He should ask the developer to fix his own defects. If the release manager takes this task on himself, he'll quickly become inundated trying to fix all the build-breaking defects, and won't have time to do his own work.

  6. The developer resolves the build-breaking defect and checks in the change for inclusion in the next daily build.
  7. If enough build-breaking defects were present, the Release Manager may choose to manually rebuild the software once defect corrections are checked in.
  8. If the build is shipped to the customer, it is labeled, pinned and branched into the BUILD tree in the source code repository.
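
For the curious, here's a rough C# sketch of the kind of script that drives step 3. Every path, machine name, and email address in it is a placeholder, and the tools it shells out to (ss.exe, devenv.exe, nunit-console.exe) are simply the ones a Visual Studio/SourceSafe/NUnit shop like ours would use; substitute your own:

using System.Diagnostics;
using System.IO;
using System.Net.Mail;

class NightlyBuild
{
  // Placeholder location; substitute your own build folder.
  const string BuildDir = @"C:\Builds\MyApp";

  static void Main()
  {
    // 1. Clean the build folder so every build starts from nothing.
    if (Directory.Exists(BuildDir)) Directory.Delete(BuildDir, true);
    Directory.CreateDirectory(BuildDir);

    // 2. Get the latest version from the DEV tree (SourceSafe command line).
    if (!Run("ss.exe", @"Get $/Dev/MyApp -R -I-")) { Notify("FAILED: get latest"); return; }

    // 3. Compile the software and all of its dependencies.
    if (!Run("devenv.exe", "MyApp.sln /build Release")) { Notify("FAILED: compile"); return; }

    // 4. Execute the unit tests; the console runner exits nonzero on any failure.
    if (!Run("nunit-console.exe", "MyApp.Tests.dll")) { Notify("FAILED: unit tests"); return; }

    // Steps 5-7 (cleanup, installer, labeling) would follow the same pattern.
    Notify("Build success");
  }

  // Runs a command in the build folder and reports whether it succeeded.
  static bool Run(string exe, string args)
  {
    var process = Process.Start(new ProcessStartInfo(exe, args)
    {
      WorkingDirectory = BuildDir,
      UseShellExecute = false
    });
    process.WaitForExit();
    return process.ExitCode == 0;
  }

  // Mails the pass/fail notice to the release manager.
  static void Notify(string subject)
  {
    new SmtpClient("mail.example.com").Send(
      "build@example.com", "releasemanager@example.com",
      subject, "See the build log for details.");
  }
}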
In Closing

We're a Microsoft shop. Although I've worked in Java houses, my limited experiences have largely focused on the Microsoft stack, and the process that I've outlined above is primarily geared for Microsoft Visual Studio and SourceSafe. But the basic principles should be pretty universal. You should be able to take them and apply them to just about any combination of source code repository tools, unit testing tools, and IDE (or text editor).

The primary thing to remember is this: if you can't build it reliably, predictably, and on a moment's notice, you're in trouble. When a development team knows they can't build the software, and when the testing team is sitting around for days or weeks at a time wondering when they're going to get a new release to test, morale suffers, tempers flare, and things rapidly go downhill. I've been there. I've seen it. It ain't pretty.

Every project needs a good, solid release process. I'm tempted to say that any release process is better than no release process, but that wouldn't be entirely true. A release process needs to be trim, make sense, bolster confidence in the project, and help propel the team forward towards success. That's what this process is designed to do.

I'm sure that others have some ideas on how to improve the process above. I'd love to hear those ideas. I'm sure that others have different ways of doing things. I'd love to hear that too. There is no silver bullet, and I'm nowhere near stupid enough to think that this plan is perfect. But I hope it's enough to help someone, somewhere get a little bit closer to a project that gets out the door a bit faster, healthier, and with its developers' sanity intact.

 

Friday, January 11, 2008

A Return to Blogging

So it's been a long time since I've blogged, and a lot has happened in that time. I've been on hiatus, reassessing many things in my life.

One of the foremost among those things was whether I should be blogging at all. My last post was berated by one reader as badly written. It hit me pretty hard, and is probably the primary reason I stopped blogging altogether.

For me, writing is a passion, something I do because I enjoy it, and because it flows naturally out of me. The reader in this case went on to state that the intent of my article was to claim that all software should be open source. That was never the point of the article; rather, the point was that you should strive for the kind of quality and responsible behavior in your code that instills confidence: should anyone actually have the opportunity to view it, you would fear nothing, knowing you weren't doing anything you shouldn't be.

But his criticism struck a resounding chord in me. All too frequently, I fail to get my point across. I am too wordy. I dance around the point, rather than just coming out and saying it. So, perhaps, "badly written" has the unsettling ring of truth to it.

This caused me to reevaluate what I write, and how I write it. If I'm going to write, I want to do it well, and I want to make sure that my point gets across clearly. I haven't always done that. What's difficult for me, however, is that many of the ideas in my mind aren't easily conveyed with step-by-step instructions or clear-cut language. I have a tendency to see things through metaphors, analogies, and the like. My writing has always tended to reflect that.

For an aspiring fiction writer, those are strengths. For a technology writer, I'm not convinced they are. I'm fairly certain people read tech blogs for clear guidance and opinion, not mystical, philosophical musings. That, perhaps, has been my biggest mistake. I need to be clearer. And I will strive to do so in the future.

But there is a caveat to all that. This isn't strictly a tech blog. It wasn't originally intended to be so. While software development is a great passion of mine, it is not my only passion. And so, I would ask that those who read here have patience with me when I choose to blog about nontechnical topics. My emotions will surely shine through when I discuss writing, politics, gaming, society, religion, or any of the other nontechnical aspects of our lives.

Overall, things are changing for the better. For those who knew me, or followed my blog before, I hope that this marks a return to steady blogging that is enjoyable for both my readers and myself. I've given it some thought, and blogging is just good therapy.