Sunday, August 10, 2008

Top 10 Signs You've Become Indispensable (and Are Therefore About to Be Fired)

A wise man once told me that when someone becomes indispensable the very best thing you can do is get rid of them as soon as possible. I've always thought that was sage advice. I've tried to keep that in mind. So I'm always keeping an eye out for habits that "indispensable" coders have. These are the guys I want eliminated, even if one of those guys is me.

What it comes down to is whether or not one guy on the team can hold an entire project hostage. No company can reasonably afford to be in that position. I certainly wouldn't want to put a company in that position. It's about risk, and managing that risk, and doing so proactively.

So, without further ado, the Top 10 Signs You've Become Indispensable (and Are Therefore About to Be Fired):

  1. You're the only one who can work on the particular tasks assigned to you, because you're the only one who understands them.
  2. You believe in or practice job security through code obscurity.
  3. You don't communicate, and hoard valuable information that other members of the team need to get their jobs done.
  4. You make technology decisions, implement them, and expect everyone to follow suit, whether they understand them or not.
  5. You frequently make vast, sweeping changes to the underlying architecture of the system, without first discussing those changes or their impact with others on the team.
  6. You don't really understand object oriented analysis and design, but you act like you do.
  7. You resist any suggestions for better, proven ways to implement solutions, simply because someone tried it that way before and it left a bad taste in your mouth.
  8. You use source code control like a backup device, rather than a version control system.
  9. When designing a system, your first thought is the code or data model and not the problem domain.
  10. You have no interest in being a member of the team, and would rather do it your way all the time.

(This is, of course, completely unscientific and totally subjective. Take it or leave it.)

Thursday, August 7, 2008

Why Censor the Internet (Language Warning)

A poster on Digg offered this eloquent response to the article, Internet Censorship is On it's Way. The i-Patriot Act:

WHY THE FUCK ARE THEY CONCERNED ABOUT THE FUCKING INTERNET?!?!

I mean seriously there are much bigger issues in the whole fucking world then the internet. We cannot be in our on privacy doing our own thing without the people watching over us. I mean come on its such bullshit. Soon it will be like the movie Demolition Man and we will get fined when we fucking curse at home! We are slowly creeping into a government who has complete control over everything we do.

Censor the internet... Give me a fucking break.

My apologies for the language, but this impassioned question deserves an equally impassioned answer. Why, indeed?

In a fascist state, the last thing you want is for people to be able to express themselves and speak out against the government. It's all about control. And people who can speak freely can't be controlled. Neither can those who listen to those who speak freely.

Anyone who's been raptly paying attention to what's been going on in our country (particularly over the last 8 years or so) knows that we've been becoming a fascist state. But it's by our own choosing. We elected these people to power, either by choice or by sheer apathy. We refused to entertain the notion of deviating from a two party system, and we allowed them to strip us of our rights and freedoms. We did not cry out in protest when the Patriot Act was put into place; rather, many of us celebrated it, embracing it as a necessary evil in order to hunt down the vile terrorists who had dared to attack us on our own soil.

Thus, we surrendered our rights, our freedoms, our liberties in order to gain a false sense of security and chase after demons that never really existed. And from that day forward, our government, which we put into power and have kept in power, has continued to play upon our fears in order to further strip us of any vestige of the Constitutional rights we had before. They can do what they want, when they want, to whomever they want, and there is little that any of us can do about it.

But we chose this path. We elected it. Sixty percent of the population failed to vote in the last presidential election. It was far more important to watch reality television than it was to secure a meaningful future for our nation, and we allowed the same criminals to maintain their stranglehold on what was once a powerful, respectable democracy. But those same people maintain that their government fails them, that they have no rights, that the economy is in the toilet, that we're sending our troops to senseless deaths overseas in a war we should never have been involved in, and a myriad of other complaints. When asked, though, they'll tell you that they didn't vote because their vote didn't count. Of course it didn't. No uncast vote counts.

But we've learned nothing. Even now we entertain the absurd notion of effecting change by maintaining the status quo. We're going to elect either Obama or McCain. Yet another pawn from a two-party system. Neither will be able to revolutionize the country and restore what it was. Neither will break the back of the military industrial complex. Neither will do what must be done to fix what must be fixed.

That responsibility rests with you and me. Right here. Right now. Every day. But it means getting off our asses, turning off our televisions, and getting involved. We must DEMAND change, DEMAND our rights, and DEMAND the restoration of the Constitution.

You see? A fascist would never want words like these uttered on any medium. And the Internet makes it all too easy to make such statements in a forum where hundreds, thousands, even millions of people can read it.

Sunday, April 6, 2008

The Absurdity of "Don't Reinvent the Wheel"

As developers, we've had this adage drilled into us from the beginning: Don't reinvent the wheel. In short, don't rewrite what's already been written. The idea is sound, in theory. You can save yourself time and money if you'll simply reuse existing code and/or components rather than writing them yourself from scratch. This time and money is saved up front when you write it (or would have written it), and down the road, when you have to maintain your system.

However, I'd like to point out another, equally applicable adage: There's nothing new under the sun. Anyone who's ever tried to write a novel, a short story, a play, a movie, a song, or a piece of software, will know this one simple truth: somewhere, at some point in time, it's already been written.

Every algorithm, every piece of code that we will ever attempt to write has already been written somewhere, at some point in time, by someone. Only the names have been changed. You're not inventing anything that is completely new, that's never been seen before. You should wisely disabuse yourself of that notion as quickly as possible.

In the grand scheme of things, at the application level, you may very well have an idea for a system that is unlike anything that has been done to date. But the algorithms that drive it have already been written. Bubble sorts, hashes, exception handlers, encryption, data access, socket management, shopping carts, entire application frameworks, date management, document management, serialization, port I/O, and all that other stuff has already been done. Further, it's already been done several times over in many different languages to varying degrees of success.

Tragically, if you're using a large application framework, like Java EE or Microsoft's .NET, the chances are good that the functionality you're looking for is built right into the framework itself. The problem is that the framework is so vast that you'll spend more time looking for it, and determining whether or not it works the way you need it to, than you would spend just rewriting it yourself.

Application frameworks are stunningly afflicted with feature creep. They must do everything under the sun, must meet every possible need. The problem, then, is that their scope becomes so broad, so vast, that no one in their right mind could possibly grasp the totality of all that they can do. It is inevitable that anyone using them will reinvent some of their functionality. The scale of that functionality might be small (reformatting dates) or it might be substantial (pooled database connections).
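To make that concrete, here's a small, hypothetical sketch in C# (the method names are mine, invented for illustration): a hand-rolled date reformatter, next to the version the .NET Framework already ships in DateTime.ParseExact and ToString. Both produce the same output; one of them is a reinvented wheel.

using System;
using System.Globalization;

class DateReformatExample
{
    // The reinvented wheel: reformat "20080406" (yyyyMMdd) as "06-Apr-2008"
    // by slicing the string and looking up month abbreviations by hand.
    static string ReinventedReformat(string yyyymmdd)
    {
        string[] months = { "Jan", "Feb", "Mar", "Apr", "May", "Jun",
                            "Jul", "Aug", "Sep", "Oct", "Nov", "Dec" };
        int month = int.Parse(yyyymmdd.Substring(4, 2));
        return yyyymmdd.Substring(6, 2) + "-" + months[month - 1] + "-" + yyyymmdd.Substring(0, 4);
    }

    // The wheel the framework already provides: parse with one format string,
    // print with another.
    static string FrameworkReformat(string yyyymmdd)
    {
        DateTime d = DateTime.ParseExact(yyyymmdd, "yyyyMMdd", CultureInfo.InvariantCulture);
        return d.ToString("dd-MMM-yyyy", CultureInfo.InvariantCulture);
    }

    static void Main()
    {
        Console.WriteLine(ReinventedReformat("20080406")); // 06-Apr-2008
        Console.WriteLine(FrameworkReformat("20080406"));  // 06-Apr-2008
    }
}

The catch, of course, is that you only reach for ParseExact if you already know (or can quickly find out) that it exists and does what you need.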

In the end, it's absurd to think that we can possibly avoid reinventing the wheel. Of course we're going to reinvent it. Every application we write is a reinvention of someone else's wheel. It just so happens that our wheel is a custom wheel. All this paranoia about reinventing the wheel is blown out of proportion. A proper buy vs. build decision should never be neglected; but don't ever think for one minute that what you're creating hasn't been created before.

Consider the scenario where you're under the gun to get a product out the door. And I mean it's a really tight schedule. And don't act like it's a perfect world, and you have leverage over the schedule. This is reality here. In the real world, the customer controls the schedule, because it's tied to when the product is released, and that's tied to this big, huge monstrosity in another state or another country. The product's delivery schedule is a train barreling down the track at 120mph and no one short of God can stop it. Now, you have a very finite amount of time to work in. You need an algorithm. You know you could write it. Or you could look to see if someone else has written it.

If you do the whole Web search thing, you have to ask yourself a few questions: Is it from a source you trust? Is it in the language you're using, or do you have to convert it? Does it work? Does it need to be tweaked? If any of these fail, you're back to the drawing board. Time's wasting here, and that train's getting closer to its destination. If all the answers pass, you have to make sure you don't run into any copyright or licensing issues with that code. (You are paying attention to that, aren't you?)

If you decide to peruse your application's framework, you'd better hope it's well documented, and very easy to search. Good luck using the search features in .NET. It's not like the ASK.COM interface, where you can ask, "How do I convert a date in DOD format to Gregorian format?" Yeah. Good luck with that. On the other hand, you could ask your coworkers. They might know. Then again, they might not. If they don't, you're off to Google to get the information. Here's hoping you get a timely and accurate response.

Sure, this is an extreme example. But it makes my point: At some point, the work has to get done. You can't afford to spend days or weeks scratching your head about whether or not that wheel's already been invented. Believe me, it has. The problem is, there are a countless number of wheels, and none of them are labeled, and you don't know where to find the wheel you're looking for.

Stop wasting time, and invent your own damned wheel.

After all, whatever code you might reuse is just someone else's reinvention of the same wheel.

Thursday, March 27, 2008

My First Time: Software Passion

I remember when I was first bitten by the computer programming "Bug."

I was young: still in high school, in fact. At my particular high school (in Fontana, California), we had an Indian Education Department. That small department was lucky enough to have a TRS-80 computer for the kids in the Indian Club to play on. I dare say that the computer itself drew a few kids to the club; we were misfits, outcasts, by and large, but we were drawn to that thing like moths to a flame.

It was nothing to look at really. It was just an old monochrome monitor, with a keyboard, and a tape drive. Our model didn't even have a floppy disk. Everything was on tapes. But we were mesmerized by that thing. I remember watching one of the other kids, Dan, fire up ZORK and type "go north" into the computer. It responded to his simple command, and described the next room to him. It amazed me. I remember sitting there and thinking, "How did they do that?"

I mean, it was just a stupid box, with a keyboard and a cassette drive. It couldn't think. But there it was, responding to him as if it could think. And he could type commands that were, for the day, fairly close to English. "Eat food." "Quaff potion." "Open door."

I remember sitting there, thinking about that and being relentlessly tormented by it. I had to know. How could an inanimate box like that do things like that? How did it know what room he was in? How come the rooms changed every time we played the game? How did it decide if the potion killed him, healed him, or made him sick? How did it decide what color the potion was? How did it decide what was in the room? For a stupid box with no brain, this thing was pretty damned smart.

And then, one day, Dan got stumped by the game. Something, apparently, was wrong. A few years later, I'd realize he'd found a bug. So, he fired up a program, and cracked open the game's source code. And there, before my eyes, was the big secret. It was line after line after line of source code: carefully written instructions that told the stupid box exactly what to do. From those cryptic instructions, written in some obscure language called BASIC, you could make that TRS-80 do amazing things!

That was the beginning of the end for me. I had to master that language. I had to know how I, too, could command a stupid, brain-dead box and make it do amazing things. It wasn't long before I had obtained a copy of the language reference for BASIC and taken a computer programming course at our high school. (Yes, we had them, even out in the sticks in Fontana.)

So, in a way, ZORK made me a programmer.

It's been about twenty-five years since I watched Dan crack open the source code to ZORK, and the path of my life was irrevocably altered. Up until that point, I didn't really have any aspirations. I don't think I really did after that, either. But one thing became very clear: more than any other endeavor to which I applied myself, computer programming proved itself to be my one enduring passion. All these years later, I still have those moments reminiscent of that first day. I'll see a beautiful piece of code, a website design, or an application, and I'll think, "How did they do that?" Any ideas I may have had about leaving software development are blown away, and my passion for software is rekindled.

It's because I have to know. I can't walk away from these damned stupid boxes without being able to make them do amazing things.

There's a certain, childish delight in figuring out the solution to a problem, or finding a new way to do something. For me, it's like Christmas, and I want to share that joy with others. Sadly, a lot of folks don't understand it--especially if they're not in the same field. But anyone who's done this work, and ever had a EUREKA! moment knows exactly what I'm talking about.

Somewhere, right now, a budding young developer is experiencing his or her first time. They're being bitten by the bug. It's an infection that will take hold and set in for life. For most of us, it's a turbulent ride, filled with ups and downs, and we frequently consider leaving the field. For others, it's pure hell, and they leave it too quickly; for a lucky few, it's nirvana all the way through. I'm not sure I envy the lucky few; I rather like the way my challenges have tempered me over the years.

When you face challenges, think back on what it was about software that caught you in the first place. Think back to your first time. Then think about the many times you've been lured back to it by your own passion. Not because someone offered you money, or material goods, or power, or prestige; think back to those times that your personal passion for software kept you in the game. Then ask yourself why you feel so passionate about software. The answer for me was surprising: I'm not really doing it for anyone else, but because I have to know, and because I have to conquer the stupid box.

For all my noble aspirations, that's a humbling admission.

But that passion is still there. It keeps me in the game. And, in retrospect, it's likely why I feel so passionately about software quality. It's not enough that it works, it has to work well.

What was your first time like?

Monday, March 24, 2008

You are Not the Average Computer User

John Lilly, the CEO of Mozilla, recently blogged about Apple's practice of including a new installation of Safari in its Software Update service, even if you didn't have the application installed in the first place. You can read the full article here. His main point was this: as a matter of trust, update software should update previously installed applications, not install new applications. Apple pretty much violated that trust with the handy little dialog box it presented to users.

The main issue here is that Safari is not already installed on the end-user's machine. So, the option is not an update, but a fresh download of brand new software. Further, the option is checked by default, and the button in the lower right hand corner clearly says "Install 2 items".

Now, I'm not going to rehash the pros and cons of Apple's tactics in this matter, because that argument has been debated endlessly on John's blog and on Reddit. What I am going to take issue with is the arrogant presumption that many commenters take when they make these sorts of statements:

"I don’t see what the problem is here. If you don’t want the software, you uncheck the box. The product description is listed very clearly in the window, no extra clicking required."

Omar

"I don’t see the big deal. They are promoting their software through their software update program. It’s automatically checked…ok, so? Lots of update programs automatically check everything anyway, not just apple.

"If FF is better then people will use FF. If they like safari then they will switch. These browser “loyalty” wars are getting old. IE came with windows by default and FF is still gaining ground. It is gaining ground because it is better. Just keep making a better browser and stop worrying about this. ppl will flock to the best. We’re not stupid."

Chris

"Oh fer heaven’s sake, uncheck the box and get over it. Are you saying the majority of Windows users of iTunes are too clueless to look and see what they’re downloading? OK, I’ll admit it’s a bit pushy of Apple but beyond that I fail to see what all the fuss is about."

Anne

These are knee-jerk responses. The last one, in particular, is a textbook case of a poster who clearly doesn't understand that users who read or post to tech blogs or forums are not typical computer users. If you're reading this blog, you're not a typical computer user. (I'm not sure what you are, exactly, but you're not typical.)

Apple's case is interesting because of the enormous success of the iPod, and the vast number of iPod owners who use Windows. Those users will download iTunes so that they can use their iPod with their computer to purchase music and manage their playlists. However, the vast majority of those people are not what we would classify as tech savvy users. Rather, I'd call them click-through users, who implicitly trust the software vendor to make decisions for them. Think about your mom, your dad, your sister, your brother, your aunt, your uncle, the kids at school, the clerks at the nearest retail outlet or fast food joint, your fellow students, or your nontechnical coworkers.

Those people represent the average computer user. They are click-through users.

A couple of times a year, I get calls from my family members about their computers. Inevitably, they'll tell me that the computer is suddenly horrifically slow, and that they need me to fix it. So they bring it to me, and I look at it, and it has tons of mystery software on it. I like to have them sit with me when I'm going through it, so that I don't remove anything that they might actually need or use. Nine times out of ten, they'll tell me, "I don't know where that came from." Apple's software update for Safari is likely going to produce an awful lot of these scenarios, because the average computer user will have just clicked through the dialog, trusting that Apple knew what was best for them.

A tech savvy user isn't likely to just click through that dialog box because they know what can happen, and they're pretty darned picky about what goes on their machine. They don't blindly trust the vendor to make those decisions for them. But the number of users like that is relatively small, and is hardly representative of the world's population.

But the world is full of click-through users. There are far more of them than there are of us. Thinking for one minute that everyone thinks and/or behaves as we do is naive, shortsighted, arrogant and presumptuous.

Again, my point here isn't that Apple was right or wrong. My point is this: never assume for one minute that YOU represent the average computer user. You don't.

  • If you're smart enough to competently read or post on a technical blog or forum, you're not an average computer user.
  • If you know how to correctly fix someone else's machine after they've borked it, you're not an average computer user.
  • If you know the difference between a hash table, a binary tree, and a linked list, you are not an average computer user.
  • If you know what recursion is, you are not an average computer user.
  • If you know how to safely overclock your machine, you're not an average computer user.
  • If you read technical books like they're gripping, fast-paced murder mysteries, you're not an average computer user.

This list is undoubtedly incomplete (I haven't had enough coffee yet), but you get the point.

So, enough with this arrogant presumption. Stop assuming that all users behave as we do. Because the simple truth is that the vast majority of users do not behave or think as we do. They trust; we suspect.

Sunday, March 23, 2008

LINQ to SQL and the Coming Apocalypse

I'm going to say it, and I'm going to say it for everyone to see: LINQ TO SQL SCARES THE HELL OUT OF ME.

Does anyone remember this from classic ASP?

<%


Set rs = Server.CreateObject("ADODB.RecordSet")
param = Request.Form("lastname")
q = "SELECT * FROM personnel WHERE lastname LIKE '" & param & "'"
rs.Open q, "DSN=mydsn;"

if NOT rs.EOF then
     while NOT rs.EOF
          Response.Write rs("firstname") & " " & rs("lastname") & "<BR>"
          rs.MoveNext
     wend
end if

%>


LINQ to SQL is giving me flashbacks to this kind of code.


No, of course code written in LINQ to SQL won't look anything like that. But it will look like this:


HookedOnLINQ db =
    new HookedOnLINQ("Data Source=(local);Initial Catalog=HookedOnLINQ");

var q = from c in db.Contact
        where c.DateOfBirth.AddYears(35) > DateTime.Now
        orderby c.DateOfBirth descending
        select c;

foreach (var c in q)
    Console.WriteLine("{0} {1} b.{2}",
        c.FirstName.Trim(),
        c.LastName.Trim(),
        c.DateOfBirth.ToString("dd-MMM-yyyy"));


So, what we potentially have here is database code mixed in with our business code. Further, we have no guarantee that this code will not appear in the .aspx page.


What really disturbs me about LINQ to SQL is that it looks like people will begin to use it to do things that really should be left to the database. Looking at the specific code example above, is there any good reason that this couldn't have been done with a stored procedure? I mean, after all, stored procedures are compiled, provide additional security, and aren't subject to some sleepy programmer doing a global search and replace in the IDE and borking the code.
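For comparison, here's a rough sketch of the stored-procedure route I mean. The procedure name, connection string, and column names are invented for illustration; the point is that the WHERE and ORDER BY live in the database, and the C# code does nothing but call the procedure and print the results.

// The T-SQL that would live (compiled) in the database:
//   CREATE PROCEDURE dbo.GetContactsUnder35
//   AS
//       SELECT FirstName, LastName, DateOfBirth
//       FROM   Contact
//       WHERE  DATEADD(year, 35, DateOfBirth) > GETDATE()
//       ORDER BY DateOfBirth DESC;

using System;
using System.Data;
using System.Data.SqlClient;

class ContactReport
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Data Source=(local);Initial Catalog=HookedOnLINQ;Integrated Security=SSPI"))
        using (SqlCommand cmd = new SqlCommand("dbo.GetContactsUnder35", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();

            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0} {1} b.{2:dd-MMM-yyyy}",
                        ((string)reader["FirstName"]).Trim(),
                        ((string)reader["LastName"]).Trim(),
                        (DateTime)reader["DateOfBirth"]);
                }
            }
        }
    }
}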


Now, I realize that LINQ to SQL has support for stored procedures. But I'm willing to bet that the vast majority of organizations are going to use that support in conjunction with the syntax shown above to produce truly horrendous code that completely negates the tremendous power available to them in the database.


Database technology has evolved over decades to be extremely efficient at what it does: indexing, sorting, selecting, inserting, updating, and so on. We will never be as efficient doing it client side as the database will be on the server side. Ignoring that power and trying to do it in code is an exercise in futility. The lesson we need to bear in mind here is this: let the database do what it does best, and let the code do what it does best.


Sadly, I don't have a whole lot of confidence that this is going to be the case in many places, because LINQ to SQL makes it far too easy to do the database's job. For crying out loud, you can do a WHERE and an ORDER BY--which should happen in the database so you can take advantage of the indexes--in the code. (Perhaps, under the hood this gets done by generating SQL. Fine. But why am I essentially writing SQL statements in code again?! WHY!? Get that SQL out of the damned code! It doesn't belong there! SQL belongs in the database!)


Now, suppose, for instance, that LINQ to SQL is 100% bug-free on its first iteration. Let's assume that it generates flawless SQL to query your database when you write that code. It still has to pass dynamic SQL to the server. That means you can't take advantage of compiled stored procedures. It also means that you have to repeat that code if you want to reuse it--unless, of course, you're savvy enough to refactor your code base to do so. But let's be honest: the folks I'm worried about here probably aren't going to refactor their code base, because they're likely in a rush to get the code out as quickly as possible, and refactoring isn't a big-ticket item for them. The Single Responsibility Principle likely hasn't bubbled to the top of their list of grave concerns yet.


Why this becomes a serious concern is simple: Eventually, someone has to maintain that code.  


How many of us have nightmares about working with someone else's lamentably bad ASP code that had embedded SQL statements in it? Remember how horrible that was? Remember trying to search all the files to figure out where the SQL was? Which files touched which tables?


Why on earth do we want to go back to that?


Sure, sure; someone, somewhere, has an utterly compelling need for LINQ to SQL. They absolutely, positively must have it. Their business will collapse if they don't have it. Problem is, as I see it, this is going to be abused like a box of needles and a ton of heroin at a recovery clinic. And it's everyone else who's going to pay the price.


So, in closing, I'll just say this:


DOOOOOOOOOOM!

Thursday, March 13, 2008

NValidate: Misunderstood from the Outset

Occasionally, I will post questions about the design or feature set of NValidate on Google Newsgroups. More recently, I posted a question about it to LinkedIn. Almost immediately, I got this response:

I'd suggesting looking at the Validation Application Block portion of the Enterprise Library from the Microsoft Patterns and Practices group.

Now, I'm not belittling the response, because it's perfectly valid, and the Validation Application Block attempts to solve essentially the same problem. But when I talk about NValidate, which I find myself doing a lot as I interview for jobs (it's listed on my résumé), people often ask me questions like these:

  1. How is that any different from the Validator controls in ASP.NET?
  2. Why don't you just use the Validation Application Block?
  3. Why didn't you go with attributes instead?
  4. Why didn't you use interfaces in the design?
  5. Why not just use assertions instead of throwing exceptions?

These days, I find myself answering these questions with alarming frequency. It occurs to me that I should probably get around to answering them, so I'm going to address them here and now.

It helps, before starting, to understand the problem that NValidate is trying to solve: Most programmers don't write consistent, correct parameter validation code because it's tedious, boring, and a pain in the neck. We'd rather be working on something else (like the business logic). Writing parameter validation code is just too difficult. NValidate tries to solve that problem by making it as easy as possible, with a minimal amount of overhead.

Q. How is NValidate any different from the Validator controls in ASP.NET?

A. The Validator controls in ASP.NET can only be used on pages. But what if I'm designing a class library? Isn't it vitally important that I make sure I test the parameters on my public interface to ensure that the caller passes me valid arguments? If I don't, I'm going to fail spectacularly, and not in a pretty way. You can't use the Validator controls (RangeValidator, CompareValidator, and so on) in a class library you're writing that's intended to be invoked from your Web application.
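As a rough sketch of what I mean (the class and method here are invented; the Demand.That calls use the NValidate syntax shown later in this post), this is the kind of guard you'd put at the public boundary of a class library, where the ASP.NET Validator controls can't help you:

using NValidate.Framework;

// A hypothetical class library type: no web page anywhere in sight.
public class AccountService
{
    public void Transfer(string fromAccount, string toAccount, decimal amount)
    {
        // Validate the public interface before any business logic runs.
        Demand.That(fromAccount, "fromAccount").IsNotNull().HasLength(10);
        Demand.That(toAccount, "toAccount").IsNotNull().HasLength(10);

        // ... business logic goes here ...
    }
}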

Q. Why don't you just use the Validation Application Block?

A. This one's pretty easy to answer. NValidate is designed to accommodate lazy programmers (like me).

Here's the theory that essentially drives the design of NValidate: Developers don't write parameter validation code with any sort of consistency because it's a pain in the neck to write it, and because we're in a big hurry to get to the business logic (the meat and potatoes of the software). Let's face it: if the first chunk of the code has to be two to twenty lines of you checking parameters and throwing exceptions, and doing it all over the place, you'd get tired of doing it, too. Especially if that code is extremely repetitive.

if(null == foo) throw new ArgumentNullException("foo");
if(string.Empty == foo) throw new ArgumentException("foo cannot be empty.");
if(foo.Length != 5) throw new ArgumentException("foo must be 5 characters.");

We hate writing this stuff. So we skip it, thinking we'll come back to it later and write it. But it never gets done, because we get all wrapped up in the business logic, and we simply forget. Then we're fixing bugs, going to meetings, putting out fires, reading blogs, and it gets overlooked. And the root cause is that it's tedious and boring.

I'm not making this up, folks. I've talked to lots of other developers and they've all admitted (however reluctantly) that it's pretty much the truth. We're all guilty of it. Bugs creep in because we fail to erect that impenetrable wall that prevents invalid parameter values from slipping through. Then we have to go in after the fact, once we've got egg on our faces, and add the code at increased cost.

So, if you want to make sure that developers will write the parameter validation code, or are at least more likely to do it, you have to make it as easy as possible to do so. That means writing as little code as possible.

Now, if we look at the code sample provided by Microsoft on their page for the Validation Application Block, we see this:

using Microsoft.Practices.EnterpriseLibrary.Validation;
using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;
public class Customer
{
    [StringLengthValidator(0, 20)]
    public string CustomerName;
    public Customer(string customerName)
    {
        this.CustomerName = customerName;
    }
}

public class MyExample
{
    public static void Main()
    {
        Customer myCustomer = new Customer("A name that is too long");
        ValidationResults r = Validation.Validate<Customer>(myCustomer);
        if (!r.IsValid)
        {
            throw new InvalidOperationException("Validation error found.");
        }
    }
}

A couple of things worth noting:

  1. You have to import two namespaces.
  2. You have to apply a separate attribute for each test.
  3. In your code that invokes the test, you need to do the following:
    1. Declare a ValidationResults variable.
    2. Execute the Validate method on your ValidationResults variable.
    3. Potentially do a cast.
    4. Check the IsValid result on your ValidationResults variable.
    5. If IsValid returned false, take the appropriate action.

That's a lot of work. If you're trying to get lazy programmers to rigorously validate parameters, that's not going to encourage them a whole lot.

On the other hand, this is the same sample, done in NValidate:

using NValidate.Framework;
public class Customer
{
    public string CustomerName;
    public Customer(string customerName)
    {
        Demand.That(customerName, "customerName").HasLength(0, 20);
        this.CustomerName = customerName;
    }
}

public class MyExample
{
    public static void Main()
    {
        try
        {
            Customer myCustomer = new Customer("A name that is too long");
        }
        catch (ArgumentException)
        {
            throw new InvalidOperationException("Validation error found.");
        }
    }
}

A couple of things worth noting:

  1. You only have to import one namespace.
  2. In the constructor, you simply Demand.That your parameter is valid.
  3. In your code that invokes the test, you need to do the following:
    1. Wrap the code in a try...catch block.
    2. Catch the exception and handle it, if appropriate.

See the difference? You don't have to write a lot of code to validate the parameter, and your clients don't have to write a lot of code to use your class, either.

Q. Why didn't you go with attributes instead?

A. I considered attributes in the original design of NValidate. But I ruled them out for a number of reasons:

  1. Using them would have meant introducing a run-time dependency on reflection. While reflection isn't horrendously slow, it is slower than direct method invocation, and I wanted NValidate to be as fast as possible.
  2. I wanted the learning curve for adoption to be as small as possible. I modeled the public interface for NValidate after a product I thought was pretty well known: NUnit. You'll note that Demand.That(param, paramName).IsNotNull() is remarkably similar to NUnit's Assert.IsNotNull(someTestCondition) syntax.
  3. In NValidate, readability and performance are king. Consequently, it uses a fluent interface that allows you to chain the tests together, like so:

    Demand.That(foo, "foo").IsNotNull().HasLength(5).Matches("\\d{5}");

    This is a performance optimization that results in fewer objects created at runtime. It also allows you to do the tests in a smaller vertical space.

My concerns about attributes and reflection may not seem readily apparent until you consider the following: it's conceivable (in theory) that zealous developers could begin validating parameters in every frame of the stack. If the stack frame is sufficiently deep, the costs of invoking reflection to parse the metadata begins to add up. It may not seem significant yet, but consider the scenario where any one of those methods is recursive; perhaps it walks a binary tree, a DOM object, an XML document, or a directory containing lots of files and folders. When that happens, the costs of reflection can become prohibitively expensive.
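Here's a contrived sketch of what I'm describing (the directory-walking code is mine, not part of NValidate). The guard runs once per stack frame, so whatever the guard costs gets multiplied by the depth of the recursion. With a direct method call, that per-frame cost is tiny; if every frame instead had to reflect over attributes to discover its validation rules, the multiplier would be the same, but the per-frame cost would be far higher.

using System.IO;
using NValidate.Framework;

public class DirectoryWalker
{
    // Recursively counts files; the parameter check executes in every frame.
    public static int CountFiles(string path)
    {
        Demand.That(path, "path").IsNotNull();  // a direct call at every level of the recursion

        int count = Directory.GetFiles(path).Length;
        foreach (string subDir in Directory.GetDirectories(path))
        {
            count += CountFiles(subDir);  // one more frame, one more validation
        }
        return count;
    }
}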

In my book, that's simply not acceptable. And since, as a framework developer, I cannot predict or constrain where a user might invoke these methods, I must endeavor to make it as fast as possible. In other words, take the parameter information, create the appropriately typed validator, execute the test, and get the hell out as quickly as possible. Avoid any additional overhead at all costs.

Q. Why didn't you use interfaces in the design?

A. I go back and forth over this one all the time, and I keep coming back to the same answer: Interfaces would tie my hands.

Let's assume, for a moment, that we published NValidate using nothing but interfaces. Then, in a subsequent release, we decide we want to add new tests. Now we have a problem. We can't extend the interfaces without breaking the contract with clients who are built against NValidate. Sure, they'll likely have to recompile anyway; but if I add new methods to interfaces, they might have to recompile lots of assemblies. That's something I'd rather not force them to do.

On the other hand, abstract base classes allow me to extend classes and add new tests and new strongly typed validators fairly easily. Further, it eliminates casting (because that's handled by the factory). If, however, the system is using interfaces, some methods will return references to an interface, and some will return references to strongly typed validators, and some casting will have to be done at the point of call. I want to eliminate manual casting whenever I can, to keep that call to Demand.That as clean as possible: the cleaner it is, the more likely someone is to use it, because it's easy to do.
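A tiny, hypothetical example of what I mean by interfaces tying my hands (these type names are not NValidate's; they're only for illustration): adding a member to a published interface breaks every existing implementer, while adding a member with a default implementation to an abstract base class does not.

// Version 1 ships this interface; third parties write implementations of it.
public interface IStringValidator
{
    void IsNotNull();
}

// Version 2 wants a new test. Extending the interface is a breaking change:
// every existing implementer fails to compile until it adds Matches().
//
//     public interface IStringValidator
//     {
//         void IsNotNull();
//         void Matches(string pattern);   // breaks existing implementers
//     }

// With an abstract base class, version 2 can add the test in the base,
// and existing derived validators keep compiling unchanged.
public abstract class StringValidatorBase
{
    public abstract void IsNotNull();

    // New in version 2: a default implementation, so nothing downstream breaks.
    public virtual void Matches(string pattern)
    {
        // default behavior (for example, a Regex test) would go here
    }
}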

Q. Why not just use assertions instead of throwing exceptions?

A. This should be fairly obvious: Assertions don't survive into the release version of your software. Additionally, they don't work as you'd expect them to in a Web application (and rightly so, since they'd kill the ASP.NET worker process, and abort every session connected to it. [For a truly educational experience, set up a test web server, and issue a Visual Basic Stop statement from a DLL in your Web App. You'll kill the worker process, and it will be reset on the next request. Nifty.]).
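To illustrate the first point: System.Diagnostics.Debug.Assert is marked [Conditional("DEBUG")], so the compiler strips the call out of builds where DEBUG isn't defined (the typical release configuration), while an exception-throwing guard stays in every build. A minimal example:

using System;
using System.Diagnostics;

public class Account
{
    public void Withdraw(decimal amount)
    {
        // Compiled out entirely when DEBUG is not defined (typical release build):
        Debug.Assert(amount > 0, "amount must be positive");

        // Survives in every build configuration:
        if (amount <= 0)
            throw new ArgumentOutOfRangeException("amount", "amount must be positive");

        // ... withdrawal logic ...
    }
}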

Wisdom teaches us that the best laid plans of mice and men frequently fail. Your most thorough testing will miss some paths through your code. The chances of achieving 100% code coverage are pretty remote; if you do it with a high degree of frequency, I'm duly impressed (and I'd like to submit my resume). But for the rest of us, we know that some code never gets executed during testing, and some code gets executed, but doesn't get executed under the precise conditions that might reveal a subtle defect. That's why you want to leave those checks in the code. Yes, it's additional overhead. But wouldn't you rather know?

In Summary

Sure, these are tradeoffs in the design. But let's keep in mind who I'm targeting here: lazy programmers who are typically disinclined to write lots of code to validate their parameters. The idea is that we want to make it so easy that they're more likely to do it. In this case, less code hopefully leads to more validation, which (I hope) leads to fewer defects and higher quality software.

Thursday, March 6, 2008

The Value of Collaboration in Software Development

I've really come to appreciate the Answers section on LinkedIn. It's amazing the kinds of questions that will be asked, and the kinds of thoughtful and thought-provoking answers you'll find posted there. Today, I stumbled upon this intriguing question posted by Steven Burda of Sungard Data Systems, titled "The Reality: What you know? Who you know? Who they know? Or who knows YOU?!":

“it’s not what you know, but who you know”
“it’s not who you know, but who they know”
“it’s not who you know, but who knows YOU”
Lately, the enormous rise of virtual business and social networks is changing the traditional “networking” environment, and it is evident by the exponential growth of sites such as Linkedin and Facebook, just to name a few… Questions: What about the true value-added result of this human “capital” we’re connecting and “collecting” here… and its direct (and indirect) benefit to YOUR future, on both professional and personal level? But most importantly, please tell me: which of the above three statements do you agree or disagree with, and why?

These kinds of questions are frequently asked on LinkedIn, and they make you stop and seriously think before you just whip out a response. (If they don't, they certainly should.) Among the many responses, however, one struck a chord with me, because it resonates profoundly with how I feel about software development and the kind of culture I'm currently seeking in a company. This eloquent, insightful answer was posted by Charles "Charlie" Levenson of Multnomah Athletic Club (italics were added by me):

It's not who you know.
It's not who they know.
It's not even who knows you.
It's who YOU know who knows the things that you DON'T know.
The secret to networking for me has always been using my network to supplement the gaps in my capabilities and knowledge. Almost every project these days requires some level of COLLABORATION and there just are not that many Orson Wells around any more. I know that I will NEVER be as good at some parts of the process as other people I know already are, so why not collaborate with them. For instance, no matter how well I might write, produce, or direct, when it comes to MUSIC, that is simply a giant hole in my skill-set. I have to have a strong network of people I can work with who can provide music capabilities at the level I need to match the rest of the work I do.
For me, the network is about knowing who can do what I can't do so that together we can do great things.

This simple, forthright statement sums up the value of collaboration so eloquently that there's little more that needs to be said. (But you know I'm going to anyway, right?)

For me, having worked as a lone developer for three years, with no access to peers during the totality of that time, I hunger and thirst for collaboration to such a degree that it's nearly overwhelming. I am keenly aware of several facts regarding my current state as a software developer:

  • Where I work now, we've been working with a very finite set of technologies. During that time, a whole new suite of powerful technologies has emerged that I know nothing about, and with which I have never had the opportunity to work, because there simply wasn't a need or a desire on the part of the company to look at them. .NET 2.0 (and now 3.5), LINQ, Silverlight, WPF, WCF, Web Services, AJAX, multi-threaded applications, ClickOnce applications, web farms... The list goes on and on and on. I am keenly and painfully aware of the vast gaps in my knowledge that I desperately want to fill. But filling those gaps, for me, is a daunting task. How does one begin? I learn best by doing and discussing. Reading is slow and tedious, and requires that you trust the author, that you have the time and space to do it quietly, and that the printed materials are accurate and complete. My schedule tends to be quite busy; the only time I really get to read is right before I go to sleep, and that's not the best time to read technical material at all.
  • I have often made mistakes that could have been prevented through the simple mechanism of peer review. And it didn't even have to be a formal review. Water cooler discussions, informal whiteboarding sessions, lunchtime conversations, inter-cubicle discussions about a problem that I just couldn't figure out... any of these things could have helped me avoid many of the major headaches that I brought on myself. We often look at peer review with suspicion, thinking that it's going to be someone judging or criticizing our code. But the real value of peer review lies in someone being able to point out things that you hadn't considered. "Hey, did you know that there's a class or method in the Framework that already does that?" "This design pattern might actually solve that problem for you." "You might be able to solve that better with an interface instead of a class." "That design might be too tightly coupled; can you explain why you're doing it that way?" (After all, you might have a valid reason for doing it.)
  • I find it all too easy to just fall back on the familiar, cozy way of doing things the way that I have always done them. Without a fresh source of ideas, new perspectives, it's very difficult to think outside the box. When you work alone, it's virtually impossible. You have to rely on books, blogs, newsgroups, and Google. It's a hit-or-miss thing at best. But working with people that you know and trust, whom you know to be reputable and knowledgeable about their field, is a great way to gain insight into new ways of doing things that you might never have considered.

What all of this points to is that Charlie Levenson's comment struck home for me. I've been interviewing with companies for three weeks now. And in every interview, I've made it clear that I'm interviewing them every bit as much as they are interviewing me. I'm looking for a culture: a culture of collaboration, mentorship, and growth. I need to know that when I get up in the morning, I'm going to be excited about going into the office because somewhere, somehow, some piece of my knowledge base is going to expand: I'm going to learn something new.

I once blogged about refactoring as a way of self-improvement, that it's not just a coding activity, but that we should constantly strive to refactor our skill set. I stand firmly by that blog post. What's interesting now, though, is that I'm taking that mentality and applying it to my job search. I want a job where I can engage in constant refactoring of myself every day, where I can sponge knowledge off of my peers every day, and learn something new from them.

So, what Charlie says is absolutely true: I'm building a network of people who know things that I don't, so that they can fill in the gaps in my knowledge, and we can do great things together. With any luck, I can one day return the favor for them or someone else.

Thursday, February 28, 2008

Youth, Technology, and the Cool Factor

On LinkedIn, Bill Gates asked an interesting question.

How can we do more to encourage young people to pursue careers in science and technology?

There were many beautiful and eloquent responses, touting the need for youth to be engaged in solving the problems of the future, and how folks like Barack Obama and Bill Gates were shining examples inspiring youth to take up that charge.

I am going to go out on a limb here.

We can talk about "the problems of the future" and "technology" and "science" until we're blue in the face. But that's only going to captivate a small slice of the pie. Granted, it might be the slice that's naturally geared towards computers anyway, but in my mind, you want to grab as much of that pie as you possibly can, to plumb the rest of them to find out if they have latent talent that they don't even realize is there.

Look around you at the youth of today. Science, technology, the environment, Web Applications, and all the things that interest us are not foremost on their minds. What is on their minds is fun, excitement and the cool factor.

Apple knows this.

And please, don't take that as an Apple Fanboi comment. I am most certainly not an Apple Fanboi. I am an ardent believer in the No Silver Bullet tenet. I've been working with Microsoft technologies for 18 years now, and still swear by them. But clearly, Apple is the winner when it comes to cool, unless you're talking about desktop games. The iPod, the iPhone, OS X--they simply get out of the user's way and let them get to work and be cool when they want to do something.

Can we honestly say that about the Windows platform?

What Microsoft can do, and the technology industry in general, is make science, technology, and the problems of the future cool and fun to work on. Make it uncool not to be doing those things.

That means less emphasis on the geekiness, less technobabble, less bombarding everyone with acronyms that baffle even the most experienced developers.

Youth are excited by music, color, style, action, and the ability to do things with their friends. They're not typically excited by sitting alone in a darkened room, writing reams of code while munching on Cheetos.

You want to get youth involved in technology and science? Make it appealing to them at their level. Cut the costs associated with pursuing it, for one. Higher education is simply too damned expensive. And then, change our fundamental approaches to it in school. I don't know about you, but the Biology and Physics courses in my junior high school years were enough to put me in a zombie state. There was zero excitement. And I'm a science nut.

Appeal to youth; absolutely, appeal to youth. We need them. They ARE the future. Without them, we're doomed. But capture their enthusiasm early, and do it in ways that are sure to capture their imagination. Pie charts, timelines, Bunsen burners, Periodic Tables of the Elements...yeah, that's not working to capture their attention.

Monday, February 18, 2008

Interview Questions...the Flip Side of the Coin

So, I'm doing the interview thing again. It's taken me a long time to reach this point, but I'm hitting the interview trail, and seeking new employment. I've decided that I'm no longer comfortable working in a vacuum, and that what I really, deeply, truly crave is a collaborative environment and access to peers. I've been on a few interviews, and while they've gone well (a few have been grueling, and I've even gotten some offers from them), I've always found myself stumped by the end of the interview, when the interviewers turn to me and ask, "Do you have any questions?"

Now, I view an interview as an experience akin to that of buying a house or a car. Sure, the company is interviewing you, but you're interviewing them as well. It's a life-altering decision. For me, it's going to potentially set the next three to five years of my life (hopefully more, if the fit is right). But by the time I get to that part of the interview, my brain has been fried by the intense barrage of information, and I'm lucky if I can form a coherent sentence.

Today, I'm interviewing again. It'll be another face-to-face, and I'm talking to the HR director, and the technical lead. Having learned from the mistakes in the past, I wanted to have my questions for them ready up front. So, I reviewed the job posting, and the company's web site. I looked at their About Us page, and talked to a few friends about interview questions they had asked. Here are the questions I plan to ask today:

HR
  • How long does the average employee remain employed at your company?
  • Describe the benefits package.
  • On your website, you describe the company as having a “family oriented” culture. Describe what you mean by “family oriented.”
  • In general, do employees at the company associate with one another after work, either as a product of work-sponsored events, or because they’ve formed friendships in that environment?
  • What is the dress code at your company?
Technology
  • Has the system already been specified and fleshed out? If not, how much time has been allocated to its specification and design?
  • Is the system to be designed a straight port of an existing application, or a whole new system being designed from scratch?
  • How large is the current development team and how is it organized?
  • Are you using automated unit testing? Refactoring? Code reviews? Any other similar processes?
  • What kind of software development process is in place?
  • How aggressive is the development schedule?
  • Can you describe your release process to me?
  • What kind of source code control system are you using? (I’ll need to know it so I can familiarize myself with it if I haven’t used it.)
  • How are defects tracked, monitored, and corrected?

I'm fairly satisfied with most of my tech questions. I have a good idea what I'm looking for in the tech field. But the HR questions revolve around the culture I'm looking for, and I'm not so sure about those. So I leapt onto Google and searched for technical interview questions. The results were astonishing. You can find all kinds of sites that tell interviewers what kinds of questions to ask candidates, but you can't find anything that tells a candidate what questions to ask to make sure the company is a good fit for them, given what they're looking for.

In my particular case, working with people who can help correct deficiencies in my knowledge is more important than the money. I'll take a cut in pay to get access to people and current technologies. So finding the right company is of paramount importance to me. (It's why I've turned down offers.) But what questions do you ask to verify that?

It seems to me that we could really use a good website that addresses this side of the coin. I'll keep looking for one. If anyone has any pointers, I'd appreciate a link!

Tuesday, January 29, 2008

SOX Compliance and the Waterfall Method

So here I am, working away at my job, coding in a vacuum as always, a development team of one. Some things never change. But other things, inevitably, do.

Our company was recently purchased. The new company has grand plans to eventually go public. With its eyes set on that prize, it has hired a consulting firm to help it achieve SOX compliance. This firm (which shall remain nameless) is busily churning out reams of process drafts to help us in that endeavor and submitting them to us for approval. It was only a matter of time before the SDLC for software development arrived on my desk for review.

Now, for those not familiar with how things run at our company, I'll simply refer you to this post, which rather succinctly sums it up. While some things have changed, most things, by and large, remain status quo. I have managed to convince them of the value of hiring temp testers prior to releasing builds, so we've shown moderate improvement there. But I'm still wearing tons of hats, and completely driving the entire development process single-handedly. And the company adamantly refuses to hire any other developers to help out. It's also worth noting that since the acquisition, the number of new software projects piling up on my to-do list is rapidly approaching double digits. So any process that these guys throw at me is going to affect me, and it's going to affect me pretty damned profoundly.

You can imagine my utter shock and amazement when the plan that was presented to me for review and acceptance clearly stated that we were to implement, in excruciating detail, the Waterfall Method.

This presented several problems to me right off the bat:

  1. Whoever presented this plan is clearly unaware that the waterfall method doesn't work. The very man who first described it, Winston Royce, pointed out that it doesn't work, and suggested an iterative model as a superior alternative. See the actual article for proof.
  2. There is no way that we'd be able to implement that process with a development staff of one person. The process outlined requires that the roles are separately defined and filled by distinct individuals. We don't have individuals to fill those separate roles, and the company refuses to hire them.
  3. Even if we did implement the process, the timeline to implement software solutions for our customers would become so bloated that the customers would drop us like a rock. Our biggest customer demands a release every three months. If we adopted the Waterfall Model as it's spelled out in the SOX-compliant process they submitted, it would take three months just to spec out the iteration. Not that the model would even permit an iteration.

Consider this quote, from 2004, for crying out loud:

Asked for the chief reasons project success rates have improved, Standish Chairman Jim Johnson says, “The primary reason is the projects have gotten a lot smaller. Doing projects with iterative processing as opposed to the waterfall method, which called for all project requirements to be defined up front, is a major step forward.”

In his blog entry, Waterfall Method: A Colossal Blunder, Jeff Sutherland points out the following interesting tidbits in his comments:

The Waterfall process is a "colossal" blunder because it has cost 100s of billions of dollars of failed projects in the U.S. alone. Capers Jones noted 63% failure rates in projects over 1M lines of code in 1993. By the late 1990's, military analysts were documenting a 75% failure rate on billions of dollars worth of projects. In the U.K. the failure rate was 87%.

...

Let me reiterate, for projects over $3M-$5M, the Waterfall has an 85% failure rate. For those projects that are successful, an average of 65% of the software is never used. The Waterfall is a collosal blunder. The most successful Waterfall company I have worked with had a 100% Waterfall project success rate with on time, on features, and on budget. This led to a 100% failure rate in customer acceptance because the customer's business had changed or because the customer did not understand the requirements.

In his article, Improve Your Odds of Project Success, in SAP NetWeaver Magazine, David Bromlow provides a chart showing how the Waterfall Method makes it difficult to start effectively managing project risk until much later in the project, compared to more agile methodologies.

In their article, From Waterfall to Evolutionary Development (EVO), Trond Johansen and Tom Gilb had this to say:

After a few years with the Waterfall model, we experienced aspects of the model that we didn’t like:

  • Risk mitigation was postponed until late stages;
  • Document-based verification was postponed until late stages;
  • Attempts to stipulate unstable requirements too early: change of requirements is perceived as a bad thing in waterfall;
  • Operational problems discovered too late in the process (acceptance testing);
  • Lengthy modification cycles, and much rework;
  • Most importantly, the requirements were nearly entirely focused on functionality, not on quality attributes.

Others have reported similar experiences:

  • In a study of failure factors in 1027 IT projects in the UK, scope management related to Waterfall practices was cited to be the largest problems in 82% of the projects. Only approximately 13% of the projects surveyed didn’t fail (Taylor 2000);
  • A large project study, Chaos 2000 by The Standish Group showed that 45% of requirements in early specifications were never used (Johnson 2002).

Finally, I'll offer this, from the article Proof Positive by Scott Ambler in Dr. Dobb's Journal:

Agility’s been around long enough now that a significant amount of proof is emerging. Craig Larman, in his new book Agile and Iterative Development: A Manager’s Guide (Addison-Wesley, 2003), summarizes a vast array of writings pertaining to both iterative and incremental (I&I) development, two of agility’s most crucial tenets, noting the positive I&I experiences of software thought leaders (including Harlan Mills, Barry Boehm, Tom Gilb, Tom DeMarco, Ed Yourdon, Fred Brooks and James Martin). More importantly, he discusses extensive studies that examine the success factors of software development. For example, he quotes a 2003 study conducted by Allen MacCormack and colleagues, to be published in IEEE Software, which looked at a collection of project teams of a median size of nine developers and 14 months’ duration. Seventy-five percent of the project teams took an iterative and incremental approach, and 25 percent used the waterfall method. The study found that releasing an iteration’s result earlier in the lifecycle seems to contribute to a lower defect rate and higher productivity, and also revealed a weak relationship between the completeness of a detailed design specification and a lower defect rate. Larman also cites a 2003 Australian study of agile methods, in which 88 percent of organizations found improved productivity, 84 percent experienced improved quality, 46 percent had no change to the cost of development, and 49 percent lowered costs. He also cites evidence that serial approaches to development, larger projects and longer release cycles lead to a greater incidence of project failure. A 2001 British study of 1,027 projects, for example, revealed that scope management related to waterfall practices, including detailed design up-front, was the single largest factor contributing to failure, cited by 82 percent of project teams.

So, with all this overwhelming information at our disposal (which is just the little bit I could scrape up with Google in about an hour), and years of historical evidence proving empirically that Waterfall doesn't work, why on earth would you impose it as the one and only process to be used for all projects, regardless of size or complexity, across your entire organization?

It's like voluntarily picking up a cursed +4 Vorpal Sword of Mighty Cleaving: it chops your own head off the moment you touch it.

It's sheer, absolute lunacy. Particularly in our case, where we lack the time, the resources, and the desire to acquire the resources to properly implement it as it's written. We'll be bogged down in a bureaucratic quagmire of Dagobah proportions.

You'll have to forgive me if this seems like a rant. But that's exactly what it is.

It might be time to brush off that resume. Sometimes, enough lunacy just piles up that you start to realize that there's no one behind the wheel who has any firing synapses in the brain.

Friday, January 25, 2008

Is VB.NET vs. C# Really Just Syntactic Sugar?

I recently read somewhere, as I have read before, that there aren’t any really compelling differences between C# and VB.NET. As has often been repeated, the differences all really boil down to “syntactic sugar.” C# is nice and terse, deriving its tight syntax from C and C++, while Visual Basic uses verbose language in an attempt to achieve greater clarity. Once the compilers get the code, though, it’s all supposed to be the same MSIL that gets generated, because you’re targeting the same .NET Framework.

So, it’s been asked, why would you choose one over the other?

That’s a fairly intriguing question. I’ve been working with VB and VB.NET for a really long time, and I’ve also had the opportunity to work with C, C++, Java, and C#. I like them all. I’d have to say that you can’t really beat VB for getting something up and running really damn fast.

But I’ve started to get this really deep-seated gnawing in the pit of my gut about what kinds of bad habits I’ve picked up over the years. VB has a reputation for doing things “automagically” to ease your life for you. Implicit type casting, dynamic variable allocation, case insensitivity, and a host of other little time-savers are designed to shield you from the nitty-gritty details of brain-cramping compiler complexities.

As a thought experiment, I took a large code base here at the office and ran it through C-Sharpener, a utility that converts VB.NET code to C#. Now, as a rule, I figure I write fairly safe code. I try to avoid reliance on the Microsoft.VisualBasic namespace, and use the Framework code instead. I always use Option Strict On (except for one particular class that used Reflection), and always explicitly define my variables. I’m a huge fan of type safety, so that wasn’t a concern to me.

What I didn’t expect to find were the things I’d been doing that C# complained about, things it told me, in no uncertain terms, were foolishness.

For instance, in Visual Basic, this is perfectly acceptable:

Imports System.Diagnostics
Dim log As New EventLog("Application", Environment.MachineName, "MyApp")
log.WriteEntry("MyApp", "Message Text", EventLogEntryType.Information)

(Ignore the crappy code. Just focus on the point I’m making.)

This will get you a wrist-slap from the C# compiler. Why? Because that particular overload of the WriteEntry method is static. You can’t invoke static methods from an instance variable in C#. The compiler flatly refuses to let you do so. Visual Basic, on the other hand, thinks that’s just fine and dandy; it resolves the issue on the fly for you.
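
Here’s roughly what C# makes you write instead. (This is just an illustrative sketch, not the converter’s output, but both overloads shown here really do exist on EventLog.)

using System;
using System.Diagnostics;

// The three-argument overload is static, so C# makes you call it through the type:
EventLog.WriteEntry("MyApp", "Message Text", EventLogEntryType.Information);

// Or, if you really do want an instance, use an instance overload; the source
// comes from the EventLog object itself:
EventLog log = new EventLog("Application", Environment.MachineName, "MyApp");
log.WriteEntry("Message Text", EventLogEntryType.Information);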

Does that sound like syntactic sugar to you?

In Visual Basic, this is just fine and dandy:

If CInt(txtQuantity.Text) Then
   ' Do something spectacular
End If

Visual Basic helpfully converts the result of CInt to a Boolean in order to evaluate the If...Then statement. If it’s nonzero, you get True and something spectacular happens. In C#, you get a lovely compiler error about not being able to cast an int to a bool. Why? Because an int isn’t a bool, stupid!

"Yeah, yeah. So what? But I always want to do that." Good. So prove it. Explicitly cast, and for Pete’s sake work with the right data type. It shows me that you’ve thought about that when you wrote it. Visual Basic doesn’t force you to make your intent clear. C# does.

if ( 0 != int.Parse(txtQuantity.Text) ) {
  // do something spectacular
}

Again, does that sound like syntactic sugar to you? Remember, intent != syntax.

In Visual Basic, you can do this for days and the compiler will pat you on the back while you do it:

Public Function FooBar() As Integer
   Dim result As Integer
   Return result
   ' Do some more real work--that's unreachable
   Return result
End Function

Does the compiler care? Nope. Not a peep. C#, on the other hand, gives you this nifty warning: “Unreachable code detected.” Then it gives you the file name and line number where it’s at. It’s like your best friend saying, “Hey man, you really don’t want to do that.”
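
For the curious, here’s a rough hand-written C# equivalent (not the converter’s actual output), with the statements that trigger the warning:

public int FooBar()
{
    int result = 0;
    return result;

    // Everything below the first return produces "warning CS0162: Unreachable code detected."
    result = 42;
    return result;
}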

There’s no way that’s just syntactic sugar.

So here I am, looking at this project that I’ve converted, and I’m both pleased and shocked. Pleased because the number of conversion issues and errors is relatively minor. Shocked because I found myself doing things that I didn’t think I was doing. They just crept up on me and seeped in, the way bad habits all too often do.

I’ve been wanting to make the switch from VB to C# for some time now. Doing this conversion turned out to be a good thing for one very compelling reason: it opened my eyes to the mistakes I’ve been making, the bad habits I’ve adopted. I’m sold on C# now as my full time language. I’ll miss the speed of development of VB, but if slowing down means I write higher quality code that contains fewer bugs that have to be squashed later at higher cost, isn’t that worth it?

In the end, the point of this post is this: C# and VB differ by more than syntactic sugar. The power of their compilers and the strictness of their adherence to OO principles also separate them. I’d wager that it’s C#’s strictness that makes its compiler so much more powerful than VB’s. I certainly don’t see messages about unreachable code, expressions never being of the provided type, static method invocation, and so forth from VB.

So please. Don’t over-simplify the issue. View the languages for what they are, and use the one that’s appropriate for what you’re doing, and how you work. For me, I’m making the switch. It makes sense for me. It won’t for everyone. But I am, by definition, an obsessive-compulsive control freak. I demand to know what I’m doing wrong and then I want to ruthlessly correct it. And I can’t reasonably ask for a harsher taskmaster at this point than an unforgiving, absolutist object oriented compiler.

Thursday, January 24, 2008

Certified at Last!

Okay, so it's not my Microsoft certification yet. But it's something:

NerdTests.com says I'm a High Nerd.

 

Being a big fan of measurable, quantifiable scores, I'd say it's nice that, when someone asks, I can give them hard numbers to back up my claim: "Yes, I am, in fact, a nerd. Thank you very much for asking."

This ought to look really good in my email signature:

Michael Hofer
High Nerd
One More Pointless Blog

I can't wait to use it!

Tuesday, January 22, 2008

Heath Ledger Has Died

While cruising the City of Heroes forums, I was stunned to read that Heath Ledger has passed away.

I'm stunned. As many know, Heath had just wrapped up his portrayal of the Joker in The Dark Knight. His passing will make it very eerie to watch him on the screen when the movie comes out.

For those who don't know much about him, check out his IMDB profile. Heath wasn't just another pretty face around Hollywood. He picked his projects carefully to avoid being pigeonholed. He'll be missed.

Monday, January 14, 2008

Curious Perversions in Usability

We've all seen them, and we've all used them: applications foisted upon us by the well-meaning management masses who wanted us to conform to the standard in order to boost our productivity, or the latest whiz-bang website promising to revolutionize its niche market. While the product itself might actually solve a unique problem, or offer a plethora of enticing, well designed features under the hood, its user interface frustrates, confuses, obscures, and clutters.

When it comes to user interfaces, drill this one simple idea into your mind: Simple, clean interfaces will win out over flash, pomp and circumstance every single time. Why? Because a user interface should get out of the user's way; it should not impede them, confuse them, obscure the information they need to find, or be cluttered with crap they're really not interested in.

We all have different views about what makes any given piece of software more usable. But certain things tend to peeve users fairly consistently. These are my biggest peeves when it comes to Web pages:

  • Don't make me wait. Don't make the mistake of thinking that performance isn't a part of your user interface. There are always numerous ways to display the same piece of information, and some are faster than others. Given a choice between having to wait for a Flash or PDF download of a pretty picture, or a flat GIF/JPEG, which do you think most users would prefer? (And if you think Flash hasn't been proposed for this, think again.) Use the smallest, most compact presentation format that will get the job done right.
  • Don't make me scroll. Especially not horizontally. Smaller pages work better. Horizontally scrolling pages are counter-intuitive, and people tend to have a hard time shifting into a mode where they're comfortable scrolling in that direction. Sometimes you simply can't get around it, but that should be the exception rather than the rule. In those cases, under no circumstances should you move the main navigational controls off the screen. In Web applications, embed the scrolling content in scrolling DIVs (or other suitable controls) to ensure that users can still reach your navigational controls without having to page through the document.
  • Don't make me squint. Use a reasonable font size that even the visually impaired can comfortably read. Better yet, use a font size that scales when the user chooses a different size in the browser. Don't force your font size on the user; not everyone has 20/20 vision.
  • Don't make me guess what language you've written that document in. Use a clear, legible font. You may like your decorative fonts, but they're not suitable for body text, forms, or general deployment on Web pages. Most users won't have them, and they won't look the same. Use standard fonts.
  • Don't make me wonder if there's something written in any area of the page or screen. Don't use dark text on a dark background. Don't use light text on a light background. Strong contrast enhances legibility.
  • Don't hide important information from me. Place important information at eye level. Use font weights, color, and styles to emphasize important information. Place this information prominently on the page, where I can easily see it. Don't obscure it in the page.
  • Don't hyperlink everything on the page. A hyperlink should indicate that there's something worth investigating. If everything on the page is hyperlinked, hyperlinks lose their value, and I'll tend to ignore them. Hyperlink the important topics. If you need to hyperlink lots of topics, provide a section at the bottom of the page called See Also or References and include those links there.
  • Don't obscure or complicate hyperlinks. If you change the style of a hyperlink so that I don't know it's a hyperlink, I won't know what to look for. Don't overly complicate them. Hyperlinks are an established navigational paradigm for the Web (and even desktop software) and everyone knows what they are and how they work. Leave them alone. Users already know how to use them.
  • Don't make me jump through hoops to find the commands or features I need to use. Don't invent an entirely new way of navigating your web site or application. There are a number of existing navigational paradigms that are well established and with which users are very familiar: drop down menus, tree views, bread crumbs, tabs and commands, and so forth. Don't confuse users by making them learn something completely new.
  • Don't surprise me by reconfiguring the user interface when I do something. If I click a button or a menu command and the entire user interface changes, or entire menus disappear, we've got a problem. The user interface, and the navigational system in particular, needs to be consistent and predictable. If it's not, users will be playing a constant guessing game about what they can and should do next. Users playing a guessing game are dangerous users.
  • Don't baffle me with technical jargon or confusing messages.  When something goes wrong, or when I've done something wrong, communicate it clearly and concisely. Tell me what I can do about it. Recover gracefully. Don't just throw up some message box that announces "An error occurred. Press OK to continue." Duh. What should I do next? Should I tell someone? If so, whom? Is my data safe? Do I need to start over?
  • Use consistent language. Don't call it Cancel on one screen and Abort on another. Don't use Logon Name on one screen and Sign In on another. Be consistent. Establish a vocabulary and stick to it.
  • Don't waste my time prompting me in an intrusive way to take part in your survey. I'm not interested in taking part in your survey. Put the offer to take part in a prominent place in your site or program that isn't intrusive. If I'm interested, I'll take you up on the offer. Otherwise, I'm going to close the DIV because you were rude enough to cover up the content that I was looking for with your intrusive popup. The same goes for popup ads. (But we all know how well that's going to go over.)
  • Don't play sound or streaming video as soon as the page loads. If I want to see it, I'll start it myself. You're chewing up my bandwidth, thank you very much. If I'm from an area where that's a precious commodity, that's the height of rudeness. Give me the opportunity to start the sound or video when I want to and if I choose to do so. This includes all forms of linked and embedded media, including Flash.
  • Don't order me to get a better browser. You don't know what browser is best for me. I may like Firefox, IE, Safari, Opera, Navigator, or some as yet unnamed browser still emerging. You may be able to say that your site doesn't support browsers outside a certain set, but it is gauche to insist that your browser of choice is the one and only true browser, whichever browser that may be. Competition is actually good for the industry.
  • Don't assume that my monitor is as big as your monitor. Just because your company has a standard video configuration that supports 1024×768 doesn't mean that's what your users are configured for. A vast number of users are still set at 800×600. This resolution isn't a matter of laziness, but of simple visual acuity: they can't see anything at a higher resolution. Design for 800×600. Ensure your pages fit on a monitor at that resolution. Doing so means users don't have to scroll horizontally. It also means that your pages will print properly if the user hits the Print button from the browser and is printing in landscape mode.

Yes, this is surely opinionated. Yes, I'm sure I'll take heat for it. But here's where I'm coming from: I've used lots of Web sites, and I've had to design lots of Web pages for The Average Computer User(tm). For those users, all of these things have turned out to be true. Ask yourself why Google's search engine is insanely popular. It's not just that it covers the vast majority of the Internet; most people don't even know how to use a search engine effectively enough to get the results they really want. It's because the search page is so simple that it's almost pristine. It's foolproof. Type what you want in the box and click Search. The results pages come up and show you what matches your search criteria. It's simplicity defined.

Apple's computers have always been lauded as a breathtaking departure from the technical complexity inherent in Windows. Their user interfaces are simple, clean, and easy to use. They're the hallmark of Apple's software. One could argue that the simplicity of Apple's user interfaces is what defines them more than their hardware. Again, simplicity prevails, because the user interface gets out of the user's way, and lets the user get her job done.

This is what we should be striving for. Design a Web page that is simple, clean, and gets out of the user's way. Don't confuse them. Be predictable in the way you behave. Be forthright and clear in the way you communicate. Use strong contrasting colors, legible fonts and sizes, don't reinvent the Web navigation paradigm, keep the navigation system where users can reach it, and avoid technologies that will degrade the user's experience.

A Web site can do all that and still be beautiful. CSS allows us to do that. There's no reason you can't be clean, predictable, communicative, unobtrusive, and beautiful all at the same time. You just have to choose to do so. And you have to put your users' needs above your own desire to use the latest flashy, slow, whiz-bang technologies that don't really get you anything more than older, stable, less impressive technologies that accomplish the same thing.

Who's Testing Your Software?

There's a common mistake in software development: trusting the developers to test the software. Historically speaking, developers are the worst kind of testers, because we tend to use the software only as we designed it to be used. It takes a special kind of developer to be able to think outside the box and think like a user with little or no computer savvy.

In the comment thread to the article, Microsoft Admits Vista Update Glitch, one poster made this point:

Beta testing is not getting the bugs out of software because they got the wrong people doing it. Don't use computer savy [sic] people to beta test, use people like my wife who don't have a clue what makes the computer work. She can discover any glitch in software code, guaranteed. Her gift also applies to use of TV remote controls, etc.

To which came this reply (edited for brevity):

This is the best answer I have read for several years. Beta testers are people who do not do things that cause problems, rather, they look for features and bugs that are sometimes not there...The best Beta testers are people who are not knowledgeable and those who don't know the difference of double or single click.

These folks are referring specifically to Microsoft's beta tests for its operating systems (more specifically, for Windows Vista). But the general sentiment is true and universal: users who have never been exposed to your software, and who have had little exposure to technology in general, are frequently the best ones to determine whether or not it actually works. They have a disturbingly accurate ability to ferret out bugs that borders on the psychic.

As developers, we like to believe that our software is rock solid, easy to use, painfully obvious, and bulletproof. A user who can't tell the difference between clicking and double-clicking, or who doesn't understand why it's a bad idea to keep lots of applications open at once on a machine with limited resources, is the prime candidate for testing your software. If it's a Web application, find someone who's rarely used the Web or who only uses it for the basics: IM and email. One thing they'll be able to tell you right away is whether or not the user interface is actually usable. And if you think for one minute that you shouldn't be designing clean, minimalist interfaces for the lowest common denominator of user, you've probably never met the average computer user. There are far more of them than there are of us.

We have some pretty interesting users for our Web applications. Some of them are fond of ignoring on-screen instructions. Tooltips, online help, field prompts, clearly written button text, user training...not much of that seems to make a difference. When all of that fails, what does the application do? How robust is it? How gracefully does it handle bad user behavior? For that matter, how gracefully does it recover from bad application, network, or hardware behavior? And does it alert the user to that kind of thing in a clear, friendly, and meaningful way?

You can't determine that sort of behavior by trusting your developers or your unit tests to find them. Inexperienced users will find far more than your tech savvy users will. That's not to say that your testing team shouldn't include tech savvy users; it absolutely should. But make sure that you include novice computer and Web users in your testing team.

Sunday, January 13, 2008

Software Release Engineering

In his Coding Horror post titled How Should We Teach Computer Science?, Jeff Atwood blogs about the lack of coverage of release engineering in computer science courses. At best, he points out, it's given cursory coverage in these courses.

Now, I'm a self-taught developer. I started programming computers in 1985 or so, and I've taught myself everything I know. So I can't really comment about what the courses in a college or university are like. But I can say this: experience has taught me that a few things he says are absolutely, undeniably true. So in this article, I'm going to enumerate those things I think are really important, and how I built a software release process at the company I worked for.

The Ugly Truth About Release Engineering

  1. Release Engineering is not simply deploying your product. There's a reason it's called engineering. It involves getting the latest version of the build from source code control, building the software, executing unit tests, building installers, and labeling the build if it's correctly built. It may involve pinning or branching the build. It requires a daily "good code" check-in time policy. It requires daily builds to ensure that you have software that compiles every day, a means of notifying folks when the build is broken, and a commitment to fixing build-breaking code right away. It's NOT simple.
  2. Consistent, disciplined use of source-code control is the bedrock of release engineering. At any given time, you might need to fix a bug in the prior release. That's hard to do if you've already started changing the code for the new release. Branched builds allow you to do that. Also, versioned files in the repository allow you to view the history of changes to a file and recover from unintentional changes. You can also develop for multiple platforms while using many of the same files, sharing them across projects without having to worry if they're out of sync. Labels on files and projects tell you exactly which version of a file was used to create any given build, so that you can recreate a project from the repository if you need to.
  3. Building for your environment is not enough. You need a test environment that mimics your client's environment as closely as you can make it, down to the OS, the browser, the applications and the add-ons. Just because it runs on your machine when you press F5 from Visual Studio does not mean it's going to run on the client's machine. If you're developing for multiple browsers, install those browsers and test for them.
    (Ugly true story: our company accidentally allowed IE7 through the group policies. We had IE7 deployed everywhere. Our clients don't plan to upgrade to IE7 for another year, at least. Our product must run on IE6. I had to create a separate machine that was safe from IE7 downloads and strictly ran IE6 to be certain the product ran correctly.)
  4. F5 is not enough. Every build should be a clean build. Every build. Don't ship files to the customer that aren't required to run the software. Create a build script that does the job. Excluding a file from a Visual Studio project doesn't delete the file from the folder, but does leave it in source code control (a good thing for versioning). To ensure a clean release, have your release script remove files you aren't using prior to shipment.
  5. You need a check-in-time policy. All good source code must be checked in by a certain time every day. Code that isn't checked in does not make it into the daily build. This check-in time should be early enough that the release manager can start the daily build (if it's a manual process), or make the rounds and make sure that all code is checked in prior to it. I favor end-of-day check-ins (around 4 PM) for nightly builds, but each organization is different.
  6. The software must successfully build every day. A successful build is a good sign of project health. An automated build tool can be set up to execute the build in the off hours after everyone has gone home, and after all files are checked in. Once the build is complete, the pass/fail report is sent to your release manager. However, just because it compiles doesn't mean that it's entirely healthy or bug-free. Therefore...
  7. Automated unit tests should be executed on every build. If you aren't using automated unit tests, you should be. They're not hard to learn, the tools to create them are freely available, and they can improve the stability and quality of your code immeasurably. (If you've never written one, there's a minimal example after this list.) Incorporate the unit tests into your build script so that they're executed every time you build the software. Correctly written unit tests alert you to build-breaking defects quickly and immediately.
  8. Build-breaking defects must be resolved before anything else. This includes any defect that causes the software to fail to compile or any defect that causes a unit test to fail. The team must adopt a "drop everything and fix the build" mentality. In my own personal experience, this view is not easily accepted in the early stages of a project, but during the later stages, when there's typically a "crunch" mode, and the build isn't riddled with build-breaking defects, developers are thankful that those defects simply aren't there.
  9. You need a release manager. While you might have many people who contribute to the build, checking in changes and adding new content, you need one person whose primary responsibility is to ensure that the software builds properly every day. That individual is also responsible for your installer, and for identifying the code that breaks the build and ensuring that it gets resolved. The release manager doesn't resolve the defects himself unless he checked in the build-breaking defect (since he doesn't know anything about the defect); rather, he must play the role of the hard-nosed drill sergeant ensuring that the coder who checked it in drops everything to fix the build right now. If you can't build the product, you can't ship the product, and anything else that developer might be working on is a moot point. It's an ugly, painful job, but it's crucial.
  10. You need a dedicated build server. This machine is clean, and does nothing but build your software. This guarantees that it injects no artifacts into your final product. It runs the daily build, executes the unit tests, and sends out the notifications when the build passes or fails. It might also house archived copies of each build's source code and binaries. It must be on the network, and should be backed up regularly. The Release Manager should have access to it, but no one else on the development team.
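
About item 7: if you've never written an automated unit test, here's about the smallest useful example I can come up with, using NUnit (one of those freely available tools). The OrderCalculator class is a made-up stand-in for whatever code you actually want to protect against regressions; nothing here comes from a real project.

using NUnit.Framework;

// A hypothetical class under test.
public class OrderCalculator
{
    public decimal ExtendedPrice(int quantity, decimal unitPrice)
    {
        return quantity * unitPrice;
    }
}

// The tests. The NUnit console runner returns a nonzero exit code when any
// assertion fails, so a failing test can fail the nightly build.
[TestFixture]
public class OrderCalculatorTests
{
    [Test]
    public void ExtendedPrice_MultipliesQuantityByUnitPrice()
    {
        OrderCalculator calc = new OrderCalculator();
        Assert.AreEqual(25.00m, calc.ExtendedPrice(5, 5.00m));
    }

    [Test]
    public void ExtendedPrice_IsZeroWhenQuantityIsZero()
    {
        OrderCalculator calc = new OrderCalculator();
        Assert.AreEqual(0.00m, calc.ExtendedPrice(0, 9.99m));
    }
}

Wire the console runner into your build script, and a failing assertion becomes a build-breaking defect, which is exactly what item 8 counts on.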

My Own Personal Release Process

It bears noting here that I've done the release process for two different companies. At one, it was for a full team of developers (about twenty of them), and the release process there was a nightmare. At that time, we couldn't get a release out in a month if we tried. So I volunteered to take on the job, and redesigned the process. It took about a week to get the process reengineered and everyone on board, but after two weeks we had daily builds working and everything was going much more smoothly.

I took many of those same principles and applied them to my new job. Clearly, some of them don't apply in a single-developer shop. But the basic principles are the same.

Source Code Control
  1. All developers must use source code control.
  2. All working, compilable code that does not break the build must be checked in by 3 PM every day. Code that is not checked in at this time does not make it into the daily build.
  3. User names and passwords are required for accessing source code control.
  4. The admin password is written on a piece of paper, sealed in an envelope, and stored in the CIO's desk. No one else has it.
  5. Minimal rights to access the repository based on need are granted.
  6. The main tree has the following subprojects: Build and Dev. The two subprojects mirror each other's structure. Build is where the branched and pinned copies of successful builds live. Mainline development takes place under the Dev tree.
  7. Every file that is required to create or ship the project is included in source code control: source files, SQL scripts, Web pages, images, build scripts, unit tests, test plans, requirements documentation, etc.
  8. Because we use SourceSafe, every weekend, during off-peak hours, regularly scheduled maintenance is performed on the repository to keep it in tip-top shape.
  9. The repository is stored on the network. This folder is backed up incrementally every night and fully every week.
Build Process
  1. Every afternoon, at 3 PM, all developers must have code they want included in the build checked into the repository.
  2. The release manager does a final verification at 3:15 to ensure that all code is checked in.
  3. An automated script fires off the build at 3:30 PM (one way of wiring such a script together is sketched after this list). It does the following:
    1. Cleans the build folders on the build server. This involves deleting all files and folders from the project's build folder, ensuring a clean build.
    2. Gets the latest version of the software from the DEV tree in the repository.
    3. Compiles the software and all of its dependencies. If the compilation fails, an email with high importance is sent to the release manager, notifying him of the failure, and the script aborts.
    4. Executes the unit tests. The unit test results are output to a text log file, which is then sent to the release manager in an email.
    5. Executes a cleanup batch file that ensures that any files that should not be shipped with the product are removed.
    6. Creates the installer or archives the build into a ZIP file.
    7. Labels the build in the repository.
    8. If the build was successful, sends a "Build success" message to the release manager.
  4. Note that step 3 may execute multiple times depending on whether you are targeting multiple platforms or releases (such as Debug and Release, or various browsers, or various OSes).
  5. Upon receipt of a build failure email, the Release Manager reviews its contents, and identifies the offending source code. He then determines who checked that code in, and contacts that developer and asks them to resolve the defect as soon as possible.

    Important: Except in the direst of circumstances, the release manager should not attempt to fix someone else's defects. He should ask the developer to fix his own defects. If the release manager takes this task on himself, he'll quickly become inundated trying to fix all the build-breaking defects, and won't have time to do his own work.

  6. The developer resolves the build-breaking defect and checks in the change for inclusion in the next daily build.
  7. If enough build-breaking defects were present, the Release Manager may choose to manually rebuild the software once defect corrections are checked in.
  8. If the build is shipped to the customer, it is labeled, pinned and branched into the BUILD tree in the source code repository.
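
As promised above, here's one way the automated build script could be wired together, sketched here in C# as a small console program. This is not our actual script; the tool names, folder paths, command-line arguments, SMTP server, and email addresses are all placeholders you would replace with your own, and your shop might just as easily use a batch file, NAnt, or anything else that can run the same steps.

using System;
using System.Diagnostics;
using System.IO;
using System.Net.Mail;

// Hypothetical nightly build driver. Every constant below is a placeholder.
class NightlyBuild
{
    const string BuildFolder    = @"C:\Builds\MyProduct";                                 // placeholder
    const string Solution       = @"C:\Builds\MyProduct\MyProduct.sln";                   // placeholder
    const string TestAssembly   = @"C:\Builds\MyProduct\bin\Release\MyProduct.Tests.dll"; // placeholder
    const string ReleaseManager = "release.manager@example.com";                          // placeholder

    static void Main()
    {
        // 1. Clean the build folder so nothing from a previous build leaks into this one.
        if (Directory.Exists(BuildFolder))
            Directory.Delete(BuildFolder, true);
        Directory.CreateDirectory(BuildFolder);

        // 2. Get the latest source from the repository (placeholder SourceSafe command line;
        //    substitute whatever your source control tool uses).
        if (Run("ss.exe", "Get $/MyProduct -R") != 0)
        {
            Notify("Nightly build FAILED: could not get the latest source.");
            return;
        }

        // 3. Compile the solution and all of its dependencies.
        if (Run("msbuild.exe", "\"" + Solution + "\" /p:Configuration=Release") != 0)
        {
            Notify("Nightly build FAILED: compilation errors. See the build log.");
            return;
        }

        // 4. Execute the automated unit tests (placeholder NUnit console runner).
        if (Run("nunit-console.exe", "\"" + TestAssembly + "\"") != 0)
        {
            Notify("Nightly build FAILED: one or more unit tests failed.");
            return;
        }

        // 5-8. The cleanup pass, the installer or ZIP, the label, and the success message
        //      all follow the same Run(...) pattern; only the success message is shown here.
        Notify("Build success.");
    }

    // Runs an external tool, waits for it to finish, and returns its exit code.
    static int Run(string fileName, string arguments)
    {
        ProcessStartInfo info = new ProcessStartInfo(fileName, arguments);
        info.UseShellExecute = false;
        using (Process process = Process.Start(info))
        {
            process.WaitForExit();
            return process.ExitCode;
        }
    }

    // Emails the pass/fail report to the release manager (assumes a local SMTP relay).
    static void Notify(string message)
    {
        SmtpClient client = new SmtpClient("localhost"); // placeholder
        client.Send("build.server@example.com", ReleaseManager, "Nightly build report", message);
    }
}

Schedule something like this with the Windows Task Scheduler for 3:30 PM and steps 1 through 4 and 8 are covered; the remaining steps are just more calls to external tools.
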
In closing

We're a Microsoft shop. Although I've worked in Java houses, my limited experiences have largely focused on the Microsoft stack, and the process that I've outlined above is primarily geared for Microsoft Visual Studio and SourceSafe. But the basic principles should be pretty universal. You should be able to take them and apply them to just about any combination of source code repository tools, unit testing tools, and IDE (or text editor).

The primary thing to remember is this: if you can't build it reliably, predictably, and on a moment's notice, you're in trouble. When a development team knows they can't build the software, and when the testing team is sitting around for days or weeks at a time wondering when they're going to get a new release to test, morale suffers, tempers flare, and things rapidly go downhill. I've been there. I've seen it. It ain't pretty.

Every project needs a good, solid release process. I'm tempted to say that any release process is better than no release process, but that wouldn't be entirely true. A release process needs to be trim, make sense, bolster confidence in the project, and help propel the team forward towards success. That's what this process is designed to do.

I'm sure that others have some ideas on how to improve the process above. I'd love to hear those ideas. I'm sure that others have different ways of doing things. I'd love to hear that too. There is no silver bullet, and I'm not anywhere stupid enough to think that this plan is perfect. But I hope it's enough to help someone, somewhere get a little bit closer to a project that gets out the door a bit faster, healthier, and with its developers' sanity in tact.