Thursday, March 27, 2008

My First Time: Software Passion

I remember when I was first bitten by the computer programming "bug."

I was young: still in high school, in fact. At my particular high school (in Fontana, California), we had an Indian Education Department. That small department was lucky enough to have a TRS-80 computer for the kids in the Indian Club to play on. I dare say that the computer itself drew a few kids to the club; we were misfits, outcasts, by and large, but we were drawn to that thing like moths to a flame.

It was nothing to look at, really: just an old monochrome monitor, a keyboard, and a tape drive. Our model didn't even have a floppy disk; everything was on tapes. But we were mesmerized by that thing. I remember watching one of the other kids, Dan, fire up ZORK and type "go north" into the computer. It responded to his simple command and described the next room to him. It amazed me. I remember sitting there and thinking, "How did they do that?"

I mean, it was just a stupid box, with a keyboard and a cassette drive. It couldn't think. But there it was, responding to him as if it could think. And he could type commands that were, for the day, fairly close to English. "Eat food." "Quaff potion." "Open door."

I remember sitting there, thinking about that and being relentlessly tormented by it. I had to know. How could an inanimate box like that do things like that? How did it know what room he was in? How come the rooms changed every time we played the game? How did it decide if the potion killed him, healed him, or made him sick? How did it decide what color the potion was? How did it decide what was in the room? For a stupid box with no brain, this thing was pretty damned smart.

And then, one day, Dan got stumped by the game. Something, apparently, was wrong. A few years later, I'd realize he'd found a bug. So he fired up a program and cracked open the game's source code. And there, before my eyes, was the big secret: line after line after line of source code--carefully written instructions that told the stupid box exactly what to do. From those cryptic instructions, written in some obscure language called BASIC, you could make that TRS-80 do amazing things!

That was the beginning of the end for me. I had to master that language. I had to know how I, too, could command a stupid, brain-dead box and make it do amazing things. It wasn't long before I had obtained a copy of the BASIC language reference and taken a computer programming course at our high school. (Yes, we had them, even out in the sticks in Fontana.)

So, in a way, ZORK made me a programmer.

It's been about twenty-five years since I watched Dan crack open the source code to ZORK and the path of my life was irrevocably altered. Up until that point, I didn't have any real aspirations. I don't think I really did after that, either. But one thing became very clear: more than any other endeavor to which I applied myself, computer programming proved to be my one enduring passion. All these years later, I still have moments reminiscent of that first day. I'll see a beautiful piece of code, a website design, or an application, and I'll think, "How did they do that?" Any ideas I may have had about leaving software development are blown away, and my passion for software is rekindled.

It's because I have to know. I can't walk away from these damned stupid boxes without being able to make them do amazing things.

There's a certain childish delight in figuring out the solution to a problem, or in finding a new way to do something. For me, it's like Christmas, and I want to share that joy with others. Sadly, a lot of folks don't understand it--especially if they're not in the same field. But anyone who's done this work and ever had a EUREKA! moment knows exactly what I'm talking about.

Somewhere, right now, a budding young developer is experiencing his or her first time. They're being bitten by the bug. It's an infection that takes hold and sets in for life. For most of us, it's a turbulent ride, filled with ups and downs, and we frequently consider leaving the field. For some, it's pure hell, and they leave it too quickly; for a lucky few, it's nirvana all the way through. I'm not sure I envy the lucky few; I rather like the way my challenges have tempered me over the years.

When you face challenges, think back on what it was about software that caught you in the first place. Think back to your first time. Then think about the many times you've been lured back to it by your own passion. Not because someone offered you money, or material goods, or power, or prestige; think back to those times that your personal passion for software kept you in the game. Then ask yourself why you feel so passionate about software. The answer for me was surprising: I'm not really doing it for anyone else, but because I have to know, and because I have to conquer the stupid box.

For all my noble aspirations, that's a humbling admission.

But that passion is still there. It keeps me in the game. And, in retrospect, it's likely why I feel so passionately about software quality. It's not enough that it works, it has to work well.

What was your first time like?

Monday, March 24, 2008

You are Not the Average Computer User

John Lilly, the CEO of Mozilla, recently blogged about Apple's practice of including a new installation of Safari in its Software Update service, even if you didn't have the application installed in the first place. You can read the full article here. His main point was this: as a matter of trust, update software should update previously installed applications, not install new ones. Apple pretty much violated that trust when it presented users with this handy little dialog box:

[Screenshot: the Apple Software Update dialog offering Safari for installation]

The main issue here is that Safari is not already installed on the end user's machine. So the option is not an update, but a fresh download of brand-new software. Further, the option is checked by default, and the button in the lower right-hand corner clearly says "Install 2 items".

Now, I'm not going to rehash the pros and cons of Apple's tactics in this matter, because that argument has been debated endlessly on John's blog and on Reddit. What I am going to take issue with is the arrogant presumption behind comments like these:

"I don’t see what the problem is here. If you don’t want the software, you uncheck the box. The product description is listed very clearly in the window, no extra clicking required."

Omar

"I don’t see the big deal. They are promoting their software through their software update program. It’s automatically checked…ok, so? Lots of update programs automatically check everything anyway, not just apple.

"If FF is better then people will use FF. If they like safari then they will switch. These browser “loyalty” wars are getting old. IE came with windows by default and FF is still gaining ground. It is gaining ground because it is better. Just keep making a better browser and stop worrying about this. ppl will flock to the best. We’re not stupid."

Chris

"Oh fer heaven’s sake, uncheck the box and get over it. Are you saying the majority of Windows users of iTunes are too clueless to look and see what they’re downloading? OK, I’ll admit it’s a bit pushy of Apple but beyond that I fail to see what all the fuss is about."

Anne

These are knee-jerk responses. The last one, in particular, comes from a poster who clearly doesn't understand that users who read or post to tech blogs and forums are not typical computer users. If you're reading this blog, you're not a typical computer user. (I'm not sure what you are, exactly, but you're not typical.)

Apple's case is interesting because of the enormous success of the iPod, and the vast number of iPod owners who use Windows. Those users download iTunes so that they can use their iPod with their computer to purchase music and manage their playlists. However, the vast majority of those people are not what we would classify as tech-savvy users. Rather, I'd call them click-through users: people who implicitly trust the software vendor to make decisions for them. Think about your mom, your dad, your sister, your brother, your aunt, your uncle, the kids at school, the clerks at the nearest retail outlet or fast-food joint, your fellow students, or your nontechnical coworkers.

Those people represent the average computer user. They are click-through users.

A couple of times a year, I get calls from my family members about their computers. Inevitably, they'll tell me that the computer has suddenly become horrifically slow and that they need me to fix it. So they bring it to me, I look at it, and it has tons of mystery software on it. I like to have them sit with me while I go through it, so that I don't remove anything they might actually need or use. Nine times out of ten, they'll tell me, "I don't know where that came from." Apple's software update for Safari is likely to produce an awful lot of these scenarios, because the average computer user will have just clicked through the dialog, trusting that Apple knew what was best for them.

A tech-savvy user isn't likely to just click through that dialog box, because they know what can happen, and they're pretty darned picky about what goes on their machine. They don't blindly trust the vendor to make those decisions for them. But the number of users like that is relatively small, and hardly representative of the world's population.

But the world is full of click-through users. There are far more of them than there are of us. Thinking for one minute that everyone thinks and behaves as we do is naive, shortsighted, arrogant, and presumptuous.

Again, my point here isn't that Apple was right or wrong. My point is this: never assume for one minute that YOU represent the average computer user. You don't.

  • If you're smart enough to competently read or post on a technical blog or forum, you're not an average computer user.
  • If you know how to correctly fix someone else's machine after they've borked it, you're not an average computer user.
  • If you know the difference between a hash table, a binary tree, and a linked list, you are not an average computer user.
  • If you know what recursion is, you are not an average computer user.
  • If you know how to safely overclock your machine, you're not an average computer user.
  • If you read technical books like they're gripping, fast-paced murder mysteries, you're not an average computer user.

This list is undoubtedly incomplete (I haven't had enough coffee yet), but you get the point.

So, enough with this arrogant presumption. Stop assuming that all users behave as we do. Because the simple truth is that the vast majority of users do not behave or think as we do. They trust; we suspect.

Sunday, March 23, 2008

LINQ to SQL and the Coming Apocalypse

I'm going to say it, and I'm going to say it for everyone to see: LINQ TO SQL SCARES THE HELL OUT OF ME.

Does anyone remember this from classic ASP?

<%
' Note: user input concatenated directly into the SQL string (hello,
' SQL injection), and query logic living right in the page.
Set rs = Server.CreateObject("ADODB.Recordset")
param = Request.Form("lastname")
q = "SELECT * FROM personnel WHERE lastname LIKE '" & param & "'"
rs.Open q, "DSN=mydsn;"

if NOT rs.EOF then
     while NOT rs.EOF
          Response.Write rs("firstname") & " " & rs("lastname") & "<BR>"
          rs.MoveNext
     wend
end if
%>

LINQ to SQL is giving me flashbacks to this kind of code.

No, of course code written in LINQ to SQL won't look anything like that. But it will look like this:

HookedOnLINQ db =
    new HookedOnLINQ("Data Source=(local);Initial Catalog=HookedOnLINQ");

var q = from c in db.Contact
        where c.DateOfBirth.AddYears(35) > DateTime.Now
        orderby c.DateOfBirth descending
        select c;

foreach (var c in q)
    Console.WriteLine("{0} {1} b.{2}",
        c.FirstName.Trim(),
        c.LastName.Trim(),
        c.DateOfBirth.ToString("dd-MMM-yyyy"));

So, what we potentially have here is database code mixed in with our business code. Further, we have no guarantee that this code will not appear in the .aspx page.


What really disturbs me about LINQ to SQL is that it looks like people will begin to use it to do things that really should be left to the database. Looking at the specific code example above, is there any good reason that this couldn't have been done with a stored procedure? I mean, after all, stored procedures are compiled, provide additional security, and aren't subject to some sleepy programmer doing a global search and replace in the IDE and borking the code.
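
For comparison, here's roughly what that example looks like when the query lives in the database and the code just calls it. This is a minimal sketch, not production code: the GetContactsUnder35 procedure and the connection string are hypothetical, but the pattern is plain, vintage ADO.NET.

using System;
using System.Data;
using System.Data.SqlClient;

class StoredProcExample
{
    // Assumes a hypothetical stored procedure that does the filtering
    // and sorting server-side, where the indexes are:
    //
    //   CREATE PROCEDURE GetContactsUnder35 AS
    //     SELECT FirstName, LastName, DateOfBirth
    //     FROM Contact
    //     WHERE DATEADD(year, 35, DateOfBirth) > GETDATE()
    //     ORDER BY DateOfBirth DESC
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Data Source=(local);Initial Catalog=HookedOnLINQ;Integrated Security=SSPI"))
        using (SqlCommand cmd = new SqlCommand("GetContactsUnder35", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure; // compiled, on the server
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0} {1} b.{2:dd-MMM-yyyy}",
                        ((string)reader["FirstName"]).Trim(),
                        ((string)reader["LastName"]).Trim(),
                        (DateTime)reader["DateOfBirth"]);
                }
            }
        }
    }
}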


Now, I realize that LINQ to SQL has support for stored procedures. But I'm willing to bet that the vast majority of organizations are going to use that support in conjunction with the syntax shown above to produce truly horrendous code that completely negates the tremendous power available to them in the database.


Database technology has evolved over decades to be extremely efficient at what it does: indexing, sorting, selecting, inserting, updating, and so on. We will never be as efficient doing it client side as the database will be on the server side. Ignoring that power and trying to do it in code is an exercise in futility. The lesson we need to bear in mind here is this: let the database do what it does best, and let the code do what it does best.


Sadly, I don't have a whole lot of confidence that this is going to be the case in many places, because LINQ to SQL makes it far too easy to do the database's job. For crying out loud, you can do a WHERE and an ORDER BY--which should happen in the database, so you can take advantage of the indexes--in the code. (Perhaps, under the hood, this gets done by generating SQL. Fine. But why am I essentially writing SQL statements in code again?! Why?! Get that SQL out of the damned code! It doesn't belong there! SQL belongs in the database!)


Now, suppose for a moment that LINQ to SQL is 100% bug-free in its first iteration. Let's assume that it generates flawless SQL to query your database when you write that code. It still has to pass dynamic SQL to the server. That means you can't take advantage of compiled stored procedures. It also means that you have to repeat that code everywhere you want to reuse it--unless, of course, you're savvy enough to refactor your code base. But let's be honest: the folks I'm worried about here probably aren't going to refactor their code base, because they're in a rush to get the code out as quickly as possible, and refactoring isn't a big-ticket item for them. The Single Responsibility Principle likely hasn't bubbled to the top of their list of grave concerns yet.
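
For what it's worth, the refactoring I have in mind is not exotic. Here's a rough sketch--the ContactRepository class is my own invented name, reusing the HookedOnLINQ context from the example above--whose only point is that the query gets written exactly once:

using System;
using System.Linq;

public class ContactRepository
{
    private readonly HookedOnLINQ db =
        new HookedOnLINQ("Data Source=(local);Initial Catalog=HookedOnLINQ");

    // The one and only place this query is written. Callers reuse it;
    // nobody repeats the where/orderby in a page or a form handler.
    public IQueryable<Contact> GetContactsUnder35()
    {
        return from c in db.Contact
               where c.DateOfBirth.AddYears(35) > DateTime.Now
               orderby c.DateOfBirth descending
               select c;
    }
}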


Why this becomes a serious concern is simple: Eventually, someone has to maintain that code.  


How many of us have nightmares about working with someone else's lamentably bad ASP code that had embedded SQL statements in it? Remember how horrible that was? Remember trying to search all the files to figure out where the SQL was? Which files touched which tables?


Why on earth do we want to go back to that?


Sure, sure; someone, somewhere, has an utterly compelling need for LINQ to SQL. They absolutely, positively must have it. Their business will collapse if they don't have it. Problem is, as I see it, this is going to be abused like a box of needles and a ton of heroin at a recovery clinic. And it's everyone else who's going to pay the price.


So, in closing, I'll just say this:


DOOOOOOOOOOM!

Thursday, March 13, 2008

NValidate: Misunderstood from the Outset

Occasionally, I post questions about the design or feature set of NValidate to newsgroups via Google Groups. More recently, I posted a question about it to LinkedIn. Almost immediately, I got this response:

I'd suggest looking at the Validation Application Block portion of the Enterprise Library from the Microsoft Patterns and Practices group.

Now, I'm not belittling the response, because it's perfectly valid, and the Validation Application Block attempts to solve essentially the same problem. But when I talk about NValidate, which I find myself doing a lot as I interview for jobs (it's listed on my résumé), people often ask me questions like these:

  1. How is that any different from the Validator controls in ASP.NET?
  2. Why don't you just use the Validation Application Block?
  3. Why didn't you go with attributes instead?
  4. Why didn't you use interfaces in the design?
  5. Why not just use assertions instead of throwing exceptions?

These days, I find myself fielding these questions with alarming frequency. It occurs to me that I should probably write the answers down, so I'm going to address them here and now.

It helps, before starting, to understand the problem that NValidate is trying to solve: most programmers don't write consistent, correct parameter validation code because it's tedious, boring, and a pain in the neck. We'd rather be working on something else (like the business logic). NValidate tries to solve that problem by making parameter validation as easy as possible, with a minimal amount of overhead.

Q. How is NValidate any different from the Validator controls in ASP.NET?

A. The Validator controls in ASP.NET can only be used on pages. But what if I'm writing a class library? Isn't it vitally important that I test the parameters on my public interface to ensure that callers pass me valid arguments? If I don't, I'm going to fail spectacularly, and not in a pretty way. You can't use the Validator controls (RangeValidator, CompareValidator, and so on) in a class library you're writing that's intended to be invoked from your Web application.
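
To make that concrete, here's a minimal sketch of what I mean--the AccountService class and the ten-character account numbers are invented for illustration, and the guards use only tests NValidate actually provides:

using NValidate.Framework;

public class AccountService
{
    // A hypothetical class-library method: no page, no Validator
    // controls, but the parameters still get guarded at the boundary.
    public void Transfer(string fromAccount, string toAccount)
    {
        Demand.That(fromAccount, "fromAccount").IsNotNull().HasLength(10);
        Demand.That(toAccount, "toAccount").IsNotNull().HasLength(10);

        // ... business logic, now free to assume valid inputs ...
    }
}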

Q. Why don't you just use the Validation Application Block?

A. This one's pretty easy to answer. NValidate is designed to accommodate lazy programmers (like me).

Here's the theory that drives the design of NValidate: developers don't write parameter validation code with any sort of consistency because it's a pain in the neck to write, and because we're in a big hurry to get to the business logic (the meat and potatoes of the software). Let's face it: if the first chunk of every method has to be two to twenty lines of checking parameters and throwing exceptions, repeated all over the place, you'd get tired of writing it, too. Especially when that code is this repetitive:

if (null == foo) throw new ArgumentNullException("foo");
if (string.Empty == foo) throw new ArgumentException("foo cannot be empty.");
if (foo.Length != 5) throw new ArgumentException("foo must be 5 characters.");

We hate writing this stuff. So we skip it, telling ourselves we'll come back and write it later. But it never gets done, because we get all wrapped up in the business logic, and we simply forget. Then we're fixing bugs, going to meetings, putting out fires, reading blogs, and it gets overlooked. And the root cause is that it's tedious and boring.

I'm not making this up, folks. I've talked to lots of other developers, and they've all admitted (however reluctantly) that it's pretty much the truth. We're all guilty of it. Bugs creep in because we fail to erect the impenetrable wall that prevents invalid parameter values from slipping through. Then we have to go in after the fact, once we've got egg on our faces, and add the code at increased cost.

So, if you want to make sure that developers write the parameter validation code--or at least make it more likely that they will--you have to make it as easy as possible. That means writing as little code as possible.

Now, if we look at the code sample provided by Microsoft on their page for the Validation Application Block, we see this:

using Microsoft.Practices.EnterpriseLibrary.Validation;
using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;
public class Customer
{
    [StringLengthValidator(0, 20)]
    public string CustomerName;
    public Customer(string customerName)
    {
        this.CustomerName = customerName;
    }
}

public class MyExample
{
    public static void Main()
    {
        Customer myCustomer = new Customer("A name that is too long");
        ValidationResults r = Validation.Validate<Customer>(myCustomer);
        if (!r.IsValid)
        {
            throw new InvalidOperationException("Validation error found.");
        }
    }
}

A few things worth noting:

  1. You have to import two namespaces.
  2. You have to apply a separate attribute for each test.
  3. In the code that invokes the test, you need to do the following:
    1. Declare a ValidationResults variable.
    2. Call the static Validation.Validate<T> method on your object and store the result in that variable.
    3. Potentially do a cast.
    4. Check the IsValid property on the results.
    5. If IsValid returns false, take the appropriate action.

That's a lot of work. If you're trying to get lazy programmers to rigorously validate parameters, that's not going to encourage them a whole lot.

On the other hand, this is the same sample, done in NValidate:

using NValidate.Framework;
public class Customer
{
    public string CustomerName;
    public Customer(string customerName)
    {
        Demand.That(customerName, "customerName").HasLength(0, 20);
        this.CustomerName = customerName;
    }
}

public class MyExample
{
    public static void Main()
    {
        try
        {
            Customer myCustomer = new Customer("A name that is too long");
        }
        catch (ArgumentException)
        {
            throw new InvalidOperationException("Validation error found.");
        }
    }
}

A few things worth noting:

  1. You only have to import one namespace.
  2. In the constructor, you simply Demand.That your parameter is valid.
  3. In your code that invokes the test, you need to do the following:
    1. Wrap the code in a try...catch block.
    2. Catch the exception and handle it, if appropriate.

See the difference? You don't have to write a lot of code to validate the parameter, and your clients don't have to write a lot of code to use your class, either.

Q. Why didn't you go with attributes instead?

A. I considered attributes in the original design of NValidate. But I ruled them out for a number of reasons:

  1. Using them would have meant introducing a run-time dependency on reflection. While reflection isn't horrendously slow, it is slower than direct method invocation, and I wanted NValidate to be as fast as possible.
  2. I wanted the learning curve for adoption to be as small as possible. I modeled the public interface for NValidate after a product I thought was pretty well known: NUnit. You'll note that Demand.That(param, paramName).IsNotNull() is remarkably similar to NUnit's Assert.IsNotNull(someTestCondition) syntax.
  3. In NValidate, readability and performance are king. Consequently, it uses a fluent interface that allows you to chain the tests together, like so:

    Demand.That(foo, "foo").IsNotNull().HasLength(5).Matches("\\d{5}");

    This is a performance optimization that results in fewer objects created at runtime. It also allows you to do the tests in a smaller vertical space.

My concerns about attributes and reflection may not seem readily apparent until you consider the following: it's conceivable (in theory) that zealous developers could begin validating parameters in every frame of the call stack. If the stack is sufficiently deep, the cost of invoking reflection to parse the metadata begins to add up. It may not seem significant, until you consider the scenario where any one of those methods is recursive: perhaps it walks a binary tree, a DOM object, an XML document, or a directory containing lots of files and folders. When that happens, the cost of reflection can become prohibitive.
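
To make the math concrete, here's a contrived sketch (my own example, not from the NValidate documentation): a recursive directory walk that validates its parameter in every frame. With a direct method call, each frame pays only for the call; with attribute-based validation, each frame would also pay for a reflection pass over the metadata.

using System.IO;
using NValidate.Framework;

public class DirectoryWalker
{
    // One validation per frame. On a deep directory tree this runs
    // thousands of times, so the per-call cost of the validator matters.
    public static void Walk(string path)
    {
        Demand.That(path, "path").IsNotNull();

        foreach (string child in Directory.GetDirectories(path))
        {
            Walk(child); // recursion: every frame revalidates its input
        }
    }
}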

In my book, that's simply not acceptable. And since, as a framework developer, I cannot predict or constrain where a user might invoke these methods, I must endeavor to make it as fast as possible. In other words, take the parameter information, create the appropriately typed validator, execute the test, and get the hell out as quickly as possible. Avoid any additional overhead at all costs.

Q. Why didn't you use interfaces in the design?

A. I go back and forth over this one all the time, and I keep coming back to the same answer: Interfaces would tie my hands.

Let's assume, for a moment, that we published NValidate using nothing but interfaces. Then, in a subsequent release, we decide we want to add new tests. Now we have a problem: we can't extend the interfaces without breaking the contract with clients built against NValidate. Sure, they'll likely have to recompile anyway; but if I add new methods to the interfaces, they may have to update and recompile lots of assemblies. That's something I'd rather not force them to do.

On the other hand, abstract base classes allow me to extend the validators and add new tests and new strongly typed validators fairly easily. Further, they eliminate casting, because that's handled by the factory. If, however, the system were built on interfaces, some methods would return references to an interface, some would return strongly typed validators, and some casting would have to be done at the point of call. I want to eliminate manual casting wherever I can, to keep that call to Demand.That as clean as possible: the cleaner it is, the more likely someone is to use it, because it's easy to do.
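
Here's a stripped-down sketch of that shape. To be clear, these are not NValidate's actual internals--the class names and messages are invented--but they show how a factory plus an abstract base class yields a chainable, strongly typed validator with no casting at the call site:

using System;

public abstract class Validator
{
    protected readonly string ParamName;

    protected Validator(string paramName)
    {
        ParamName = paramName;
    }
}

public class StringValidator : Validator
{
    private readonly string value;

    public StringValidator(string value, string paramName) : base(paramName)
    {
        this.value = value;
    }

    public StringValidator IsNotNull()
    {
        if (value == null) throw new ArgumentNullException(ParamName);
        return this; // returning 'this' is what lets the tests chain
    }

    public StringValidator HasLength(int length)
    {
        if (value.Length != length)
            throw new ArgumentException(ParamName + " must be " + length + " characters.");
        return this;
    }
}

public static class Demand
{
    // The factory: overload resolution picks the strongly typed
    // validator, so the caller never sees a cast.
    public static StringValidator That(string value, string paramName)
    {
        return new StringValidator(value, paramName);
    }
}

With that in place, Demand.That(foo, "foo").IsNotNull().HasLength(5) reads exactly like the real thing, and adding a new test later is just one more method on StringValidator--no interface contract to break.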

Q. Why not just use assertions instead of throwing exceptions?

A. This should be fairly obvious: assertions don't survive into the release version of your software. Additionally, they don't work as you'd expect in a Web application--and rightly so, since they'd kill the ASP.NET worker process and abort every session connected to it. (For a truly educational experience, set up a test web server and issue a Visual Basic Stop statement from a DLL in your Web app. You'll kill the worker process, and it will be reset on the next request. Nifty.)

Wisdom teaches us that the best-laid plans of mice and men frequently fail. Your most thorough testing will still miss some paths through your code. The chances of achieving 100% code coverage are pretty remote; if you do it with a high degree of frequency, I'm duly impressed (and I'd like to submit my résumé). The rest of us know that some code never gets executed during testing, and that some code gets executed, but never under the precise conditions that would reveal a subtle defect. That's why you want to leave those checks in the code. Yes, it's additional overhead. But wouldn't you rather know?

In Summary

Sure, these are tradeoffs in the design. But let's keep in mind who I'm targeting here: lazy programmers who are typically disinclined to write lots of code to validate their parameters. The idea is to make validation so easy that they're more likely to do it. In this case, less code hopefully leads to more validation, which (I hope) leads to fewer defects and higher-quality software.

Thursday, March 6, 2008

The Value of Collaboration in Software Development

I've really come to appreciate the Answers section on LinkedIn. It's amazing the kinds of questions that will be asked, and the kinds of thoughtful and thought-provoking answers you'll find posted there. Today, I stumbled upon this intriguing question posted by Steven Burda of Sungard Data Systems, titled "The Reality: What you know? Who you know? Who they know? Or who knows YOU?!":

“it’s not what you know, but who you know”
“it’s not who you know, but who they know”
“it’s not who you know, but who knows YOU”
Lately, the enormous rise of virtual business and social networks is changing the traditional “networking” environment, and it is evident by the exponential growth of sites such as Linkedin and Facebook, just to name a few… Questions: What about the true value-added result of this human “capital” we’re connecting and “collecting” here… and its direct (and indirect) benefit to YOUR future, on both professional and personal level? But most importantly, please tell me: which of the above three statements do you agree or disagree with, and why?

These kinds of questions are frequently asked on LinkedIn, and they make you stop and seriously think before you just whip out a response. (If they don't, they certainly should.) Among the many responses, however, one struck a chord with me, because it resonates profoundly with how I feel about software development and the kind of culture I'm currently seeking in a company. This eloquent, insightful answer was posted by Charles "Charlie" Levenson of Multnomah Athletic Club (italics were added by me):

It's not who you know.
It's not who they know.
It's not even who knows you.
It's who YOU know who knows the things that you DON'T know.
The secret to networking for me has always been using my network to supplement the gaps in my capabilities and knowledge. Almost every project these days requires some level of COLLABORATION and there just are not that many Orson Welleses around any more. I know that I will NEVER be as good at some parts of the process as other people I know already are, so why not collaborate with them. For instance, no matter how well I might write, produce, or direct, when it comes to MUSIC, that is simply a giant hole in my skill-set. I have to have a strong network of people I can work with who can provide music capabilities at the level I need to match the rest of the work I do.
For me, the network is about knowing who can do what I can't do so that together we can do great things.

This simple, forthright statement sums up the value of collaboration so eloquently that there's little more that needs to be said. (But you know I'm going to anyway, right?)

For me, having worked as a lone developer for three years, with no access to peers during that entire time, I hunger and thirst for collaboration to such a degree that it's nearly overwhelming. I am keenly aware of several facts regarding my current state as a software developer:

  • Where I work now, we've been working with a very limited set of technologies. During that time, a whole new suite of powerful technologies has emerged that I know nothing about, and with which I have never had the opportunity to work, because there simply wasn't a need or a desire on the part of the company to look at them. .NET 2.0 (and now 3.5), LINQ, Silverlight, WPF, WCF, Web services, AJAX, multithreaded applications, ClickOnce deployment, web farms... the list goes on and on. I am keenly and painfully aware of the vast gaps in my knowledge, and I desperately want to fill them. But filling those gaps, for me, is a daunting task. How does one begin? I learn best by doing and discussing. Reading is slow and tedious; it requires that you trust the author, that you have the time and space to do it quietly, and that the printed materials are accurate and complete. My schedule tends to be quite busy; the only time I really get to read is right before I go to sleep, and that's not the best time to absorb technical material.
  • I have often made mistakes that could have been prevented through the simple mechanism of peer review. And it didn't even have to be a formal review: water-cooler discussions, informal whiteboarding sessions, lunchtime conversations, inter-cubicle discussions about a problem I just couldn't figure out... any of these could have spared me some of the major headaches I brought on myself. We often look at peer review with suspicion, thinking it means someone judging or criticizing our code. But the real value of peer review lies in someone pointing out things you hadn't considered. "Hey, did you know that there's a class or method in the Framework that already does that?" "This design pattern might actually solve that problem for you." "You might be able to solve that better with an interface instead of a class." "That design might be too tightly coupled; can you explain why you're doing it that way?" (After all, you might have a valid reason for doing it.)
  • I find it all too easy to fall back on the cozy, familiar way I've always done things. Without a fresh source of ideas and new perspectives, it's very difficult to think outside the box. When you work alone, it's virtually impossible: you have to rely on books, blogs, newsgroups, and Google, and that's hit-or-miss at best. But working with people you know and trust, whom you know to be reputable and knowledgeable in their field, is a great way to gain insight into new ways of doing things that you might never have considered.

All of this is why Charlie Levenson's comment struck home for me. I've been interviewing with companies for three weeks now, and in every interview, I've made it clear that I'm interviewing them every bit as much as they're interviewing me. I'm looking for a culture: a culture of collaboration, mentorship, and growth. I need to know that when I get up in the morning, I'll be excited about going into the office because somewhere, somehow, some piece of my knowledge base is going to expand: I'm going to learn something new.

I once blogged about refactoring as a way of self-improvement, arguing that it's not just a coding activity and that we should constantly strive to refactor our skill sets. I stand firmly by that post. What's interesting now is that I'm taking that mentality and applying it to my job search. I want a job where I can engage in constant refactoring of myself, sponge knowledge off my peers every day, and learn something new from them.

So, what Charlie says is absolutely true: I'm building a network of people who know things that I don't, so that they can fill in the gaps in my knowledge, and we can do great things together. With any luck, I can one day return the favor for them or someone else.