Wednesday, October 27, 2010

Visual Studio 2008 Detaching from the Debugger

At work, my machine has always been problematic. Getting Visual Studio 2008 to stay attached to the debugger has been an exercise in futility. I've done everything that every site on Google suggested. And still, a few seconds into debugging an ASP.NET web app under IIS or WebDev, the debugger would detach.

Note: This happens for me under Windows 7, against both IIS 7 and Visual Studio's WebDev.

  1. I was running Visual Studio as an administrator. I made sure of this by setting the shortcut properties, and even right-clicking the icon and selecting "Run as Administrator."
  2. I set up exclusions in the antivirus software to tell it not to scan my web project's folders.
  3. I disabled protected mode in Internet Explorer.
  4. I added localhost to the trusted sites in IE.
  5. I set the TabProcGrowth setting to 0 in the registry to disable IE's LCIE.

Still, the debugger would predictably detach. And then, the day before yesterday, I stumbled across something very, very curious. Windows Defender was still running on my machine.

Now that struck me as very odd, because my understanding is that you're not supposed to run two different antivirus packages at the same time: they'll wreak havoc with each other. So, I contacted IT and we decided it was time to shut that blasted thing off in favor of our preferred antivirus application. (Note that the antivirus application should have shut it off when it installed, but didn't for some reason.)

Once I did, Visual Studio fired up, and for the first time the word "(Administrator)" appeared in the title bar. That was the first clue that something drastic had changed. And the best news is that I haven't had a single disconnect from the debugger since.

I can't guarantee that this is the problem that you're having, but you might want to look at it. If you're running an antivirus application on your machine, Windows Defender should not be running at the same time.

To Disable Windows Defender

  1. Right click on My Computer.
  2. Select Manage.
  3. Click Services and Applications.
  4. Click Services.
  5. In the services list, double-click Windows Defender.
  6. In the Windows Defender Properties dialog, change the settings as follows:
    1. Change the Startup Type to Disabled.
    2. Click the Stop button.
    3. Click OK to close the dialog.

  7. Close Computer Management.

I hope this helps someone besides me; I've been battling this issue for months.

Sunday, August 15, 2010

JavaScript...For Good or Ill

So, I've been spending a lot of time working with JavaScript lately. I've been thrust into a role in which a large volume of JavaScript code needs to be maintained, and it falls on me to do it.

Now, generally speaking, in my eyes, code is code. I don't really care what language you're writing in, but some things are just generally true regardless of the language. Cryptic code is to be shunned At All Costs™. A coding standard--even a minimal standard--should apply. Variable names should be clear. The code should document itself, mitigating the need for comments. Every variable should be predeclared. And so on, and so on, and so on.

But over the long course of my career, I've observed something about scripting languages like JavaScript and VBScript. There's just something about them that seems to encourage people to write bad code.

I've seen and had to debug a lot of script code in my lifetime. And that code tends to be riddled with subtle and not-so-subtle defects that could easily have been avoided if we simply treated scripting languages like they were real languages and not toys.

Don't Blame the Browser. One could argue that many of the defects that crop up in script code are due to browser incompatibilities. But let's be honest with ourselves: it's our job to know what those incompatibilities are and write the software to take them into account so that script errors don't occur. If script errors are occurring "due to a browser incompatibility," that's not really the reason: they're occurring because the developers didn't account for the browser incompatibility.

Functions, Formal Parameters, and Arguments. How many times have you seen a function that takes a set of arguments and then starts working with them without performing any sort of argument validation? How much time have you spent scratching your head, trying to figure out what type of data the function expected to receive for any given formal parameter? And have you ever wondered what guarantee the function has that it is going to receive arguments of the correct type, or that it will behave correctly if it doesn't?

The truth is, the vast majority of the functions in script code never check their arguments to ensure that they are what they expect them to be. They don't check to ensure that the values are present, that they're of the correct data type, that they fall within the allowed ranges or have the right formats. Any number of errors that occur in the browser, in front of end users in production, could be avoided if those checks were put in place early on, when the functions were first written, so that a developer found out about the problems during development.

But we're lazy, and scripting languages somehow encourage us to be even lazier. I'm not sure why this is so; it just seems to be so. Now, when you consider a language like JavaScript, where any variable, object, or function can be modified at any time by anyone, it's not a good idea to assume that the arguments you've received are what you expect them to be. To quote a classic mantra:

Assert, assert, assert.
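
Here's a minimal sketch of what that kind of up-front checking might look like. The function, its parameters, and the ranges it enforces are purely illustrative; they aren't taken from any real codebase.

function calculateDiscount(subtotal, discountRate)
{
    // Validate the arguments before doing anything else.
    if (typeof subtotal !== "number" || isNaN(subtotal) || subtotal < 0)
    {
        throw new Error("calculateDiscount: subtotal must be a non-negative number.");
    }
    if (typeof discountRate !== "number" || discountRate < 0 || discountRate > 1)
    {
        throw new Error("calculateDiscount: discountRate must be a number between 0 and 1.");
    }
    return subtotal * (1 - discountRate);
}

Ten seconds of typing, and a whole class of "why is this NaN?" defects dies during development instead of in production.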


Assume Nothing. Too often I'm presented with code that grabs an element from the document using document.getElementById. This code then proceeds to use the control as if there was never any doubt in the world that the element existed.

If you're writing code like this, stop it right now. What if the content of the page unexpectedly changed on you (a write to document.body.innerHTML, for example), or you're reading from an iframe that doesn't have the correct document loaded into it? What if the control you're after was never created? If you aren't checking for these conditions yourself early in the development cycle, the customer is bound to find out for you.
var theElement = document.getElementById("myElement");
if (theElement === null)
{
    throw new Error("myElement was not found.");
}
else
{
    // Code to work with the control.
}

We tend to make these kinds of assumptions all the time in scripting languages. We assume that a property or method exists on an object, that a given variable is an array, that a variable is not undefined or null, that a variable has not been preinitialized by someone else to hold a value that is different from what we want to store in it. All of these are unsafe assumptions. Unsafe assumptions lead to subtle defects that are notoriously difficult to track down and correct. We need to make it a priority to ruthlessly eliminate them by adopting a Zero-Assumption policy.
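
As a minimal sketch of what that policy can look like in practice (the function and variable names here are illustrative only):

function sumScores(scores)
{
    // Don't assume we were handed an array. (Array.isArray isn't available
    // in older browsers, so the classic toString test is used instead.)
    if (Object.prototype.toString.call(scores) !== "[object Array]")
    {
        throw new Error("sumScores: scores must be an array.");
    }

    var total = 0;
    for (var i = 0; i < scores.length; i++)
    {
        // Don't assume every entry is a usable number.
        if (typeof scores[i] !== "number" || isNaN(scores[i]))
        {
            throw new Error("sumScores: scores[" + i + "] is not a number.");
        }
        total += scores[i];
    }
    return total;
}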

What Happened to Black Box Programming? Remember the idea behind black box programming? The basic principle is simple:

A function or object knows nothing about the outside world except what is passed into it.

At some point in your career, you should have been introduced to this fundamental concept. Functions and objects don't rely on global variables. They just don't. Everything they need to know is passed to them through their formal parameter list. In this way, the function has an opportunity to validate that its arguments are sound, and it is decoupled from its host so that it can be tested more easily.

These days, we push the notions of loose coupling and high cohesion to explain black box development in greater detail. We also talk about things like inversion of control, which should be nothing new to anyone who's ever written an event handler before. In short, the function assumes nothing about its external environment (sound familiar?); we pass it everything it needs to know to get its job done.

But if you look at JavaScript or VBScript code, you will be inundated by vast swaths of code that reference global variables with complete abandon. Of course you will. Scripting languages embrace global variables like flies embrace a fertilizer factory.

  • History Lesson #1: global variables are a loaded weapon with a hairpin trigger. When you give that many sketchy loaded weapons to people with no training, Bad Things Will Happen.

  • History Lesson #2: Humans frequently fail to learn from history.

I implore you. I beg you. Please, please stop using global variables. A little global state is, of course, necessary at the topmost layer of your application, but once you're past that, there's no reason whatsoever for your functions and objects to have any knowledge of global objects (unless they're provided by JavaScript or VBScript itself). Do not assume anything about the global, document, or top objects. When you write a function, if it needs some piece of information to do its work, insist that your callers pass it to you.
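
To illustrate the difference, here is a small, hypothetical before-and-after sketch; the names are mine, not from any real project.

// Before: silently depends on a global appConfig object and the global document.
function renderGreetingFromGlobals()
{
    document.getElementById("greeting").innerHTML = "Hello, " + appConfig.userName;
}

// After: everything the function needs is passed in, so it can validate its
// inputs up front and be tested against a fake document if need be.
function renderGreeting(doc, elementId, userName)
{
    if (!doc || typeof doc.getElementById !== "function")
    {
        throw new Error("renderGreeting: a document object is required.");
    }

    var target = doc.getElementById(elementId);
    if (target === null)
    {
        throw new Error("renderGreeting: element '" + elementId + "' was not found.");
    }
    target.innerHTML = "Hello, " + userName;
}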

In Closing... I think you can see where I'm going with this. We've got decades of script code out there, and a lot of it is really badly written. But if we want to be perfectly honest about it, we have no one to blame for its state but ourselves. We look at what JavaScript can do, and we never stop to ask ourselves whether what it can do is something we should do.

Making a bad situation worse, we don't apply the same disciplined coding practices to JavaScript that we do to languages like C++, C#, Java, or even VB.NET. And that's a shame, because there's no real reason we shouldn't or couldn't. Perhaps, in the near future, we'll see things like code contracts, automated unit tests, argument validation frameworks, StyleCop for JavaScript, and so on. Pipe dreams, probably, but it would be nice if we could have all of that and take a huge, collective leap forward in improving the quality of the script code we have to maintain every day.

Wednesday, June 30, 2010

Knowing How is Not Enough

There’s an old adage that I heard once, and it’s stuck with me through the years:

He who knows how to do a thing is a good employee. He who knows why is his boss.

I’m also fond of this one:

If you can’t explain it, you don’t understand it.

So I’ve been ramping up on some technology that I haven’t really had an opportunity to use before, and I’m very excited about it. To make sure I understand it, I’ve decided to go back to the MSDN examples, reproduce them one line at a time, and then document the source code as I understand it. It’s a great way to learn, and it sheds a great deal of light on what you think is happening versus what’s actually happening.

To be perfectly honest, the technology is AJAX. Over the last few years, I’ve predominantly worked for companies that haven’t had any use for Web services, so there’s been no compelling need for it. I’m starting a new job soon that will rely heavily on Web services, and I really want to make sure I understand them well before I set foot in the door. It has never been enough for me to know that you just drag a control onto a form or page, set a few properties, and press F5. To me, that degree of abstraction is a double-edged sword.

When abstraction reaches the level that it has with Microsoft AJAX, you start to run into some fairly significant issues when it comes time to test and debug the application. The MS AJAX framework is no small accomplishment, and it hides a lot of complexity from you. It makes it so easy to write AJAX applications that you really don’t need to understand the underlying fundamentals of Asynchronous JavaScript and XML that make the whole thing work. Consequently, when things go wrong, you could very well be left scratching your head, without a clue and with no idea where to begin looking.

Where, in all of this enormously layered abstraction did something go wrong? Was it my code? Was it the compiler? Was it IIS? Was it permissions? Was it an update? Was it a configuration setting? Was it a misunderstanding of the protocol? Did the Web service go down or move? Was the proxy even generated? If it was, was it generated correctly? Do I even know what a proxy is and why I need it?!

When I started learning about AJAX, we coded simple calls against pages that could return anything to you in an HTTP request, using the XMLHttpRequest object. Sure, it was supposed to be XML, but that was by convention only. The stuff I wrote back then (and I only wrote this stuff on extremely rare occasions, thank the gods) returned the smallest piece of data possible: a single field of data in flat text. It was enough to satisfy the business need, and it didn’t require XML DOM parsing.

But even with DOM parsing, the code to make a request and get its data back via XMLHttpRequest was a lot smaller than all the scaffolding you have to erect now. You might argue that you don’t have to create a lot of code now, but that is just an illusion. You’re not writing it; Microsoft is. Just because you don’t see it doesn’t mean it’s not there. Do you know what that code is doing?
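
For the curious, here’s roughly what that old-school approach looked like. This is a from-memory sketch; the URL and the callback convention are made up for illustration.

// Request a single flat-text value from the server: no XML parsing,
// no generated proxy, no scaffolding.
function getOrderTotal(orderId, callback)
{
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/orders/total?id=" + encodeURIComponent(orderId), true);
    xhr.onreadystatechange = function ()
    {
        if (xhr.readyState === 4)                   // request complete
        {
            if (xhr.status === 200)
            {
                callback(null, xhr.responseText);   // the flat-text payload
            }
            else
            {
                callback(new Error("Request failed with status " + xhr.status));
            }
        }
    };
    xhr.send(null);
}

A couple of dozen lines, and you can see every moving part.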

In theory, the Why of Microsoft AJAX, or any AJAX library, is to make our lives easier when it comes time to write dynamic Web applications that behave more like desktop applications. To a certain degree, they have. When they work. But when they don’t, I wonder if the enormous degree of abstraction they’ve introduced hasn’t dumbed us down to the point where we’ve ignored essential knowledge that we should have.

If you’re going to write Web services, or consume them, you should, at a minimum, understand what they are, and how they work. You should understand their history, how they evolved, and the problem that AJAX tries to solve. It’s not enough to know how to write a Web service, you have to know why you’re doing it, and why you’re doing it the way you are. That sort of knowledge can be crucial in making the right choices about algorithms, protocols, frameworks, caching, security, and so on.

But this could be true of any technology or practice we learn. AJAX, LINQ, design patterns, TDD, continuous integration, pair programming, and so on. Know why.

Try this simple litmus test. Explain something you think you know to one of your peers. If you can’t explain it clearly without having to pull out a reference or go online, you don’t understand it the way you think you did. Consider relearning it. It’ll only improve your value to yourself, your peers, and your employer.

Tuesday, June 29, 2010

64-bit Browsers: Useless, but not for the reasons you think

I was really stoked when I recently upgraded from a 32-bit OS to a 64-bit OS. I was even more pleased to learn that IE came in a 64-bit flavor. Then reality hit me.

Broad-based support for 64-bit browser addons still hasn’t arrived. Silverlight, Flash, Adobe Acrobat, and all their ilk are not 64-bit compatible. So, you can view generic HTML, but that’s pretty much it.

So I have this spiffy 64-bit browser that’s (theoretically) faster and can (theoretically) address far more memory, but it’s fairly useless to me because I can’t view a vast amount of content on the Web.

Sure, there’s content I can view, but having to switch back and forth between 64-bit and 32-bit versions of a browser—and that’s any browser—is a pain: a useless time sink.

So, it looks like it’s back to 32-bit browsers for me. As a web developer, I find that fairly sad, because it means that I won’t be testing any really cool, dynamic “flashy” content in 64-bit browsers for a long time. User demand is what drives the corporations to get these things fixed, and if we all have to set those 64-bit browsers aside because there’s just no support for them, well, those corporations feel less compelled to provide support for them with any kind of urgency.

It’s a vicious circle, really. They aren’t caught up, we can’t use it, so we go back to older tech, so they relax on the implementation, so it’s not done as quickly as we might like, so we remain entrenched in 32-bit technology longer.

Now, 32-bit technology is fine, if that’s all you really need. But software is not getting any smaller. Nor are operating systems, or corporate computing needs. Eventually, the amount of memory we can address effectively with 32-bit operating systems and software will fail to be sufficient. That day may be far off, but it may not. The truth is, you’re better off being ahead of the curve than behind it. That’s not always possible, but if you can, you should.

In the meanwhile, I’m back to a 32-bit browser, because it’s the only way I can view Silverlight, Flash, and PDF online. Here’s hoping that all changes sometime in the near future.

Saturday, June 26, 2010

Generics, Value Types, and Six Years of Silence

It’s been a long time since I worked on the code for NValidate, but in a fit of creative zeal I decided to dust it off and take a look at it.

As one of the posters on the old NValidate forums pointed out, it was full of duplicate code. Granted, that was a conscious design choice at the time: it was written well before generics came out, and performance was a key consideration. I didn’t want a lot of boxing and unboxing going on, and since a lot of the tests dealt with value types, the only way I could see to get that stuff done was to duplicate the test code for specific types.

Well, time marches on and languages advance. .NET 2.0 came out, and along with it came generics. I figured that they would provide a fantastic way for me to eliminate a lot of the duplicate code. I mean, it would be awesome if we could just write a single validator class for all the numeric data types and be done with it. And that’s where I hit the Infamous Brick Wall™.

It turns out that generics and ValueType objects do not play well together. At all. Consider the following piece of code:

// This won't compile: ValueType cannot be used as a generic constraint.
public class IntegralValidator<T> where T : ValueType
{
}

This, as it turns out, is forbidden. For some reason, the compiler treats ValueType as a “special class” that cannot be used as the constraint for a generic class. Fascinating. The end result is that you cannot create a generic class that requires that its parameters are derived from ValueType. You know: Boolean, Byte, Char, DateTime, Decimal, Double, Int, SByte, Short, Single, UInt, and ULong. Types you might actually want to work with on a daily basis, for all kinds of reasons.


The workaround, they say, is to specify struct. The problem is that struct is a pretty loose constraint. Lots of things are structs but aren't necessarily the types you want. I assume that's why they call it a workaround and not a solution.


So, anyway, here I am with a basic class definition. I can at least console myself with the fact that I can build the class outline as follows:


public class IntegralValidator<T> where T : struct
{
    public T ActualValue { get; internal set; }
    public string Name { get; internal set; }

    public IntegralValidator(string name, T actualValue)
    {
        this.Name = name;
        this.ActualValue = actualValue;
    }
}

But now it’s time to create a test. The problem is determining how to perform basic comparisons between value types when you can’t seem to get to the value types now that they’ve been genericized. Understand that NValidate needs to be able to do the following with numeric values:



  • Compare two values for equality and inequality

  • Compare a value to a range, and fail if it falls outside that range

  • Compare a value to zero, and fail if it isn’t zero

  • Compare a value to zero, and fail if it is zero.

  • Compare a value to the Max value for its type, and fail if it is or isn’t equal to that value.

  • Compare a value to the Min value for its type, and fail if it is or isn’t equal to that value.

  • Compare a value to a second value, and fail if it is less than the second value.

  • Compare a value to a second value and fail if it is greater than the second value.

You get the picture.


The problem, it’s become clear to me, is that it’s really very difficult to convert a genericized value type back to its original value type. Consider the following code:


private byte ToByte()
{
    if (ActualValue is byte)
        // Compiler error: the as operator must be used with a reference type
        // or nullable type ('byte' is a non-nullable value type)
        return ActualValue as byte;

    if (ActualValue is byte)
        // Compiler error: cannot convert type 'T' to 'byte'
        return (byte)ActualValue;
}

So, if neither of these approaches works, how do I get to the original values? Generics appear to demand an actual object, which would, in turn, demand boxing and unboxing of value types (which I’m staunchly opposed to for performance reasons).


So, we go back to the drawing board, and eventually we discover that you can, in fact, get to the type through a bit of trickery with the System.Convert class:


private byte ToByte()
{
    if (ActualValue is byte)
        // Convert.ChangeType does the trick, but it returns object,
        // so the result still has to be cast back to byte.
        return (byte)Convert.ChangeType(ActualValue, TypeCode.Byte);

    throw new InvalidCastException("ActualValue is not a byte.");
}

Well, the problem I’m faced with now, upon careful reflection, is this: if I’m drilling down to the original data type, I’m kind of defeating the whole point of generics in the first place. And that brings us to the whole point of this article.


I should be able to write a line of code like this:


Demand.That(x).IsNonZero().IsBetween(-45, 45);

And that code should be handled by generics that correctly infer the type of x and select the right code to execute, but I can’t. And the reason I can’t is that (1) you can’t use ValueType as a constraint for generics and (2) there is no common interface for the numeric types in the BCL.


This is an egregious oversight in the Framework. Worse, it’s been an outstanding complaint on Microsoft Connect since 2004. Through multiple iterations of the Framework, despite numerous postings on the site and clamorings for the oversight to be corrected, Microsoft has yet to do anything about it. For some reason, they seem to think it’s less important than things like Office Integration, UI overhauls, destroying the usability of online help, and making Web services as difficult and unpredictable to use as possible.


It baffles me that Microsoft’s own responses to the issue have been questions like “How would you use this functionality?” Are they kidding me? There are so many uses for this it’s not even funny.


  • What happens when you have an array or list of heterogeneous numeric types and need to work with them in some meaningful way using a generic (possibly a delegate)?

  • What happens when you want to write a program that converts 16-bit data to 32-bit data, or floating point data to Long data, or work with both at the same time using a common algorithm?

  • What happens when you need to work with graphics algorithms common to photo processing software?

  • What happens when you need to work with the many different types of value types and convert them back and forth quickly and efficiently as you would in, say, an online game?

  • Or, as in my case, what happens when you’re writing a reusable framework for validation and boxing and unboxing are simply not an option, and a generic solution would handily solve the problem, but you can’t because there’s no common interface – no One Ring that binds them all together?

It’s about time this issue was resolved. And this isn’t something the open source community can fix. This is something Microsoft has to fix, in the BCL, for the good of all humanity. Six years is far too long to leave something this painfully obvious outstanding.


Thursday, June 10, 2010

On Work at Home Programs

In his article "Work from home. Save the Planet," David Gewirtz lays out numerous benefits of embracing Work at Home programs throughout the country. I find some of the author's arguments questionable, but I haven't read his book, either.

I do know, living in New England, that the vast majority of road repairs are due not to traffic but to seasonal weather changes. So that argument goes flying out the window.

Also, it's quite evident that many jobs simply cannot be done from home. Let's be realistic. Security, plumbing, firefighting, surgery, shop keeping, termite control, landscaping, road repair, construction, and so on simply cannot be done from your home.

On the other hand, densely populated urban areas packed with office buildings and utilizing the latest technologies can reap the benefits of work-at-home programs. We are not bereft of the technologies that make this possible: instant messaging, email, online Web applications, video conferencing, VPNs, and the ever-growing "Cloud" enable us to get more done from geographically separate locations than ever before.

The trick is to ensure that the work is actually getting done with as much zeal as it would if people were in the office, where they are observed by their coworkers. Let's be honest: people tend to be more disciplined about getting the work done if their peers are able to walk in on a moment's notice and see whatever it is they're doing. That's not the case when you're working from home.

At home workers must possess a greater amount of self-discipline than workers in the office, by simple virtue of the fact that they must manage their own time and not be distracted by daily annoyances that might be present in the home.

And yet, if this program can be made to work, the benefits to the community (and the planet) can be numerous. Reductions in carbon emissions, fossil fuel consumption, traffic jams, traffic fatalities, and overall travel expenses are virtually (but not necessarily) a given.

In any event, it's not a simple case of black and white. No issue ever is.

Monday, March 15, 2010

Strange Days

These are strange days.

The pundits and economists claim that the economy is on the rebound. But for those of us down here in the trenches, those of us who are jobless, with bills to pay and families to feed, there's no economic recovery in sight. The bills are piling up, the job offers aren't materializing, and money is getting harder and harder to come by. All the while, the fat cats in Washington bicker over health care reform while getting nothing done aside from playing the blame game, trying to decide who was responsible for the economic disaster in the first place.

My dad, dead now ten years or more, used to say that the more things changed, the more they stayed the same. He was right. Things are no different today than they were ten, twenty, fifty years ago. Politicians, in their luxurious offices, with their limos and travel budgets and vastly superior health care plans, have no real idea what those of us in the trenches are going through on a day by day basis. We have very basic needs. We have to be able to pay the rent. We have to be able to clothe our kids. We have to be able to pay for things like gas, electricity, and hot water. We have to be able to pay for public transportation or gas to get to work every day. We have to be able to afford decent medical care and prescriptions when we need them—which isn't always a planned thing.

But if you're rich, you likely don't even have to think about those things. They're just taken care of. Someone handles all that stuff for you. It's inconsequential, beneath your notice. You might stand there on television and claim to be a man of the people, but after a little while in office with that big salary and all those incredible benefits, your memories of what it was like to be one of us will quickly fade.

But we are your constituents. We're the ones you're supposed to be helping. We're the ones having to decide, day in and day out, what we're going to have to cut out of our lives to be able to afford food or rent. Not because we want to, but because we have to. We make little choices, and lots of them.

We turn off our cell phones because they're not necessary, and the phone service that anchors us to the house through our cable service is cheaper. Sure, we're screwed if we get stuck on the side of the road, but we have to make that choice because we can't afford it.

We choose to give our pets up for adoption, because we can't afford to feed or care for them anymore. Sure, they'll likely be euthanized, because no one else can care for them either. But we do it, because we can't afford it.

We choose to cut our automobile insurance, because we can't afford the egregious rates we're being charged. Sure, we're out a vehicle if something really bad happens. But we do it. And you know why.

We choose to endure pain and suffering without visiting a doctor, or to try alternative therapies which are either ineffective or potentially dangerous, because we can't afford the copays.

We choose to buy cheap, processed foods to feed our families, rather than fresh meat and produce. Sure, it'll have a longer term impact on our overall health. But the price difference makes you wonder if they're trying to discourage you from eating healthy. And we simply can't afford to eat well.

We use up our savings, dive into college funds, remortgage our homes, do anything to survive. We sell our cherished belongings. We move into smaller apartments, crammed with multiple roommates trying to pool their resources against the financial crush. And even then, we struggle.

Those of us who have jobs hold onto them for dear life, afraid of what might happen to us if we lost them. It doesn't matter if we hate the work we're doing, who we're working with, the hours we keep. The world outside that job is far, far worse.

And yet, to hear the news, the economy is in a rebound, and things are looking up. Consumer spending is on the rise. You wouldn't know it at my house, or the homes of any of the people I know. We're all feeling the pinch. And this pinch leaves major bruises that aren't going to heal any time soon.

Washington, and all the local politicians, need to take off their blinders and remember that the world does not live like they do. In the real world, the vast majority of the population is poor, and struggling to survive from one day to the next. Sure, the rich line your pockets. We all know that. But the poor cast the votes. Step out of that bright light that blinds you to the realities of life that every day people face, and look at it with an unfettered vision for once. And then, without just dismissing what you've seen and heard, do something about it.

Stop bickering. Party lines should be irrelevant in this crisis. And it doesn't matter who started it. What matters is what we're going to do about it.

Wednesday, March 3, 2010

Farewell to Software Development

So here I am, staring at this screen, thinking about what to say. It's been a long time since I blogged. It's been funny, in a way, that my blog was never really very technical. I always blogged about the esoteric aspects of software development. About personal improvement, about striving to become a better software developer, about questioning the status quo, about reevaluating yourself at every step of the way. And now, I find that those words come back to haunt me. But not in any way that I would ever have expected.

I've been unemployed since December 1st. I've been seeking employment since then, and the experience has been eye-opening. I like to consider myself both a competent developer and a proficient interviewer, and in the past, getting work was never difficult. Within the first three or four interviews I had an offer on the table--usually several, and I could take my pick from them. But this time, I'm finding myself faced with rejections for reasons that escape me. One company refused to hire me because I hesitated when asked to describe the difference between natural and left joins. They thought a senior-level developer should have provided an immediate response, without hesitation. I thought a senior-level developer would have thought his answer through, worded his response carefully, and not just blurted out the first thing that leapt to mind. Apparently, it was irrelevant that my answer was correct.

I'm also faced by the prospect that technology is passing me by. This has been a growing concern of mine for years. There's a conundrum we all face when we seek long-term employment with any company. You see, companies seek candidates who want to stay with them for years. The problem is, companies tend to be entrenched in a particular technology stack. They have a vested interest in maintaining whatever software they've developed with that stack, and once you've invested yourself in the maintenance of that software, you're pretty much mired in it for the long haul. Whatever tools and technology were used in its initial development tend to become a ball and chain that anchor you where you are for years at a shot. Technology moves forward, but the product likely does not. And you, as the maintenance developer, stay tethered to that product and its associated technologies as everyone else moves forward.

New technologies emerge all the time. There are so many technologies out there right now that no one could possibly grasp them all, let alone consider himself an expert in them all. And yet, interviewers expect you to have this commanding expertise in such a wide variety of technologies that their expectations can be considered unreasonable. A jack of all trades is master of none. The truth of the matter is that most of us will never learn a new technology until a particular project demands exposure to it. Only then will we learn it. And then, we'll only learn enough of it to get by. Few companies have the budget to send their development staff to seminars or their ilk to receive formal training. Few developers make enough money to seek out and pay for formal training out of pocket. And few companies that provide that training are motivated to lower their prices to make it accessible to dirt-poor developers hungry for the knowledge.

And so it's a vicious cycle. Technology moves forward. Developers are stuck with the old technology, and cling to their jobs because, in this economy, they know it could be really difficult to find new work. Further, they know any new work they find could be worse than the work they're doing now. It's a buyer's market. Wages have declined, benefits are being cut, and the work still has to get done.

Over the last few years, I've been stuck in this vicious cycle. But since December, as I've been interviewing, one thing has started to become increasingly clear to me: software development is leaving me behind.

It doesn't matter how much I love it. It doesn't matter how good I am at it. It doesn't matter how passionate I am about quality, good design, team cohesion, self improvement, or any of those things. What matters is that I am tired of playing this catch-up game, of trying to appease people with unreasonable expectations, of being expected to know everything about everything, and being held accountable when I don't.

I know my limitations. It's time to get out. It's time to leave an industry behind that is full of falsified estimates, rushed deliveries, power grinds to meet deadlines, badly defined requirements, insane amounts of finger pointing, and unreasonable expectations. The days when developing software was fun are long gone. Somewhere along the road, it became work. And for me, that's the day it died.

So I'll turn from that path, now, and find a new path. I'm not sure what it is yet, but with any luck, it will be far less stressful.

To all of you still in the quagmire, I wish you well and good luck. Software can be tons of fun, if you keep it in the proper perspective. Somewhere along the way, I suppose I just lost mine. Try to keep yours. But for me, this is farewell to software development.