Tuesday, March 15, 2011

The Sanctity of Life

Over the years, I’ve often heard those of a religious background preach about the “sanctity of life.” They do so as if their membership in a religious organization gives them some special insight into what “sanctity of life” means. To a certain degree, I suppose they’re right. After all, sanctity means, in essence, holiness. But to the layman, what it means is that you shouldn’t take it lightly, you shouldn’t corrupt it, and you screw with it at your own peril.

Now, if you don’t believe in God, or any particular god, sanctity is probably best replaced with a more apt term. Something akin to rarity, though even that fails to capture the colossal scale of the idea.

See, religious folks seem to think that you need to have religious morals to appreciate the sanctity of life. But that’s simply not so. Let me explain my point of view to you.

We’ll start by putting things in perspective. The age of our universe is currently calculated at somewhere between 12 and 14 billion years old. That’s about 2 years for every single person living on this planet in the year 2010 (6.91 billion, according to the US Census Bureau). Our sun, however, has only existed for 4.5 billion of those years. So we have more people on the planet than we have years for the sun. Further, human beings have only existed as a genus for a few million years. We have way more people on the planet than years that people, in general, have been hanging around.

So what does that mean? What it means is that the universe is ancient. You, however, will live at most 120 years, and that’s only if you’re extremely lucky and blessed with very long life, good genes, and a phenomenal health and dental plan. Get that? 120 years compared to the 4,500,000,000 that mark the age of the sun alone. 120 compared to the 14,000,000,000 that mark the age of the universe.

Don’t ever tell me you feel old again. There are dust particles floating in orbit that were around during the Triassic period.

You get 120 lousy years. Billions of years the universe has been around, and you get, at most, 120. Most of us won’t ever get anywhere near that. We’ll be lucky to hit 70.

Now, let’s leave our age out of the picture, and consider not just the age of the universe, but its sheer size for a moment.

Our nearest neighbor is a lovely little star named Proxima Centauri. It’s a paltry 4.2 light years distant. Now, as anyone knows, light travels at the mind-numbingly slow velocity of 186,000 miles per second. Here on earth, that’s so fast that it’s virtually instantaneous. But when you start travelling across interstellar distances, things change, and quickly.

Imagine, for example, that you had a colony on a planet circling Proxima Centauri. Then imagine that you wanted to send a text message to the folks there, updating them about the latest episode of Glee. (Hey, someone’s gotta keep them updated.) It would take 4.2 years for the message to arrive, and 4.2 years for the response to get back. Total round trip: 8.4 years.

The vast majority of our universe is cold, vacuous space. It’s empty, with the possible exception of solar wind. Stars and planets are so far removed from each other that the distances are measured in either millions of miles, astronomical units, or light years. They have to be. If they were any closer, gravity would have smashed them together, and they’d no longer exist.

So think about it. The night sky is filled with thousands of stars that you can see with the naked eye. Turn a telescope on it, and you can see BILLIONS of points of light. A mind-boggling number of them turn out to be galaxies or star clusters, themselves composed of billions of stars. Every star has a chance of hosting its own system of planets. But (and here’s the gotcha) a planet must be located within a certain “safety zone” to harbor life. Too far, and everything freezes. Too close, and everything’s incinerated. Or its atmosphere is blown right off. Or it might not have the right chemical makeup to support life. Or it could be so volcanic that there’s no way in hell (pardon the pun) that anything could ever exist there. Or a rogue comet or asteroid could smash into it and obliterate whatever life had managed to begin evolving.

You see, the universe craves life, but goes out of its way to destroy it. It surrounds it in the death grip of the cold vacuum of space, but once it gets going, life flourishes and evolves in myriad ways and in surprising environments that constantly amaze even the most diehard skeptic.

For me, this is a simple idea. When I look at the night sky, I see tiny pinpoints of light, separated by millions and billions of miles of empty, dead space. Most of them don’t have planets. Most of those that do can’t support life. And those that can are so far away that I’d never be able to have a full-on conversation with them from earth in real time. We are so far removed from any other form of life that may exist in the universe as to make us effectively alone.

Couple that with our catastrophically short life span, and you should be able to see where I’m going with this.

I don’t need God to explain the sanctity of life to me. All I have to do is look at the night sky.

Monday, March 14, 2011

Insurmountable Obstacles

So here’s the thing. I’ve been living a fairly hermetic lifestyle for a pretty long time now. (And that’s hermetic as in the hermit, not the jar.) I don’t get out, I don’t socialize, I’m pretty much consumed by my work and the things I do at home that can be defined as one-person tasks: reading, writing, programming, and watching movies.

Now, there’s a reason my life is the way it is. Some years ago, my epilepsy spiraled out of control. My body basically built up a tolerance to the medication I was on, and I started having grand mal seizures with no warning and at an alarming rate. At one point, I had them every week without fail. During that time, I had an active social life. I went out every weekend, I had a circle of friends, and things were going pretty well.

Well, when the seizures started acting up, I realized that having a grand mal seizure at a gay club (or any social venue, for that matter) is a real buzz-killer. 911 is called. With that, the police, fire, and emergency responders are dispatched. Any fun you may be having comes to a grinding halt. Once the dust settles, and I’m all healed up, if I ever walk back into the club after that, I’ll be “that guy who flopped around on the floor and ruined the fun for everyone.” I’ll also be “That guy who might do it again.”

I’m not worried so much about my reputation. I’m a nerd. And a geek. (There is a difference. Subtle, but important.) I’ve dealt with those stigmas all my life. What I am worried about is ruining everyone else’s fun.

And let’s be honest: most people have no idea what to do when they see someone having a seizure. People still think you can swallow your tongue. I mean, really. It’s attached to the bottom of your mouth. Seriously, people. Have you never brushed your teeth and observed this for yourselves?

Of course, television and movies don’t help. Just recently I watched a television show where a patient started having a seizure and the doctor ordered a spinal tap STAT! During a seizure. Pray tell, why would you insert a foot-long needle into someone’s spinal column while they’re convulsing? You’d likely paralyze them for the rest of their lives.

Le sigh.

So you can imagine all the horrifying things people might try to do to me in their efforts to “help” while I’m having a seizure. I don’t blame them. It’s just that they don’t know any better.

So the best thing that I can do is remove myself from the equation. Put myself where I can’t expose people to danger. Because believe me, I’m a danger to people when I’m seizing. Not only do I have epilepsy, I have HIV. As I’m seizing, I may bite my tongue. And if some hapless Samaritan decides to help me out by sticking his fingers in my mouth, I will likely bite him and infect him with HIV. I won’t know it, because the truth of the matter is that I am not there. And when I came to, I would live with that guilt for the rest of my life.

And there you have it. To protect people, I stopped going out. I became a recluse. It was safer for everyone that way. My doctor changed my medications, and the seizures came back under control, but my confidence was badly shaken. And here I am, years later, still a recluse, sitting alone at 10:35 PM, typing this blog entry alone.

And I’m tired of being alone.

I feel like there’s this gaping void in my life. I don’t need a circle of admirers. I’ve never needed that. I think that what I need is some sort of connection to the world that isn’t based on bytes flying through cyberspace. I want to discuss politics, religion, philosophy, science, books, and the arts. I want to be able to have my opinions challenged, to expand my horizons, to be lured out of my cave because someone makes me want to do it.

But I’m afraid to do it. I’ve grown so accustomed to being here, to hiding from the world, for a myriad of reasons, that I find that taking that first step is a seemingly insurmountable challenge. I find myself longing for someone to take my hand, tell me it will be okay, and help me take that first step. Because I don’t think I can do it by myself. As strong as I’ve always thought I was, I know now that I’m not.

The funny thing is, I don’t even know why I’m writing this. My blog has no readership. Perhaps that is why I’m writing it. Maybe I’m putting this here because it’s safe. Because my blog is a safe part of my isolated little world, and I don’t have to worry about anyone finding it. In which case nothing will change, and everything will remain the same.

Wednesday, October 27, 2010

Visual Studio 2008 Detaching from the Debugger

At work, my machine has always been problematic. Getting Visual Studio 2008 to stay attached to the debugger has been an exercise in futility. I've done everything that every site on Google suggested. And still, a few seconds into debugging an ASP.NET web app under IIS or WebDev, the debugger would detach.

Note: This happens for me under Windows 7, against both IIS 7 and Visual Studio's WebDev.

  1. I was running Visual Studio as an administrator. I made sure of this by setting the shortcut properties, and even right-clicking the icon and selecting "Run as Administrator."
  2. I set up exclusions in the antivirus software to tell it not to scan my web project's folders.
  3. I disabled protected mode in Internet Explorer.
  4. I added localhost to the trusted sites in IE.
  5. I set the TabProcGrowth setting to 0 in the registry to disable IE's LCIE.

Still, the debugger would predictably detach. And then, the day before yesterday, I stumbled across something very, very curious. Windows Defender was still running on my machine.

Now that struck me as very odd, because my understanding is that you're not supposed to run two different antivirus packages at the same time: they'll wreak havoc with each other. So, I contacted IT and we decided it was time to shut that blasted thing off in favor of our preferred antivirus application. (Note that the antivirus application should have shut it off when it installed, but didn't for some reason.)

Once I did, Visual Studio fired up, and for the first time the word "(Administrator)" appeared in the title bar. That was the first clue that something drastic had changed. And the best news is that I haven't had a single disconnect from the debugger since.

I can't guarantee that this is the problem that you're having, but you might want to look at it. If you're running an antivirus application on your machine, Windows Defender should not be running at the same time.

To Disable Windows Defender

  1. Right click on My Computer.
  2. Select Manage.
  3. Click Services and Applications.
  4. Click Services.
  5. In the services list, double-click Windows Defender.
  6. In the Windows Defender Properties dialog, change the settings as follows:
    1. Change the Startup Type to Disabled.
    2. Click the Stop button.
    3. Click OK to close the dialog.
  7. Close Computer Management.

I hope this helps someone besides me; I've been battling this issue for months.

Sunday, August 15, 2010

JavaScript...For Good or Ill

So, I've been spending a lot of time working with JavaScript lately. For whatever it's worth, I've been thrust into a role in which I'm responsible for maintaining a large volume of JavaScript code.

Now, generally speaking, in my eyes, code is code. I don't really care what language you're writing in, but some things are just generally true regardless of the language. Cryptic code is to be shunned At All Costs™. A coding standard--even a minimal standard--should apply. Variable names should be clear. The code should document itself, mitigating the need for comments. Every variable should be predeclared. And so on, and so on, and so on.

But over the long course of my career, I've observed something about scripting languages like JavaScript and VBScript. There's just something about them that seems to encourage people to write bad code.

I've seen and had to debug a lot of script code in my lifetime. And that code tends to be riddled with subtle and not-so-subtle defects that could easily have been avoided if we simply treated scripting languages like they were real languages and not toys.

Don't Blame the Browser. One could argue that many of the defects that crop up in script code are due to browser incompatibilities. But let's be honest with ourselves: it's our job to know what those incompatibilities are and write the software to take them into account so that script errors don't occur. If script errors are occurring "due to a browser incompatibility," that's not really the reason: they're occurring because the developers didn't account for the browser incompatibility.
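
As a simplified sketch of what "accounting for the incompatibility" looks like in practice, here's the classic feature-detection pattern for wiring up an event handler. The attachClickHandler helper is made up for illustration; it isn't from any particular library.

// A minimal sketch: detect the capability instead of assuming it exists.
// attachClickHandler is a hypothetical helper, not part of any library.
function attachClickHandler(element, handler)
{
    if (element.addEventListener)
    {
        // W3C-compliant browsers.
        element.addEventListener("click", handler, false);
    }
    else if (element.attachEvent)
    {
        // Older versions of Internet Explorer.
        element.attachEvent("onclick", handler);
    }
    else
    {
        throw new Error("This browser does not support DOM event handlers.");
    }
}

The point is that the capability is tested, not the browser's identity, so the failure surfaces during development instead of as a script error in front of the user.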

Functions, Formal Parameters, and Arguments. How many times have you seen a function that takes a set of arguments and then starts working with them without performing any sort of argument validation? How much time have you spent scratching your head, trying to figure out what type of data the function expected to receive for any given formal parameter? And have you ever wondered what guarantee the function has that it is going to receive arguments of the correct type, or that it will behave correctly if it doesn't?

The truth is, the vast majority of the functions in script code never check their arguments to ensure that they are what they expect them to be. They don't check to ensure that the values are present, that they're of the correct data type, that they fall within the allowed ranges or have the right formats. Any number of errors that occur in the browser in front of end-users in production could be avoided if those checks were put in place early on, when the functions were first written, and a developer found out about them during development.

But we're lazy, and scripting languages somehow encourage us to be even lazier. I'm not sure why this is so; it just seems to be. Now, when you consider a language like JavaScript, where any variable, object, or function can be modified at any time by anyone, it's not a good idea to assume that the arguments you've received are what you expect them to be. To quote a classic mantra:

Assert, assert, assert.
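
As a hedged sketch of what that checking might look like, here's a hypothetical function that validates its arguments before touching them. The function and its parameters (calculateLineTotal, quantity, unitPrice) are invented for illustration.

// A minimal sketch of argument validation for a hypothetical function
// that computes a line total from a quantity and a unit price.
function calculateLineTotal(quantity, unitPrice)
{
    if (typeof quantity !== "number" || isNaN(quantity))
    {
        throw new Error("quantity must be a number.");
    }
    if (quantity < 0)
    {
        throw new Error("quantity must not be negative.");
    }
    if (typeof unitPrice !== "number" || isNaN(unitPrice))
    {
        throw new Error("unitPrice must be a number.");
    }
    return quantity * unitPrice;
}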


Assume Nothing. Too often I'm presented with code that grabs an element from the document using document.getElementById. This code then proceeds to use the control as if there was never any doubt in the world that the element existed.

If you're writing code like this, stop it right now. What if the content of the page unexpectedly changed on you (a write to document.innerHTML), or you're reading from an iframe and it doesn't have the correct document loaded into it? What if the control you're after was never created? If you aren't checking for these conditions yourself early in the development cycle, the customer is bound to find out for you.

var theElement = document.getElementById("myElement");
if (theElement === null)
{
    throw new Error("myElement was not found.");
}
else
{
    // Code to work with the control.
}

We tend to make these kinds of assumptions all the time in scripting languages. We assume that a property or method exists on an object, that a given variable is an array, that a variable is not undefined or null, that a variable has not been preinitialized by someone else to hold a value that is different from what we want to store in it. All of these are unsafe assumptions. Unsafe assumptions lead to subtle defects that are notoriously difficult to track down and correct. We need to make it a priority to ruthlessly eliminate them by adopting a Zero-Assumption policy.
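
A minimal sketch of that policy in action, using a hypothetical logger object: verify that the thing you were handed exists and actually has the member you're about to call, instead of assuming it does.

// A minimal sketch of a Zero-Assumption check. The logger object and its
// flush method are hypothetical; the point is to verify the member exists
// before calling it.
function flushLogger(logger)
{
    if (logger === null || logger === undefined)
    {
        throw new Error("logger is required.");
    }
    if (typeof logger.flush !== "function")
    {
        throw new Error("logger does not provide a flush() method.");
    }
    logger.flush();
}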

What Happened to Black Box Programming? Remember the idea behind black box programming? The basic principle is simple:

A function or object knows nothing about the outside world except what is passed into it.

At some point in your career, you should have been introduced to this fundamental concept. Functions and objects don't rely on global variables. They just don't. Everything they need to know is passed to them through their formal parameter list. In this way, the function has an opportunity to validate that its arguments are sound, and it is decoupled from its host so that it can be tested more easily.

These days, we push the notions of loose coupling and high cohesion to explain black box development in greater detail. We also talk about things like inversion of control, which should be nothing new to anyone who's ever written an event handler before. In short, the function assumes nothing about its external environment (sound familiar?); we pass it everything it needs to know to get its job done.

But if you look at JavaScript or VBScript code, you will be inundated by vast swaths of code that reference global variables with complete abandon. Of course you will. Scripting languages embrace global variables like flies embrace a fertilizer factory.

  • History Lesson #1: global variables are a loaded weapon with a hair trigger. When you give that many sketchy loaded weapons to people with no training, Bad Things Will Happen.

  • History Lesson #2: Humans frequently fail to learn from history.

I implore you. I beg you. Please, please stop using global variables. They are, of course, necessary at the topmost layer of your application, but once you're past that, there's no reason whatsoever for your functions and objects to have any knowledge of global objects (unless they're provided by JavaScript or VBScript itself). Do not assume anything about the global, document, or top objects. When you write a function, if it needs some piece of information to do its work, insist that your callers pass it in, as in the sketch below.
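
Here's a rough sketch of the idea, using a hypothetical status-message helper and a made-up "status" element. Nothing inside the function touches a global; the caller hands it the element it needs.

// A rough sketch: the function receives its dependencies instead of
// reaching for globals. The "status" element is hypothetical.
function showStatus(statusElement, message)
{
    if (!statusElement)
    {
        throw new Error("statusElement is required.");
    }
    statusElement.innerHTML = message;
}

// The topmost layer of the application is the only place that touches globals.
showStatus(document.getElementById("status"), "Changes saved.");

Now the function can validate its input, and it can be tested without a live page wired up behind it.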

In Closing... I think you can see where I'm going with this. We've got well over a decade of script code out there, and a lot of it is really badly written. But if we want to be perfectly honest about it, we have no one to blame for its state but ourselves. We look at what JavaScript can do, and we never stop to ask ourselves whether what it can do is something we should do.

Making a bad situation worse, we don't apply the same disciplined coding practices to JavaScript that we do to languages like C++, C#, Java, or even VB.NET. And that's a shame, because there's no real reason we shouldn't or couldn't. Perhaps, in the near future, we'll see things like code contracts, automated unit tests, argument validation frameworks, StyleCop for JavaScript, and so on. Pipe dreams, probably, but it would be nice if we could have all of that and take a huge, collective leap forward in improving the quality of the script code we have to maintain every day.

Wednesday, June 30, 2010

Knowing How is Not Enough

There’s an old adage that I heard once, and it’s stuck with me through the years:

He who knows how to do a thing is a good employee. He who knows why is his boss.

I’m also fond of this one:

If you can’t explain it, you don’t understand it.

So I’ve been ramping up on some technology that I’ve not really had an opportunity to use before, and I’m very excited about it. To make sure I understand it, I’ve decided to go back to the MSDN examples, reproduce them one line at a time, and then document the source code as I understand it. It’s a great way to learn, and it sheds a great deal of light on what you think is happening versus what’s actually happening.

To be perfectly honest, the technology is AJAX. Over the last few years, I’ve predominantly worked for companies that haven’t had any use for Web services, so there’s been no compelling need for it. I’m starting a new job soon that will rely heavily on Web services, and I really want to make sure I understand them well before I set foot in the door. It has never been enough for me to know that you just drag a control onto a form or page, set a few properties, and press F5. To me, that degree of abstraction is a double-edged sword.

When abstraction reaches the level that it has with Microsoft AJAX, you start to run into some fairly significant issues when it comes time to test and debug the application. The MS AJAX framework is no small accomplishment, and it hides a lot of complexity from you. It makes it so easy to write AJAX applications that you really don’t need to understand the underlying fundamentals of Asynchronous Javascript and XML that make the whole thing work. Consequently, when things go wrong, you could very well be left scratching your head, without a clue, and no idea where to begin looking.

Where, in all of this enormously layered abstraction did something go wrong? Was it my code? Was it the compiler? Was it IIS? Was it permissions? Was it an update? Was it a configuration setting? Was it a misunderstanding of the protocol? Did the Web service go down or move? Was the proxy even generated? If it was, was it generated correctly? Do I even know what a proxy is and why I need it?!

When I started learning about AJAX, we coded simple calls against pages that could return anything to you in an HTTP response, using the XMLHttpRequest object. Sure, it was supposed to be XML, but that was by convention only. The stuff I wrote back then (and I only wrote this stuff on extremely rare occasions, thank the gods) returned the smallest piece of data possible: a single field of data in flat text. It was enough to satisfy the business need, and it didn’t require XML DOM parsing.

But even with DOM parsing, the code to make a request and get its data back via XMLHttpRequest was a lot smaller than all the scaffolding you have to erect now. You might argue that you don’t have to create a lot of code now, but that is just an illusion. You’re not writing it, but Microsoft is. Just because you don’t see it doesn’t mean it’s not there. Do you know what that code is doing?
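
For anyone who never had to write it by hand, here’s roughly what those old, raw calls looked like. The URL is made up for illustration, and older versions of IE would need new ActiveXObject("Microsoft.XMLHTTP") instead of the native XMLHttpRequest constructor.

// A bare-bones sketch of the old way. The URL is hypothetical.
var request = new XMLHttpRequest();
request.open("GET", "/orders/status?id=12345", true);
request.onreadystatechange = function ()
{
    if (request.readyState === 4)
    {
        if (request.status === 200)
        {
            // The "XML" in the name was by convention only; this is flat text.
            alert("Order status: " + request.responseText);
        }
        else
        {
            alert("The request failed with status " + request.status);
        }
    }
};
request.send(null);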

In theory, the Why of Microsoft AJAX, or any AJAX library, is to make our lives easier when it comes time to write dynamic Web applications that behave more like desktop applications. To a certain degree, they have. When they work. But when they don’t, I wonder if the enormous degree of abstraction they’ve introduced hasn’t dumbed us down to the point where we’ve ignored essential knowledge that we should have.

If you’re going to write Web services, or consume them, you should, at a minimum, understand what they are, and how they work. You should understand their history, how they evolved, and the problem that AJAX tries to solve. It’s not enough to know how to write a Web service, you have to know why you’re doing it, and why you’re doing it the way you are. That sort of knowledge can be crucial in making the right choices about algorithms, protocols, frameworks, caching, security, and so on.

But this could be true of any technology or practice we learn. AJAX, LINQ, design patterns, TDD, continuous integration, pair programming, and so on. Know why.

Try this simple litmus test. Explain something you think you know to one of your peers. If you can’t explain it clearly without having to pull out a reference or go online, you don’t understand it the way you think you did. Consider relearning it. It’ll only improve your value to yourself, your peers, and your employer.

Tuesday, June 29, 2010

64-bit Browsers: Useless, but not for the reasons you think

I was really stoked when I recently upgraded from a 32-bit OS to a 64-bit OS. I was even more pleased to learn that IE came in a 64-bit flavor. Then the reality hit me.

Broad-based support for 64-bit browser add-ons still hasn’t arrived. Silverlight, Flash, Adobe Acrobat, and all their ilk are not 64-bit compatible. So you can view generic HTML, but that’s pretty much it.

So I have this spiffy 64-bit browser that’s (theoretically) faster and can (theoretically) address far more memory, but it’s fairly useless to me because I can’t view a vast amount of content on the Web.

Sure, there’s content I can view, but having to switch back and forth between 64-bit and 32-bit versions of a browser—and that’s any browser—is a pain: a useless time sink.

So, it looks like it’s back to 32-bit browsers for me. As a web developer, that’s fairly sad, because it means that I won’t be testing any really cool, dynamic “flashy” content in 64-bit browsers for a long time. User demand is what drives the corporations to get these things fixed, and if we all have to set those 64-bit browsers aside because there’s just no support for them, those corporations feel less compelled to provide that support with any kind of urgency.

It’s a vicious circle, really. They aren’t caught up, we can’t use it, so we go back to older tech, so they relax on the implementation, so it’s not done as quickly as we might like, so we remain entrenched in 32-bit technology longer.

Now, 32-bit technology is fine, if that’s all you really need. But software is not getting any smaller. Nor are operating systems, or corporate computing needs. Eventually, the amount of memory we can address effectively with 32-bit operating systems and software will fail to be sufficient. That day may be far off, but it may not. The truth is, you’re better off being ahead of the curve than behind it. That’s not always possible, but if you can, you should.

In the meanwhile, I’m back to a 32-bit browser, because it’s the only way I can view Silverlight, Flash, and PDF online. Here’s hoping that all changes sometime in the near future.

Saturday, June 26, 2010

Generics, Value Types, and Six Years of Silence

It’s been a long time since I worked on the code for NValidate, but in a fit of creative zeal I decided to dust it off and take a look at it.

As one of the posters on the old NValidate forums pointed out, it was full of duplicate code. Granted, that was a conscious design choice at the time: it was written well before generics came out, and performance was a key consideration. I didn’t want a lot of boxing and unboxing going on, and since a lot of the tests dealt with value types, the only way I could see to get that stuff done was to duplicate the test code for specific types.

Well, time marches on and languages advance. .NET 2.0 came out, and along with it came generics. I figured that they would provide a fantastic way for me to eliminate a lot of the duplicate code. I mean, it would be awesome if we could just write a single validator class for all the numeric data types and be done with it. And that’s where I hit the Infamous Brick Wall™.

It turns out that generics and ValueType objects do not play well together. At all. Consider the following piece of code:

// This won't compile: the compiler rejects System.ValueType as a constraint.
public class IntegralValidator<T> where T : ValueType
{
}

This, as it turns out, is forbidden. For some reason, the compiler treats ValueType as a “special class” that cannot be used as the constraint for a generic class. Fascinating. The end result is that you cannot create a generic class that requires its type parameters to derive from ValueType. You know: Boolean, Byte, Char, DateTime, Decimal, Double, Int16, Int32, Int64, SByte, Single, UInt16, UInt32, and UInt64. Types you might actually want to work with on a daily basis, for all kinds of reasons.


The workaround, they say, is to specify struct. The problem is that struct is a pretty loose constraint. Lots of things are structs but aren't necessarily the types you want. I assume that's why they call it a workaround and not a solution.


So, anyway, here I am with a basic class definition. I can at least admit to myself that I can build the class outline as follows:


public class IntegralValidator<T> where T : struct
{
    public T ActualValue { get; internal set; }
    public string Name { get; internal set; }

    public IntegralValidator(string name, T actualValue)
    {
        this.Name = name;
        this.ActualValue = actualValue;
    }
}

But now it’s time to create a test. The problem is determining how to perform basic comparisons between value types when you can’t seem to get to the value types now that they’ve been genericized. Understand that NValidate needs to be able to do the following with numeric values:



  • Compare two values for equality and inequality

  • Compare a value to a range, and fail if it falls outside that range

  • Compare a value to zero, and fail if it isn’t zero

  • Compare a value to zero, and fail if it is zero.

  • Compare a value to the Max value for its type, and fail if it is or isn’t equal to that value.

  • Compare a value to the Min value for its type, and fail if it is or isn’t equal to that value.

  • Compare a value to a second value, and fail if it is less than the second value.

  • Compare a value to a second value and fail if it is greater than the second value.

You get the picture.


The problem I’m experiencing is that it’s become clear to me that it’s really very difficult to convert a genericized value type back to its original value type. Consider the following code:


private byte ToByte()
{
    if (ActualValue is byte)
        // Compile error: the as operator must be used with a reference type
        // or nullable type ('byte' is a non-nullable value type)
        return ActualValue as byte;
    if (ActualValue is byte)
        // Compile error: cannot convert type 'T' to 'byte'
        return (byte)ActualValue;
}

So, if neither of these approaches works, how do I get to the original values? Generics appear to demand an actual object, which would, in turn, demand boxing and unboxing of value types (which I’m staunchly opposed to for performance reasons).


So, we go back to the drawing board, and eventually we discover that you can, in fact, get to the type through a bit of trickery with the System.Convert class:


private byte ToByte()
{
    if (ActualValue is byte)
        // Convert.ChangeType returns object, so the result still has to be
        // cast back to byte.
        return (byte)Convert.ChangeType(ActualValue, TypeCode.Byte);
    throw new InvalidCastException("ActualValue is not a byte.");
}

Well, the problem I’m faced with now, upon careful reflection, is this: if I’m drilling down to the original data type, I’m kind of defeating the whole point of generics in the first place. And that brings us to the whole point of this article.


I should be able to write a line of code like this:


Demand.That(x).IsNonZero().IsBetween(-45, 45);

And that code should be handled by generics that correctly infer the type of x and select the right code to execute. But I can’t do that, because (1) you can’t use ValueType as a constraint for generics, and (2) there is no common interface for numeric types in the BCL.


This is an egregious oversight in the Framework. Worse, it’s been an outstanding complaint on Microsoft Connect since 2004. Through multiple iterations of the Framework, despite numerous postings on the site and clamoring for the oversight to be corrected, Microsoft has yet to do anything about it. For some reason, they seem to think it’s less important than things like Office integration, UI overhauls, destroying the usability of online help, and making Web services as difficult and unpredictable to use as possible.


It baffles me that Microsoft’s own responses to the issue have been questions like “How would you use this functionality?” Are they kidding me? There are so many uses for this it’s not even funny.


  • What happens when you have an array or list of heterogeneous numeric types and need to work with them in some meaningful way using a generic (possibly a delegate)?

  • What happens when you want to write a program that converts 16-bit data to 32-bit data, or floating point data to Long data, or work with both at the same time using a common algorithm?

  • What happens when you need to work with graphics algorithms common to photo processing software?

  • What happens when you need to work with the many different types of value types and convert them back and forth quickly and efficiently as you would in, say, an online game?

  • Or, as in my case, what happens when you’re writing a reusable framework for validation and boxing and unboxing are simply not an option, and a generic solution would handily solve the problem, but you can’t because there’s no common interface – no One Ring that binds them all together?

It’s about time this issue was resolved. And this isn’t something the open source community can fix. This is something Microsoft has to fix, in the BCL, for the good of all humanity. Six years is far too long to leave something this painfully obvious outstanding.