Wednesday, June 30, 2010

Knowing How is Not Enough

There’s an old adage that I heard once, and it’s stuck with me through the years:

He who knows how to do a thing is a good employee. He who knows why is his boss.

I’m also fond of this one:

If you can’t explain it, you don’t understand it.

So I’ve been ramping up on some technology that I’ve never really had an opportunity to use before, and I’m very excited about it. To make sure I understand it, I’ve decided to go back to the MSDN examples, reproduce them one line at a time, and then document the source code as I understand it. It’s a great way to learn, and it sheds a great deal of light on what you think is happening versus what’s actually happening.

To be perfectly honest, the technology is AJAX. Over the last few years, I’ve predominantly worked for companies that had no use for Web services, so there’s been no compelling need for it. I’m starting a new job soon that will rely heavily on Web services, and I want to make sure I understand them well before I set foot in the door. It has never been enough for me to know that you just drag a control onto a form or page, set a few properties, and press F5. To me, that degree of abstraction is a double-edged sword.

When abstraction reaches the level that it has with Microsoft AJAX, you start to run into some fairly significant issues when it comes time to test and debug the application. The MS AJAX framework is no small accomplishment, and it hides a lot of complexity from you. It makes it so easy to write AJAX applications that you really don’t need to understand the underlying fundamentals of Asynchronous JavaScript and XML that make the whole thing work. Consequently, when things go wrong, you could very well be left scratching your head, with no idea where to begin looking.

Where, in all of this enormously layered abstraction, did something go wrong? Was it my code? Was it the compiler? Was it IIS? Was it permissions? Was it an update? Was it a configuration setting? Was it a misunderstanding of the protocol? Did the Web service go down or move? Was the proxy even generated? If it was, was it generated correctly? Do I even know what a proxy is and why I need it?!

When I started learning about AJAX, we coded simple calls against pages that could return anything to you in an HTTP response using the XMLHttpRequest object. Sure, it was supposed to be XML, but that was by convention only. The stuff I wrote back then (and I only wrote this stuff on extremely rare occasions, thank the gods) returned the smallest piece of data possible: a single field of flat text. It was enough to satisfy the business need, and it didn’t require XML DOM parsing.

But even with DOM parsing, the code to make a request and get its data back via XMLHttpRequest was a lot smaller than all the scaffolding you have to erect now. You might argue that you don’t have to write a lot of code now, but that is just an illusion. You’re not writing it; Microsoft is. Just because you don’t see it doesn’t mean it’s not there. Do you know what that code is doing?

In theory, the Why of Microsoft AJAX, or any AJAX library, is to make our lives easier when it comes time to write dynamic Web applications that behave more like desktop applications. To a certain degree, they have. When they work. But when they don’t, I wonder if the enormous degree of abstraction they’ve introduced hasn’t dumbed us down to the point where we ignore essential knowledge we should have.

If you’re going to write Web services, or consume them, you should, at a minimum, understand what they are and how they work. You should understand their history, how they evolved, and the problem that AJAX tries to solve. It’s not enough to know how to write a Web service; you have to know why you’re doing it, and why you’re doing it the way you are. That sort of knowledge can be crucial in making the right choices about algorithms, protocols, frameworks, caching, security, and so on.

But this could be true of any technology or practice we learn: AJAX, LINQ, design patterns, TDD, continuous integration, pair programming, and so on. Know why.

Try this simple litmus test. Explain something you think you know to one of your peers. If you can’t explain it clearly without having to pull out a reference or go online, you don’t understand it the way you think you do. Consider relearning it. It’ll only improve your value to yourself, your peers, and your employer.

Tuesday, June 29, 2010

64-bit Browsers: Useless, but not for the reasons you think

I was really stoked when I recently upgraded from a 32-bit OS to a 64-bit OS. I was even more pleased to learn that IE came in a 64-bit flavor. Then reality hit me.

Broad-based support for 64-bit browser add-ons still hasn’t arrived. Silverlight, Flash, Adobe Acrobat, and all their ilk are not 64-bit compatible. So you can view plain HTML, but that’s pretty much it.

So I have this spiffy 64-bit browser that’s (theoretically) faster and can (theoretically) address far more memory, but it’s fairly useless to me because I can’t view a vast amount of content on the Web.

Sure, there’s content I can view, but having to switch back and forth between 64-bit and 32-bit versions of a browser—and that’s any browser—is a pain: a useless time sink.

So, it looks like it’s back to 32-bit browsers for me. As a web developer, that’s fairly sad, because it means I won’t be testing any really cool, dynamic “flashy” content in 64-bit browsers for a long time. User demand is what drives the corporations to get these things fixed, and if we all have to set those 64-bit browsers aside because there’s just no support for them, those corporations feel less compelled to provide that support with any kind of urgency.

It’s a vicious circle, really. They aren’t caught up, we can’t use it, so we go back to older tech, so they relax on the implementation, so it’s not done as quickly as we might like, so we remain entrenched in 32-bit technology longer.

Now, 32-bit technology is fine, if that’s all you really need. But software is not getting any smaller. Nor are operating systems, or corporate computing needs. Eventually, the amount of memory we can address effectively with 32-bit operating systems and software will fail to be sufficient. That day may be far off, but it may not. The truth is, you’re better off being ahead of the curve than behind it. That’s not always possible, but if you can, you should.

In the meantime, I’m back to a 32-bit browser, because it’s the only way I can view Silverlight, Flash, and PDF content online. Here’s hoping that all changes sometime in the near future.

Saturday, June 26, 2010

Generics, Value Types, and Six Years of Silence

It’s been a long time since I worked on the code for NValidate, but in a fit of creative zeal I decided to dust it off and take a look at it.

As one of the posters on the old NValidate forums pointed out, it was full of duplicate code. Granted, that was a conscious design choice at the time: it was written well before generics came out, and performance was a key consideration. I didn’t want a lot of boxing and unboxing going on, and since a lot of the tests dealt with value types, the only way I could see to get that stuff done was to duplicate the test code for specific types.
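
To make that duplication concrete, the pre-generics code had to look something like the following. This is a hypothetical condensation for illustration, not the actual NValidate source; the class and method names are mine:

using System;

// One of these classes existed per numeric type, with bodies
// identical except for the type name.
public class ByteValidator
{
    private readonly byte actual;

    public ByteValidator(byte actual) { this.actual = actual; }

    public ByteValidator IsBetween(byte min, byte max)
    {
        if (actual < min || actual > max)
            throw new ArgumentOutOfRangeException();
        return this;
    }
}

public class Int32Validator
{
    private readonly int actual;

    public Int32Validator(int actual) { this.actual = actual; }

    public Int32Validator IsBetween(int min, int max)
    {
        if (actual < min || actual > max)
            throw new ArgumentOutOfRangeException();
        return this;
    }
}

// ...and again for Int16, Int64, Single, Double, Decimal, and so on.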

Well, time marches on and languages advance. .NET 2.0 came out, and along with it came generics. I figured that they would provide a fantastic way for me to eliminate a lot of the duplicate code. I mean, it would be awesome if we could just write a single validator class for all the numeric data types and be done with it. And that’s where I hit the Infamous Brick Wall™.

It turns out that generics and ValueType objects do not play well together. At all. Consider the following piece of code:

// Compiler error CS0702:
// Constraint cannot be special class 'System.ValueType'
public class IntegralValidator<T> where T : ValueType
{
}

This, as it turns out, is forbidden. For some reason, the compiler treats ValueType as a “special class” that cannot be used as the constraint for a generic class. Fascinating. The end result is that you cannot create a generic class that requires its type parameter to derive from ValueType. You know: Boolean, Byte, Char, DateTime, Decimal, Double, Int16, Int32, Int64, SByte, Single, UInt16, UInt32, and UInt64. Types you might actually want to work with on a daily basis, for all kinds of reasons.


The workaround, they say, is to specify struct. The problem is that struct is a pretty loose constraint. Lots of things are structs but aren't necessarily the types you want. I assume that's why they call it a workaround and not a solution.


So, anyway, here I am with a basic class definition. I can at least admit to myself that I can build the class outline as follows:


public class IntegralValidator<T> where T : struct
{
    public T ActualValue { get; internal set; }
    public string Name { get; internal set; }

    public IntegralValidator(string name, T actualValue)
    {
        this.Name = name;
        this.ActualValue = actualValue;
    }
}

But now it’s time to create a test. The problem is determining how to perform basic comparisons between value types when you can’t seem to get to the value types now that they’ve been genericized. Understand that NValidate needs to be able to do the following with numeric values:



  • Compare two values for equality and inequality.

  • Compare a value to a range, and fail if it falls outside that range.

  • Compare a value to zero, and fail if it isn’t zero.

  • Compare a value to zero, and fail if it is zero.

  • Compare a value to the Max value for its type, and fail if it is or isn’t equal to that value.

  • Compare a value to the Min value for its type, and fail if it is or isn’t equal to that value.

  • Compare a value to a second value, and fail if it is less than the second value.

  • Compare a value to a second value, and fail if it is greater than the second value.

You get the picture.
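
To see why that list is harder than it looks, consider a naive attempt at one of those tests inside IntegralValidator<T> (the method name and exception choice are mine, for illustration). With nothing but a struct constraint, T supports no comparison operators at all:

// A naive attempt at one of the tests from the list above.
public IntegralValidator<T> IsLessThan(T max)
{
    // Compiler error CS0019:
    // Operator '>=' cannot be applied to operands of type 'T' and 'T'
    if (ActualValue >= max)
        throw new ArgumentOutOfRangeException(Name);
    return this;
}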


The problem, as has become clear to me, is that it’s very difficult to convert a genericized value type back to its original type. Consider the following code:


private byte ToByte()
{
    // Attempt #1 fails with compiler error CS0077:
    // The as operator must be used with a reference type or
    // nullable type ('byte' is a non-nullable value type)
    if (ActualValue is byte)
        return ActualValue as byte;

    // Attempt #2 fails with compiler error CS0030:
    // Cannot convert type 'T' to 'byte'
    if (ActualValue is byte)
        return (byte)ActualValue;
}

So, if neither of these approaches works, how do I get to the original values? Generics appear to demand an actual object, which would, in turn, demand boxing and unboxing of value types (which I’m staunchly opposed to for performance reasons).


So, we go back to the drawing board, and eventually we discover that you can, in fact, get to the type through a bit of trickery with the System.Convert class:


private byte ToByte()
{
    if (ActualValue is byte)
        // Convert.ChangeType takes the value as an object and returns
        // an object, so the value is boxed on the way in and must be
        // unboxed with a cast on the way out.
        return (byte)Convert.ChangeType(ActualValue, TypeCode.Byte);

    // (Added so every code path returns or throws.)
    throw new InvalidCastException("ActualValue is not a byte.");
}

Well, the problem I’m faced with now, upon careful reflection, is this: if I’m drilling down to the original data type, I’m kind of defeating the whole point of generics in the first place. And that brings us to the point of this article.


I should be able to write a line of code like this:


Demand.That(x).IsNonZero().IsBetween(-45, 45);

And that code should be handled by generics that correctly infer the type of x and select the right code to execute. But I can’t write it, because (1) you can’t use ValueType as a constraint for generics, and (2) there is no common interface for the numeric types in the BCL.
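
For what it’s worth, there is a partial way around the comparison half of the problem, since every numeric type in the BCL happens to implement IComparable<T>. Here’s a minimal sketch under that assumption; the class name, method names, and exception choices are all mine, not NValidate’s:

using System;

public class ComparableValidator<T> where T : struct, IComparable<T>
{
    private readonly string name;
    private readonly T actual;

    public ComparableValidator(string name, T actual)
    {
        this.name = name;
        this.actual = actual;
    }

    // Ordering tests work: the constrained CompareTo call involves no boxing.
    public ComparableValidator<T> IsBetween(T min, T max)
    {
        if (actual.CompareTo(min) < 0 || actual.CompareTo(max) > 0)
            throw new ArgumentOutOfRangeException(name);
        return this;
    }

    // default(T) is zero for every numeric type, so this works too.
    public ComparableValidator<T> IsNonZero()
    {
        if (actual.CompareTo(default(T)) == 0)
            throw new ArgumentException("Value must be non-zero.", name);
        return this;
    }

    // Still impossible: nothing ties Byte.MaxValue, Int32.MaxValue, and
    // friends together, so IsMaxValue()/IsMinValue() can't be written
    // generically. And the constraint still admits any comparable struct
    // (DateTime, Guid, ...), not just the numeric types.
}

public static class Demand
{
    // Type inference picks T from the argument, so the call site reads
    // exactly like the line above: Demand.That(x).IsNonZero().IsBetween(-45, 45);
    public static ComparableValidator<T> That<T>(T value)
        where T : struct, IComparable<T>
    {
        return new ComparableValidator<T>("value", value);
    }
}

That gets the example line to compile and run, but it only papers over the hole: arithmetic and the per-type MinValue/MaxValue constants remain out of reach, which is precisely the missing common interface.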


This is an egregious oversight in the Framework. Worse, it’s been an outstanding complaint on Microsoft Connect since 2004. Through multiple iterations of the Framework, despite numerous postings on the site and clamoring for the oversight to be corrected, Microsoft has yet to do anything about it. For some reason, they seem to think it’s less important than things like Office integration, UI overhauls, destroying the usability of online help, and making Web services as difficult and unpredictable to use as possible.


It baffles me that Microsoft’s own responses to the issue have been questions like “How would you use this functionality?” Are they kidding me? There are so many uses for this it’s not even funny.


  • What happens when you have an array or list of heterogeneous numeric types and need to work with them in some meaningful way using a generic (possibly a delegate)?

  • What happens when you want to write a program that converts 16-bit data to 32-bit data, or floating-point data to Long data, or works with both at the same time using a common algorithm?

  • What happens when you need to work with graphics algorithms common to photo-processing software?

  • What happens when you need to work with the many different value types and convert them back and forth quickly and efficiently, as you would in, say, an online game?

  • Or, as in my case, what happens when you’re writing a reusable validation framework where boxing and unboxing are simply not an option, and a generic solution would handily solve the problem, but you can’t build one because there’s no common interface – no One Ring that binds them all together?

It’s about time this issue was resolved. And this isn’t something the open source community can fix. This is something Microsoft has to fix, in the BCL, for the good of all humanity. Six years is far too long to leave something this painfully obvious outstanding.


Thursday, June 10, 2010

On Work at Home Programs

In his article "Work from home. Save the Planet," David Gewirtz lays out numerous benefits of embracing work-at-home programs throughout the country. I find some of the author's arguments questionable, but I haven't read his book, either.

I do know, living in New England, that the vast majority of road repairs are due not to traffic but to seasonal weather changes. So that argument goes flying out the window.

Also, it's quite evident that many jobs simply cannot be done from home. Let's be realistic: security, plumbing, firefighting, surgery, shopkeeping, termite control, landscaping, road repair, construction, and the like all require you to be on site.

On the other hand, densely populated urban areas packed with office buildings and equipped with the latest technologies can reap the benefits of work-at-home programs. We are not bereft of the technologies that make this possible: instant messaging, email, Web applications, video conferencing, VPNs, and the ever-growing "Cloud" enable us to get more done from geographically separate locations than ever before.

The trick is to ensure that the work actually gets done with as much zeal as it would in the office, where people are observed by their coworkers. Let's be honest: people tend to be more disciplined about getting work done when their peers can walk in at a moment's notice and see whatever it is they're doing. That's not the case when you're working from home.

At-home workers must possess a greater amount of self-discipline than workers in the office, by simple virtue of the fact that they must manage their own time without being derailed by the daily distractions of the home.

And yet, if this program can be made to work, the benefits to the community (and the planet) can be numerous. Reductions in carbon emissions, fossil fuel consumption, traffic jams, traffic fatalities, and overall travel expenses are virtually (but not necessarily) a given.

In any event, it's not a simple case of black and white. No issue ever is.