Entries from Feb 2007 ↓

ASPnix adds ISAPI Rewrite - Finally!

Back in July of 2006, someone asked on the forum for ASPnix, the web host that specializes in CommunityServer, to add ISAPI Rewrite to their servers so that customers could clean up their URLs. Seven people, including myself, chimed in asking for it. Over the past eight months, little was said by ASPnix except by a former staffer, who implied it would harm the stability of their servers and who gave no indication that any real consideration was being made to offer a solution for URL rewriting.

Well, on Feb 22nd, Roma confirmed that ASPnix will finally be offering ISAPI Rewrite on its web servers. That’s yet another IIS-centric web host that has finally freed its customers from the shackles of poorly designed URL Hell! Hooray!

Now let’s just hope that Scott Watermasysk can be convinced to add URL rewriting support to CommunityServer using ISAPI Rewrite, to eliminate .ASPX extensions and more, sooner rather than later.
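For those who haven’t used it, ISAPI Rewrite rules live in an httpd.ini file and use regex-based rules much like Apache’s mod_rewrite. The rule below is only an illustrative sketch; the paths and the article.aspx query string are made-up examples, not actual CommunityServer paths:

```ini
[ISAPI_Rewrite]
# Map a clean, extensionless URL to the real .aspx handler (hypothetical paths)
RewriteRule /blog/archive/(\d+) /blog/article.aspx\?id=$1
```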

OpenDNS to Force Improved DNS Standard?


So I was reading Hanselman and came across his OpenDNS post. I’d not heard of it, but evidently it is a free service comprised of a network of ’smart’ DNS servers that can correct spelling errors (i.e. convert craigslist.ogr to craigslist.org) and provide warnings when users attempt to go to a phishing site. Cool!

Reading Scott’s post also led me to a discussion on mrneutrongodeon’s LiveJournal about OpenDNS where dr_strych9 commented (emphasis mine):

Part of your problem here is that BIND just plain sucks. I would expect similar results from djbdns, for example.

I also don’t like that “spelling correction” or “anti-phishing” feature. That doesn’t belong in the cache; it belongs at the resolver. … OpenDNS is unsuitable for use as an enterprise DNS cache. It might be a good solution for people who want to run their own personal cache on a local node.

When challenged by someone who did not understand that the term “resolver” had a defined meaning, dr_strych9 clarified (emphasis mine):

The “resolver” in the DNS protocol is the agent that sends questions and receives answers. Contrast with the other two kinds of agents in the DNS protocol, i.e. the “server” and the “cache” agents. The “server” sends answers to recursive questions, and the “cache” sends answers to non-recursive questions.

I’m saying the “resolver” agents are where this name fiddling code belongs, not in the “cache” agents where OpenDNS is doing it. Technically, OpenDNS is running an alternative “public” DNS horizon for its users. I think more than one “public” DNS horizon is a very bad idea. We only need one: the global public DNS horizon.

Also, I really hate designs that try to make the network protect the nodes from one another, particularly designs that outsource security to somebody I have no reason to trust. A much more secure and sensible approach to this problem would be to put the spelling correction in the DNS content servers (by registering multiple spellings and redirecting) and optionally the resolvers (by making them ask the right questions), and put the anti-phishing protection into just the resolvers, i.e. your web browser should protect you, not your DNS server.
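dr_strych9’s point that the name fiddling belongs at the resolver can be sketched in a few lines. Everything here is hypothetical (the typo table and function name are invented for illustration); the point is simply where the correction lives, at the client, before any question is ever sent to a cache:

```python
# Hypothetical resolver-side spelling correction (illustrative sketch only).
COMMON_TLD_TYPOS = {".ogr": ".org", ".cmo": ".com", ".nte": ".net"}

def correct_hostname(hostname, typo_map=None):
    """Fix a known TLD typo before the resolver sends its DNS question."""
    typo_map = COMMON_TLD_TYPOS if typo_map is None else typo_map
    for typo, fix in typo_map.items():
        if hostname.endswith(typo):
            return hostname[: -len(typo)] + fix
    return hostname

print(correct_hostname("craigslist.ogr"))  # craigslist.org
```

The same logic running inside OpenDNS’s caches, by contrast, silently changes answers for everyone who points at those caches, which is exactly the objection quoted above.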

And what follows are both my response and my analysis of the situation:

I agree. And I disagree. :)

What OpenDNS has done is recognize a way to improve on the DNS protocol. This could be argued to expose a limitation in the vision of the DNS protocol, and OpenDNS has offered a solution that is of interest to a reasonably significant segment of users. Unfortunately, that solution violates the spirit of the existing DNS protocol. You can say that it should be in the client, but the “cost” (in the technological sense) of requiring clients to be updated to get this functionality is unrealistic when compared with the cost of updating a well-defined set of servers.

And whenever the spirit of a protocol is violated, it causes lots of hand-wringing among the standardistas [1]. That happened a lot during the browser wars, but it forced the standards bodies to address the needs people were actually having, as opposed to pontificating on abstractions at a glacial pace, which is the nature of standards bodies when there is no market pressure to drive them. Market pressure spurs standards bodies into action: to rein in fragmenting yet proven technologies as quickly as possible and codify them into a standard, instead of spending years debating a hypothetical envisioned use (can you say ‘Semantic Web’?)

Yes, some negatives can result when market pressure is applied to force standards, but negatives can also result when a hypothetical is standardized without many proven implementations. All in all, I believe the accelerated pace of standards development resulting from market pressure is almost always a net positive.

Given that OpenDNS has identified a way to add value to the DNS protocol, I think it would make sense for the standards bodies to extend the DNS protocol in a backward-compatible way to incorporate this functionality. When up-level clients and servers are paired they can use the newer functionality, but when either the client or the server is down-level, transactions would work as they always have.

And if OpenDNS were to work to update the DNS standard, they could move from being a novelty to most web users and a rogue element to the standardistas to potentially gaining huge market share and capitalization. At the same time, this newer version of the DNS protocol could provide added value across the broader Internet, and create revenue opportunities for the many people and vendors who would support companies wanting to update their DNS infrastructure.

JMTCW, anyway.

In closing, I just want to remind readers that I definitely do like the idea of OpenDNS, unless and until someone points out some aspect of it that really should be considered harmful which I hadn’t considered.


  1. NOTE: I don’t mean the term ’standardistas’ pejoratively; I actually consider myself to be one, albeit a little more pragmatic than most.


They say people can’t understand an abstract concept unless they have language to describe it. For example, it is claimed that because Tahitians don’t have a word for sadness, they think of sadness as they would a physical illness.

As we are immersed in a world of rapid change, we need many new words to describe previously unidentified concepts. And when one of those new concepts inspires the masses, the media latches hold and a buzzword is born. And though everyone scoffs at them, we simply couldn’t discuss new concepts as easily without using buzzwords. Like it or not, buzzwords are here to stay as the pace of change accelerates.

Recent examples of Internet buzzwords are ‘AJAX’ and ‘Web 2.0’, with the latter often derided as meaningless and just hype. But ‘Web 2.0’ is, by definition, not meaningless! Nay, the term ‘Web 2.0’ identifies a nature and level of activity on the web not seen since the dotcom crash. If ‘Web 2.0’ were truly meaningless, there wouldn’t be a buzzword for it! Of course, whether or not ‘Web 2.0’ actually describes anything of tangible value distinct from prior periods is a matter of significant debate. :)

The reason buzzwords are so beneficial and will continue to be used is they give people a shared context in which to efficiently communicate, and that has an incredible value. Of course most buzzwords are merely shorthand for “the next big thing” but that’s just the nature of the hyped-up world we live in.

As an aside, the reason the term ‘Web 2.0’ has attracted so much derision is that it grouped hard-to-pin-down concepts that have more in common with the current era than with anything else. The shared context for ‘Web 2.0’ is ‘the period starting around 2003’, and since there is little value in discussing ‘the benefits of the period starting around 2003’, the value of the shared context is diminished and dissonance results. It would have been much better had the purveyors of Web 2.0 done more to segment and focus attention on the individual concepts instead of defining the umbrella that covered them. Ah, but easier said than done.

On the other hand, when a buzzword defines a concise and well-understood concept, the shared context can create many orders of magnitude more value than the concept on its own, as has been the case with the term ‘AJAX.’ Of course, the downside to buzzwords is that wherever they go, hype will follow, and that you just can’t avoid!

Camtasia Studio’s Huge Missed Opportunity

Jon Udell is a big fan of using screencasts to instruct, and I’m a big fan of watching them when I want to learn something. I’d like to start doing some of my own. However, reading his post on screencasting tips today, I was reminded of how I can’t help but think that TechSmith is really missing a huge opportunity because of their pricing for Camtasia Studio.

I’ve followed them for a while, and I know that they are pretty much the gold standard for screen recording software. However, their price of $299 is in no-man’s land. It is too low for the market it currently targets, the corporate market, and too high for a much, much larger market: the amateur and semi-pro blogger.

For those companies that need the software, TechSmith could easily double the price and would probably still sell 90% as many units. The lost 10% would be well more than made up for by the increased price per unit. And frankly, a higher price would motivate resellers more (as a former reseller, I always hated that my business did better financially when I raised prices on customers).
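The back-of-the-envelope math, using made-up unit volumes purely to illustrate the argument:

```python
# Hypothetical unit volumes, just to illustrate the pricing argument.
units_now = 1000                          # assumed baseline sales at $299
revenue_now = 299 * units_now             # revenue at today's price
revenue_doubled = 599 * units_now * 0.90  # ~2x the price, assuming 10% fewer units
print(revenue_now, revenue_doubled)       # 299000 539100.0
```

Even if the higher price cost them far more than 10% of their unit sales, the doubled price leaves a lot of headroom before revenue actually drops.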

On the other hand, $299 is way past the threshold at which an amateur blogger would buy a copy. Frankly, I think that is the reason we see so few screencasts on the web. In my 12+ years of experience selling software tools to developers, I’d say that $69 is probably about the right price for an amateur to semi-pro blogger to say, "Sure, what the heck, I’ll buy a copy and try this screencast thing."

TechSmith could easily cut features from this blogger version to differentiate it from their professional version. For example, the blogger version could be limited to outputting only to Macromedia Flash, i.e. no AVI, Microsoft Windows Media, RealNetworks RealMedia, or QuickTime. They could cut the output-to-EXE feature and the create-a-CD-ROM feature. And probably a few more things.

But TechSmith would need to be extremely careful NOT to cut the features that bloggers would really need. I ran into this over and over with component vendors while running VBxtras/Xtras.Net. I’d suggest a lower-priced version so they could reach a slightly different market, and the vendor would want to cut so many features that the product would have been crippled. Instead, what’s needed is to look at the features needed only by the high-end customers and cut those, while leaving the features every user could benefit from. For example, if TechSmith were to cut any of the recording, pre-production, or editing features, they could very well end up with an expensive demo and lots of frustrated customers badmouthing them on the blogs.

But what they could do, given this market, is have the screencasts from the blogger edition end with a splash-screen advertisement for Camtasia. Imagine that: getting advertisements onto a larger percentage of the blogs on the web, where the only thing required would be restructuring an existing product! Can you say "no-brainer?"

So, what would this look like?  I think if TechSmith were to offer two editions with the following prices they’d see a surge of new customers, the web would see an explosion of screencasts, and that would be great for (practically) everybody:

  • $69 - Camtasia Studio, Express Edition
  • $599 - Camtasia Studio, Professional Edition

So, if you are a blogger who thinks this is a great idea and would be anxious to buy a copy of Camtasia Studio for $69 but wouldn’t even consider paying $299, why not go over to TechSmith’s website and send them some feedback on the subject? And be sure to point them to this URL so they can read my justification. Together, we can make a difference. :-)

P.S. One thing the skeptics in the audience should know is that I recently started playing with the free software Wink from DebugMode (thanks to Ben Coffey for the recommendation). While it is great, I’d prefer the polish of Camtasia Studio. However, at $299 they won’t be getting a dime from me. On the other hand, for $69 I’d happily spend the money for the time and frustration it could save me, and I bet lots of other bloggers feel the same. So what will it be, TechSmith: "$69 in revenue, or nothing?"

On the Hunt for a New Programming Language

When it comes to programming on the modern-day GUI (post-DOS) platform, the vast majority of my coding has been, in order of experience, in T-SQL, VBScript in ASP, and about equal parts classic VB (v3.0 to v6.0) and VB.NET. As you can see from that order, I’m really a database guy, and since the beginning of the web I’ve always viewed the web as somewhat of a database publishing environment (anyone remember the DOS product dbPublisher Pro from Digital Composition Systems?). What’s more, the web allows a potentially infinite number of people to use a developer’s database publishing apps without any extra effort to distribute them. Finally, the web provides the ability to capture evidence of which apps were run, how often, and by how many people. Is it any wonder I’m more inclined to develop for the web than for the desktop?

Back during the period from 1994 to 2006 when I ran VBxtras/Xtras.Net, where we were a reseller of ActiveX controls and later .NET components, I never really thought about the cost of add-on components. Almost anything I wanted to play with I could get as an NFR (not-for-resale) copy just by sending an email or picking up the phone. Although I still have many of those relationships from a decade-plus in the business, I hesitate to ask for NFRs these days except from my really close friends, simply because the business I’m in today does nothing to benefit those people.

So numerous facts have me giving up on my prior five-year assumption that I would someday learn VB.NET at an advanced level, and have me instead actively considering alternatives:

  1. As I just stated, the fact that I now have to pay for third-party components and tools means I’m paying more attention to cost of acquisition,
  2. My recent favorable impressions of open-source developer tools and components, on par with some of the best tools Xtras.Net ever sold,
  3. My increasing frustration with the Microsoft developer division’s process and release cycle,
  4. All the best web applications seem to target L.A.M.P., such as MediaWiki, WordPress, vBulletin, Subversion, Trac, Ruby on Rails, Django, etc., and all but one of them are free to use,
  5. Completely preconfigured stacks (including the O/S) are becoming available for download as VMware appliances,
  6. Recognizing that Ubuntu has an approach strategic enough to result in Microsoft being profiled in a revised edition of Clayton Christensen’s Innovator’s Dilemma as yet another example of why great companies lose their leadership position,
  7. And lastly, my rising disgust for ASP.NET (I promise I will blog about those specifics soon…)

By the way, even though I dislike ASP.NET, I do still really like the .NET Framework and programming model. Oh, and a note about the first point: whereas there are good open-source tools available for .NET, the operative word is "tools," not "components." When you compare what’s freely available for .NET to what’s available for any of the "P"s (Perl, Python, and PHP), .NET just can’t compare, at least not in depth or breadth. Of course, being commercial products, the .NET third-party components are more polished and have commercial support available. However, unless you are a big company that needs to CYA and have a throat to choke, those are often dubious benefits, especially when you consider the benefits of open source (i.e. source code, and the ability to fix something and contribute it back so you know it stays fixed!). Anyway, I could write for hours on the pros and cons of open-source vs. commercial developer components and tools, but that’s not the subject of this post. The subject is which language I will focus the majority of my future attention on learning and using, and I’d love to get your input before I decide. Here are the current contenders:

PHP
All the major web apps I mentioned above seem to be built using PHP, and I’m currently running many of those apps. PHP is pretty similar to the ASP I know so well, it’s web-specific, there is a huge support community, it runs on both Windows and Linux, and every Linux web host known to man seems to offer it preinstalled. However, there seems to be a lot more crap PHP code littering websites than good PHP code, making it harder to learn and hard to separate the wheat from the chaff; it is not easy to configure on Windows servers (especially at a shared web host); and no one framework seems to have gotten the lion’s share of the market’s attention, so picking one would be a crapshoot. Oh, and it uses those infernal semi-colons, just like C#.
Ruby on Rails
Ruby and its framework Rails have gotten tons of attention, and it seems all the cool kids are doing it, especially lots of the Web 2.0 startups. It is very database-centric, has very elegant URL mapping functionality, and it seems you can get web apps built really fast with it. And Ruby.NET is also on the horizon, meaning I might be able to keep my toe in .NET. However, the community comes across as just a little too religious, and I’m generally allergic to that; AFAIK it doesn’t run on Windows, or at least not for shared hosting. Plus I’ve had people I respect tell me that Ruby doesn’t have nearly as many users as the "P" languages, that Rails is not nearly as mature as it’s purported to be, and that Rails makes simple things simple but complex things extremely difficult. And the number of web hosts that offer it is quite limited.
Python with Django
Unlike PHP, it seems Python is well suited for both web and desktop apps, which might come in handy from time to time, and a shipping IronPython means I definitely can keep my toe in .NET. The Django framework seems to be a little more mature and have a little less religion than RoR, and Django also has nice URL mapping functionality, albeit slightly less elegant than RoR’s. And it seems to run equally well on Linux and Windows. However, Django seems more document-publishing-centric and less database-centric, there are very few web hosts that support Django, and I’ve heard it is a real bitch to get working on a web host.
VB.NET with Castle MonoRail
But then again, maybe I’ll stick with VB.NET. The Castle/MonoRail project is supposed to be a lot like RoR, and I’d even have the option of using Mono on Linux. However, the third-party tools are definitely wanting, most web hosts haven’t a clue what Mono is, and Castle/MonoRail is coded in C#, so I’d always be dealing with semi-colons…
ASP with JScript
I could stick with ASP, which I still like, and learn JScript to replace VBScript, the latter of which just has too many limitations compared with the other current options. This clearly also runs on Windows, any Windows web host will support it, and I already know Windows backwards and forwards. On the other hand, I’d need to use ISAPI Rewrite for clean URLs, JScript on ASP has no future and few code examples on the web, and what third-party components and tools are there (to speak of…)?!?
VB.NET objects called from ASP
I could also develop VB.NET objects and call them from ASP; that’s what we last did at Xtras.Net (and I think that’s what they’re still doing, last I checked…). Of course, calling .NET objects as ActiveX controls just doesn’t feel right, and again there’s that third-party component and tools problem…
Windows PowerShell
Of all the teams working on tools for developers over at Microsoft, the PowerShell team run by Jeffrey Snover is the only one that gets me excited anymore. And in an email from him (or was it a comment on my blog, I don’t remember exactly) he said that PowerShell can do web, and will be able to do it more easily in the future. On the other hand, it’s not here today, and what if webified PowerShell is just another way to do rubbish ASP.NET instead of what it should be: a URL-based object-selector-and-invoker like Django or Ruby on Rails. And what’s the chance it will ever run on Mono…?
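That "URL-based object-selector-and-invoker" idea, as found in Django’s and Rails’s URL mapping, boils down to matching the request path against a route table and invoking whichever handler matches. Here is a minimal, framework-free sketch; the route pattern and handler are invented purely for illustration:

```python
import re

def show_article(article_id):
    # Stand-in handler; a real app would render a page here.
    return "article %s" % article_id

# Hypothetical route table: regex over the clean, extensionless URL -> handler
ROUTES = [
    (re.compile(r"^/articles/(\d+)/$"), show_article),
]

def dispatch(path):
    """Select and invoke the handler whose pattern matches the URL path."""
    for pattern, handler in ROUTES:
        match = pattern.match(path)
        if match:
            return handler(*match.groups())
    return "404 Not Found"

print(dispatch("/articles/42/"))  # article 42
```

Note that nothing in the URL betrays the implementation language; that is exactly the decoupling that .aspx extensions (and ASP.NET’s page-per-file model) make awkward.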
Is there anything else to consider…?

At this point I should probably explain what I’m not considering, and why:

Java on Anything:
Although I was really impressed at a Sun Tech Days recently here in Atlanta, even the Sun people were all over dynamic languages with praise, like Jython and JRuby. And though I was impressed with NetBeans 5.5, all the other "enterprise" baggage like J2EE and Servlets and JSP custom tags gives me the feeling I’d be jumping out of the frying pan and into the fire. Oh, and Java uses those infernal semi-colons too.
C# on Anything:
One word: semi-colons! Sorry, but if I’m going to go .NET, it’s going to be VB.NET (or IronPython). VB.NET is so much more natural to me than C#, and there are things you just can’t do in C# that you can do in VB.NET, related to using "Implements" on a method in an inherited class. (I ran into that limitation of C# on a project several years ago where I was managing a pair of interns coding in C#, and they hit a wall because of it. I can dig it up if anyone cares, or better yet, can someone who knows the specifics explain it in the comments?)
Perl on Apache:
Although my partner on Toolicious, Ben Coffey, a devoted disciple of Perl, will cringe to hear this (yet again), I can’t quite get my head around Perl, and the tide, at least today, is away from Perl. Of course Ben claims that will all change with Perl 6, but to me that remains to be seen, and I’d rather go with a bird in the hand (i.e. one with a much more active current user base) than a bird in the bush. But who knows; they say you should learn a new language every year, so if he’s right maybe I’ll try to pick up Perl 6 around 2012. :)

So there you have it: my potential choices and non-choices. Any thoughts on which I should choose? Any and all input will be appreciated and considered seriously.

Another Missed Ball: No .NET Application Container

David Laribee just referenced my IIS 7.0: Too Little, Too Late? post, and he made an interesting comment that I hadn’t previously pondered but that is very relevant:

It’s a major bummer that there’s no such thing as a virtualized “.NET Application Container” for the new scalable grid computing and provisioning services coming out (Amazon EC2, MediaTemple’s Grid-Server). Essentially .NET programmers can’t easily take advantage of new long tail models with easily-sourced infrastructure services. Going out on a limb, I’d suggest these limitations contribute to a lot of top/entrepreneurial developer talent moving over to various flavors of the LAMP stack, Ruby, etc.

I think this is yet another area where Microsoft is missing the ball. And it is related to the fact that people can’t build and distribute Windows-based stacks as appliances (because of licensing issues) in the same way people can build and distribute them for Linux. Mark my words: these two aspects are a significant Achilles’ heel for Microsoft and will figure significantly in the further decline of the Windows Server and .NET platform.

IIS 7.0: Too Little, Too Late?

Back in January 2006, I blogged about how much I wanted an IIS 7.0 that handles extensionless URL rewriting. Well, this week I got my March 2007 copy of Microsoft’s MSDN Magazine, in which they ran a detailed technical preview of the features and functionality of Internet Information Server 7.0. Reading through it, I found myself salivating over capabilities I’ve needed for literally a decade. Those who follow some of my other escapades know that the #1 feature I want it to provide over IIS 6.0 and prior is the ability to fully control the URL, with or without an extension. Yet something is different now. Five years ago I would metaphorically have killed for that functionality. Even a few years ago, I wanted it badly. But reading about all the great things in IIS 7.0 today, for future availability on server hosting platforms God-knows-when (i.e. after Longhorn ships *and* most Windows-offering web hosts upgrade), sadly comes across to me as just too little, too late.

Too Little

Too little because Microsoft won’t deliver IIS 7.0 to run on Windows Server 2003, necessitating a costly and in some cases problematic operating system upgrade. This will drastically limit the situations in which people can choose to develop for the new features of IIS 7.0: for example, when the funds for operating system upgrades are not in the budget, or simply because the developer doesn’t have the corporate clout to convince management of the need to upgrade. And the only people who will even be able to experiment with IIS 7.0 will be those with Windows Vista. Since upgrading to Vista also requires funds and often new hardware, it is not a foregone conclusion. Consequently, only a small percentage of Microsoft-centric developers will be writing web apps that use the functionality of IIS 7.0 over the next several years. Given the limitations of IIS 6.0, I just find this scenario unacceptable.

Too Late

Too late because Microsoft’s outdated process and slow release cycle, which I blogged about last month, have given rise to compelling alternatives on the Linux platform. Apache has had so many of the key features that IIS 7.0 provides, most importantly via its mod_rewrite functionality, that by the time IIS 7.0 is ready for prime time there’s a good chance only a tiny percentage of web developers will care. I, for one, need to develop web apps I can run on web hosts today, not wait around and dream of some yet-to-be-determined, brighter future day. Microsoft, the rules have changed, and you are not immune. You can no longer schedule product updates years out and expect people to wait to pay you for them when free-to-use open-source alternatives addressing the same needs exist today. I can no longer bring myself to design or run a web app on IIS 6.0 [1] when the URL management functionality I crave is already available on Apache. And by the time IIS 7.0 is released, I doubt I’ll even consider running an IIS server.
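For reference, the mod_rewrite idiom I mean is just a few lines in an Apache config or .htaccess file. This is an illustrative front-controller sketch; the index.php script and its path parameter are assumptions for the example, not any particular app’s actual setup:

```apache
RewriteEngine On
# If the request doesn't match a real file or directory...
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# ...hand the clean, extensionless URL to a front controller
RewriteRule ^(.*)$ index.php?path=$1 [QSA,L]
```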


However, Microsoft, there is a solution, if you will only listen (which I highly doubt). You should know, more than any other tech company, that the key to your success is getting developers to write programs for your platforms. Yet on the web, developers are voting with their feet, and most new web applications not sponsored by a "you don’t get fired for buying Microsoft" large-company IT organization are choosing to build on Linux and Apache. IIS was once the leading server on the web, but today it can barely eke out more than a third of the market share. If you don’t stem this tide, things will only get worse. Much worse. Here’s what to do: release IIS 7.0 as an update for Windows Server 2003 and Windows XP that gets installed automatically via Windows Update. Offer it in parallel to IIS 6.0, so that it must first be configured by an admin and IIS 6.0 disabled, if necessary. Feel free to restrict it in whatever ways you must, given 2003/XP’s lack of Longhorn/Vista infrastructure, but don’t use that as an excuse to eliminate key features such as URL management and HTTP response filtering. Doing this won’t change the minds of those who have already given up on Windows, but it will certainly minimize the profuse bleeding.


  1. Given how much I dislike ASP.NET and how frustrated I am with IIS 6.0, I can’t wait until I find the time to move my blog to a program other than dasBlog.


After 20+ years on Microsoft operating systems, I’m finally considering moving over to the dark side (or *away* from the dark side, depending on who you ask, LOL!). Yes, I’m considering buying a Mac. Actually a MacBook.

I decided to get a Dell 1405 because of its purported great battery life, and I placed my order Friday night (and I got a 25% coupon, sweet!). Then two things happened on the same day: Dell held my order, waiting for me to call to verify it, and I got a MacMall catalog in the mail and decided to read it. Hmmm.

I blogged about the Mac when I first heard of Parallels, and a friend of mine has a MacBook Pro that he runs Windows on, so I’ve been considering it for a while. Well, yesterday I went to the store to check it out and it was pretty nice (except for the lack of a right mouse button, doh!), but the guy at CompUSA couldn’t tell me about battery life.

No problem, I have another friend with a MacBook and I emailed him to ask about battery life. To which he replied:

I just googled for “mac book pro extended battery” and it
returned plenty of results…

Ouch, Busted! He did go on to relay his experiences, but point taken. :)

Anyway, though I still haven’t decided which laptop to get, I christen thee a new meme in my friend’s honor, while I pay homage to that soon-to-be-bygone era when a few people actually did read the manual:

GTFK: Google The F***in’ Keywords

Just to be explicit, there is a proper context for using GTFK. When someone asks you a question that requires a long explanation they could have easily answered themselves, it is perfectly appropriate to simply tell them:

GTFK!

From this I’m sure they will get the message. ;-)

P.S. I know I don’t have to tell you what the *** stands for.