Entries from Oct 2006
Oct 30th, 2006 | Opinion, Programming
I pondered this question today: How do you Objectively Decide Yea or Nay if someone is a Great Programmer?
What do I mean by “great?” I mean someone who is passionate about programming, someone who constantly learns the best techniques for writing code, and someone who stays on top of the latest and greatest technologies. You know the type; he’s the one that all the “just-drawing-a-paycheck” programmers go to when they can’t figure something out or when they need a tool recommendation. For example, Scott Hanselman would be one of my best examples of a great programmer.
Evaluating a programmer’s skill can be pretty difficult because it is so nuanced. But after some contemplation I devised an incredibly simple litmus test to help determine if someone is a great programmer. Certainly it’s not foolproof, but I’ll bet you’ll find it to be pretty accurate:
Do they frequently buy programming books and magazines with their own money, and do they read them on their own time?
Hopefully my little test doesn’t cause anyone who fails to get too annoyed with me; hey, I’m just the messenger! Where someone spends their time and money is probably the most reliable indicator of their true values. So if you really are a great programmer you’ll have already proven it by putting your money and time where your mouth is! ;-)
P.S. I have no financial affiliation with O’Reilly Media, Apress, or any other book publisher or reseller for that matter, so don’t think that claiming I’m just trying to get you to spend money will get you off the hook, because it won’t! The evaluation begins with you. :-)
Oct 23rd, 2006 | Opinion, Programming
In my opinion, there are two approaches to software development methodologies and their resultant architectures.
In the beginning: Monolithic Complexity
I call the first approach "Monolithic Complexity," which I characterize by the following:
- Grand Visions,
- Marketing defines Software Architecture,
- Significant Development Budgets,
- Attempt to Eliminate Constraints,
- Requirement to Accommodate Infinite Future Scope,
- Feature Sets based on Target Customer, Price Points, and Release Schedule,
- Divergent and Expanding Functionality,
- Related Teams Work on Most Components,
- Minimum Utilization of External Components,
- Software Comprised of Complex and Highly-Coupled Components,
- Most Components Built to Specifically Support Application,
- Data and Configuration Stored in Highly Optimized Binary Format,
- Brute Force Development Processes,
- Constantly Extended Release Dates,
- Installation, Operation, and Extension Requires Exactness, and
- Planned Upgrade Cycle.
Microsoft is a master of Monolithic Complexity. One need only see, use, develop for, and study the history of Windows, Visual Studio, SQL Server, and Exchange to understand just how well Microsoft fits this category. However, most modern software companies emulate this same approach, whether or not they execute as well as Microsoft has.
Then came: Lots of Little Layers
Moving on, I call the second approach "Lots of Little Layers." I characterize this software development methodology and its resultant architecture by the following:
- Modest Visions,
- Developer defines Software Architecture,
- Little or No Development Budgets,
- Nothing but Constraints,
- Moderate Consideration of Future Scope,
- Feature Sets planned based on Developer’s Needs,
- Cohesive and Focused Functionality,
- Unrelated Teams Work on Many Components,
- Maximum Utilization of External Components,
- Software Comprised of Simple and Minimally-Coupled Components,
- Most Components Useable in Other Contexts,
- Data and Configuration Stored in Poorly Optimized Text Format,
- Minimization of Development Effort,
- Frequently Released on Schedule,
- Installation, Operation, and Extension Accommodates Sloppiness, and,
- Upgrade Cycle an Artifact of Reality.
Anyone involved in software development in the past half decade will immediately recognize the latter: open-source development. Yes, these include Linux, Apache, MySQL, and PHP as well as Ruby on Rails. But open source also includes software that Microsoft-centric developers probably know well, including NUnit, NAnt, CruiseControl.NET, SharpDevelop, and dasBlog.
All in Life is Gray
Of course nothing in life is really black and white, and that applies here. Some open-source projects follow the second approach more closely than others, roughly in inverse proportion to the extent to which the project is shepherded by a corporate entity rather than a passionate group of individuals.
A Uniter not a Divider
Now I’m not arguing against corporate entities in this short essay as I ran one for the previous twelve years and plan to again in the near future. Instead I am merely indicating differences and then drawing attention to the benefits of a specific aspect of the latter.
Paper covers Rock
I am writing this not to discredit commercial software companies but because I want you to see how, by its very nature, open-source development results in lots of little layers of software. But more importantly, I want you to understand that this approach has significant benefits. Just as scissors cut paper and rock breaks scissors, paper covers rock.
And finally, I want to encourage commercial software companies, especially Microsoft, to emulate this approach.
And in the future, as I have time, I plan to drill down into different aspects of the "Lots of Little Layers" approach to identify why the approach is so valuable, complete with examples and counterexamples. In the meantime I look forward to your comments and questions on this topic.
Oct 11th, 2006 | Web
For those of you interested in Salesforce.com and their URL structure, useful for making bookmarklets, I documented Salesforce.com’s URL structure over at another blog I maintain entitled Thoughts on Salesforce.com. I also copied that documentation over to my WellDesignedUrls.org wiki with plans to maintain it on an ongoing basis.
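To give a taste of why a documented URL structure is so handy for bookmarklets, here is a small sketch. Note that the URL pattern it assumes (a 15-to-18-character alphanumeric record ID as a path segment) is purely illustrative and is not copied from the documentation mentioned above:

```javascript
// Illustrative helper: pull what looks like a record ID (a 15- to
// 18-character alphanumeric path segment) out of a Salesforce-style URL.
// The ID pattern here is an assumption for the sake of the example.
function extractRecordId(url) {
  var match = url.match(/\/([a-zA-Z0-9]{15,18})(?:[/?#]|$)/);
  return match ? match[1] : null;
}

// A bookmarklet would wrap a call like this around the current page URL:
// javascript:alert(extractRecordId(location.href));
```

Once a function like this works, the bookmarklet can do anything with the ID: copy it, log it, or rebuild a different URL from it.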
Oct 7th, 2006 | Web
I was listening to one of Scott Hanselman’s podcasts the other day, and in it he mentioned the open-source TiddlyWiki so I decided to check it out. For those of you not familiar with wikis yet, have you been living under a rock? (Sorry, just kidding; you can read up on wikis over at Wikipedia.)
That said, let me talk about the history of my own personal home page and how it relates to TiddlyWiki. For a long time I’d looked for a new personal home page for my browser that would let me easily add/edit/delete/reorganize the common links I use, and TiddlyWiki is now my new favorite in this respect.
Back in the late 1990s, after using the web for a few years, I moved to a hand-edited HTML page on my C: drive for my browser’s "home page" where I stored my favorite links. I would periodically edit that page when I wanted to add or remove links, but I didn’t edit it nearly as often as I would have liked because it always seemed like a bit too much of a PITA. Later I changed the HOSTS file on my machine to point www.myhomepage.com to a web server running at localhost, which made for a cleaner URL, but it didn’t change the difficulty of editing. This of course masked the real www.myhomepage.com site on the web, but as I didn’t use it I didn’t care! I stuck with that setup for years.
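For the curious, the HOSTS trick above amounts to a one-line entry like the following; the Windows file path shown is the usual default, and the entry itself is illustrative:

```
# C:\WINDOWS\system32\drivers\etc\hosts
# Point the friendly name at the web server running on this machine:
127.0.0.1    www.myhomepage.com
```

The browser then resolves www.myhomepage.com to localhost without ever consulting DNS, which is exactly why it masks the real site of that name.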
Then along came Google’s Personalized Home Page, and I was overjoyed. I used it for several months, configuring and reconfiguring. I even wished for enhancements to my Google Personalized Home Page, but over time its slowness to load just got to be too much for me. I went back to my custom HTML page at my locally customized domain "www.myhomepage.com."
I even added a "text" field with the full file name of the local file so I could quickly cut and paste the filename into a file-open dialog and edit the file. This improved things. But it’s nothing like TiddlyWiki!
I "installed" TiddlyWiki (i.e. copied the "empty" TiddlyWiki .HTML file) into a directory that is FolderShare‘d with my laptop (FolderShare is another of my favorite utilities) so that I can access the most updated verson do matter if I’m on my home computer or my laptop; this was a suggestion Scott Hanselman made, but I’m sure I would have figured it out too! :)
In a similar vein, one thing that makes a TiddlyWiki so cool is it can be stored on a USB key and then accessed from any computer!
TiddlyWiki works by using a collection of "Tiddlers," page sections that TiddlyWiki dynamically displays on your page using some useful animations. A Tiddler is analogous to a "Topic Page" on a server-based wiki, but unlike a topic page on a server-based wiki, you can (and usually do) have many Tiddlers open at the same time.
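To give a rough idea of why the single-file design works, here is a sketch of how a classic TiddlyWiki HTML file stores its Tiddlers as plain div elements alongside the JavaScript that renders them. The attribute names and tiddler content below are from memory and purely illustrative, so don’t treat this as the exact file format:

```html
<!-- Somewhere inside the single TiddlyWiki .html file -->
<div id="storeArea">
  <div tiddler="MyLinks" modifier="Mike" tags="navigation">
    * [[Scott Hanselman's blog|http://www.hanselman.com/blog/]]\n* [[WellDesignedUrls.org|http://welldesignedurls.org/]]
  </div>
</div>
```

When you save, the JavaScript rewrites the whole .html file with the updated store, which is what makes the one-file, USB-key-portable design possible.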
There are a few things I don’t like about it; nits really, such as that you can’t rename Tiddlers (or at least I haven’t been able to figure out how).
I even started thinking about using TiddlyWiki for the home page of a website I wanted to set up, since it allows disabling editing for everywhere but localhost. However, after looking into this concept, I realized that it wouldn’t be useful unless it were editable on another machine besides localhost. And I believe that, by its very nature, TiddlyWiki could not lock someone out from editing it because all of the code for the TiddlyWiki is stored in the HTML file itself! I guess a hybrid could work, and I would like to see that, but then it wouldn’t be a portable single-file solution.
Check it out. I bet you’ll love it too.
Oct 5th, 2006 | Web
Those of you who read my blog know that I strongly believe in the importance of URL design. For years it bothered me that we’ve seen so many URLs on the web that look like the following example of poor URL design from Jeffrey Veen’s 2001 book The Art & Science of Web Design:
Back in August of 2005 I finally got my thoughts together and wrote the post Well Designed Urls are Beautiful. Well, from anecdotal evidence (I don’t track my blog stats very closely) it appears that post has become my blog’s most popular post! The popularity of that post, combined with several other facts, inspired me to go ahead and launch a website with the following mission:
"Providing best practices for URL design, and to raise awareness of the importance of URL design especially among providers of server software and web application development tools."
The "facts" I referenced above are:
- I continue to feel strongly about URL design yet many are still oblivious to the benefits,
- I still have a lot more to say on the topic, and
- It appears that good URL design is one of the many tenets of Web 2.0, partly because of AJAX, Mashups, and REST-based APIs, meaning that it won’t be such an uphill battle!
The name of the website/wiki is WellDesignedUrls.org and for it I have the following goals:
- To create a list of "Principles" as best practices for good URL design,
- To cultivate how-to articles about implementing good URL designs on the various platforms like ASP.NET, LAMP and Ruby on Rails, servers like IIS and Apache, and web development tools like Visual Web Developer and Dreamweaver,
- To cultivate general how-to articles and resources for tools such as mod_rewrite and ISAPI Rewrite and others,
- To cultivate "solutions sets" for mod_rewrite and ISAPI Rewrite and others that can clean up the URLs on well known open-source and commericial web applications,
- To grade web applications, websites, and web development tools by giving them a "report card" on how well or how poorly they follow best URL design practices,
- To document URL structure of major web applications and major websites,
- To recognize people who are "Champions for the URL Design cause" (those who’ve written articles and essays promoting good URL design), and
- To provide resources for further reading about good URL design.
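As a small taste of the kind of "solution set" I have in mind, here is a minimal Apache mod_rewrite sketch that serves a clean URL from a legacy query-string script. The script name and parameter are hypothetical, chosen just for illustration:

```apache
# In httpd.conf (inside the site's config) or in an .htaccess file
RewriteEngine On

# Serve /products/123 from the legacy query-string script so that
# visitors and search engines only ever see the clean URL.
RewriteRule ^products/([0-9]+)/?$ /catalog.php?product_id=$1 [L,QSA]
```

The same idea applies with ISAPI Rewrite on IIS; only the rule syntax differs.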
The wiki is clearly new and thus a work in progress, so it will probably be a while before it realizes all the things I mention. However, as I have time and am able to recruit others to help, I think it will become an important advocate for good URL design and a great central resource for best practices. And if you’ve read this far, I’m hoping that you’ll consider either contributing when you feel you have something relevant, or at least start considering the value of URL design in your own web application development and point people in the wiki’s direction when applicable. Thanks in advance for the help!
P.S. I also plan to launch a WellDesignedUrl blog in the near future. Subscribe to my RSS feed if you want to be notified when the blog goes live.
Oct 3rd, 2006 | Marketing
Lately I’ve become very interested in Web 2.0, with particular interest in Mashup development, REST-based web services that empower mashup development, and building APIs for the web. The concept that the web can finally start evolving into a programmable set of services and data instead of just electronic brochures and self-service applications really energizes me!
On the other hand, even though I am incredibly excited about this trend, I’m frustrated by how few companies are actually doing it! Very few business people have thus far gotten that “Aha!” moment where they realize what so many technologists instinctively understand; the business benefits of opening up data and systems as web services on the Internet can be vast!
Even with such highly successful companies as Google and Yahoo freely sharing so much of their data via REST-based web services, and Amazon driving significant revenue1 from its pennies-per-transaction SOAP and REST-based web services, most business people I speak to just don’t get it! Or worse, they are scared to death of it or convinced it makes absolutely no sense!
Well, all I can say is that the old saw will definitely prove true: “What you don’t know can hurt you!” The late majority in this game (and even some of the early majority) who continue not to get it, avoid it in fear, or just flat-out deny it are going to become the Roadkill of the Web 2.0 Era!
1. Significant for such an early stage.