Entries from Mar 2004 ↓
Mar 28th, 2004 | Miscellaneous
I’m back from VSLive in San Francisco. It was a really good show for us. (Unfortunately, I was in the exhibit hall or in meetings the entire time, so I didn’t get to see any sessions…)
Since I literally just got back a few hours ago, I’m a bit fried from the trip, so I don’t have much creative to say right now. Maybe tomorrow, though after being out of town all week I’ll probably spend the next few days just catching up!
Mar 19th, 2004 | Opinion, Programming
First, I’m honored Paul Vick was willing to read my long-winded essay, and second I’m honored he would blog about it.
In this blog post Paul wrote:
…Mike also raises the question of strictness. He makes the argument (echoed by Don Box and others) that many programmers would do better with a world that’s more typeless and less strict than the one they get on .NET. As someone who lives day to day in a strongly-typed world (so to speak), this seems somewhat counter-intuitive: less typing usually equals less performance and less compile-time checking, leaving problems to be discovered at runtime. In fact, one of the major features of this release, generics, is specifically about making it easier to have *more* type safety and better performance at runtime. So the persistent voices in favor of scripting appear to be swimming against the tide. The question is: are they right?
Is Strictness Bad?
Let me clarify my point about less strictness. I for one absolutely love strictness; I think it makes far more robust applications possible. .NET is fabulous; I still program some in ASP/VBScript and I see so many things about .NET that I wish I could use.
So the cry for less strictness (at least from me) is not a belief that strictness has no value, but rather a concern about the minimum skill required for someone to become minimally productive. Every additional bit of required strictness adds one more thing the person with few or no skills must learn before they can get started. “Occupational”/“Hobbyist” programmers (the Matts and Morts) have short attention spans (!) because they have to get a job done. If learning to IMPORT this or CTYPE() that takes more than a few minutes, they’ll most likely give up and do without, or go elsewhere (god forbid!)
I don’t want to let them become sloppy coders; I just want to see you lower the initial barrier that currently keeps them from getting started.
Strictness is Good, but we need Accessibility and Transitionality
No, this is not about wanting less strictness; it is about wanting .NET to be more initially accessible. When your parents first taught you to ride a bicycle, didn’t they give you training wheels? It would have been a lot harder (with a lot more skinned knees) without! Do you use training wheels on your bike today? Of course not. Did you ski the black diamond on your first ski trip? (I hope not.) Did you get a learner’s permit when learning to drive a car? Were you able to windsurf within the first minute you tried? Don’t governments require hours and hours of practice with a skilled instructor onboard before they allow you to fly a plane solo?
Life is full of things that require significant skill, but life has many ways to achieve those skills without requiring initial mastery. Programming in .NET requires a lot of knowledge and skill, yet .NET offers no training wheels.
So I am not arguing against strictness, I’m arguing for the option of less strictness because that is what is needed to make .NET more accessible to hobbyists and occupational programmers.
But don’t Send them Adrift without a Paddle
However, I also believe the option for less strictness should go hand-in-hand with a strategy for providing transitionality. Many years ago a Microsoft Windows NT Server program manager whose name I cannot remember told me “scalability doesn’t just mean scaling up.” Similarly transitionality isn’t just about making it easy for the beginner. Transitionality is also about making it easy for the beginner to become an expert.
A transitionality strategy from Microsoft should incorporate messaging and education. It should not only promote how easy .NET is for building small solutions with lax strictness, but also promote how important it is for people to learn strictness and incorporate it into their apps, especially if those apps will grow in scale. I think this education is important to keep those not classically educated in computer science from developing ignorant biases against strictness.
A transitionality strategy should also incorporate methods of helping the beginner learn by example. In my VBScript.NET essay, I talked about an IDE and/or script-processing EXE that would show VBScript.NET code after it was converted to fully decorated VB.NET code complete with all strictness options so people can easily see how it could/should be done using their own code as the example. If you’d like more examples, see any of my other posts on transitionality as they all cover this concept in one way or another.
And don’t Fragment Expertise
One last point I think it is important to make, based on some of the responses I’ve seen to my VBScript.NET essay: it is critical that whatever Microsoft offers to make .NET more accessible also be compatible with its core .NET programming languages, and minimally I think that means VB.NET.
As I tried to get across in my essay, my concept for VBScript.NET was not a new language. It was not a proposal for a .NET version of VBScript. Instead I proposed a lexically simple dialect of VB.NET layered on top of VB.NET, with tools that made it easy to use. Tools such as a simple IDE that would show VBScript.NET code converted to VB.NET, and a “script processing” EXE that could execute as little as a single line of code without requiring projects or namespaces or references or that big honkin’ VS.NET.
Everything a .NET beginner learns should add to his .NET skill, especially in the library and language, though less so in the IDE and tools. That is where I believe VBScript, VBA, and VB6 went astray: expertise in one did not necessarily translate into expertise in the others.
The Bottom Line Goals
I’m not married to my idea for VBScript.NET. The same goals can be achieved numerous ways. I’ll summarize the goals I see:
- Make .NET more accessible to beginners by relaxing strictness
- Provide transitionality to allow .NET beginners to become .NET experts
- Offer tools that simplify use and “tutor” .NET beginners toward .NET expertise “by example”
- Ensure beginner .NET offerings are upwards compatible with expert .NET offerings
- Ensure everything a .NET beginner learns adds to their .NET expertise
So Paul, does this not shine a new light on what I have been proposing? BTW, I really do like your longer posts. They are much more interesting than the short “look there!” type of post by most bloggers.
Mar 19th, 2004 | Programming
In an earlier blog post I spoke of the need for transitionality in development tools. One of the areas of greatest need is Microsoft Office: Outlook, Word, Excel, et al.
Why Office? Recent versions of Office have provided almost full programmability, a nice object model, a macro recorder, and so on, which helps power users automate processes and allows programmers to create applications using components of Office. However, given the number of Office users, a woefully small number of people actually program Office apps. I believe that although Microsoft gave us great power by making Office programmable, they did not make programming Office accessible. They did not provide transitionality.
Case in point: I was just using Word 2003 and needed to do a search & replace, but search and replace didn’t work for what I needed, so I decided to use a macro.
By the way, I was creating a mail merge document and could not get Word to replace my placeholder text (i.e. “”) with a merge field (i.e. “«CompanyName»”) because a merge field is special. I needed placeholder text because Word 2003 won’t let me add or change fields in my Access database unless I close the Word doc and fire up Access. What a PITA! And definitely a step backward from prior versions.
I recorded the following macro that did a “find” on my placeholder and then inserted the appropriate merge field:
With Selection.Find
    .Text = ""
    .Forward = True
    .Wrap = wdFindAsk
    .Format = False
    .MatchCase = False
    .MatchWholeWord = False
    .MatchWildcards = False
    .MatchSoundsLike = False
    .MatchAllWordForms = False
    .Execute
End With
I tied this macro to Alt-Z and it worked fine, except I wanted it to pop up and ask me for the name of the merge field. And if it didn’t find the placeholder, I did not want it to insert the merge field.
Now, I view myself as a pretty good programmer; not the best, but decent. I taught object-oriented programming before it was fashionable (over 10 years ago). I studied programming as a sideline in college, I’ve programmed in over 10 languages, I wrote a 1000-page book on a programming language product called Clipper, I taught corporate and government developers how to program for seven years, and I’ve written a good portion of the VBxtras and Xtras.Net websites over the years, including both the front-end ASP and the back-end SQL. So I’m not the typical occupational/hobbyist developer, but my role and position only allow me to program occasionally.
It took me 30 minutes of reading help files to learn how to add my desired features to my macro. It shouldn’t have taken that long. What about the power Word user who doesn’t have my experience? They would have given up (I certainly should have, because I didn’t get a half hour of value out of my macro). Of course a programmer who has worked with the Word object model might say “Duh?!? That is soooo easy! How could you not figure that out in 2 seconds?” But that is the point.
My point is that the people most likely to make the best use of programming Office are the ones for whom programming Office is generally over their heads: power users. Programmers generally only use Office as a component within other applications; they don’t know what really needs to be automated when using Office; power users do. If power users were empowered to easily program Office, thousands of freeware, shareware, and commercial add-ons would be developed, some of which would form the foundation of new third-party add-on companies. For the most part this hasn’t happened, and all Office users miss the opportunity to use the add-ons that were never developed. Microsoft went to tremendous expense to make Office programmable, and is getting a pitiful return on that investment.
What to do? Though I don’t have the answer, I do have some suggestions:
- When recording macros, include commented-out code that would show how to do something a user might obviously want to do if they edit the macro (i.e. after a “Find”, put in a comment showing how to test whether the item was found.)
- Implement toolbar options using short macros and make it brain-dead easy for people to view the source code, such as by providing a “View Source” option on right-click.
- If using short macros to implement most toolbar options is not viable, do customer research to find out the most common small tasks for which people could use macros and add special macro toolbars for those, with brain-dead easy access to view source.
- Do research to find out what the most common needs are when coding and create drill-down wizards that insert snippets of code for those common needs (i.e. “Need to ask the user for some information and store it for use in the macro?”, for which the wizard would insert complete code using InputBox)
- Do research to find out what higher level needs are and let power users run wizards to write complete macros for those higher level needs.
My hope is that someone at Microsoft involved in programmability for Office will see these things and realize that if they could make programming Office more accessible, a lot more people would start programming Office. In future blog posts I’ll give new examples whenever I find it difficult to do something quick and dirty with Office macros (and I know I will.)
By the way, here is the updated version:
Dim strFieldName As String
strFieldName = InputBox("Field Name:")
Do
    With Selection.Find
        .Text = "<" & strFieldName & ">"
        .Forward = True
        .Wrap = wdFindAsk
        .Format = False
        .MatchCase = False
        .MatchWholeWord = False
        .MatchWildcards = False
        .MatchSoundsLike = False
        .MatchAllWordForms = False
        .Execute
    End With
    If Selection.Find.Found Then
        ' Replace the found placeholder with a merge field
        ActiveDocument.Fields.Add Range:=Selection.Range, _
            Type:=wdFieldMergeField, Text:="""" & strFieldName & """"
    End If
Loop While Selection.Find.Found
Mar 19th, 2004 | Personal, Programming
Ah the memories
Eric Lippert’s blog post about partial order sorting was, as always, interesting and well written. This specific post brought back memories of a project I did for Gateway Foods of Lacrosse, Wisconsin back in the early 90’s. (I googled them and they appear to no longer exist.)
My project was not as fun or as easy to understand as Eric’s example. Gateway wanted to perform the "hot and new" business analysis called Activity-Based Costing, or ABC. Instead of just looking at which food items were most profitable, they looked at which customers were most profitable.
Aced-out by the SQL dude
They hired one of the big eight firms (remember when they were called "the big eight"?), Booz Allen Hamilton, to do the analysis and then went looking for a programmer. I heard about the project from my contact "George," for whom I had delivered a training course on Clipper earlier. I got excited, but really had no chance at it because the IT department didn’t believe in PCs, so they hired a SQL programmer to implement it.
A few months went by and George called to tell me the project was going nowhere. A few months later George called to tell me the project was in real trouble. A few months more still, and I got a call from George asking if I could start on the project that week in hopes of salvaging it (of course, they had already spent most of the budget on the SQL guy, but what can you do?)
Be careful what you wish for
Later that week I arrived and met with the people from Booz Allen, and they proceeded to hand me a stack of paper 10 inches thick containing the allocation formulas! (I kid you not.) My job was to write a Clipper program to apply those formulas to a HUGE 300MB database (remember, this was the early 90s, and 300MB was huge back then.) The allocation formulas were for things like "Allocate the cost of fuel to deliver to each customer based on the miles required to travel to that customer: sum the cost of fuel and multiply by the miles traveled for each customer divided by the total miles traveled to all customers." And so on. Around 1800+ formulas in all.
For the next three days, my eyes glazed over as I tried to understand the formulas. I started thinking, "What the hell have I gotten myself into? There is no way I will ever understand all of these formulas well enough to encode them into a program. No wonder the SQL guy failed. I am screwed too!" But finally it started to dawn on me: "Don’t learn the formulas, learn the patterns." I’ve always been extremely good at pattern recognition, so why didn’t I see it immediately? (This contrasts nicely with all the other things at which I’m extremely bad.)
Patterns; what a concept!
The initial database had a number of tables like fooditems, customers, and orders. It turned out there were only three patterns:
- Scan a table and perform a calculation like sum or average and write the output to a one-record table. Examples included calculating total sales, average order size, etc.
- Scan a table and perform a calculation like sum or average for each fooditem, or customer, or similar, and write the output to a table that has one record for every fooditem, customer, or similar.
- Scan a table and perform a calculation on every record and write the output to a table with the same number of records as the scanned table.
That’s it. Three patterns. All 1800+ formulas boiled down to three patterns. SQL gurus will recognize the following as examples of the three patterns:
SELECT SUM(SubTotal) AS TotalPrice FROM Orders

SELECT CustomerID, SUM(SubTotal) AS TotalOrdered
FROM Orders
GROUP BY CustomerID

SELECT opc.TotalOrdered / c.TotalDeliveryMiles AS SalesPerMile
FROM Customer c INNER JOIN
     OnePerCustomer opc ON opc.CustomerID = c.ID
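The three patterns can also be sketched in a few lines of modern code. Here is a minimal illustration in Python; the table and column names are made up for the example, not Gateway’s actual schema:

```python
def scan_to_one_record(rows, col):
    # Pattern 1: scan a table, write a single summary record.
    return {"Total": sum(r[col] for r in rows)}

def scan_to_one_per_key(rows, key, col):
    # Pattern 2: scan a table, write one record per customer/fooditem.
    out = {}
    for r in rows:
        out[r[key]] = out.get(r[key], 0) + r[col]
    return out

def scan_to_record_per_record(rows, calc):
    # Pattern 3: scan a table, write one output record per input record.
    return [calc(r) for r in rows]

orders = [{"CustomerID": 1, "SubTotal": 100.0},
          {"CustomerID": 1, "SubTotal": 50.0},
          {"CustomerID": 2, "SubTotal": 75.0}]

print(scan_to_one_record(orders, "SubTotal"))                 # {'Total': 225.0}
print(scan_to_one_per_key(orders, "CustomerID", "SubTotal"))  # {1: 150.0, 2: 75.0}
```

Once you have the three pattern engines, every one of the 1800+ formulas is just data fed into one of them, which is the whole trick.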
Technique triumphs over toolset
Ultimately I succeeded using Clipper where the SQL programmer had failed. He didn’t fail because he used SQL, and my success was not because I used Clipper. He failed because he tried to hand-code all 1800+ formulas; I succeeded because I wrote a Clipper program to handle the three patterns, not the 1800+ formulas. In hindsight, given the tools available at the time, the best solution would probably have used Turbo Pascal and SQL, but such is life.
The Gateway people entered the allocation formulas from Booz Allen into a grid in my program; my program then determined which formulas to calculate and in which order, and then executed those formulas. I used Clipper’s "macro" capability to execute the text formulas, much like how Execute() can be used in VBScript. A VB.NET programmer could do the same thing today by writing a program that generates SQL and then passes that SQL to ADO.NET to execute.
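Clipper’s macro operator compiled and ran a string of code at runtime, and VBScript’s Execute() does much the same. A rough Python sketch of the idea, evaluating a text formula entered in a grid against each record (the formula and column names are hypothetical):

```python
# Compile a user-entered text formula once, then evaluate it per record,
# the way Clipper's "&" macro operator or VBScript's Execute() would.
formula = "TotalOrdered / TotalDeliveryMiles"   # hypothetical allocation formula
code = compile(formula, "<formula>", "eval")

customers = [{"TotalOrdered": 150.0, "TotalDeliveryMiles": 30.0},
             {"TotalOrdered": 75.0,  "TotalDeliveryMiles": 25.0}]

# Each record's fields become the variables visible to the formula.
results = [eval(code, {"__builtins__": {}}, row) for row in customers]
print(results)  # [5.0, 3.0]
```

The point is that the program never contains the formulas themselves; they stay data, which is what made three patterns cover 1800+ formulas.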
Back to Eric’s Post
The "which formulas in which order" was the part I was reminded of by Eric’s post. It actually took me three weeks to write that first program. 12 hours a day. 7 days a week. In the dead of winter. In Lacrosse Wisconsin. Brrrrr. (I’m from the south in Atlanta and I am a wuss when it comes to frigid weather!) Today I could probably write that same program in VB.NET and SQL in an afternoon. But then hardware is a lot faster now.
Why three weeks? My program took almost two days to apply the 1800+ formulas to the 300MB database! Ever try to debug a program where you have to step through code for 24 hours before you get to see the next bug? Not very productive. It was here I learned the tremendous value of creating an "intermediate representation." HTML is a perfect example: one program can generate HTML and another can display it. SQL, XML, and MSIL are also intermediate representations. One of the benefits of an intermediate representation is that you can validate output without having to observe behavior. In essence, intermediate representations provide much the same benefits as state machines.
I split my program into two modules: one that compiled the formulas, and a second that executed the compiler’s output. My "compiler" took only about 2 hours to run, which allowed me to debug my program in this lifetime. It still took two days to execute the formulas, but at least those formulas executed correctly!
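To make the compile/execute split concrete: the compiler phase can emit SQL text as its intermediate representation, and that text can be read and sanity-checked without ever being run against the big database. A small sketch in Python, with invented formula and table names:

```python
def compile_formula(target, expr, source, group_by=None):
    # Compile one allocation formula into SQL text -- the intermediate
    # representation. The text can be inspected and validated long
    # before it is ever executed against the real database.
    if group_by:
        return (f"SELECT {group_by}, {expr} AS {target} "
                f"FROM {source} GROUP BY {group_by}")
    return f"SELECT {expr} AS {target} FROM {source}"

plan = compile_formula("TotalOrdered", "SUM(SubTotal)", "Orders",
                       group_by="CustomerID")
print(plan)
# SELECT CustomerID, SUM(SubTotal) AS TotalOrdered FROM Orders GROUP BY CustomerID
```

Debugging the compiler means reading strings like that one, a loop measured in seconds, instead of waiting two days to observe the executor’s behavior.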
Just when I thought I was out…
I went home victorious. Or so I thought. A few weeks later I got a call. (They tracked me down while I was on vacation, no less!) The Gateway people had decided they wanted to modify the formulas and run "what-ifs," but after each "what-if" it took them two days to get an answer. "Couldn’t I make this thing any faster?" (Imagine if the SQL guy had succeeded in encoding all those formulas into a SQL script. He would have had a job for life! A boring-as-hell job, but a job nonetheless. Until Gateway realized all the money they were wasting and threw him out on his ear.)
My original program, except for the compiler part, did almost exactly what Eric Lippert blogged about a few days ago, and I was damn proud I wrote it. Optimizing it, however, was much, much harder. It turns out each pass over each of the tables took a really, really long time (we knew that), but there were also formulas that could be performed on the same pass, and I was doing a complete pass for each formula.
Well, to finally cut a long story short, another three weeks and I was able to implement an optimizer that batched together formulas that could run on the same pass. That version took only about 6 hours to run. Good enough, and that was the last of Gateway Foods for me. (I never was very good at milking clients; I guess that’s why I never made much money in consulting.)
So, back to Eric’s example. Let’s assume it took one minute for each separate item Eric put on, but if he was able to put on multiple items "at the same time," then each batch of items would take him only 1.5 minutes. His original program’s output dressed him in this manner:
tophat : [ ]
shirt : [ ]
bowtie : ["shirt"]
socks : [ ]
vest : ["shirt"]
pocketwatch : ["vest"]
underpants : [ ]
trousers : ["underpants"]
shoes : ["trousers", "socks"]
cufflinks : ["shirt"]
gloves : [ ]
tailcoat : ["vest"]
This would take Eric 12 minutes to dress. However, if he batched items that could be done at the same time (i.e. items whose dependencies were already satisfied), Eric could dress in 4.5 minutes:
tophat : [ ]
socks : [ ]
shirt : [ ]
underpants : [ ]
gloves : [ ]
trousers : ["underpants"]
bowtie : ["shirt"]
vest : ["shirt"]
cufflinks : ["shirt"]
pocketwatch : ["vest"]
shoes : ["trousers", "socks"]
tailcoat : ["vest"]
So here’s my challenge, Eric (and I hope you don’t hate me for it): implement an optimizer that will generate batched output where items in each batch have no interrelated dependencies, as shown in the last example above. It was a bitch to write in Clipper; how about in JScript?
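For what it’s worth, batching a dependency list into levels is a short exercise today; here is one way in Python using the standard library’s graphlib (certainly shorter than it was in Clipper):

```python
from graphlib import TopologicalSorter

# Eric's wardrobe: each item maps to the items it depends on.
deps = {
    "tophat": [], "shirt": [], "socks": [], "underpants": [], "gloves": [],
    "bowtie": ["shirt"], "vest": ["shirt"], "cufflinks": ["shirt"],
    "trousers": ["underpants"], "pocketwatch": ["vest"],
    "shoes": ["trousers", "socks"], "tailcoat": ["vest"],
}

ts = TopologicalSorter(deps)
ts.prepare()
batches = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # all items with no unmet dependencies
    batches.append(ready)
    ts.done(*ready)

for batch in batches:
    print(batch)
# 3 batches, so 4.5 minutes at 1.5 minutes per batch
```

Each pass of `get_ready()` yields one batch of mutually independent items, which is exactly the "run on the same pass" optimization the Gateway program needed.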
Mar 18th, 2004 | Miscellaneous, Opinion
I’ve been programming for over 20 years now, and from the day I started programming I’ve come across language wars. Hell, I was even involved in many over the years. That is, during the foolish days of my youth… (like I never get caught up in them today!)
But lately I’ve noticed them more, and they seem especially vitriolic. Not to pick on anyone especially, but these are some of the ones that got me thinking (clarification: I referenced the third because of the comments it received, not because of the original post), and a lot more (but those are the ones I remember.)
It seems to me to be mirroring the macro world today. People all over are feeling frustrated and upset, and just generally seem unhappy. In the US it was probably triggered doubly by the dotcom bubble bursting and then 9/11, but I believe outside the US there are other more varied and in many cases more pervasive drivers.
I was talking to Scoble the other day and he was able to put it into words where I have previously failed. He called it the "My team" mentality. Everyone seems to be abandoning logic for emotion, choosing sides, and rooting for their team. You know the type; those on my team can do no wrong and those on the other team can do no right.
So in that spirit, I want to provide you with numerous viewpoints:
- I am Muslim and you are Jewish, therefore you are stupid.
- I am Jewish and you are Muslim, therefore you are stupid.
- I am Muslim and you are Christian, therefore you are stupid.
- I am Christian and you are Muslim, therefore you are stupid.
- I am Christian and you are Jewish, and my enemy’s enemy is my friend.
- I am Jewish and you are Christian, and my enemy’s enemy is my friend.
- I went to Harvard and you went to Yale, therefore you are stupid.
- I went to Yale and you went to Harvard, therefore you are stupid.
- I went to Public School and you went to Private School, therefore you are stupid
- I went to Private School and you went to Public School, therefore you are stupid
- I drive a Ford and you drive a Chevy, therefore you are stupid
- I drive a Chevy and you drive a Ford, therefore you are stupid
- I drive a Chevy and you drive a Ford, and my enemy’s enemy is my friend
- I live in the city, and you live outside the city, therefore you are stupid
- I live outside the city and you live in the city, therefore you are stupid
- I live uptown and you live downtown, therefore you are stupid
- I live downtown and you live uptown, therefore you are stupid
- My team is the Yankees and your team is the Mets, therefore you are stupid
- My team is the Mets and your team is the Yankees, therefore you are stupid
- I have a college degree and you do not, therefore you are stupid
- I don’t have a college degree and you do, therefore you are stupid
- I use a PC and you use a Mac, therefore you are stupid
- I use a Mac and you use a PC, therefore you are stupid
- I use Linux, therefore you are stupid
- I use Open Source, therefore you are stupid
- I am in management and you are in the union, therefore you are stupid
- I am in the union, and you are in management, therefore you are stupid
- I am on the west coast and you are on the east, therefore you are stupid
- I am on the east coast and you are on the west, therefore you are stupid
- I live in the US and you live elsewhere, therefore you are stupid
- I live outside the US and you live in the US, therefore you are stupid
- I live in the US and you live in Canada, therefore you are stupid
- I live in Canada. Have a nice day, eh?
- I live in Quebec and you live in Ontario, therefore you are stupid
- I live in Ontario and you live in Quebec. Have a nice day, eh?
- I am Republican and you a Democrat, therefore you are stupid
- I am Democrat and you are Republican, therefore you are stupid
- I am Libertarian and you are Republican, and my enemy’s enemy is my friend
- I am Republican and you are Libertarian, and my enemy’s enemy is my friend.
Now with that backdrop, how does this sound?
- I use C# and you use VB, therefore you are stupid
- I use Java and you use C#, therefore you are stupid
- I use C# and you use VB, and my enemy’s enemy is my friend
- I use VB and you use VBScript, therefore you are stupid.
Pretty stupid, huh? Come on, all; programmers are supposed to be logical. A collective positive attitude goes a long way toward improving everyone’s lot. Can we not use our logic to solve problems for the benefit of all rather than engaging in all these petty squabbles?
(I don’t expect this post to change anything, but hope springs eternal…)
Mar 15th, 2004 | Programming, Software, Web
Matt Hawley of eWorld.UI has posted what appears to be a pretty cool tool called Web Deploy for ASP.NET. Check it out.
Mar 15th, 2004 | Opinion, Programming
I just came across a blog entry that disturbed me, entitled Should the hobbyist programmer matter to Microsoft? by Rory Blyth of Neopolean.com. It was not so much the blog entry itself that disturbed me as the tone of the comments it received (over 100 at the time.) As I started writing this post, I planned to comment on those comments. Instead, all my comments ended up focused on Rory’s post. Maybe I’ll comment on the comments in a future blog.
Divining Microsoft’s Responsibility
Rory was commenting on Kathleen Dollard’s Guest Editorial at Visual Studio Magazine entitled Save the Hobbyist Programmer. (I think it regrettable that Kathleen chose the term “hobbyist programmer” rather than something like “occupational programmer,” because the post’s comments seemed to attack what the term “hobbyist” implies more than the actual concepts Kathleen was trying to convey.) In his post, Rory starts out by saying:
“Microsoft’s responsibility is not to hobbyist programmers. As someone who makes his living by working with Microsoft technologies, I would be rather ticked off if MS were catering to people who weren’t professional coders.“
I found that quote most interesting because nowhere in his post did Rory reference Microsoft’s corporate by-laws, mission statement, or an analyst presentation where they indicated they had explicitly chosen to avoid meeting the needs of “non-professional coders.” As far as I know, Microsoft has issued no such statement of direction, so I’m not sure why Rory would state Microsoft’s responsibility is not to hobbyist programmers. Has he been speaking to Gates or Ballmer and just not told us?
She turned me into a Newt!… Well I got bettuh.
Continuing, Rory goes on to state:
“I’m not so sure where the increased complexity is. I did quite a bit of VB6 programming before moving to .NET. I found the switch to be a bit of a shock, but when I was over the hump of getting accustomed to .NET, I found my job much easier.“
Odd. If there were no increased complexity, what “hump” would there have been to get over?
Compare apples and oranges; they are both eaten
In his post Rory also implies an analogy comparing an older vs. newer car with the hobbyist programmer vs. the professional programmer:
“Making changes to accommodate the tinkerers would be like rolling back advances in auto engineering. My car is held together by more electronics and gizmos than you’ll find at NASA Mission Control. I can’t change my own oil, and I sure as hell wouldn’t try to yank a spark plug out. Although I’ve been denied the chance to put my car on blocks and work on it over the weekend, I’ve been granted an automobile that is much more efficient, reliable, and powerful than I would have had if it had been built with accessibility to non-mechanics in mind. “
I tried to understand this analogy, but I just could not get it to fit:
- Changing oil and spark plugs is routine mechanical maintenance. Programs have no mechanical maintenance; assuming programs are designed correctly, they never “wear out.” There is simply no equivalent in software for changing oil or plugs.
- Automobiles were generally designed to address a common use case: transportation. As designed, they meet the needs of most people who use them, although probably never perfectly but almost always good enough. Automobiles are far more analogous to applications like Word and Excel than to development tools which are designed to create software for new use cases.
- It is simply not possible for one person with only intellectual resources to modify or create a new type of car to meet new use cases.
- In the automotive field, the better analogy for development tools might be the tools required to fabricate new automobiles. Still, the physical requirements of creating an automobile to meet a new use case, versus the lack of such requirements in programming, make this an inexact analogy at best.
Maybe another analogy would be the professional drivers whose job it is to use cars (and trucks) to transport people or things from one location to another, such as truckers, taxi drivers, and package delivery drivers. By comparison, there are many non-professional drivers who also transport people or things in cars (and trucks.) We could possibly imagine a professional driver saying of General Motors:
“General Motor’s responsibility is not to hobbyist drivers. As someone who makes his living by driving General Motor’s vehicles, I would be rather ticked off if GM were catering to people who weren’t professional drivers.“
It would seem that might not make the most sense for GM, as its market of drivers who transport people or things from one location to another is larger if it does not ignore the “hobbyist” driver. I won’t comment on this further.
Mommy, where do “Professionals” come from?
Elsewhere in his post Rory states:
“Gone was the horrible combination of procedural and OO programming styles. I no longer wrote code that made use of some mysterious global function that existed in god-knows-what library. My web applications are no longer monolithic pages of stand-alone code that’s about as reusable as a condom. Life is better.“
Strange. By implication Rory states he is a professional programmer, but my understanding is that a professional programmer wouldn’t write large monolithic modules of stand-alone code that are not reusable. But maybe that is naive on my part, as technically the definition of “professional” is “they get paid to do it.” So maybe “professional” doesn’t mean one is actually reasonably skilled in one’s profession?
But maybe that too would be an unfair interpretation. Maybe when Rory used the term “professional programmer” he meant someone who has developed a level of skill and expertise in the area of programming, and someone who would be considered a “professional programmer” by others holding the same designation. That’s probably what he meant?
Assuming that definition, let’s revisit Rory’s last quoted statement above. Is it possible that Rory was not always a professional programmer but actually learned his skills over time? Maybe Rory is now a skilled professional programmer, but earlier he was not? That would seem logical, as I don’t believe a person has ever left the mother’s womb on the day of their birth carrying the designation of “professional programmer.” (Please post comments and provide references if any readers know otherwise.)
Based on that postulate, wouldn’t logic dictate that the state of being a professional programmer requires a transition from being something less than a professional programmer? Further, it would seem logic would also dictate that it is not possible to instantaneously transition from being “not-a-professional” to being a professional, if being a professional by definition requires expertise?
If my logic is all correct, that would then explain how Rory could be considered a professional programmer now even though his code was previously “implemented using a horrible combination of procedural and OO programming,” which is inconsistent with how a professional programmer would implement code. Clearly, in the past he was not (as much of) a professional programmer as he is today?
The logic would also appear to reveal that there are not just two states, one of being a “professional programmer” and another of being a “hobbyist programmer,” but instead an almost infinite series of states in between the two extremes. Given that, it would be very difficult to determine exactly to whom along that continuum Microsoft has responsibility, assuming such a responsibility could be pinned down at all.
Since any given state along the continuum between hobbyist and professional is infinitesimally small, how could Microsoft limit its target focus to one exact state along the continuum? Given the costs of creating and supporting development tools, would not Microsoft be required by the laws of economics to provide tools usable by a broader range of programmers along the continuum?
If so, it creates the quandary of where to stop along the continuum as one targets the professional and expands toward the hobbyist. Since all that appears to separate the hobbyist from the professional is skill and expertise, would not catering to the hobbyist in turn generate more professionals, since the hobbyist becomes more like a professional with each day spent programming? That would tend to support the earlier argument that a programmer transitions from not-a-professional to professional over time.
Divining Microsoft’s Responsibility, Redux
To finally bring this all to closure, let’s consider the modern publicly-held corporation in a capitalist society. My understanding is that such a corporation’s primary responsibility is to its shareholders, and public-company shareholders generally want, all other things being equal, the share price to increase. Since Microsoft is a modern publicly-held corporation in a capitalist society, it would seem Microsoft’s responsibility is to its shareholders, and hence to increase the price of its shares.
In the past, ignoring government intervention, Microsoft has proven itself rather adept at increasing its share price (I personally am glad I bought Microsoft shares many years back, which I have since sold at a gain.) I think it can be argued that Microsoft’s strategies for increasing its share price have generally been to increase the installed user base of Windows.
Further, I think it is generally agreed that computer users purchase computers because the applications they run provide some level of value. Since developers create applications, it would seem empowering developers would help increase the number of applications available. Simple math would then imply that the more people developing applications for Windows, the more potential users would find Windows applications they value, and hence the more likely they would be to purchase Windows from Microsoft. Further, it seems logical that users would place more value on applications developed by programmers with greater skill and expertise, because those programmers would be more likely to create programs of higher quality.
Given all of these postulates and theorems, it would seem Microsoft’s best strategy would be to focus on creating as broad a base of programmers as possible, and enable as many of them as possible to move from “hobbyist” to “professional.” Doing this would be most likely to increase the number of quality applications available for Windows, which would likely increase the number of people buying Windows, which would likely increase Microsoft’s share price. Would this not be the most logical strategy for Microsoft, given the analysis?
How I see it
Assuming I have not made any logic errors in my prior analysis, it would then seem that, regarding Microsoft’s “responsibility“, the following could be said:
“Microsoft’s responsibility is not only to professional programmers but also to hobbyist programmers, and all programmers in between. Anything less, as a modern publicly-held corporation in a capitalist society, would be a dereliction of their duties to their shareholders to whom they have ultimate fiduciary responsibility.“
Anyway, that’s how I see it. :-)
Mar 12th, 2004 | Programming, Software
I use MS Access from time to time, and for what it is, it is a pretty nice tool. However, there are a few things I’d really like to see added (the subject of a future blog), and there are a few bugs I run across from time to time that really frustrate me. In fact, I just ran across one of them.
Let’s say you have a simple table in a SQL Server database and an Access Data Project (an .ADP file) pointing to that database. Let’s say that table is called tblPeople and it has fields ID, Name, and Email. Further, let’s say we want the values in the Email field to be either 1.) NULL, or 2.) unique. Of course you can’t simply put a UNIQUE constraint on Email, because SQL Server treats NULLs as duplicates of each other for uniqueness, so only one row could have a NULL Email. But you can create a calculated (computed) field, let’s call it EmailKey, that refers to the Email field when it is non-NULL, or to the int ID field converted to character when Email is NULL, i.e.:
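The snippet that originally followed here appears to have been lost, so this is a reconstruction from the description above; the exact expression and data types in the original may have differed:

```sql
-- tblPeople with a computed EmailKey column: the Email value when
-- present, otherwise the row's ID converted to character. Since ID
-- is unique, two NULL-email rows can never produce the same EmailKey.
CREATE TABLE tblPeople (
    ID       int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Name     varchar(100) NOT NULL,
    Email    varchar(255) NULL,
    EmailKey AS (CASE WHEN Email IS NOT NULL
                      THEN Email
                      ELSE CONVERT(varchar(10), ID) END)
);
```

(One caveat with this scheme: an Email value that happened to look like a numeric ID could collide with a NULL-email row’s key; prefixing the converted ID with a character that can’t appear in an email address would avoid that.)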
Then, create a UNIQUE index on EmailKey, and everything works great!
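Assuming the table and column names used in this post, the index statement would be something like:

```sql
-- The UNIQUE index on the computed column enforces "NULL or unique"
-- for Email. In SQL Server 2000, indexing a computed column requires
-- the expression to be deterministic and certain SET options
-- (including ARITHABORT) to be ON when the index is created.
CREATE UNIQUE INDEX IX_tblPeople_EmailKey ON tblPeople (EmailKey);
```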
Great, that is, except when you try to add or edit a record from within Access, at which point you get the following error message: "UPDATE failed because the following SET options have incorrect settings: ‘ARITHABORT’"
Arrggghhh!!! Does anyone perchance know how to fix this, or is it just something I have to wait for MS to fix? (BTW, this problem has existed for several versions of Access.)
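For what it’s worth, the usual explanation for this error is that any connection updating a table with an index on a computed column must have ARITHABORT ON, and Access’s ADP connections apparently never issue that SET statement. One workaround sometimes suggested, assuming SQL Server 2000 or later (the database name below is a placeholder), is to make the option the database default:

```sql
-- Make ARITHABORT ON the default for every connection to this
-- database, so clients (like Access) that never issue SET ARITHABORT
-- can still update tables with indexes on computed columns.
-- "MyDatabase" is a placeholder for the actual database name.
ALTER DATABASE MyDatabase SET ARITHABORT ON;
```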
Mar 12th, 2004 | Miscellaneous, Opinion
Mike Sax mentions my blog today on his blog.
However, Mike talks about Component Resellers and, by asking:
“How component and tools resellers can re-invent themselves to provide value for developers?”
he implies component resellers don’t currently offer value to developers, kind of like the old trick question:
“So have you quit beating your wife?”
Mike states that in the years before the Internet vendors needed our printed VBxtras catalog but says they don’t anymore because:
“The Internet has challenged the value resellers used to provide. Finding the right vendor is only a Google or Yahoo search away. And if you want even sharper focus, you can just browse some of the component galleries out there, like on ASP.NET”
(One question for Mike: have you actually ever tried to use the component galleries on ASP.NET? It’s a hodge-podge, and we find it very difficult to use when we try to find new vendors there. Even my competitors do a much better job.)
So let me address his implication that resellers don’t provide value:
- For those who care about price, we always offer lower prices than vendors. If developers are on a tight budget or buying for a lot of developers, they can almost always save money buying from a reseller like VBxtras and Xtras.Net.
- Developers buying for a large number of developers and/or buying products from multiple vendors can aggregate their purchasing through us.
- If developers purchase on Net 30 accounts, they can establish a single Net 30 account with us instead of one per vendor.
- If they buy from us, all their downloadable software purchases and all their serial numbers are held in their “online library” at our website for future reference. No need to track down the vendor when the bits are lost or the serial numbers are misplaced.
- When developers buy direct from a vendor and they have problems the vendor won’t correct, they have no advocate. The developer is just one of probably tens of thousands of other developers whose complaints are lost in the noise (I have emails from developers who didn’t buy from us but wish they had because of this. I could post but don’t want to as it could get real ugly with those specific vendors…) If they buy from us and have a problem, we’ll go to bat for the developer. After all, there are not that many resellers, and as resellers we are much more focused on customer service than the average vendor. (If our customer service sucks, why would a customer buy from us? If the vendor’s customer service sucks but his product is the best, developers still grudgingly buy.)
- If a developer buys direct, who is going to help them select among competing products? A vendor will rarely tell a customer that a competitor’s products are much better for their needs, especially if that vendor’s salespeople are commission-based. Since we carry multiple products in a category, we can help developers who need help selecting products to decide what best meets their needs, especially since we are the most focused reseller among our competitors (Focus: VBxtras = VB6/ActiveX only, and Xtras.Net = .NET only).
- Though I can’t speak about other resellers, one of the benefits of buying from us is our new XDN program, short for Xtras.Net Developer Network. We have three levels: Basic, Plus, and Professional (actually Plus hasn’t been finalized as I blog but will be soon.) We position XDN as “Empowering Serious .NET Developers” and our Professional membership ($99/year) has some pretty outrageous benefits, we think. XDN targets those developers that are influencers amongst their peers, especially the ones that evaluate and buy and/or would like to buy a lot of 3rd party products. How is this a benefit of buying from us? Starting this month we give developers 10% of their purchase price (5% on Microsoft products) toward an XDN Plus ($25) or an XDN Professional ($99) annual membership. We plan to expand that concept over the coming months. The choice seems clear to me: pay full price at vendor’s site with no additional benefits, or pay a discount at VBxtras or Xtras.Net and get a lot more benefit than just the one product.
These are just some of the reasons why I believe component resellers still add great value today. I’ll blog about more reasons in the future.
However, I will say that we are having challenges. Our two biggest challenges:
- How can we stay top-of-mind? How do we get the developer to remember to come to us instead of just going to Google? (This isn’t really that hard for us to solve, and we are hard at work on it right now.)
- Much worse, how do we get past corporate resellers? What are corporate resellers? They are the general resellers that large companies contract with to manage all their software purchases. These corporate resellers typically offer zero value to vendors and developers, yet they are gaining an increasingly large share of purchases as Windows development becomes more enterprise-critical. It is almost impossible for us to compete with them. We go to trade shows and developers tell us, “We love your catalog and website. We download demos all the time!” When we ask if they buy from us, they say, “No, we have to purchase everything through our corporate reseller,” and they typically don’t seem to care enough to help us get around that requirement. Grrr! It makes me want to limit my website and downloads to only people who can buy from us if they decide to make a purchase!
On the other hand, like so many other companies affected by the Internet, we are evolving, and “How?” was exactly what Mike Sax asked in his blog. In two to three years we’ll probably look very different than we do today, offering lots of new services. I don’t think that means we’ll stop being a component reseller; on the contrary, I think we’ll become far stronger as we evolve and add these services.
So how will we evolve, and to what? Only time will reveal. :)
Mar 12th, 2004 | Opinion, Programming
Now that I wrote my VBScript.NET post, I’m finding others have also been addressing the issue. Maybe one more voice helps?
Actually, I had previously read Kathleen’s article, and it influenced my essay, but by the time I was preparing to write I had forgotten about it: out of sight, out of mind.
Sorry Kathleen, I’ll do better next time. :(