Justice pinged me as soon as I got home today to point out what has happened to Kathy Sierra. Take the time to read the whole of Kathy's post. If you're not a subscriber to her blog, read the comments and you'll see how passionate the community is for her knowledge and personality.
I usually don't read the comments on blogs because you inevitably (at least on the A-list ones) get the run-off-at-the-mouth commenters who have nothing to add to the conversation, but feel that they need to be heard. Taking it to the level that Kathy has received moves from the downright absurd and irritating to the criminal and frightening. The perceived safety of the anonymity (again, perceived) afforded by the internet makes people think they can say and do things with no more repercussions than a flaming. As the web has matured, the user base has grown and the diversity of those users has increased, and the acceptability of these acts has declined.
Slowly the virtual world is growing to mirror the societal values of the real world. I would suggest that we can't expect it to change overnight, but we should push to try to make it happen. Just as we can't allow demeaning and destructive behaviour in our physical society, we can't allow it in the virtual one either. Unlike the real world, there are no dedicated cops in the virtual one. Instead we rely primarily on corporations who are using the web for financial gain, or we rely on each other. Neither of those options has much bite. Really, who here would change the fibre of their personal behaviour because I told them that they should? Yeah...just like I thought...nobody. Some hosting company takes down your blog because it has questionable content? Big deal. It's easy to find another company that wants to make a buck and will look the other way, if they even bother to investigate your past.
I hope we've reached the peak of this kind of action on the web, but let's face it: more and more, the web is going to mimic real life. In real life we have idiots by the droves. I rarely read Scoble's blog, but today I did, and for a change he's saying something that I believe in wholeheartedly. Like Scoble, I'm going dark until next Monday.
Kathy, I highly doubt you're reading this as you have way more important things to be doing right now, but if you do by chance, this scotch is for you. Thanks for everything you've done for developers around the world. Thanks for what you've done for developer communities around the world. Thanks for what you've taught me. If I'm ever lucky enough to make your acquaintance, I'm buying...establishment, food and beverages of your choice.
Jonathan Cogley posted on how he believes that whitespace in your code is a code smell. Thinking about this concept, I can't agree fully one way or the other. I think that there are some situations where whitespace is most definitely a code smell and I think that there are other situations where whitespace is nothing more than a tool to reduce the strain on your eyes.
Whitespace used for the logical breaking of code within a single function is a sign to me that there may be opportunities for method extraction. It's a lot like using a region in Visual Studio, but without the collapsibility feature.
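Here's a sketch of what I mean (hypothetical names, nothing from a real codebase): the blank lines in the first method separate validation, calculation and reporting, and each of those blocks can become a named method.

```csharp
using System;
using System.Collections.Generic;

public class OrderLine { public decimal Price; public int Quantity; }
public class Order { public List<OrderLine> Lines = new List<OrderLine>(); }

public class OrderProcessor
{
    // Before: one method, with blank lines marking the logical steps.
    public decimal ProcessWithWhitespace(Order order)
    {
        if (order == null)
            throw new ArgumentNullException("order");

        decimal total = 0;
        foreach (OrderLine line in order.Lines)
            total += line.Price * line.Quantity;

        Console.WriteLine("Order total: {0}", total);
        return total;
    }

    // After: each whitespace-separated block has been extracted.
    public decimal Process(Order order)
    {
        Validate(order);
        decimal total = CalculateTotal(order);
        Report(total);
        return total;
    }

    private static void Validate(Order order)
    {
        if (order == null)
            throw new ArgumentNullException("order");
    }

    private static decimal CalculateTotal(Order order)
    {
        decimal total = 0;
        foreach (OrderLine line in order.Lines)
            total += line.Price * line.Quantity;
        return total;
    }

    private static void Report(decimal total)
    {
        Console.WriteLine("Order total: {0}", total);
    }
}
```

The whitespace was telling us the method had three jobs; the extraction just makes that explicit.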
Look outside the methods though and you'll find that whitespace between methods, XML documentation, properties and instance fields is a way to increase the readability of the code. By readability I don't mean that the code becomes easier to grok. Instead I think that whitespace makes it physically easier to read large segments of code.
As an example, look at a newspaper. Editors and layout designers put a lot of thought into the whitespace on every page of their papers, with two exceptions: the classifieds and the stock quotes. I can read any business or world news page quite easily. If I look at the classifieds section I have to strain to even make sense of what's going on. I need to concentrate a lot harder to work my way through the entries and columns. It's a more taxing experience than reading the other sections of the paper.
When I turn code over to a maintenance team, I need to be comfortable knowing that they can pick it up and work with it effectively and efficiently. Jonathan's suggestion to refactor instead of using whitespace is definitely one of the dominant, and more correct, ways to accomplish this. But don't discount that whitespace is a factor too. Developers will unknowingly be thanking you for incorporating it into your codebase.
I'm working on a fairly large codebase right now. Unfortunately it has significant problems. Right now we have the time to work through some of these problems, so we are. What does that mean? Refactor, refactor, refactor.
An unfortunate addition to our poor codebase is the lack of unit tests for huge portions of it. Not the perfect situation when you're starting a major refactoring exercise. So what is the best way to handle major refactorings which lack suitable test coverage?
After a couple of weeks working on refactoring, I think I've managed to screw up enough things to have some lessons learned. In our first attempt at a major refactoring, we started with a method that was a major culprit for fragility and complexity. We refactored and refactored within that method. By the time we were done, the guilty method had been reworked, but we'd failed to see the logic in the delegated calls. Not only did we miss that code, but we didn't see the patterns embedded in the existing code.
In the end we managed to work our way out of our pickle and get a nicely formed code base in that area. We approached the next set of code that we refactored in a different way, though. Instead of diving into the problem area and ripping and tearing, we opened up the code and mapped the high-level functionality first. We traced the majority of the delegated code so that we understood more than the skin of the problem. The last step was coming up with a game plan. The game plan wasn't anything formal. Instead, it was a rough idea of how we wanted to interface with the problem code (a single-line call, calls to multiple discrete public methods, or other means). Combining that with an overriding emphasis on testability and dependency injection, we ended up with the basis for our approach.
One of the challenging things that we're running into is refactoring away an excessive use of Singletons and static classes/methods. This refactoring isn't as straightforward as changing a class/method name and then working on the logic that it is abstracting away. Instead, we're spending a lot of time finding the usages and figuring out how we're going to remove the single-line usages that were originally coded. It's not a huge problem, but it adds to the workload. Thank goodness for ReSharper and Alt-F7 (Find Usages).
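To give a flavour of the mechanical change, here's a hypothetical before and after (not our actual code):

```csharp
using System;

public class Invoice { public string Number; }

// Before: the dependency hides behind a static singleton, so every
// call site is welded to the concrete Logger.
public class Logger
{
    public static readonly Logger Instance = new Logger();
    public void Write(string message) { Console.WriteLine(message); }
}

public class InvoiceServiceBefore
{
    public void Post(Invoice invoice)
    {
        Logger.Instance.Write("posting " + invoice.Number);
    }
}

// After: the dependency arrives through the constructor, so tests can
// hand in a mock and the singleton can eventually be deleted.
public interface ILogger { void Write(string message); }

public class InvoiceService
{
    private readonly ILogger _logger;

    public InvoiceService(ILogger logger)
    {
        _logger = logger;
    }

    public void Post(Invoice invoice)
    {
        _logger.Write("posting " + invoice.Number);
    }
}
```

The code change itself is trivial; Find Usages is what makes hunting down every Logger.Instance call site bearable.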
Using this approach we managed to get the code into a more respectable state on the first pass through. I don't know if it's the best result, but so far it's proving the most effective for us.
I spent a little time this afternoon looking at the new Code Metrics functionality that has appeared in Visual Studio "Orcas". I think that there are a number of different things that need to be said about this feature. The first is that the list of metrics available is short. Don't expect to be wowed and inundated with this like you will be by using nDepend.
No matter, the integration in the IDE is sweet. Top that off with the intuitive treeview drill-down of the elements being measured and you'll be in eye candy heaven. Some things don't look right yet, either (this is a beta, remember). For instance, the Lines of Code column for the project shows a value of 2.5. Obviously this is not a true total. Maybe it's an average, but I can't find the formula that is being used. A nice feature is the ability to filter by the values in the different columns. This will allow you to check metrics against threshold values and focus on the areas of your code that need work.
If you're looking for a hard-core code analysis tool, go get nDepend and some aspirin. If all you want is something to start with, this might just be for you. You don't get a whole bunch of stuff from the Code Metrics in Visual Studio "Orcas", but it is in the IDE, and that's enough to have me excited about what's going to come in the future.
The answers? The keyword var can only appear within a local variable declaration. So nothing publicly exposed can have the type var. This also includes parameters in methods and constructors.
This reinforces for me that the keyword var will not be able to become another incarnation of the variant type from VB6 and prior.
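A quick sketch of the boundaries (each commented-out line fails to compile):

```csharp
public class VarScope
{
    // private var _count = 0;          // won't compile: fields can't be var
    // public var Total { get; set; }   // won't compile: neither can properties
    // public void Add(var amount) { }  // won't compile: nor can parameters

    public void Demo()
    {
        var count = 0; // fine: a local variable, inferred as int
        count++;
    }
}
```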
In my previous post on Constructors in C# 3.0 I stated that I didn't understand the reasoning behind compilation creating two Employee objects in the way that it does.
Richard left the following comment.
I guess the second employee assignment is so the creation of the Employee seems atomic. If the assignment of FirstName threw an exception (for example), you'd never have a reference to the "incomplete" object.
I got to thinking that this didn't make any sense. If you look at the disassembled code, an error in any of the property setter calls will throw an exception to the wrapping try/catch block or, if one doesn't exist, to the calling code (a bad example in this case as there is no calling code, but you know what I'm getting at). If either happens, the object being created will fall out of scope, which means we wouldn't be able to work with an "incomplete" object anyway.
What Richard did get me thinking about was the possibility of creating a type error while the properties are being set at execution time. The first, and most obvious, attempt at creating a run-time error was to assign an incorrect type in the constructor. As you can see below, that didn't get me past compile time.
In addition to this, I attempted to assign the ID property a value from a variable defined using the var keyword and initialized to a string. Because the var keyword implicitly types the variable based on the value it is initialized with, the result was the same as the one that appears in the image above.
The only way that I was able to make this break at run time was to attempt to cast a string to an integer when it wasn't possible. That isn't the constructor failing though, it's the Convert.ToInt32(obj) call causing the exception.
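Since the screenshots don't reproduce well here, this is roughly what those three attempts looked like (my Employee class approximated from the earlier post):

```csharp
using System;

public class Employee
{
    public int ID { get; set; }
    public string FirstName { get; set; }
}

public class ConstructorBreaker
{
    public static void Main()
    {
        // Attempt 1: assign the wrong type directly. Caught at compile time.
        // Employee e1 = new Employee { ID = "abc" };  // CS0029: cannot convert string to int

        // Attempt 2: go through var. The variable is inferred as string,
        // so the result is the same compile-time error.
        var id = "abc";
        // Employee e2 = new Employee { ID = id };     // CS0029 again

        // Attempt 3: the only run-time failure I could produce. The
        // FormatException comes from Convert.ToInt32, not from the setter.
        object boxed = "abc";
        Employee e3 = new Employee { ID = Convert.ToInt32(boxed) };
    }
}
```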
I think it's fairly safe to say that it's not possible to make the property setters invoked by the new constructor syntax throw an exception. Any exceptions that are possible would be type-based, and those are caught at compile time.
In a week or so I will have been contracting for about six months. I moved away from the employee world because of two major factors. The first was that employee status wasn't allowing me to take advantage of the upward movement that rates were making in our local market. The second was that I was feeling more and more constrained by the schedules that employers were asking me to keep.
In the end, money was the lesser of the two reasons. The restrictiveness I was feeling around my schedule has turned out to be so much bigger. So many times I hear developers talk about making the leap to contracting, and they echo the two reasons that I've stated here. They want more money and a more flexible schedule. They want to be able to attend the conferences of their choice and take trips when they want, for as long as they want. Essentially, they talk about life freedom.
What I've seen from the majority of contractors is that they talk this talk, but they never seem to walk the walk. I'm not sure why they don't. Some most definitely get addicted to making more money. Some just don't seem to make the transition to having the ability to make more time. Either way, a large number of new software development contractors never make the time to step away from the industry.
We've all either been to the point of burnout or seen another developer get there. The dollars won't buy you happiness unless you take the time to spend them.
One of the nicest features in ReSharper is Live Templates. At work during the last week I was writing a lot of mock tests and I was getting tired of tapping out the same thing over and over. I was already using a Mock Test File Template that I'd written to speed up my use of Rhino Mocks, but it didn't help any with the repeated creation of mock objects. I wrote this one-liner that only requires you to enter the mock type and the name (and it even provides a pre-determined list of names).
Making it work:
- Download the Create Mock Live Template here and the Mock Test File Template here
- Open the ReSharper | Options menu (Alt-R, O)
- For the Create Mock Live Template, navigate to the Templates, Live Templates node in the menu tree on the left hand side of the Options window
- Select the User Templates item in the Available Templates area in the upper right of the Options window
- Click the Import Templates from File button and select the file you downloaded in step one
- For the Mock Test File Template, navigate to the Templates, File Templates node in the menu tree on the left hand side of the Options window
- Select the User Templates item from the Available Templates area in the upper right of the Options window
- Click the Import Templates from File button and select the file you downloaded in step one
- Click 'OK' to save the changes
- For the File Template to work you will need to re-open the ReSharper Options menu and click the Reset Shortcuts button on the General menu node
- Back in the IDE you will now be able to type 'cm' followed by the Tab key to start the process of creating a new mock object, and use Alt-R, N, M to create a new mock testing class
This Live Template was written specifically to work with my Mock Test File Template which creates the module level _mockery variable. If you aren't planning on using the Mock Test File Template then you will need to tweak the Create Mock Live Template.
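For reference, the guts of the Live Template are roughly this; the exact Rhino Mocks call may vary with the version you're running, and ICustomerRepository is just a stand-in name:

```csharp
// Template body (one line; $TYPE$ and $NAME$ are the template parameters):
//   $TYPE$ $NAME$ = _mockery.CreateMock<$TYPE$>();

// Typing 'cm', hitting Tab and filling in the parameters expands to something like:
ICustomerRepository _customerRepository = _mockery.CreateMock<ICustomerRepository>();
```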
After a weekend spent coding in the March CTP of Visual Studio 'Orcas', I can safely say that I've seen another sign that ReSharper makes junkies of developers. The Orcas CTP is provided as a VPC image and I didn't bother installing ReSharper onto it because I didn't know how the two would play together. As a result I was constantly hitting ReSharper shortcuts and either getting no response or being surprised by some odd reaction from the IDE. In the end I was stumbling around the IDE with jitters and shakes...a sure sign of withdrawal.
Another of the new features in C# 3.0 (part of Visual Studio Orcas) is the ability to initialize objects inline, without the need for special constructors. As you can see in the image below, this is done by initializing an object with curly braces and a "Property = <value>, Property = <value>, ..." syntax. Also note that you don't have to use all of the properties when filling in the initializer.
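Since the image may not survive syndication, here's the syntax in plain code (a trivial Employee class assumed):

```csharp
public class Employee
{
    public int ID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class Demo
{
    public void Run()
    {
        // Curly braces, "Property = value" pairs, no special constructor needed.
        Employee employee = new Employee { FirstName = "John", LastName = "Smith" };

        // Setting only some of the properties is perfectly legal too.
        Employee partial = new Employee { ID = 42 };
    }
}
```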
Here's what you see in the disassembled code when I initialize only one Employee object. When I first looked at this I was a little bit shocked. In the first few lines it does exactly what I expected it to do by creating the employee variable and then assigning the property values in separate lines of code. I don't fully understand why it is creating a second Employee variable at the end and making it equal the first one.
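Roughly, what Reflector shows is this (I've tidied the names; the compiler actually generates an unpronounceable temporary):

```csharp
// The initializer expands into a temporary, the property assignments,
// and then a final assignment to the declared variable.
Employee temp = new Employee();
temp.FirstName = "John";
temp.LastName = "Smith";
Employee employee = temp;   // the second Employee variable in question
```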
One of the nice IDE features that goes with this is the autocomplete popup while you're typing in the "Property = <value>" assignments. You see in the first image on this post that the autocomplete popup displays only the remaining unused properties in its list. If you try to use a property more than once in the initialization of the object you will get a compile time error.
Once C# 3.0 ships I think that this will be one of the more widely used new language features. Developers won't have to write and chain together numerous different constructors in their objects which reduces the maintenance overhead and increases the flexibility of the code.
Back in the good old days when I was programming in VB6, we had this data type called variant. Basically we could use it for anything we wanted, as long as we were comfortable with weak typing. Some people liked this a little more than others, as I recall. In one instance I inherited an application where variant was the only variable type used. Thankfully, variant disappeared from the world with the inception of the .NET framework. Since .NET 1.1 we've been living in a strongly typed world that provides us with wonderful compile-time validation of our coding. In case you can't tell, I love strong typing.
Well, with C# 3.0 the variant datatype comes back with a couple of important twists. First, it's not called variant, but rather var. I suppose there were some bad connotations still tied to the term variant, but really the name change is indicative of the bigger differences between variant and var. The biggest difference is that var infers the datatype when it is being used. In the image below you can see that the varCust variable is showing the properties that belong to the Customer object. varCust has inferred that its type should be Customer based on it being set equal to the cust variable, which is defined as type Customer. If you look at the image on the right, which was taken from disassembling the compiled code using Lutz Roeder's Reflector tool, you'll see that the compiler simply changes the var-typed variable to a Customer type at compile time. Another way to see this is to hover over the varCust variable definition when it's being used; the tooltip will display "(local variable) Customer varCust" (sorry, no image, as my software won't keep the tooltip open during capture).
Because the datatype is undetermined until the variable has been initialized, you cannot compile your application without doing the initialization. This also makes sense when you consider that compilation changes the var type to a Customer type.
Another difference between var and variant is that once you've initialized a variable typed as var, you can't change its type. In the example below I've initialized varCust to a Customer type and then tried to set varCust to a value of type Employee. Unlike variant, the compiler simply doesn't let you do this.
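Pulling those behaviours together in code (trivial Customer and Employee classes assumed):

```csharp
public class Customer { public string Name; }
public class Employee { public string Name; }

public class VarDemo
{
    public void Run()
    {
        Customer cust = new Customer();

        var varCust = cust;          // inferred as Customer at compile time
        varCust.Name = "Acme";       // Customer members show up in IntelliSense

        // var nothing;              // won't compile: no initializer to infer from
        // varCust = new Employee(); // won't compile: varCust is a Customer for good
    }
}
```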
So what does this mean for you, the developer? Well, not a whole lot quite frankly, as var is nothing more than syntactic sugar. I think the most prevalent use that the var type is going to see is as a quick and dirty way to get a variable created. It will be used by lazy programmers for this purpose more than anything else. I think that we will be back to the days of looking at variant-based variables again, but this time at least we will have strong typing to back us up.
One of the things that I've noticed in the last couple of years is that a significant number of .NET developers (including myself) are doing nothing more than procedural programming while using objects. The more I've worked on large projects, the more I've noticed that this approach imposes a large set of restrictions on what you can do with your code. The main project that I'm working on right now is fairly large and has some interesting complexities in both the business rules that we need to program for and the architectural paradigm that we work within.
After spending some time looking at areas of the application that the development team shuddered at working in, I realized that the procedural nature of the code was restricting our ability to automate tests. Directly related to this problem was the lack of adherence to the Single Responsibility Principle. This is not to say that procedural code will always turn its back on the SRP, but in the situations I was looking at, the two went hand in hand.
In my reviews I was finding methods that contained code covering dozens, and possibly hundreds, of execution path permutations. It was obvious that the original and maintenance developers had noticed the extent of these paths when you looked at the tests that had been written. In the cases where there were too many obvious permutations to even consider counting them, tests were simply not written. In other cases, where the complexity of the execution paths was more understandable and quantifiable, automated tests never covered all situations. Instead, only the scenarios that appeared "most likely" were handled, which amounts to a system whose complexities are its least tested areas. As a result, the untested and partially tested code was very brittle and very frightening to a developer who had to modify it.
Don't get me wrong, there were some definite benefits to the code. For instance, the one function that had a few dozen simple 'if' statements (embedded eight to ten deep) made two things easy: tying the code to the logic explained in the story, and letting a new, less experienced developer grok it. Another thing that I found was a mentality that more classes were a bad thing, so developers knew that the functionality for invoice calculations existed in only four or five different classes and files. Beyond the fact that the readability was fairly intuitive, not much else about the code made me smile.
We've spent some time over the last few weeks refactoring core portions of the application and making them more testable. The experience I'll talk about the most here is a situation where the maintenance developer had a bunch of free time and an area he was very knowledgeable about needed a lot of help. He started working on refactoring the areas of one business component that had high levels of complexity, coupling and defect reports combined with low levels of testing, SRP adherence and both client and developer confidence.
When we went through the existing code I made the decision to direct the refactoring effort towards SRP and Dependency Injection. This developer, although bright and with a number of years of experience under his belt, had never heard of Dependency Injection or SRP, let alone written code that used them. We found a piece of code where the execution paths were numerous and deeply nested, but not very complicated when you looked at each individual piece, and decided that it was as good a place as any to start. By the time the developer had finished the refactoring we had gone from one method in one class (breaking SRP, since the method really had nothing to do with the purpose of the class) to about 25 classes, each adhering to SRP and implementing Dependency Injection. That one original method had fewer than 10 tests written for it, none of which were what I'd consider confidence-building, and in combination they certainly didn't exercise all of the code in the method. What we have now is a suite of 300 to 350 tests which test data state through NUnit assertions and expected delegation through Rhino Mocks. Because I haven't run nCover against the code yet I haven't been able to prove this, but I'm very confident that our code coverage has increased substantially. More importantly, I believe that by focusing on SRP while designing the code we have been able to write much more effective tests.
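To give a flavour of the shape we ended up with, here's a toy example (not our actual business code): a single-responsibility class taking its dependency through the constructor, with Rhino Mocks verifying the delegation and NUnit asserting the data state.

```csharp
using NUnit.Framework;
using Rhino.Mocks;

public interface ITaxCalculator
{
    decimal TaxFor(decimal amount);
}

// One class, one job: totalling an invoice. The tax rules live elsewhere.
public class InvoiceTotaller
{
    private readonly ITaxCalculator _taxCalculator;

    public InvoiceTotaller(ITaxCalculator taxCalculator)
    {
        _taxCalculator = taxCalculator;
    }

    public decimal Total(decimal subtotal)
    {
        return subtotal + _taxCalculator.TaxFor(subtotal);
    }
}

[TestFixture]
public class InvoiceTotallerTests
{
    [Test]
    public void Total_adds_the_tax_reported_by_the_calculator()
    {
        MockRepository mockery = new MockRepository();
        ITaxCalculator taxCalculator = mockery.CreateMock<ITaxCalculator>();

        // Record the expected delegation, then replay and exercise the class.
        Expect.Call(taxCalculator.TaxFor(100m)).Return(5m);
        mockery.ReplayAll();

        Assert.AreEqual(105m, new InvoiceTotaller(taxCalculator).Total(100m));

        mockery.VerifyAll();
    }
}
```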
After the refactoring exercise was over I asked the developer if he thought it was worth the effort. He immediately responded that it was, and that the effort wasn't as large as he'd originally thought when starting out. The best part of the response was the enthusiasm with which he spoke about this piece of code. Prior to starting this task, developers and BAs would physically cringe (that is not me embellishing, either) at any mention of the business process that we refactored. This developer was no different (if anything he cringed more than most). While he was sounding off about how worthwhile the previous weeks' work had been, I noticed that the thing that changed more than anything was his confidence in the code. Even though the code reads with more difficulty due to the repeated delegation from one class to another, this developer definitely exuded confidence that future defects would be easier to track down and that we would be spending less time working within the code.
I love to work with devs and show them new techniques or teach them new technologies. I find it to be one of the most rewarding parts of my job. Of all the time I've spent in this career, the last couple of weeks working on this solution have been the most rewarding and enjoyable I've spent in legacy code.
Every RTM version of Visual Studio produced by Microsoft since the inception of the .NET framework has worked with only its own version of the .NET framework. Visual Studio .NET (2002) worked with .NET 1.0, Visual Studio .NET 2003 worked only with .NET 1.1 and Visual Studio 2005 worked solely with .NET 2.0. You can say that VS 2005 supports both .NET 2.0 and 3.0, but .NET 3.0, no matter how badly named, is nothing more than an extension of .NET 2.0. Orcas is breaking that precedent.
When you go into the New Project dialog in Orcas there is a small icon in the upper right corner of the screen that allows you to select the .NET framework that you want to work with on the project that you're creating.
So what does this mean to you, Joe Developer, in your day-to-day work? I think there might be a group of people who start speaking up, saying "Now we can work on a solution that has .NET 2.0, 3.0 and 3.5 projects in it." To me this sings like the song that went something like "Now we can use many different languages in our solutions." My response? Malarkey. Sure, you can do these things, but out in the wild who really does? Some do, I'm sure, but we're talking about a very small minority.
Instead of pumping up the "multi-framework project" front, I'm going to say that the biggest benefit for the day-to-day developer is that you will be able to work in the framework of your choice (more than likely the framework of your boss's or IT department's choice) with the newest IDE features. Just because you choose to work with v2.0 of the framework doesn't mean that you have to lose the cool new features of the latest IDE. By decoupling the framework from the IDE, Microsoft has opened up a whole new future for developers. One day, say 10 years down the road, we may be able to support that v3.0 application while we're working in Visual Studio 2018 (assuming it doesn't ship late).
Like any good code that does dependency injection, Visual Studio Orcas responds to some of the boundaries that the injected frameworks have. If you look at these images you can see that the Add Project dialog is filtered differently when selecting two different versions of the framework. So, no WPF Browser Applications when you're working in the 2.0 version of the framework.
I think that this is a great thing for Visual Studio. If Microsoft keeps doing this we developers are going to reap the rewards for years to come.
I've decided that my company (who employs one lousy, but good looking programmer) is going to help me get more work done. I find that when I'm working at home I get distracted easily by things like Gears of War. Because of this I tend to have a very large backlog of things I've been meaning to get to.
To work at getting through this stack of geekdom, the board of directors has decided that I'm entitled to one working weekend a month, where they will locate me somewhere without distractions. This weekend I'm heading south to Calgary. My experience thus far has been quite nice.
I'm riding the Red Arrow down the QE2 instead of driving or flying. The biggest reason for this is that they're providing wireless internet access on the bus. This is so sweet. Right now the driver is fighting traffic while I'm kicking back and harassing Justice and Rockarts on IM.
So what will I be working on this weekend? Well, I have some pet software that I'll be coding. I've also downloaded the March CTP of Orcas, so you'll see some blogging about the new C# and IDE features in it.
This weekend is by no means all work and no play, though. So if you're in Calgary and you want to get together for a bevvie, drop me a line and we'll see what we can organize.
A while back I posted about Velocity being more than just speed. This week I saw an entire project management team decide to shut the door on velocity. I'm not going to pretend to understand the reasons behind their choice, but I can talk for a bit on the ramifications.
In the last couple of weeks, the development team that I'm working with has achieved our first real sense of velocity (well, 0 is still a velocity, but we weren't getting anywhere fast with that). I was starting to get a feel for the heartbeat of the team. We finally had a reason to hold our daily standups. Better yet, we had buy-in from the BAs to join us in a background, support role at the standups. There was a visible buzz going through the team as we designed, implemented, refactored and tested code. The guys were stoked about changing the existing code for the better and they showed great initiative in creating the good base code that should have been there from day one.
Then it happens. Management decides that everything is going well and that other tasks require the attention of the BAs. So now we have no firm completion date for the stories that are in the pipe. The development team currently has enough work to keep them busy for the next week and a half. After that the story faucet dries up, which means that we're back at velocity == 0.
In my previous velocity post I said that velocity is directly related to the morale of the team. This week I saw a team become completely deflated by one piece of news. I contemplated not telling them about it, but in the end I figured that whether they knew or not, we were still going to run out of work in the near future. At first I thought it was like the wheels fell off our development-mobile, but that would imply an unexpected failure of the equipment. This wasn't unexpected. It was a blatant act, akin to shooting the tires out.
We Edmontonians were out in fairly large numbers tonight to see John Bristowe, of DNIC and Plumbers fame, talk about stuff. Sure it wasn't just stuff, but there was a lot of stuff to be talking about. The ever ambitious man that he is, John tackled not one, not two, but three major topics in one presentation.
We heard about Visual Studio Team System for Database Professionals, which John suggested we call VSTS for DB Pros because 'it rolls off the tongue better'. I hate to disagree with the man, but Data Dude is even slicker. Alas, the marketing guys won out. Although the crowd seemed least interested in this subject, I think it's a great tool and it solves some of the biggest dilemmas that we work around when developing data-centric apps.
As this was a talk on stuff, what kind of presentation would it be without diving into LINQ? Everyone always sees LINQ as a new way of doing data access. I'm starting to believe that there is more power in it (with help from extension methods) as a provider of syntactic sugar for rich domains.
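A toy illustration of what I mean (hypothetical domain classes): with LINQ-to-objects and an extension method or two, a query over plain domain objects starts reading like the domain's own language rather than like data access.

```csharp
using System.Collections.Generic;
using System.Linq;

public class Invoice
{
    public decimal Amount;
    public bool IsOverdue;
}

public static class AccountExtensions
{
    // An extension method turning a generic query into domain vocabulary.
    public static decimal OverdueBalance(this IEnumerable<Invoice> invoices)
    {
        return invoices.Where(i => i.IsOverdue).Sum(i => i.Amount);
    }
}

// Usage: account.Invoices.OverdueBalance() reads like the business rule it implements.
```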
Finally, John raced like a caffeine addict who'd just finished a six-pack of Jolt to squeeze in the ADO.NET Entity Framework. We didn't get to touch on this very much, but it's safe to say that MS is working on a strong competitor to NHibernate.
All in all, a good meeting with a good turnout. Watch for upcoming Edmug events...especially our April 2007 anniversary meeting.
Update: Apparently the April 07 anniversary meeting was construed as April 7th. That's not the case. It was supposed to mean April '07 (or April 2007). All fixed now.
Maybe it's the three NeoCitrans that I've had in the last hour. Maybe it's the Tylenol too. Heck, maybe it's the combination of the two with this craptacular illness as a side dish. Any way you look at it, I'm irked. Well, I'll actually be brash and say I'm more than a little peeved.
Dear Mr/Mrs JetBrains,
ReSharper is a fantastic product. My productivity has gone up immensely since I started working with your software. I love how I can Ctrl-Alt-V my way to a new variable, or the way I can Ctrl-Insert and Ctrl-Enter my way to a plethora of great refactorings. Even with those great features, there is one thing that just flat out sucks.
Why do I have to Alt-R-O my way to resetting the ReSharper shortcuts so many times each day? It was really funny during my training when JP was saying "...and now I'll have to refresh my ReSharper shortcuts". It was mildly amusing when he had to do it in the middle of his last Edmug presentation. I'm finding this little ReSharper feature to be less than cute now that it's happening to me.
I understand that if I add a new File Template I may need to reset the shortcuts. Actually, now that I think of it, I don't understand why. If it has to be done, why isn't it happening automagically when I save the File Template or close the ReSharper Options window? The truly annoying part isn't that Alt-R-N-C (along with other File Templates) is unavailable when I need it; it's the fact that after I reset the shortcuts, I am presented with the request to choose ReSharper or Visual Studio shortcuts for my system. I know you have to ask this in some circumstances, but it gets old really quick when you're seeing it a few times a day. Worse yet, I don't get this window just once, but usually at least twice after each Reset Shortcuts incident.
ReSharper is all about a great user experience in the Visual Studio IDE. Having to Reset Shortcuts once a day does nothing to better my experience. Please do something to rectify this.