PrairieDevCon 2014 content

We’re just wrapping up at the conference and it’s time to put up our materials. Thanks to everyone who attended my talks. Feel free to contact me if you have any questions.

If you’re interested in my Circuit Breaker implementation you can find it at https://github.com/dbelcham/dervish

A solid foundation

As I started going through the ideas I had for automation and connectivity in the house, one thing became very obvious: everything required some sort of connectivity. It could be coax for cable/satellite TV, Cat5 for phones, or Cat6 for network and AV distribution. The common denominator was that there needed to be some kind of connectivity.

I’ve dabbled on the IT Pro side of the fence in the past, so I know that cobbling together a wiring solution was likely to end in a world of pain. Rather than do that, I decided that one of the first things that needed to be done to the house was to add some wiring. As with many “stock” houses these days, the only wiring done was coax to 3 locations for TV and some Cat5e run for phones. Everything terminated at the power panel with hardly any extra cable to use. The way that our house is set up, the power panel is in an awkward location that didn’t lend itself well to much of anything. There was no way I was building an accessible wiring closet in that location without completely blocking the power panel, so I had to come up with another option (plus, the area near the power panel, awkward as it is, was quickly designated the future wine cellar…priorities right?).

The location that I ended up settling on was about 15ft away from where all the builder-installed wiring terminated. So I had a couple of problems on my hands: first, I needed to get more wiring run, and second, I needed to extend the current cabling to this new location.

Extending cabling

As I said earlier, I had Coax and Cat5e that needed to be extended. The Coax isn’t so hard; there are plenty of male-to-male connectors available. All you need to do is buy/build some cables to get from the current endpoint to the desired endpoint. Because I hate sloppy cabling I decided to custom make my cables so that they fit precisely to length. A few tools, 30 minutes in Home Depot watching a YouTube video on how to crimp ends on, and I was good to go. Because I didn’t want the cables running willy-nilly under our stairs, I spent some time with a speed bore punching holes in the studs so that I could run the Coax through the walls and have it pop out right where I needed it.

cat5e junction box

The Cat5e extension was a bit more of an issue. There really aren’t that many ways to extend network cable. I did manage to find a set of 110 punchdown boxes though. Wire goes in one side and punches down; wire comes out the other side where it was punched down. A small PCB in the box makes all the connections for you. So, like the Coax, I custom cut some Cat5e, ran it through the studs and ended it where my new termination location was going to be.

Running more wire

Most new houses are built with no data wiring in them. The common belief seems to be that WiFi is ubiquitous and convenient enough that there’s no value in doing so. I disagree. WiFi is convenient. It is easy to procure and set up. It doesn’t, however, offer a very good data transfer rate for most devices. Yes, 802.11n offers decent speeds, but it’s nothing compared to gigabit, and if the WiFi signal is weak in an area of the house, the data connection will be slower than advertised. On top of that, not all the devices in our house are WiFi enabled, so they either have to sit near the router or be off the network. Neither of those options will work for us here. And don’t get me wrong, there will be WiFi in the house.

So to fill my need for gigabit speed I got some electricians to stop by and run a bunch of Cat6 cables for me. Each run ends in the basement termination location. Here’s what I ended up getting:

  • 4 runs to the upstairs TV area (more on why 4 in a future post)
  • 4 runs to the main floor TV area
  • 1 run to each of two different office areas
  • 1 run to the laundry room (again, more in a future post)

Unlike the home builder, I had the electricians leave 20+ feet of extra cable on the termination end just in case I changed my mind about the termination location. Most of that extra cable, once trimmed, has gone to building patch cables so it wasn’t a waste at all.

If you have the chance, do get data cabling done before the walls in a house are closed up…or get lots of empty conduit run. We were lucky that all that cabling only required one hole in an inner closet wall. Not much damage, but the time spent getting the cables pulled sure added to the cost.

Termination point

Once I had cables run to a centralized location, I needed to figure out how I was going to manage the termination of these feeds. After some googling around I found out about media enclosures. These are just metal boxes that get installed between two wall studs and give you a solid platform to mount different devices to. I had a bunch of small devices that I wanted to house, so this seemed like a good idea. In the end it is home to my 8-way coax splitter, 8-way phone junction point, cable modem, cable-to-phone box and a WiFi router (more on that in a later post too).

I waffled on the idea of terminating the long runs in this box. I knew it wasn’t the cleanest or most flexible solution but for something like the coax lines I likely wasn’t going to be changing their configuration ever so in the end simplicity won out. All of the Coax runs terminate in the enclosure. None of the data cables or phone runs go to the media enclosure. The connection between the cable modem and the WiFi router stays contained in the enclosure and a single data run leaves the WiFi router and the enclosure to connect to a gigabit switch. The same is true for the cable-to-phone box; all of its connections are kept in the enclosure and only 4 cables from the phone junction point exit the enclosure. In the end there is 1 Coax cable into the enclosure and 1 Cat6, 4 Cat5e and 3 Coax out of the enclosure.

Now I needed to manage the termination of the 12 or so data lines and the 4 phone lines that I had coming to this central location. Unlike the coax lines, flexibility in configuration was going to be a huge benefit here. To that end I ran all of those cables into a rack and terminated them in a patch panel. I also terminated the line from the WiFi router in the patch panel. This gives me the ability to connect any data line wall jack in the house directly to the internet connection. I can also create a physically separate network if I need/want to. Right now all required data lines are patched from the patch panel to the switch, giving full gigabit connectivity within the house.

Extending WiFi

I hate weak WiFi signals. To combat this I put a WiFi router (configured as an access point) on each floor and connected it via one of the gigabit data runs back to the main switch. With that I actually killed two birds with one stone: I was able to get stronger WiFi everywhere, and I got a 4 port switch in those locations. The 4 port switch actually turned out to be very useful. At one TV location all of the following devices can be connected:

  • Xbox
  • WDTV Live Hub
  • TV
  • HD cable box
  • DVD player (this one lost out as we rarely use it)

The end configuration

In the end, it all logically looks something like this:

LogicalNetwork

And, amazingly enough, it all works as desired. With all of this in place I now have a foundation to add home automation components onto.

Home Automation

Recently we moved into a new house. One of the things that I have always wanted to do was wire up a house and automate as much of it as possible. So here’s my chance!

This isn’t going to be something that happens overnight; as proof, just consider how long it has taken to get the first parts done. I’m taking an incremental approach to adding functionality, and that is going to be one of my primary concerns when choosing technology and hardware.

Overall I have an idea, but no big detailed plan. I’m taking an agile approach to things. I’m going to wait until the absolute last responsible moment to decide on everything. That said I do have that big picture idea, and here it is.

The idea


  • Everything that gets automated must be reachable via some form of communication. I want to build my own central automation platform/software to integrate any/all of the different technologies that will end up in the house, and if I can’t program against them then I can’t do this.
  • Automation should be focused on providing value and usefulness. For example, automated blinds are nice on their own. But once you see the 20ft entryway in our house and the 15ft climb to the window in it, and then combine that with my fear of heights, you can make the case that automated blinds would be both valuable and useful.
  • Automation should not be intrusive to install. I do not want to have to rip open walls just to add an automated item. I understand that there will be a small number of situations where walls will have to be opened up. If there are two options for a given need and one requires no drywall work, then it shall be the choice.
  • While off-the-shelf solutions are preferred, they will not be the sole way to accomplish an automation task. I have dabbled enough in Arduino and embedded coding to know that if I can make something that better fits my needs then I will.

With those concepts in mind I have started researching and, at the time of this writing, have started my first project, which I’ll cover in the next post. Until then, here’s a list of some of the ideas (some good, some bad) that are in the current backlog.

  • blinds
  • zone/scene controlled lighting
  • HVAC control
  • irrigation (probably should get a lawn first)
  • whole house audio
  • centralized audio/visual components
  • water leak detection
  • utility shut off

Let the games begin…

The myth of “Best Practices”

TL;DR – When you see a “Best Practices” article or conference session, read or attend with caution. It’s unlikely to help you with your current problems.

Today I read a very informative blog post about passwords and the security that they don’t provide. The one thing that stood out in that post more than anything else was the following sentence:

“Best practice” is intended as a default policy for those who don’t have the necessary data or training to do a reasonable risk assessment.

You see, I’ve always cringed when I’ve read blog posts or seen conference sessions that claim to provide “Best Practices” for a given technology or platform. People read or attend feeling that they will leave with a profound solution to all of their problems within that sphere. In the end these sessions don’t help the attendee/reader. They lack two important things: context and backing data.

“Best practices” are an attempt to commoditize solutions to what are usually very complex problems in even more complex environments. I have seen “always use HTTPS for communication between browser and web server when authenticating onto websites” as a best practice. I’m sure you have too. But does that make any sense when the traffic between the browser and web server only exists on an internal network? Possibly, if the system needs to be hardened against attack from internal company users, but this is a pretty rare scenario. So what benefit do we get from blindly following this “Best Practice” for our internal system? We have to purchase, manage, renew, deploy and maintain SSL certs (amongst other things). And to what benefit, if the risk of attack from our internal users is deemed to be low (which is how most organizations I’ve experienced would categorize it for their internal apps)?

The “Best practice” of always using HTTPS is a broadly painted practice intended to cover more situations than necessary. Why? Well, these “Best practices” are intended for organizations and people that “…don’t have the necessary data or training…” Those organizations and people need solutions that err on the side of caution instead of being focused on their needs. In essence, “Best Practices” are intended for audiences that are either too lazy or too uninformed about their scenarios, tools or platforms to make decisions on their own.

I know that I’m using a security scenario and referencing a security-related blog post. On top of that, I used phrases like “side of caution”. Don’t mistake this as a condemnation only of “Best Practices” for security-related matters. I intend to condemn all “Best Practices” using the same arguments. Regardless of whether those “Best Practices” are for MVC, IIS hardening, network security, design patterns, database technologies or anything else that we deal with in software development, I opine that they are worthless. Well, they have one use: to identify organizations that have neither the interest nor the capability to assess their specific situations and needs.

The *Specified anti-pattern

I’ve spent far too much energy on this already. But it needs to be said to a broader audience. When your WCF service code creates *Specified properties for struct properties, you’re doing more harm than good.

WCF allows you to define certain output elements as optional. This means that they won’t be sent in the message body. When the message is deserialized, the message object will contain a property for that element, but somehow there must be an indication that the element wasn’t received. If the type is nullable (like a string), no problem. But if the type is a struct, which isn’t nullable by definition, then how do you indicate the lack of a value? Welcome the *Specified boolean property.

So if our message has a DateTime element that has been marked as optional on transmission, and there is no data, that element won’t be included in the message. When WCF deserializes that message into the object, the property for the element isn’t nullable, so it has to put a value into it. In the case of DateTime it will put in DateTime.MinValue, which is an actual valid date. So for you to know that there wasn’t a value (or element, for that matter) you have to check the correlated *Specified property. Now the consumer of the WCF service has to write if…else statements in their code to translate the lack of a value into something meaningful, like a Nullable&lt;DateTime&gt;.
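To make the shape of the problem concrete, here’s a minimal sketch of what the generated data contract and the consuming translation code tend to look like. The class and property names are made up for illustration; only the *Specified naming convention itself comes from the WCF/svcutil-generated code.

using System;

// Illustrative only: generated code names the flag <PropertyName>Specified.
public class OrderMessage
{
    public DateTime ShippedOn { get; set; }        // struct: can't be null
    public bool ShippedOnSpecified { get; set; }   // true only if the element was present
}

public class OrderMessageTranslator
{
    // The consumer is forced into conditional logic just to recover "no value".
    public DateTime? GetShippedOn(OrderMessage message)
    {
        if (message.ShippedOnSpecified)
        {
            return message.ShippedOn;
        }
        else
        {
            return null;   // the absence of data is data in itself
        }
    }
}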

As soon as you see if…else statements like this, you can be assured that you have a leaky abstraction. The consumer of the WCF service has too much knowledge of the WCF service’s implementation details. The consumer shouldn’t have to look at one value to know whether another value should be empty or not. It should just look at the value and be able to say “Oh, this is empty”. That’s why we have nullable types. In a lot of situations having no value is a valid state for an object or property. Worded another way, the absence of data is data in itself.

If we have to deal with checking the *Specified properties, we’ve just introduced a piece of conditional logic into our application for every pair of properties that uses this pattern. Conditional logic implementations are some of the easiest code to get wrong. In this case you may get your true and false if…else conditions reversed. You may simply forget to do the conditional check. The use of a pattern that requires the implementation of conditional logic immediately makes your code more brittle and error prone.

On top of that, patterns like the *Specified one are not normally seen in the wild. The inexperience that people have with it means that they will make mistakes, like forgetting to check the *Specified property before working with its partner property. Again, we see the pattern introducing possible points of failure.

All of these problems could be alleviated if we adhered to two ideas: good encapsulation and null data being valid data. Until then, avoid the *Specified pattern like the plague.

PhoneAnnotations

Recently I was trying to find a good DataAnnotation extension that would provide validation for phone numbers. I stumbled on this blog post from AppHarbor and decided that I should take that idea and make something of it. With that, I introduce Igloocoder Phone Annotation. You can grab it from github or nuget.
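For anyone who hasn’t built one before, a phone-number DataAnnotation is just a custom ValidationAttribute. Here’s a minimal sketch of the general shape; the attribute name and regex are illustrative only and are not the actual Igloocoder Phone Annotation implementation, so check the repo for the real API and rules.

using System.ComponentModel.DataAnnotations;
using System.Text.RegularExpressions;

// Illustrative only: a simplistic North American pattern, not the library's real logic.
public class SimplePhoneAttribute : ValidationAttribute
{
    private static readonly Regex Pattern =
        new Regex(@"^\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}$");

    public override bool IsValid(object value)
    {
        if (value == null) return true;   // let [Required] handle missing values
        return Pattern.IsMatch(value.ToString());
    }
}

public class Contact
{
    [Required]
    [SimplePhone(ErrorMessage = "Please enter a valid phone number.")]
    public string PhoneNumber { get; set; }
}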

Support lessons learned

It’s been quiet around here for the last while. I’ve been spending my time and energy on some side projects that don’t really pertain directly to the art of programming. They do, however, have a degree of interaction with a programming community….and that’s where it all goes sour.

I’ve had two run-ins with this programming community over the last few months. Both times it has been related to support. I’ve been playing around with creating things for a flight simulator (www.flightgear.org to be precise). One thing that I have been working on is the modeling of an aircraft’s flight and autopilot systems. Again, I hoped that some of the elements of programming a simulated computer system would possibly lead to some greater development enlightenment, but the ultimate desire was to have a great simulation of this specific aircraft.

In just the last couple of weeks I ran into a problem with the graphics functionality of the software. I’m still not sure what it is. FlightGear is open source, so I could dive into the codebase and figure out what is going on. I have absolutely zero desire to look at a bunch of C++ graphics code though. Since this feature is a fundamental framework support piece for what I am trying to do, what I really want is for it to just work, or for a fix to be made. My mentality is that of your mother and mine: I don’t care how the sausage is made, I just want to eat it.

With that in mind I log a defect with all of the details that the project asks for in a defect entry….and then I wait. A few weeks, and nightly builds, later I update to the latest build and the problem still persists. So I update my defect. I’m already thinking about my options for moving to another flight sim, but finally someone responds to the defect.

Lesson 1: The longer you wait to respond to defects, the more likely the defect reporter is to go find another tool/system to replace the problem system.

The first response to the defect is a request for me to turn some settings, whose meaning I only vaguely recognize, on and off. So I go looking in menu items, the project wiki and other places to see if I can figure out how to trigger these changes. Nothing. I’m stumped, so I report back as such.

Lesson 2: No matter how close you are to walking away, communication is always helpful.

The next messages follow a similar pattern but also include statements like “see the wiki”, without links or specific topics that I should be referring to. So, again, I stumble around looking for the mysterious piece of information that is supposedly in the wiki. I find nothing related to that topic. I end up digging around and eventually finding something that works the same way as what I was originally told to use.

Lesson 3: If your support technique is to send people to documentation to help gather debugging information, that documentation had better be both comprehensive and complete.

I’m told to increase the logging verbosity and “send the console output to a file”. I get the verbosity jacked up without any problem but I cannot get the output into a file. After a fair bit of research I find out that sending the output to a file isn’t supported in Windows for this application. Well isn’t that just nice.

Lesson 4: If you know you’re going to be supporting different platforms/environments you need to be 100% sure that your support tools and techniques work equally well for all of them.

I have been told that I should reconfigure my hardware to ensure a known baseline. My hardware is not abnormal. In fact, it is using up to date drivers and out of the box configurations.

Lesson 5: Asking for hardware configuration changes makes you look like you’re flailing and unprofessional.

So after 2 weeks, and about 6-8 hours of effort, I’m absolutely no closer to a solution than I was before.

Lesson 6: If it takes the user more than a few minutes to start the process of gathering debug information for you, then it is you (the project/developer/team) that has failed, not the user.

Before anyone says “Oh, it’s open source so you could just go fix it yourself”, everyone needs to remember one thing: you’d never do this on a project for your employer. If you did, you’d probably not be long for employment with them. So why is it any different in OSS?

Lesson 7: An OSS codebase is not justification to throw good development and/or support practices out the window.

.NET Developer presentation resources

There are a lot of people who have lists of tools that developers should use. There are also a lot of people that have suggestions of how to make presentations ‘better’. Rather than duplicate all of that good information, I’m just going to supplement it with a few things.

Coding

I’ve been in a tonne of presentations (in person and screencast training) where the presenter (myself included) has insisted on typing out every single character of code manually. It’s tedious to watch and error prone to do. Unless each keystroke has some invaluable purpose that your audience must see and hear about, don’t do it. There are a couple of tools that can help you out and you probably already have one or both of them; Visual Studio Snippets and ReSharper Live Templates.

If you’re going to type out long lines of code, entire methods, method structures or more, consider adding most/all of that into a Snippet or Live Template. Here’s a Live Template I used recently.

livetemplate

As you can see, all I need to type is three letters (pcw) and an entire line is filled in for me. In this case all I wanted to do was talk over the addition of the line with something like “…so let’s just add a console write line here to output that the code executed…” The attendees didn’t need to know the details of the whole line of code (what value would watching me write string.Format(…..) bring?)

Source Control

Put the code that supports your presentation in source control….please. Put it on GitHub, Bitbucket, or Google Code. I don’t care about the platform it’s stored on, just put it somewhere that I can easily get to it. Then don’t forget to tell us about it. Include a slide at the end of the presentation that has the repository URL on it. Nothing special is needed, just “Source code: www.github.com/myname/myrepo”.

Your Details

Put your contact details at the end of your presentation. Set up a custom email account for this purpose if you’re worried about being inundated afterwards. Even if you just put up your Twitter handle, at least you’ve given someone the opportunity to reach out to you for clarification on your presentation. Not everyone comes up to ask questions after the presentation ends.

On a related note, please, for the love of all things, don’t include an “About Me” slide at the start of your presentation. That’s what your bio on the conference website is for.

Slide Delivery

Make your slides available if the conference isn’t going to. I know that the slides without the overlaying presentation aren’t worth a crap for most people, but for those that did attend the session, did pay attention and do want to refer back to what they saw, the slides are an invaluable resource. Even if only for the URL that links to your sample code’s repository or your contact info, having the slides available is valuable to attendees. There are a number of different sites out there that you can use to make your slides available while still preventing people from downloading the raw PowerPoint (or whatever you use) and turning your presentation into their presentation. Try SlideShare, AuthorStream, or SlideBoom.

Branching for success

I’ve always struggled while trying to explain to teams and organizations how to set up their version control system so that the project can succeed. Ask almost any business person and they want the dev team to be able to release both at a scheduled, steady pace (versioned releases) and spontaneously when needed (hotfixes). Most organizations that I’ve dealt with have a very naïve version control plan and struggle immensely once projects head to production. Usually they flail around removing partially completed, or untested, features so that a hotfix can be released quickly. After they get the hotfix pushed to production, they flail away trying to ensure (and not always succeeding) that the hotfix will be included in the next scheduled release.

For anyone that has seen this, it’s an unmaintainable work pattern. As the application becomes more complex, ensuring that the product is stable when a hotfix is required becomes a game of chance and luck. Too many organizations pour endless time (and ultimately money) into maintaining this situation in a (perceived) working state.

A solid branching pattern can help ease the pain. Don’t get me wrong, branching won’t solve the problem for you by itself. You have to have a plan. How and when do we branch? Where do we branch from and where do we merge back to? What conventions do we have in place to identify the different branch purposes? All of these things need to be standardized on a team (at a minimum) before branching has a hope of enabling a team to do what every business seems to want.

I won’t try to explain my branching policies on projects. All I’d be doing is duplicating the work by Vincent Driessen over at nvie.com: http://nvie.com/posts/a-successful-git-branching-model/

AOP Training

A couple of announcements to make here. First, I’ve hooked up with the fine folks at SharpCrafters to become one of their training partners for Aspect Oriented Programming with PostSharp. We’re now offering a 2 day Deep Dive training course for the product. I’m currently working on writing the materials and every day I’m finding more interesting little corners of the tool. I’m really looking forward to some of the things that Gael has in store for v3 of it. Contact me (training@igloocoder.com) if you want more information about it.

Also, I’ve started working with the fine folks at SkillsMatter to offer an AOP course. This one is a much more general 2-day course that talks about AOP’s purpose, uses and different techniques for implementing it. The first offering is coming quickly (May 24-25) in London and I’m quite excited for that.

I’m beating around a few other course and location ideas. If you would like to see an AOP or PostSharp course in your city, let me know and we’ll see if we can make something happen.

Migration

I’m sure if you subscribe to the RSS feed of this blog you’ve probably been flooded with old posts in the last couple of days. That’s because I’ve changed blog engines and migrated to a different hosting scheme. The old blog, my wiki and an SVN server were all hosted on a Virtual Private Server. Now VPSs tend to get expensive when all they’re doing is exposing a website. So I was looking for a way to replace all of them. Here’s what I did.

The blog

My old blog was being hosted using an archaic version of SubText. I can’t remember the last time I upgraded it or even looked at modifying the theme. It just worked. So going forward I wanted to make sure that would continue, but I also wanted to have something where I could easily modify the codebase to add features. Hopefully you’ll see some of them on this site over the next few months. As always, there was an underlying thread of learning to my desires, so I was looking to use something that I could pick up new skills from. I settled on RaccoonBlog. Ayende built it as a demo app for RavenDB, but it also flat out works.

After some testing I dived into the migration project. There were three big things I needed to do. First, migrate the data. This was handled by the migration tool included in the RaccoonBlog codebase. Change the users that it creates and voila.

Second, I needed to create a theme that met my corporate standards. This was harder. Oren never built a theming engine for RaccoonBlog, and I can’t blame him. Why would you when there’s CSS? So I had to dive into the CSS and make some changes. My first skillset is not web dev, so this was a bit challenging. Add to that some HTML5 stuff that I’d never seen before and you have the recipe for a lot of cussing.

The third thing was to get it deployed to a new hosting site. I made the choice to go with AppHarbor since it seemed to have the right price point. I also liked the continuous deployment option that is embedded into it. So deployment was pretty easy. Follow the instructions for setting up the application and add the RavenHQ add-on. Since I had imported the data to a local RavenDB instance earlier, all I needed to do was export that and import it into the RavenHQ instance. Connecting the application to the RavenHQ instance was a bit trickier. AppHarbor has the ability to replace connection strings during its deployment cycle. To ensure that this happened I had to consolidate the connection string out of its own config file and into the main web.config. I also had to change the RavenDB initialization code in the Application_Start for RaccoonBlog so that it constructed the connection string correctly. It’s a bloody pain since the default development connection string doesn’t work with the new code. I’ll have to figure that out and post more about the overall process. The other thing that I needed to do was add a NuGet package that would mute the AppHarbor load balancer’s tendency to add port numbers to URLs. That was easy as well. Commit the changes to GitHub and, voila, I have a working blog. Change the DNS settings and add a custom HostHeader to AppHarbor ($10…the only cost for the blog per month) and it’s like I never had the old one.
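For context, the change amounts to pointing the document store at a named connection string in the main web.config so that AppHarbor can swap its value in at deploy time. Here’s a rough sketch of the idea, with a made-up connection string name and not the actual RaccoonBlog code:

using System;
using Raven.Client;
using Raven.Client.Document;

public class Global : System.Web.HttpApplication
{
    public static IDocumentStore DocumentStore { get; private set; }

    protected void Application_Start(object sender, EventArgs e)
    {
        // "RavenDB" is a hypothetical <connectionStrings> entry in web.config;
        // AppHarbor replaces its value during deployment so the app ends up
        // pointing at the RavenHQ instance instead of a local development store.
        DocumentStore = new DocumentStore
        {
            ConnectionStringName = "RavenDB"
        }.Initialize();
    }
}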

The custom app

I have a custom time tracking and invoicing application that I pay little love to. It just works…not well, but it works. Asp.Net, MonoRail and SQL Server. Again, off to AppHarbor to create an app and add the SQL Server extension. My biggest problem here was dealing with moving from dependencies stored in source control to using Nuget. After I got that sorted out it was easy.

The other big hassle with a SQL Server backed system on AppHarbor is that there’s no way to do backups or restores. So you have to use SQL Management Studio to script the DDL and the data from your existing database and run those scripts against your AppHarbor hosted database.

Source control

This was easy. For my public facing repositories GitHub works just fine. I did, however, have a number of private repositories. They weren’t active enough to warrant buying private repos from GitHub, so I looked for alternatives. I figured that if I was going the git route, I might as well go in wholesale. BitBucket by Atlassian offers private Git repositories for free. So there’s that problem solved. The best part of both repository providers is that they integrate seamlessly with AppHarbor for continuous deployment.

Wiki

I had a small ScrewTurn wiki site that I hosted for documentation of my public facing repositories. Both GitHub and BitBucket offer wiki functionality that is directly tied to the repositories themselves so there was no effort required to make this migration.

Conclusion

Getting from a VPS to a fully cloud hosted situation was less painful that I thought it would be. I had been putting it off for a long time because I didn’t think that my situations were going to be easy to handle. I always complain about clients who thing “we have the hardest/biggest application of anyone anywhere”. Guess what I was doing? After about 5-10 hours of research I had a plan and I was able to implement it with ease. By far the most work I did was creating a theme/style for the blog. In the end I went from about $150/month in outlay to $10. Not bad for 20 hours of work.

Eagerness to fail

If your developers are eagerly taking blame for failures on your project, they’re either:

a) buying into the concept of collective code ownership and have a commitment to quality
     or
b) trying to get blamed for everything so that they can be fired and be rid of your place of employment.

PostSharp Training

I’ve hooked up with the fine folks over at SharpCrafters to build some training materials for their AOP product PostSharp. Starting in January of 2012 we will be offering training on the use of PostSharp for all your Aspect Oriented Programming needs. I’m currently working on writing the materials and every day I’m finding more interesting little corners of the tool. I’m really looking forward to some of the things that Gael has in store for v3 of it.

If you’re interested in getting some training on PostSharp, shoot me an email at training@igloocoder.com.

Professional Neglect and Clear Text Passwords

For the past few years I’ve been the recipient of a monthly reminder from Emug (Edmonton Microsoft User Group). The contents of that email are where the problems lie. Every month that email comes in and it contains 3 pieces of information (plus a lot of boilerplate):

  1. A link to the Emug mailing list admin site
  2. My username
  3. My password in clear text

It doesn’t take much thought to know that storing clear text passwords is a prime security issue. Sending those passwords in emails doesn’t make it any better. Emails can be intercepted. Systems can be hacked. It’s happened before. Just read about the hack of PlentyOfFish.com. Or the hack of HB Gary. Two things stand out in these attacks. First, PlentyOfFish stored its passwords in clear text, which made it easy to compromise the entire user base once access was achieved. Second, HB Gary (an IT security consulting firm, no less) had many users who used the same password across different systems, which made it easy to hop from system to system gaining different access.

Most web users don’t heed advice to have a different password for every user account they create. First, it seems unreasonable to try to remember them all. Second, most people believe that using their dog’s name combined with their birth date is never going to be hackable. As system designers and operators (which the Emug membership is a professional community of) we should know that we can’t do much of anything to prevent users from choosing bad passwords. We can, however, take the steps to ensure that those passwords are adequately protected.
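“Adequately protected” at a minimum means never storing or emailing the raw password: store a salted, slow hash and compare hashes at login. Here’s a minimal sketch using the built-in .NET PBKDF2 implementation; the parameter values are illustrative only, not a recommendation for any specific system.

using System;
using System.Security.Cryptography;

public static class PasswordHasher
{
    // Illustrative values; tune the iteration count for your own hardware.
    private const int SaltSize = 16;
    private const int HashSize = 32;
    private const int Iterations = 10000;

    public static string Hash(string password)
    {
        // Rfc2898DeriveBytes generates a random salt and derives a key (PBKDF2).
        using (var pbkdf2 = new Rfc2898DeriveBytes(password, SaltSize, Iterations))
        {
            var salt = pbkdf2.Salt;
            var hash = pbkdf2.GetBytes(HashSize);
            return Convert.ToBase64String(salt) + ":" + Convert.ToBase64String(hash);
        }
    }

    public static bool Verify(string password, string stored)
    {
        var parts = stored.Split(':');
        var salt = Convert.FromBase64String(parts[0]);
        var expected = Convert.FromBase64String(parts[1]);

        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, Iterations))
        {
            var actual = pbkdf2.GetBytes(HashSize);
            // A constant-time comparison would be better; kept simple for the sketch.
            return Convert.ToBase64String(actual) == Convert.ToBase64String(expected);
        }
    }
}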

So with all of that in mind I decided to call the Emug people on their password practices. I sent an email of concern to them along with a request that they take the time to do the correct professional thing with regards to their members passwords. The response I received back included…

I know what you're saying about the passwords though, the first one you get is randomly generated and if you ever did go on and change it to a common one then it is there within all the options you can also set it to the option of the password reminder. The option "Get password reminder email for this list?" is a user based control option and you can set that to your liking. It's in with all the digest options.

That’s great. So basically the Emug response was “You don’t have to see that we store your password in clear text if you just go uncheck this one box”. Jeez guys, thanks. So you’re suggesting that I should feel that my password is secure just because I’m not seeing it in an email anymore? Security through naiveté?

Most places / sites/ subscriptions now have an automated email reminder method. It does make you ponder its value but I think the focus on that this is a very low level security setting.

Okay…so because you think “most places/sites/subscription now have an automated email reminder” it’s okay for you to follow the same bad practices? Really? What happened to professionalism? Or integrity? Yah I know, that takes effort and you’re just a volunteer running a community group. Except for one little thing: the members of that community entrusted you with their passwords. There was an implied belief that you would protect those passwords in an acceptable manner. Clearly you’re not.

I also ask you to enumerate “most places / sites / subscriptions” please. I don’t get an email from Google Groups, StackOverflow, etc that contains my password in clear text. I know that those are professional companies and you’re not, but remember that professionalism has nothing to do with the size or revenue of your organization.

The piece of the email that really rubbed me the wrong way was this:

The mailman list serve server and application is maintained centrally not by us for the record. It is more of a self-service model and is intentionally designed for little to no maintenance or requirement to assist an end user.

So you don’t administer the system. That’s fine.  Yes, the current system may have been designed/implemented to require as little end user support as possible. That’s fine too. Here are my beefs. You have the choice to change what tooling you’re using. I’m pretty sure that you’re able to use Google to find replacement options. It will take some time and effort to see the change through, but don’t you think the integrity of your members’ passwords is worth it?

So to Brett, Colin, Ron and Simon: Please show a modicum of professionalism and take care of this issue. Since you chose not to continue the conversation with me via email, I’ve resorted to blogging. I’m submitting your mailing list email to www.plaintextoffenders.com. I’ll be contacting other community members in the hopes that they can get through to you. I suspect they won’t be able to, but I feel that I have a professional obligation to at least try.

UI Workflow is business logic

Over my years as a programmer I’ve focussed a lot of attention and energy on business logic.  I’m sure you have too.  Business logic is, after all, a huge part of what our clients/end users want to see as an output from our development efforts.  But what is included in business logic?  Usually we think of all the conditionals, looping, data mangle-ment, reporting and other similar things.  In my past experiences I’ve poured immense effort into ensuring that this business logic was correct (automated and manual testing), documented (ubiquitous language, automated testing and, yes, comments when appropriate) and centralized (DDD).  While I’ve had intense focus on these needs and practices, I’ve usually neglected to recognize the business logic that is buried in the UI workflow within the application.

On my current project I’ve been presented with an opportunity to explore this area a bit more in depth.  We don’t have the volume of what I have traditionally considered business logic.  Instead the application is very UI intensive.  As a result I’ve been spending a lot more time worrying about things like “What happens when the user clicks XYZ?”  It became obvious to us very early on that this was the heart of our application’s business logic.

Once I realized this we were able to focus our attention on the correctness, discoverability, centralization and documentation of the UI workflow.  How did we accomplish this then?  I remember reading somewhere (written by Jeremy Miller I think, although I can’t find a link now) the assertion that “Every application will require the command pattern at some point.” I did some research and found a post by Derick Bailey explaining how he was using an Application Controller to handle both an Event Aggregator and workflow services.  To quote him:

Workflow services are the 1,000 foot view of how things get done. They are the direct modeling of a flowchart diagram in code.

I focused on the first part of his assertion and applied it to the flow of user interfaces.  Basically it has amounted to each user workflow (or sequence of UI concepts) being defined, and executed, in one location.  As an example we have a CreateNewCustomerWorkflowCommand that is executed when the user clicks on the File | Create Customer menu.  It might look something like this:

public class CreateNewCustomerWorkflowCommand : ICommand<CreateNewCustomerWorkflow>
{
    private readonly ISaveChangesPresenter _saveChangesPresenter;
    private readonly ICustomerService _customerService;
    private readonly ICreateNewCustomerPresenter _createNewCustomerPresenter;

    public CreateNewCustomerWorkflowCommand(ISaveChangesPresenter saveChangesPresenter,
                                            ICustomerService customerService,
                                            ICreateNewCustomerPresenter createNewCustomerPresenter)
    {
        _saveChangesPresenter = saveChangesPresenter;
        _customerService = customerService;
        _createNewCustomerPresenter = createNewCustomerPresenter;
    }

    public void Execute(CreateNewCustomerWorkflow commandParameter)
    {
        if (commandParameter.CurrentScreenIsDirty)
        {
            var saveChangesResults = _saveChangesPresenter.Run();
            if (saveChangesResults.ResultState == ResultState.Cancelled) return;
            if (saveChangesResults.ResultState == ResultState.Yes)
            {
                _customerService.Save(commandParameter.CurrentScreenCustomerSaveDto);
            }
        }

        var newCustomerResults = _createNewCustomerPresenter.Run();
        if (newCustomerResults.ResultState == ResultState.Cancelled) return;
        if (newCustomerResults.ResultState == ResultState.Save)
        {
            _customerService.Save(newCustomerResults.Data);
        }
    }
}

As you can see, the high level design of the user interaction, and service interaction, is clearly defined here.  Make no mistake, this is business logic.  It answers the question of how the business expects the creation of a new customer to occur.  We’ve clearly defined this situation in one encapsulated piece of code.  By doing this we have now laid out a pattern whereby any developer looking for a business action can look through these workflows.  They clearly document the expected behaviour during the situation.  Since we’re using Dependency Injection in our situation, we can also write clear tests to continuously validate these expected behaviours.  Those tests, when done in specific ways, can also enhance the documentation surrounding the system.  For example, using BDD style naming and a small utility to retrieve and format the TestFixture and Test names we can generate something like the following:

public class When_the_current_screen_has_pending_changes
{
    public void the_user_should_be_prompted_with_the_option_to_save_those_changes() {}
}

public class When_the_user_chooses_to_cancel_when_asked_to_save_pending_changes
{
    public void the_pending_changes_should_not_be_saved() {}
    public void the_create_new_customer_dialog_should_not_be_displayed() {}
}

public class When_the_user_chooses_not_to_save_pending_changes
{
    public void the_pending_changes_should_not_be_saved() {}
    public void the_create_new_customer_dialog_should_be_displayed() {}
}

public class When_the_user_chooses_to_save_pending_changes
{
    public void the_pending_changes_should_be_saved() {}
    public void the_create_new_customer_dialog_should_be_displayed() {}
}

public class When_the_user_chooses_to_cancel_from_creating_a_new_customer
{
    public void the_new_customer_should_not_be_saved() {}
}

public class When_the_user_chooses_to_create_a_new_customer
{
    public void the_new_customer_should_be_saved() {}
}

As you can see, this technique allows us to create a rich set of documentation outlining how the application should interact with the user when they are creating a new customer.

Now that we’ve finished implementing this pattern a few times, have I seen any drawbacks?  Not really.  If we didn’t use this technique we’d still have to write the code to coordinate the screen sequencing.  That sequencing would be spread all over the codebase, most likely in the event handlers for buttons on forms (or their associated Presenter/Controller code).  Instead we’ve introduced a couple more classes per workflow and have centralized the sequencing in them.  So the trade off was the addition of a couple of classes per workflow for more discoverability, testability and documentation.  A no brainer if you ask me.

Is this solution the panacea?  Absolutely not.  It works very well for the application that we’re building though.  In the future will I consider using this pattern? Without doubt.  It might morph and change a bit based on the next application’s needs, but I think that the basic idea is strong and has significant benefits.

A big shout out to Derick Bailey for writing a great post on the Application Controller, Event Aggregator and Workflow Services.  Derick even has a sample app available for reference.  I found it to be great for getting started, but it is a little bit trivial as it only implements one simple workflow.  Equally big kudos to Jeremy Miller and his Build Your Own CAB series which touches all around this type of concept.  Reading both of these sources helped to cement that there was a better way.

DateTime formatting for fr-CA

I just stumbled across a nice little hidden “feature” in the .NET framework.  If you’re running on a machine that has the CurrentCulture set to fr-CA, the default DateTimeFormatInfo.CurrentInfo.ShortDatePattern is dd-MM-yyyy.  On my current project we wanted to allow the end user to override that value with their own format when a date is displayed on the screen.  The easy way to do this is to do something like DateTime.Now.ToString(“dd/MM/yyyy”).  Unfortunately, the result from that will still appear as 16-09-2010.  This is by design, although it’s poorly advertised: in a custom format string the “/” character is a placeholder for the culture’s date separator, and for fr-CA that separator is “-”.  If the CurrentCulture is set to en-CA, both the formats dd/MM/yyyy and dd-MM-yyyy will cause ToString() to output a value that you would expect (the en-CA separator is “/”), but as soon as you trip over to fr-CA the rules seem to change.

If you’re running into this, there is a relatively simple solution: escape the separator so that it is treated as a literal character.  DateTime.Now.ToString(@“dd\/MM\/yyyy”) (note the verbatim string; in a regular string literal you’d need “dd\\/MM\\/yyyy”) will output 16/09/2010 as you’d expect.
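If you want to see the behaviour in isolation, here’s a small console sketch; the commented output assumes your machine’s fr-CA culture data uses “-” as the date separator, as described above.

using System;
using System.Globalization;
using System.Threading;

class Program
{
    static void Main()
    {
        Thread.CurrentThread.CurrentCulture = new CultureInfo("fr-CA");
        var date = new DateTime(2010, 9, 16);

        // "/" maps to the culture's date separator, which is "-" for fr-CA.
        Console.WriteLine(date.ToString("dd/MM/yyyy"));    // 16-09-2010

        // Escaping the separator forces a literal slash.
        Console.WriteLine(date.ToString(@"dd\/MM\/yyyy")); // 16/09/2010
    }
}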

The more localization that I’m doing on this application, the more I’m finding nice hidden gems of inconsistency like this.

Microsoft.Data.dll and LightSwitch

Microsoft has made some announcements over the last week or so.  The first was Microsoft.Data.dll.  I think that Oren adequately wraps up the feelings that I have towards it.

The second was the announcement of Visual Studio LightSwitch.  I have some strong feelings about this as well.  Microsoft is positioning this as a tool that allows non-professional developers to create line of business (LoB) applications.  They suggest that it will allow these non-developers to create applications that can be handed off to IT for maintenance and further enhancement.  Since LightSwitch isn’t really Visual Studio, the IT group will have to upgrade the application codebase.  Does this sound like anything you’ve ever heard of before?

While Microsoft won’t come out and say it, LightSwitch is positioned to fill the space that MS Access has held for more than a decade.  During that decade-plus, IT departments and programmers worldwide have grown to loathe MS Access applications created by the business.  Invariably those MS Access systems start life as small intra-department apps that service one to a few users.  Over time their feature set grows along with their user base.  At some point these LoB applications hit an invisible wall.  Sometimes it’s that the system has devolved into a mess of macros and VBA.  Other times they have collapsed under the pressures of concurrent users.  Regardless, the developers that take over these applications are met with a brownfield mess that has likely become critical to the business.  We professional developers end up picking up the pieces and, often, re-writing the application from scratch, under huge timeline pressure, so that it can meet the requirements and specifications that it has grown to need.

So back to LightSwitch.  Why is Microsoft pitching what this product is good at to us professional developers?  They say it’s not for us, but instead for non-professional developers.  Market it to them then.  Don’t waste our time with this marketing campaign.  Instead, Microsoft, sell us on the part we’re going to have to deal with: the migration and fixing of these “LightSwitch” applications when the business inevitably comes running to us to do it.

To the professional developers that read this blog (most of you I’m guessing), prepare to move your hatred and loathing from MS Access to LightSwitch.

Making the most of Brownfield Application Development – Winnipeg Edition

On Friday, July 23rd I’ll be in Winnipeg giving a one day seminar on the nuances of Brownfield Application Development and how to get the most out of it.  More about the day can be found here.  I recently did the seminar at PrairieDevCon and it was a blast.  The day is filled chock-a-block with content and ideas that pertain directly to Brownfield codebases and will also work in Greenfield situations.

Registration can be found here and until July 2nd the session is available at a discount.  Hope to see you there!

Visual Studio Project files and coupling

The way that we’re told to use Visual Studio is that we create a solution file and add into it one or more project files.  Each project file then gets filled with different development artefacts.  When you build inside of Visual Studio each project represents a compiled distributable file (exe, dll, etc).  Many people carry this practice over into their build scripts.  You might be one of them.  I’m here to tell you why you’re wrong to be doing this.

Let’s say you’re starting a project.  You open Visual Studio, select File | New Project and get things rolling.  In a few minutes you have a Solution that contains a few Projects.  Maybe you have one for the UI, one for the business logic and one for the data access layer.  All is good.  A few months later, after adding many artefacts to the different projects, something triggers the need to split the artefacts in one of those assemblies from one DLL into two DLLs.

You set off to make this happen.  Obviously you need to add a new Project to your Solution, modify some references, and shift some files from one Project into another.  Say you’re stuck using an exclusive locking source control system (like VSS…shudder).  You *must* have exclusive access to all the files necessary including:

  • the sln so you can add the new project
  • at least one existing cs/vb/fsproj which you’ll be removing existing code artefacts from
  • any cs/vb/fs files that will be moved
  • any cs/vb/fs files that reference the ones moving (using statements will need updating when you change the namespacing on the files being moved)
  • possibly some resx files that need to be moved
  • possibly config files that need to be changed or moved
  • any automated tests that make use of the moving cs/vb/fs files

It’s a pretty damn big list of files that you will need to exclusively lock during this process.  Chances are you will need to push all of your co-workers out of the development environment so that you can gain access to all of those files.  Essentially you are, at this point, halting the development process so that you can do nothing more than split one DLL into two.  That is quite inefficient in the short term and it’s completely unsustainable in the long term.

I can hear you now, “Well I use <git/mercurial/svn/etc> so we won’t have those issues”.  Really?  Think it through for a second.  Go ahead, I’ll wait.

With the volume of changes that I listed above, you’ll likely want to be working in some kind of isolation, whether that is local or central.  So yes, you can protect yourself from blocking the ongoing development of your co-workers by properly using those version control systems.  But remember, you do have to integrate your changes with their work at some point.  How are you going to do that?  You’ve moved and modified a significant number of files.  You will have to merge your changes into a branch (or the trunk) locally or otherwise.  Trust me, this will be a merge conflict nightmare.  And it won’t be a pain just for you.  What about the co-worker that has local changes outstanding when you commit your merged modification?  They’re going to end up with a massive piece of merge work on their plate as well.  So instead of being blocked while you do the work, you’re actually creating a block for them immediately after you have completed your work.  Again, the easiest way to achieve the changes would be to prevent any other developers from working in the code while modifications are occurring.  Doesn’t that sound an awful lot like exclusive locking?

Now, I know you’re thinking “Pfft..that doesn’t happen often”.  This is where you’re wrong.  When you started that application development cycle (remember File | New Project?) you likely didn’t have all of the information necessary to determine what your deployables requirements were.  Since you didn’t have all of that information, chances were good, right from the outset, that you were going to be doing the wrong thing.  With that being the case, it means that chances were good that you were going to have to make changes like the one described above.  To me that indicates that you are, by deciding to tie your Visual Studio Projects to your deployables, accepting that you will undertake this overhead.

People, possibly you, accept this overhead on every software project they participate in.  This is where you’re wrong.  There is a way to avoid all of this, but people shrug it off as “not mainstream” and “colouring outside the lines”.  The thing is it works, so ignore it at your own peril.

There is a lot of talk in some development circles about decoupling code.  It’s generally accepted that tightly coupled code is harder to modify, extend and maintain.  When you say that a Visual Studio Project is the equivalent of a deployable, you have tightly coupled your deployment and development structures.  Like code, and as the example above shows, it makes it hard to modify, extend and maintain your deployment.  So why not decouple the Visual Studio Project structure from the deployables requirements?

It’s not that hard to do.  You’ll need to write a build script that doesn’t reference the cs/vb/fsproj files at all.  The .NET Framework kindly provides configurable compiler access for us.  The different language command line compilers (vbc.exe/csc.exe/fsc.exe) allow you to pass in code files, references, resources, etc.  By using this capability, you can build any number of assemblies that you want simply by passing a listing of artefacts into the compiler.  To make it even easier, most build scripting tools provide built in capability to do this.  NAnt and MSBuild both provide (for C#) <csc> tasks that can accept wild carded lists of code files.  This means you can end up with something like this coming out of a solution-project structure that has only one project in it:

<csc output="MyApp.DAL.dll" target="library" debug="${debug}">
  <sources>
    <include name="MyApp.Core/DAL/**/*.cs"/>
  </sources>
  <references>
    <include name="log4net.dll"/>
  </references>
</csc>

<csc output="MyApp.Core.dll" target="library" debug="${debug}">
  <sources>
    <include name="MyApp.Core/Business/**/*.cs"/>
  </sources>
  <references>
    <include name="log4net.dll"/>
    <include name="MyApp.DAL.dll"/>
  </references>
</csc>

<csc output="MyApp.UI.exe" target="winexe" debug="${debug}">
  <sources>
    <include name="MyApp.Core/**/*.cs"/>
    <exclude name="MyApp.Core/DAL/*.cs"/>
    <exclude name="MyApp.Core/Business/*.cs"/>
  </sources>
  <references>
    <include name="log4net.dll"/>
    <include name="MyApp.DAL.dll"/>
    <include name="MyApp.Core.dll"/>
  </references>
</csc>

Likewise, we could consolidate code from multiple projects (which, to the build script, are really just file paths) into one deployable.

<csc output="MyApp.UI.exe" target="winexe" debug="${debug}">
  <sources>
    <include name="MyApp.DAL/**/*.cs"/>
    <include name="MyApp.Business/**/*.cs"/>
    <exclude name="MyApp.UI/**/*.cs"/>
  </sources>
  <references>
    <include name="log4net.dll"/>  
  </references>
</csc>

Now, when it comes time to change to meet new deployable requirements, you just need to modify your build script.  Modify the inputs for the different compiler calls and/or add new compilations simply by editing one file.  While you’re doing this the rest of your co-workers can continue doing what they need to do to provide value to the business.  When it comes time for you to commit the changes to how things are getting compiled, you only have to worry about merging one file.  Because the build script is far less volatile than the code files in your solution-project structure, that merge should be relatively painless.

Another way to look at this is that we are now able to configure and use Visual Studio and the solution-project structure in a way that is optimal for developers to write and edit code.  And, in turn, we configure and use the build script in a way that allows developers to be efficient and effective at compiling and deploying code.  This is the decoupling that we really should have in our process and ecosystem to allow us to react quickly to change, whether it comes from the business or our own design decisions.

Rotating text using Graphics.DrawString

Recently I needed to create a custom WinForms label-like control that allowed the text to be displayed in a rotated fashion.  Our needs were only for four rotation positions: 0 degrees (the default label position), 90, 180 and 270 degrees.  There were other complicating factors, but for this post we’ll only concentrate on this component of the control.

To rotate text using the Graphics.DrawString method you only have to do a couple of things.  First you call the Graphics.TranslateTransform method, then the Graphics.RotateTransform method, followed by Graphics.DrawString.  Here’s what it looks like.

// Inside the control's overridden OnPaint method; 'e' is the PaintEventArgs
using (var brush = new SolidBrush(ForeColor))
using (var stringFormat = new StringFormat
       {
           // Near/Near anchors the text to the top left of the layout rectangle
           Alignment = StringAlignment.Near,
           LineAlignment = StringAlignment.Near
       })
{
    // 1. Move the origin to where the text's top-left corner should end up
    e.Graphics.TranslateTransform(transformCoordinate.X, transformCoordinate.Y);
    // 2. Rotate the coordinate system clockwise (0, 90, 180 or 270 degrees)
    e.Graphics.RotateTransform(rotationDegrees);
    // 3. Draw the text, bounded by the control's DisplayRectangle
    e.Graphics.DrawString(Text, Font, brush, DisplayRectangle, stringFormat);
}

What you see are the three steps that I outlined above.  Let’s start at the bottom and work our way up.  The code lives inside a UserControl’s overridden OnPaint method.  The DrawString method makes use of some of the properties on the control, like Text and Font.  It also uses the DisplayRectangle property to set the boundaries for the drawing to be the same size as the control.  This is one of the keys to making the rotations work.  The other key is to provide DrawString with the StringFormat settings.  By setting both the Alignment and LineAlignment to StringAlignment.Near, you are declaring that the text’s location should be based in the top left of the DisplayRectangle’s area.

Graphics.RotateTransform is how you set the rotation value.  In the case of our control, we would be putting in a value from the list of 0, 90, 180, and 270.  As you might expect the rotations are clockwise with 0 starting with the text in the ‘normal’ location.

Graphics.TranslateTransform is where the last piece of magic occurs.  It is here that you set where the text’s own top-left corner (as it was before rotation) will end up within the DisplayRectangle’s area.  Here are some images that will help clarify the situation.

[Image: “Text Area” rotated 0 degrees]

When you need the text to appear the same as “Text Area” does in the above image (rotated 0 degrees), you need to set the TranslateTransform X and Y parameters to be those designated by the “X” in the image.  In this case, it’s X = 0 and Y = 0.

[Image: “Text Area” rotated 90 degrees]

The picture above shows what should be displayed when you rotate the text “Text Area” 90 degrees.  Again, you need to set the TranslateTransform, but this time the values are slightly different.  The Y parameter is still 0, but the X parameter equals the height of the text.  You can get this value using the following lines of code:

var textSize = TextRenderer.MeasureText(Text, Font);
var x = textSize.Height; // use this as the TranslateTransform X parameter

[Image: “Text Area” rotated 180 degrees]

To render the text upside down we set the rotation to 180 degrees and then, again, determine the location of the TranslateTransform X and Y coordinates.  Like we did for the last rotation, we will need to retrieve the text size to set these values.  For this situation Y will be the text height and X will be the text width.

[Image: “Text Area” rotated 270 degrees]

The final step is to make the rotation work for 270 degrees.  Like all the others, we need to set the X and Y coordinates for the TranslateTransform method call.  Here the Y value will be the text width and the X value will be 0.
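
To tie the four cases together, here’s a minimal sketch of how this could hang together inside the control.  The class name RotatedLabel, the helper GetTranslatePoint and the field _rotationDegrees are illustrative names for this sketch, not the actual control’s code; the translate values are simply the ones described above.

using System.Drawing;
using System.Windows.Forms;

public class RotatedLabel : UserControl
{
    private int _rotationDegrees; // one of 0, 90, 180 or 270

    // Where the text's own top-left corner should land for each rotation
    private Point GetTranslatePoint()
    {
        var textSize = TextRenderer.MeasureText(Text, Font);
        switch (_rotationDegrees)
        {
            case 90:  return new Point(textSize.Height, 0);
            case 180: return new Point(textSize.Width, textSize.Height);
            case 270: return new Point(0, textSize.Width);
            default:  return new Point(0, 0); // 0 degrees
        }
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        var transformCoordinate = GetTranslatePoint();
        using (var brush = new SolidBrush(ForeColor))
        using (var stringFormat = new StringFormat
               {
                   Alignment = StringAlignment.Near,
                   LineAlignment = StringAlignment.Near
               })
        {
            e.Graphics.TranslateTransform(transformCoordinate.X, transformCoordinate.Y);
            e.Graphics.RotateTransform(_rotationDegrees);
            e.Graphics.DrawString(Text, Font, brush, DisplayRectangle, stringFormat);
        }
    }
}

Recalculating the point on every paint keeps the rotation correct when Text or Font changes; if _rotationDegrees were exposed as a property you’d also want to call Invalidate() in its setter so the control repaints.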

This is simply the first of many steps toward making a control that will allow rotating the text and locating it in one of 9 positions in a 3x3 grid representation of the control’s DisplayRectangle.  More on that in another blog post though.

PrairieDevCon 2010 wrapup

Friday past brought an end to the first incarnation of the PrairieDevCon in Regina.  The conference had a great buzz of people, interest, conversations and learning about it.  It really was a blast to be at.  Thanks to everyone who attended in whatever capacity, since it was you who made this event so much fun and so productive to be at.

Here are the materials from the sessions that I presented.  There isn’t anything for the panel discussion since it was all off the cuff.  If you weren’t there you didn’t get to add or absorb….sorry.

Intro To Aspect Oriented Programming: Slides, Code
ORM Fundamentals: Slides

Thanks again everyone and I hope to get invited back to do this all again next year.

Where do you start building skills from?

In the past I’ve had to take development teams and build their skills.  It was part of what I was hired to do: “Build an app, and at the same time make our developers better.”  I’m back at it again, and today I had a chat with someone online about where you need to start.

First you need to know what your goals are.  Usually I find that management, by asking me to make their developers “better”, is looking to increase quality, decrease development time and increase maintainability.  All of these are pretty vague and there’s certainly no one-day course for each one, let alone all of them.  So where do you start then?

One of the first lessons I learned while at basic officer training was that before getting my section/platoon/company working on a task I needed to know what their skills (special or otherwise) were.  The lesson was all about resource management.  I’m starting a new project complete with a new (to me) development team and once again I’m being asked to make them “better”.  I could go into a meeting room right now and tell them all how they should be doing TDD, BDD, DDD, SOLID, etc.  Some (most, I hope) of you will agree that these are practices that can make you a better developer.  It would be far more prudent of me to walk into that room and ask instead of state, though.  I should take the lessons of my Drill Sergeant and initially put effort (and not much will be needed) into evaluating what skills (special or otherwise) the team has.  That knowledge is going to set the foundation for how I will approach making these developers “better”.

One of the questions raised in the conversation I was having today was “When we talk about things that we can throw at developers to learn, something like DDD is (sic) beneficial. By the time someone reads the ‘blue book’ they should know quite a bit.  Where would you place it (sic) relative to SOLID or the other good practices?”  This raised the question of what knowledge in what order when dealing with under trained developers.

For me the whole idea revolves around one thing building on another.  Yes, you could dive straight into DDD, but wouldn’t it be easier if you understood something about SOLID, or OO fundamentals?  So what is my preferred order then?  Depending on what the developers’ skills are I may start in different places, but usually the order would be something like this.

  1. Tooling.  Understanding and being effective inside Visual Studio and other tools that are used every day.
  2. OO Fundamentals.  Abstraction, Encapsulation, Polymorphism and Inheritance.
  3. DRY. Simple code reuse techniques.
  4. SRP. Single Responsibility is the foundation (in my mind) for all that follows.
  5. OLID.  The rest of SOLID.
  6. Coupling.  Why developers should care and how they can deal with it effectively.
  7. Application Layers.  How to use logical layering to build more flexible applications.
  8. TDD.  With the foundation for good design acquired, developers are ready to learn how to apply those design skills.
  9. DDD.  How to think about business and translate it into code.
  10. Frameworks.  With the foundations built through this list, I feel developers are ready to understand how to use tools such as nHibernate, StructureMap, log4net and others.

I made the mistake that most developers have: jumping straight into frameworks.  While it didn’t set my career back, I did need to take a step back and put concerted effort into building my way back up to frameworks again.  The best part?  With all of that fundamental and foundational knowledge, learning almost any framework is quite simple.

You can’t blast into a room full of developers and expect them to follow this (or any list on this topic) and achieve overnight success.  It’s going to take time.  Time, effort and commitment.  At least the transition from one learning topic to the next will be logical and smooth.