Professional Neglect and Clear Text Passwords

For the past few years I’ve been the recipient of a monthly reminder from Emug (Edmonton Microsoft User Group). The contents of that email are where the problem lies. Every month that email comes in and it contains three pieces of information (plus a lot of boilerplate):

  1. A link to the Emug mailing list admin site
  2. My username
  3. My password in clear text

It doesn’t take much thought to know that storing clear text passwords is a prime security issue. Sending those passwords in emails doesn’t make it any better. Emails can be intercepted. Systems can be hacked. It’s happened before. Just read about the hack of PlentyOfFish or the hack of HB Gary. Two things stand out in these attacks. First, PlentyOfFish stored its passwords in clear text, which made it easy to compromise the entire user base once access was achieved. Second, HB Gary (an IT security consulting firm no less) had many users who reused the same password between different systems, which made it easy to hop from system to system gaining different access.

Most web users don’t heed the advice to have a different password for every user account they create. First, it seems unreasonable to try to remember them all. Second, most people believe that using their dog’s name combined with their birth date is never going to be hackable. As system designers and operators (which the Emug membership is a professional community of) we should know that we can’t do much of anything to prevent users from choosing bad passwords. We can, however, take steps to ensure that those passwords are adequately protected.
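To be concrete about “adequately protected”: don’t store the password at all. Store a salted, one-way hash and re-derive it at login. Here’s a minimal sketch using PBKDF2 via Rfc2898DeriveBytes (the class name, storage format and iteration count are my own choices for illustration, not anything Emug’s software does):

```csharp
using System;
using System.Security.Cryptography;

public static class PasswordHasher
{
    private const int SaltSize = 16;
    private const int HashSize = 32;
    private const int Iterations = 10000;

    // Derive a salted PBKDF2 hash; store this string instead of the password.
    public static string Hash(string password)
    {
        using (var deriveBytes = new Rfc2898DeriveBytes(password, SaltSize, Iterations))
        {
            var salt = deriveBytes.Salt;
            var hash = deriveBytes.GetBytes(HashSize);
            return Convert.ToBase64String(salt) + ":" + Convert.ToBase64String(hash);
        }
    }

    // Re-derive the hash using the stored salt and compare.
    public static bool Verify(string password, string stored)
    {
        var parts = stored.Split(':');
        var salt = Convert.FromBase64String(parts[0]);
        var expected = Convert.FromBase64String(parts[1]);
        using (var deriveBytes = new Rfc2898DeriveBytes(password, salt, Iterations))
        {
            var actual = deriveBytes.GetBytes(expected.Length);
            // A constant-time comparison would be better in production code.
            return Convert.ToBase64String(actual) == parts[1];
        }
    }
}
```

Note that with a scheme like this a “password reminder” email is impossible to send, which is exactly the point: the system can verify a password without being able to reveal it.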

So with all of that in mind I decided to call the Emug people on their password practices. I sent an email of concern to them along with a request that they take the time to do the correct professional thing with regards to their members’ passwords. The response I received back included…

I know what you're saying about the passwords though, the first one you get is randomly generated and if you ever did go on and change it to a common one then it is there within all the options you can also set it to the option of the password reminder. The option "Get password reminder email for this list?" is a user based control option and you can set that to your liking. It's in with all the digest options.

That’s great. So basically the Emug response was “You don’t have to see that we store your password in clear text if you just go uncheck this one box”. Jeez guys, thanks. So you’re suggesting that I should feel that my password is secure just because I’m not seeing it in an email anymore? Security through naiveté?

Most places / sites/ subscriptions now have an automated email reminder method. It does make you ponder its value but I think the focus on that this is a very low level security setting.

Okay…so because you think “most places/sites/subscription now have an automated email reminder” it’s okay for you to follow the same bad practices? Really? What happened to professionalism? Or integrity? Yah I know, that takes effort and you’re just a volunteer running a community group. Except for one little thing: the members of that community entrusted you with their passwords. There was an implied belief that you would protect those passwords in an acceptable manner. Clearly you’re not.

I also ask you to enumerate “most places / sites / subscriptions”, please. I don’t get an email from Google Groups, StackOverflow, etc. that contains my password in clear text. I know that those are professional companies and you’re not, but remember that professionalism has nothing to do with the size or revenue of your organization.

The piece of the email that really rubbed me the wrong way was this:

The mailman list serve server and application is maintained centrally not by us for the record. It is more of a self-service model and is intentionally designed for little to no maintenance or requirement to assist an end user.

So you don’t administer the system. That’s fine.  Yes, the current system may have been designed/implemented to require as little end user support as possible. That’s fine too. Here are my beefs. You have the choice to change what tooling you’re using. I’m pretty sure that you’re able to use Google to find replacement options. It will take some time and effort to see the change through, but don’t you think the integrity of your members’ passwords is worth it?

So to Brett, Colin, Ron and Simon: Please show a modicum of professionalism and take care of this issue. Since you chose not to continue the conversation with me via email, I’ve resorted to blogging. I’m submitting your mailing list email to …, and I’ll be contacting other community members in the hopes that they can get through to you. I suspect they won’t be able to, but I feel that I have a professional obligation to at least try.

UI Workflow is business logic

Over my years as a programmer I’ve focussed a lot of attention and energy on business logic.  I’m sure you have too.  Business logic is, after all, a huge part of what our clients/end users want to see as an output from our development efforts.  But what is included in business logic?  Usually we think of all the conditionals, looping, data mangle-ment, reporting and other similar things.  In my past experiences I’ve poured immense effort into ensuring that this business logic was correct (automated and manual testing), documented (ubiquitous language, automated testing and, yes, comments when appropriate) and centralized (DDD).  While I’ve had intense focus on these needs and practices, I’ve usually neglected to recognize the business logic that is buried in the UI workflow within the application.

On my current project I’ve been presented with an opportunity to explore this area a bit more in depth.  We don’t have the volume of what I have traditionally considered business logic.  Instead the application is very UI intensive.  As a result I’ve been spending a lot more time worrying about things like “What happens when the user clicks XYZ?”  It became obvious to us very early on that this was the heart of our application’s business logic.

Once I realized this we were able to focus our attention on the correctness, discoverability, centralization and documentation of the UI workflow.  How did we accomplish this then?  I remember reading somewhere (written by Jeremy Miller I think, although I can’t find a link now) the assertion that “Every application will require the command pattern at some point.” I did some research and found a post by Derick Bailey explaining how he was using an Application Controller to handle both an Event Aggregator and workflow services.  To quote him:

Workflow services are the 1,000 foot view of how things get done. They are the direct modeling of a flowchart diagram in code.

I focused on the first part of his assertion and applied it to the flow of user interfaces.  Basically it has amounted to each user workflow (or sequence of UI concepts) being defined, and executed, in one location.  As an example we have a CreateNewCustomerWorkflowCommand that is executed when the user clicks on the File | Create Customer menu.  It might look something like this:

public class CreateNewCustomerWorkflowCommand : ICommand<CreateNewCustomerWorkflow>
{
    private readonly ISaveChangesPresenter _saveChangesPresenter;
    private readonly ICustomerService _customerService;
    private readonly ICreateNewCustomerPresenter _createNewCustomerPresenter;

    public CreateNewCustomerWorkflowCommand(ISaveChangesPresenter saveChangesPresenter,
                                            ICustomerService customerService,
                                            ICreateNewCustomerPresenter createNewCustomerPresenter)
    {
        _saveChangesPresenter = saveChangesPresenter;
        _customerService = customerService;
        _createNewCustomerPresenter = createNewCustomerPresenter;
    }

    public void Execute(CreateNewCustomerWorkflow commandParameter)
    {
        if (commandParameter.CurrentScreenIsDirty)
        {
            var saveChangesResults = _saveChangesPresenter.Run();
            if (saveChangesResults.ResultState == ResultState.Cancelled) return;
            if (saveChangesResults.ResultState == ResultState.Yes)
            {
                _customerService.Save(commandParameter.CurrentScreenCustomerSaveDto);
            }
        }

        var newCustomerResults = _createNewCustomerPresenter.Run();
        if (newCustomerResults.ResultState == ResultState.Cancelled) return;
        if (newCustomerResults.ResultState == ResultState.Save)
        {
            _customerService.Save(newCustomerResults.Data);
        }
    }
}

As you can see, the high level design of the user interaction, and the service interaction, is clearly defined here.  Make no mistake, this is business logic.  It answers the question of how the business expects the creation of a new customer to occur.  We’ve clearly defined this situation in one encapsulated piece of code.  By doing this we have now laid out a pattern whereby any developer looking for a business action can look through these workflows.  They clearly document the expected behaviour for the situation.  Since we’re using Dependency Injection, we can also write clear tests to continuously validate these expected behaviours.  Those tests, when done in specific ways, can also enhance the documentation surrounding the system.  For example, using BDD style naming and a small utility to retrieve and format the TestFixture and Test names, we can generate something like the following:

public class When_the_current_screen_has_pending_changes
{
    public void the_user_should_be_prompted_with_the_option_to_save_those_changes() {}
}

public class When_the_user_chooses_to_cancel_when_asked_to_save_pending_changes
{
    public void the_pending_changes_should_not_be_saved() {}
    public void the_create_new_customer_dialog_should_not_be_displayed() {}
}

public class When_the_user_chooses_not_to_save_pending_changes
{
    public void the_pending_changes_should_not_be_saved() {}
    public void the_create_new_customer_dialog_should_be_displayed() {}
}

public class When_the_user_chooses_to_save_pending_changes
{
    public void the_pending_changes_should_be_saved() {}
    public void the_create_new_customer_dialog_should_be_displayed() {}
}

public class When_the_user_chooses_to_cancel_from_creating_a_new_customer
{
    public void the_new_customer_should_not_be_saved() {}
}

public class When_the_user_chooses_to_create_a_new_customer
{
    public void the_new_customer_should_be_saved() {}
}

As you can see, this technique allows us to create a rich set of documentation outlining how the application should interact with the user when they are creating a new customer.
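The “small utility” mentioned above is little more than reflection over the test assembly. A rough sketch of the idea (SpecReporter and its When_ filtering convention are placeholders of mine, not the actual utility we used):

```csharp
using System;
using System.Linq;
using System.Reflection;

public static class SpecReporter
{
    // Turn BDD-style names like When_the_current_screen_has_pending_changes
    // into readable sentences.
    public static string Humanize(string name)
    {
        return name.Replace('_', ' ');
    }

    // Walk the assembly, pairing each fixture name with its test names.
    public static string Report(Assembly testAssembly)
    {
        var lines =
            from fixture in testAssembly.GetTypes()
            where fixture.Name.StartsWith("When_")
            from test in fixture.GetMethods(BindingFlags.Public |
                                            BindingFlags.Instance |
                                            BindingFlags.DeclaredOnly)
            select Humanize(fixture.Name) + ", " + Humanize(test.Name);
        return string.Join(Environment.NewLine, lines.ToArray());
    }
}
```

Run against the fixtures above, this would emit lines like “When the current screen has pending changes, the user should be prompted with the option to save those changes”.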

Now that we’ve implemented this pattern a few times, have I seen any drawbacks?  Not really.  If we didn’t use this technique we’d still have to write the code to coordinate the screen sequencing.  That sequencing would be spread all over the codebase, most likely in the event handlers for buttons on forms (or their associated Presenter/Controller code).  Instead we’ve introduced a couple more classes per workflow and have centralized the sequencing in them.  So the trade-off was the addition of a couple of classes per workflow for more discoverability, testability and documentation.  A no-brainer if you ask me.

Is this solution the panacea?  Absolutely not.  It works very well for the application that we’re building though.  In the future will I consider using this pattern? Without doubt.  It might morph and change a bit based on the next application’s needs, but I think that the basic idea is strong and has significant benefits.

A big shout out to Derick Bailey for writing a great post on the Application Controller, Event Aggregator and Workflow Services.  Derick even has a sample app available for reference.  I found it to be great for getting started, but it is a little bit trivial as it only implements one simple workflow.  Equally big kudos to Jeremy Miller and his Build Your Own CAB series which touches all around this type of concept.  Reading both of these sources helped to cement that there was a better way.

DateTime formatting for fr-CA

I just stumbled across a nice little hidden “feature” in the .NET framework.  If you’re running on a machine that has the CurrentCulture set to fr-CA, the default DateTimeFormatInfo.CurrentInfo.ShortDatePattern is dd-MM-yyyy.  On my current project we wanted to allow the end user to override that value with their own format when a date is displayed on the screen.  The easy way to do this is something like DateTime.Now.ToString(“dd/MM/yyyy”).  Unfortunately the result from that will still appear as 16-09-2010.  As far as I can tell (and there is very little backing this up), this is by design.  I’m not sure why at all.  If the CurrentCulture is set to en-CA, both the formats dd/MM/yyyy and dd-MM-yyyy will cause ToString() to output a value that you would expect, but as soon as you trip over to fr-CA the rules seem to change.

If you’re running into this there is a relatively simple solution.  DateTime.Now.ToString(“dd\/MM\/yyyy”) will output 16/09/2010 as you’d expect.
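What’s going on underneath is that, in a custom format string, “/” is a placeholder for the culture’s date separator rather than a literal character, and fr-CA defines that separator as “-”.  Escaping it with “\/” (or formatting against the invariant culture) forces the literal slash.  A small sketch of the difference (output comments assume fr-CA’s “-” separator):

```csharp
using System;
using System.Globalization;
using System.Threading;

class Program
{
    static void Main()
    {
        var date = new DateTime(2010, 9, 16);
        Thread.CurrentThread.CurrentCulture = CultureInfo.GetCultureInfo("fr-CA");

        // "/" is swapped for the fr-CA date separator, "-"
        Console.WriteLine(date.ToString("dd/MM/yyyy"));     // 16-09-2010

        // "\/" escapes the separator placeholder, emitting a literal "/"
        Console.WriteLine(date.ToString(@"dd\/MM\/yyyy"));  // 16/09/2010

        // Alternatively, format against the invariant culture
        Console.WriteLine(date.ToString("dd/MM/yyyy",
            CultureInfo.InvariantCulture));                 // 16/09/2010
    }
}
```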

The more localization that I’m doing on this application, the more I’m finding nice hidden gems of inconsistency like this.

Microsoft.Data.dll and LightSwitch

Microsoft has made some announcements over the last week or so.  The first was Microsoft.Data.dll.  I think that Oren adequately wraps up the feelings that I have towards it.

The second was the announcement of Visual Studio LightSwitch.  I have some strong feelings about this as well.  Microsoft is positioning this as a tool that allows non-professional developers to create line of business (LoB) applications.  They suggest that it will allow these non-developers to create applications that can be handed off to IT for maintenance and further enhancement.  Since LightSwitch isn’t really Visual Studio, the IT group will have to upgrade the application codebase.  Does this sound like anything you’ve ever heard of before?

While Microsoft won’t come out and say it, LightSwitch is positioned to fill the space that MS Access has filled for more than a decade.  During that decade plus, IT departments and programmers worldwide have grown to loathe MS Access applications created by the business.  Invariably those MS Access systems start life as small intra-department apps that service one to a few users.  Over time their feature set grows along with their user base.  At some point these LoB applications hit an invisible wall.  Sometimes it’s that the system has devolved into a mess of macros and VBA.  Other times they have collapsed under the pressures of concurrent users.  Regardless, the developers that take over these applications are met with a brownfield mess.  On top of that, the application has likely become critical to the business.  We professional developers end up picking up the pieces and, often, re-writing the application from scratch, under huge timeline pressure, so that it can meet the requirements and specifications that it has grown to need.

So back to LightSwitch.  Why is Microsoft pitching what this product is good at to us professional developers?  They say it’s not for us, but instead for non-professional developers.  Market it to them then.  Don’t waste our time with this marketing campaign.  Instead, Microsoft, sell us on the part we’re going to have to deal with: the migration and fixing of these “LightSwitch” applications when the business inevitably comes running to us.

To the professional developers that read this blog (most of you I’m guessing), prepare to move your hatred and loathing from MS Access to LightSwitch.

Making the most of Brownfield Application Development – Winnipeg Edition

On Friday July 23rd I’ll be in Winnipeg giving a one day seminar on the nuances of Brownfield Application Development and how to get the most out of it.  More about the day can be found here.  I recently did the seminar at the PrairieDevCon and it was a blast.  The day is filled chock-a-block with content and ideas that pertain directly to Brownfield codebases and that will also work in Greenfield situations.

Registration can be found here and until July 2nd the session is available at a discount.  Hope to see you there!


Visual Studio Project files and coupling

The way that we’re told to use Visual Studio is that we create a solution file and add into it one or more project files.  Each project file then gets filled with different development artefacts.  When you build inside of Visual Studio each project represents a compiled distributable file (exe, dll, etc).  Many people carry this practice over into their build scripts.  You might be one of them.  I’m here to tell you why you’re wrong to be doing this.

Let’s say you’re starting a project.  You open Visual Studio, select File | New Project and get things rolling.  In a few minutes you have a Solution that contains a few Projects.  Maybe you have one for the UI, one for the business logic and one for the data access layer.  All is good.  A few months later, after adding many artefacts to the different projects, something triggers the need to split the artefacts of one assembly from one DLL into two DLLs.

You set off to make this happen.  Obviously you need to add a new Project to your Solution, modify some references, and shift some files from one Project into another.  Say you’re stuck using an exclusive locking source control system (like VSS…shudder).  You *must* have exclusive access to all the files necessary including:

  • the sln so you can add the new project
  • at least one existing cs/vb/fsproj which you’ll be removing existing code artefacts from
  • any cs/vb/fs files that will be moved
  • any cs/vb/fs files that reference the ones moving (using statements will need updating when you change the namespacing on the files being moved)
  • possibly some resx files that need to be moved
  • possibly config files that need to be changed or moved
  • any automated tests that make use of the moving cs/vb/fs files

It’s a pretty damn big list of files that you will need to exclusively lock during this process.  Chances are you will need to push all of your co-workers out of the development environment so that you can gain access to all of those files.  Essentially you are, at this point, halting the development process so that you can do nothing more than split one DLL into two.  That is quite inefficient in the short term and it’s completely unsustainable in the long term.

I can hear you now, “Well I use <git/mercurial/svn/etc> so we won’t have those issues”.  Really?  Think it through for a second.  Go ahead, I’ll wait.

With the volume of changes that I listed above, you’ll likely want to be working in some kind of isolation, whether that is local or central.  So yes, you can protect yourself from blocking the ongoing development of your co-workers by properly using those version control systems.  But remember, you do have to integrate your changes with their work at some point.  How are you going to do that?  You’ve moved and modified a significant number of files.  You will have to merge your changes into a branch (or the trunk) locally or otherwise.  Trust me, this will be a merge conflict nightmare.  And it won’t be a pain just for you.  What about the co-worker that has local changes outstanding when you commit your merged modification?  They’re going to end up with a massive piece of merge work on their plate as well.  So instead of being blocked while you do the work, you’re actually creating a block for them immediately after you have completed your work.  Again, the easiest way to achieve the changes would be to prevent any other developers from working in the code while modifications are occurring.  Doesn’t that sound an awful lot like exclusive locking?

Now, I know you’re thinking “Pfft..that doesn’t happen often”.  This is where you’re wrong.  When you started that application development cycle (remember File | New Project?) you likely didn’t have all of the information necessary to determine what your deployables requirements were.  Since you didn’t have all of that information, chances were good, right from the outset, that you were going to be doing the wrong thing.  With that being the case, it means that chances were good that you were going to have to make changes like the one described above.  To me that indicates that you are, by deciding to tie your Visual Studio Projects to your deployables, accepting that you will undertake this overhead.

People, possibly you, accept this overhead on every software project they participate in.  This is where you’re wrong.  There is a way to avoid all of this, but people shrug it off as “not mainstream” and “colouring outside the lines”.  The thing is it works, so ignore it at your own peril.

There is a lot of talk in some development circles about decoupling code.  It’s generally accepted that tightly coupled code is harder to modify, extend and maintain.  When you say that a Visual Studio Project is the equivalent of a deployable, you have tightly coupled your deployment and development structures.  Like code, and as the example above shows, it makes it hard to modify, extend and maintain your deployment.  So why not decouple the Visual Studio Project structure from the deployables requirements?

It’s not that hard to do.  You’ll need to write a build script that doesn’t reference the cs/vb/fsproj files at all.  The .NET Framework kindly provides configurable compiler access for us.  The different language command line compilers (vbc.exe/csc.exe/fsc.exe) allow you to pass in code files, references, resources, etc.  By using this capability, you can build any number of assemblies that you want simply by passing a listing of artefacts into the compiler.  To make it even easier, most build scripting tools provide built in capability to do this.  NAnt and MSBuild both provide (for C#) <csc> tasks that can accept wild carded lists of code files.  This means you can end up with something like this coming out of a solution-project structure that has only one project in it:

<csc output="MyApp.DAL.dll" target="library" debug="${debug}">
    <sources>
        <include name="MyApp.Core/DAL/**/*.cs"/>
    </sources>
    <references>
        <include name="log4net.dll"/>
    </references>
</csc>

<csc output="MyApp.Core.dll" target="library" debug="${debug}">
    <sources>
        <include name="MyApp.Core/Business/**/*.cs"/>
    </sources>
    <references>
        <include name="log4net.dll"/>
        <include name="MyApp.DAL.dll"/>
    </references>
</csc>

<csc output="MyApp.UI.exe" target="winexe" debug="${debug}">
    <sources>
        <include name="MyApp.Core/**/*.cs"/>
        <exclude name="MyApp.Core/DAL/**/*.cs"/>
        <exclude name="MyApp.Core/Business/**/*.cs"/>
    </sources>
    <references>
        <include name="log4net.dll"/>
        <include name="MyApp.DAL.dll"/>
        <include name="MyApp.Core.dll"/>
    </references>
</csc>
Likewise, we could consolidate code from multiple projects (really just file paths, as far as the build script is concerned) into one deployable.

<csc output="MyApp.UI.exe" target="winexe" debug="${debug}">
    <sources>
        <include name="MyApp.DAL/**/*.cs"/>
        <include name="MyApp.Business/**/*.cs"/>
        <include name="MyApp.UI/**/*.cs"/>
    </sources>
    <references>
        <include name="log4net.dll"/>
    </references>
</csc>

Now, when it comes time to change to meet new deployables needs, you just need to modify your build script.  Modify the inputs for the different compiler calls and/or add new compilations simply by editing one file.  While you’re doing this the rest of your co-workers can continue doing what they need to provide value to the business.  When it comes time for you to commit the changes to how things are getting compiled, you only have to worry about merging one file. Because the build script is far less volatile than the code files in your solution-project structure, that merge should be relatively painless.

Another way to look at this is that we are now able to configure and use Visual Studio and the solution-project structure in a way that is optimal for developers to write and edit code.  And, in turn, we configure and use the build script in a way that allows developers to be efficient and effective at compiling and deploying code.  This is the decoupling that we really should have in our process and ecosystem to allow us to react quickly to change, whether it comes from the business or our own design decisions.

Rotating text using Graphics.DrawString

Recently I needed to create a custom WinForms label-like control that allowed for the text to be displayed in a rotated fashion.  Our needs were only for four rotation positions: 0 degrees (the default label position), 90, 180 and 270 degrees.  There were other complicating factors, but for this post we’ll only concentrate on this component of the control.

To rotate text using the Graphics.DrawString method you only have to do a couple of things.  First you call the Graphics.TranslateTransform method, then the Graphics.RotateTransform method, and finally Graphics.DrawString.  Here’s what it looks like.

using (var brush = new SolidBrush(ForeColor))
{
    var stringFormat = new StringFormat
                           {
                               Alignment = StringAlignment.Near,
                               LineAlignment = StringAlignment.Near
                           };

    // transformCoordinate and rotationDegrees are computed elsewhere in the control
    e.Graphics.TranslateTransform(transformCoordinate.X, transformCoordinate.Y);
    e.Graphics.RotateTransform(rotationDegrees);
    e.Graphics.DrawString(Text, Font, brush, DisplayRectangle, stringFormat);
}

What you see are the three steps that I outlined above.  Let’s start at the bottom and work our way up.  The code exists inside of a UserControl’s overridden OnPaint method.  The DrawString method makes use of some of the properties on the control, like Text and Font.  It also uses the DisplayRectangle property to set the boundaries for the drawing to be the same size as the control.  This is one of the keys to making the rotations work.  The other key is to provide DrawString with the StringFormat settings.  By setting both the Alignment and LineAlignment to StringAlignment.Near, you are declaring that the text’s location should be based in the top left of the DisplayRectangle’s area.

Graphics.RotateTransform is how you set the rotation value.  In the case of our control, we would be putting in a value from the list of 0, 90, 180, and 270.  As you might expect the rotations are clockwise with 0 starting with the text in the ‘normal’ location.

Graphics.TranslateTransform is where the last piece of magic occurs.  It is here that you set where the top right corner of the text drawing area will be located in the DisplayRectangle’s area.  Here are some images that will help clarify the situation.


When you need the text to appear the same as “Text Area” does in the above image (rotated 0 degrees), you need to set the TranslateTransform X and Y parameters to be those that are designated by the “X” in the image.  In this case, it’s X=0 and Y = 0.


The picture above shows you what you should be displayed when you are rotating the text “Text Area” 90 degrees.  Again, you need to set the TranslateTransform, but this time the values are slightly different.  The Y parameter is still 0, but the X parameter equals the height of the text.  You can get this value by using the following line of code:

var textSize = TextRenderer.MeasureText(Text, Font);


To render the text upside down we set the rotation to 180 degrees and then, again, determine the location of the TranslateTransform X and Y coordinates.  Like we did for the last rotation, we will need to retrieve the text size to set these values.  For this situation Y will be the text height and X will be the text width.


The final step is to make the rotation work for 270 degrees.  Like all the others, we need to set the X and Y coordinates for the TranslateTransform method call.  Here the Y value will be the text width and the X value will be 0.
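Putting the four cases together, the translate origin can be computed from the measured text size.  Here’s a sketch of that mapping (RotationOrigin is a name of my own; inside the control you’d feed it TextRenderer.MeasureText(Text, Font) and the current rotation):

```csharp
using System.Drawing;

public static class RotationOrigin
{
    // Map a rotation to the TranslateTransform origin, per the rules above:
    //   0   -> (0, 0)
    //   90  -> (text height, 0)
    //   180 -> (text width, text height)
    //   270 -> (0, text width)
    public static Point GetTransformCoordinate(Size textSize, int rotationDegrees)
    {
        switch (rotationDegrees)
        {
            case 90:  return new Point(textSize.Height, 0);
            case 180: return new Point(textSize.Width, textSize.Height);
            case 270: return new Point(0, textSize.Width);
            default:  return new Point(0, 0); // 0 degrees
        }
    }
}
```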

This is simply the first step of many to making a control that will allow rotation of the text and locating it in one of 9 locations in a 3x3 grid representation of the control’s DisplayRectangle.  More on that in another blog post though.

PrairieDevCon 2010 wrapup

Friday past brought the end of the first incarnation of the PrairieDevCon in Regina.  The conference had a great buzz about it: people, interest, conversations and learning.  It really was a blast to be at.  Thanks to everyone who attended in whatever capacity, since it was you who made this event so much fun and so productive.

Here are the materials from the sessions that I presented.  There isn’t anything for the panel discussion since it was all off the cuff.  If you weren’t there you didn’t get to add or absorb….sorry.

Intro To Aspect Oriented Programming: Slides, Code
ORM Fundamentals: Slides

Thanks again everyone and I hope to get invited back to do this all again next year.

Where do you start building skills from?

In the past I’ve had to take development teams and build their skills.  It was part of what I was hired to do.  “Build an app, and at the same time make our developers better.”  I’m back at it again, and today I had a chat with someone online about where you need to start.

First you need to know what your goals are.  Usually I find that management is, by asking me to make their developers “better”, looking to increase quality, decrease development time and increase maintainability.  All of these are pretty vague and there’s certainly no one-day course for each one, let alone all of them.  So where do you start then?

One of the first lessons I learned while at basic officer training was that before getting my section/platoon/company working on a task I needed to know what their skills (special or otherwise) were.  The lesson was all about resource management.  I’m starting a new project complete with a new (to me) development team and once again I’m being asked to make them “better”.  I could go into a meeting room right now and tell them all how they should be doing TDD, BDD, DDD, SOLID, etc.  Some (most I hope) of you will agree that these are practices that can make you a better developer.  It would be far more prudent of me to walk into that room and ask instead of state though.  I should take the lessons of my Drill Sergeant and initially put effort (and not much will be needed) into evaluating what skills (special or otherwise) the team has.  That knowledge is going to set the foundation for how I will approach making these developers “better”.

One of the questions raised in the conversation I was having today was “When we talk about things that we can throw at developers to learn, something like DDD is (sic) beneficial. By the time someone reads the ‘blue book’ they should know quite a bit.  Where would you place it (sic) relative to SOLID or the other good practices?”  This raised the question of what knowledge in what order when dealing with under trained developers.

For me the whole idea revolves around one thing building on another.  Yes, you could dive straight into DDD, but wouldn’t it be easier if you understood something about SOLID, or OO fundamentals?  So what is my preferred order then?  Depending on what the developers’ skills are I may start in different places, but usually the order would be something like this.

  1. Tooling.  Understanding and being effective inside Visual Studio and other tools that are used every day.
  2. OO Fundamentals.  Abstraction, Encapsulation, Polymorphism and Inheritance.
  3. DRY. Simple code reuse techniques.
  4. SRP. Single Responsibility is the foundation (in my mind) for all that follows.
  5. OLID.  The rest of SOLID.
  6. Coupling.  Why developers should care and how they can deal with it effectively.
  7. Application Layers.  How to use logical layering to build more flexible applications.
  8. TDD.  With the foundation for good design acquired, developers are ready to learn how to apply those design skills.
  9. DDD.  How to think about business and translate it into code.
  10. Frameworks.  With the foundations built through this list, I feel developers are ready to understand how to use tools such as nHibernate, StructureMap, log4net and others.
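
Item 4 carries the most weight for me, so here is a minimal sketch of what SRP looks like in practice.  It is a hypothetical example of my own (a report formatter/sender), not code from any project mentioned here, and it’s in Python purely for brevity:

```python
# Before: one class with two reasons to change -- formatting rules and delivery.
class ReportMailer:
    def format_and_send(self, lines, outbox):
        body = "\n".join(f"* {line}" for line in lines)  # formatting concern
        outbox.append(body)                              # delivery concern


# After: each class has exactly one reason to change.
class ReportFormatter:
    def format(self, lines):
        return "\n".join(f"* {line}" for line in lines)


class ReportSender:
    def __init__(self, outbox):
        # "outbox" stands in for a real transport (SMTP, a message queue, ...)
        self.outbox = outbox

    def send(self, body):
        self.outbox.append(body)
```

Once the split is made, a change to formatting rules never risks breaking delivery, and each piece can be tested in isolation.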

I made the mistake that most developers have: jumping straight into frameworks.  While it didn’t set my career back, I did need to take a step back and put concerted effort into building my way back up to frameworks again.  The best part?  With all of the fundamental and foundational knowledge in place, learning almost any framework is quite simple.

You can’t blast into a room full of developers and expect them to follow this list (or any other on this topic) to overnight success.  It’s going to take time.  Time, effort and commitment.  At least the transition from one learning topic to the next will be logical and smooth.

Winnipeg Code Camp and DevTeach Toronto

It’s been a hectic couple of weeks speaking at Winnipeg Code Camp, the Winnipeg .NET User Group and DevTeach Toronto.  All the events were a blast to do.  To those who attended, thanks for the great questions and conversations.

Session material is available as follows:

Introduction to Dependency Inversion and Inversion of Control Containers – Slide deck, Dimecasts
ORM Fundamentals – Slide deck
Software Craftsmanship – Slide deck


NotAtPDC and ICE Edmonton session material

Thanks to those who attended the session that I did on “A (Failed?) Project From the Perspective of a Team Lead”.  As you know from being there, the slide deck by itself is not all that useful.  Instead of putting your memory to the test I’ve written up an accompanying white paper that can be downloaded here (pdf format).  If you also look back at my series on failure you can see some other examples of what and where I’ve failed on projects and how they could have been, or were, resolved.



Failure: Establishing Flow

Like my past posts in this series, we’re going to talk about failures, and this is a problem I’ve had at a number of places where I’ve been employed or contracted: I just can’t maintain flow.  Developers all know what flow is, whether that’s what you call it or not.  You know that feeling when you’re “in the zone” or “on fire”.  That’s flow.  It’s a period of focused hyper-productivity.  We enjoy it.  I long for the adrenaline-like rush that it provides.  The thing is, achieving flow is the exception, not the rule, in our daily lives.  Here are some examples from my experiences.

At one place I worked I shared an office with another programmer.  It was located part way down a quiet dead-end hallway.  Just a little farther down that hallway was an office belonging to a manager in the company.  She had a fairly regular stream of people coming and going through the day.  The office that I was in was a little bit weird in its layout.  While there was a hallway right outside of it, the office also had doors connecting it to those on either side of it.  For some reason my co-workers got in the habit of not using the hallway to walk to this manager’s office.  Instead they would travel through my office and use the joining door.  When the manager was busy, people would linger in my office waiting for her to free up.  Essentially my office became a waiting room for this manager’s visitors.  As you can imagine it was quite disruptive.  No amount of loud music in headphones could completely block the distractions of movement that occurred in my peripheral vision all day long.

In another working situation, I was managing a fairly large development team.  As a result, I was required to be constantly moving about helping to solve problems, clear roadblocks and to communicate (read: meetings) with management above me.  All of those tasks were part of the job.  Over time my managers began to request my attention on shorter and shorter notice.  This resulted in my day constantly being broken up by spur-of-the-moment meeting requests and drop-in discussions.  Regularly these would interrupt work that I had to get done.  Instead of being able to focus on that work for any appreciable amount of time, I found myself constantly having to re-integrate the work task into the front of my conscious mind.  As most of us know, this kind of context switching takes an enormous amount of effort and an appreciable amount of time.  Tasks that would normally have required 30 contiguous minutes of effort were now taking two hours (when the smaller pieces were added together).

In both of those cases I was far less productive than I could have been.  I responded differently in each situation as well.  In the highly trafficked office I expended an enormous amount of effort trying to block out the distractions.  When that wasn’t providing any results I attempted to break my co-workers’ habits by locking the connecting doors between the offices.  They, of course, revolted against my attempts to change their habits, and freedom of movement was quickly restored for them.  In the end, the only times I was able to regularly achieve a state of flow in that job were when I was working in the office after 7pm.

When I was fighting the cost of context switching in the second example, I did for a short time attempt to counteract the problem by block-booking “meetings” in my calendar.  For a short while people respected those meeting blocks, which gave me the desired results.  Then the old habits and practices came back once people clued in to what I was doing.  Once that happened, seeing a “meeting” entry in my calendar simply prompted people to walk by my cubicle to see if I was there or not.  If I was….well…you get the picture.  Like the other example, the only times that I could make any progress were in non-traditional working hours.  For the duration of that work engagement I never once successfully achieved flow.

So how is this a failure (since this is a series on failures)?  I failed to be able to produce at the levels that my employers should have received from me.  Yes, a significant portion of the blame can rest with my co-workers and bosses for not allowing or providing an environment where flow was possible.  On the flip side, as a professional I should have requested that I be allowed to create those conditions.  I’m not saying that I should have marched into management’s office and stated “I want to be left alone to work”.  That just won’t fly.  Any development role requires interaction with others.  You can’t just squirrel away and expect to have full day after full day of flow capable time.  It’s unrealistic.  There are a couple of things I should have done though.

First I should have requested to have an office that minimized any unnecessary distractions.  I should have made it clear that having my office be a hallway and/or waiting room was unacceptable considering the alternative (the hallway) that was available. 

I should have made it clear to people that I had ‘office hours’ where I would be available and ‘productive hours’ where I would not.  As long as the ratio of the two is well balanced (this will vary depending on your role and the requirements that the job places on you), I don’t see this as being a difficult request to fulfill.

Finally, I should have made it clear to all of my bosses that the environment was not allowing me to achieve what I was being asked to do.  I should have explained to them, and quantified if requested, how much it was costing them by keeping me from achieving flow.

In any event, both of those experiences are from a number of years ago.  It wasn’t until last spring that I was lucky enough to hear Neal Ford mention the book Flow by Mihaly Csikszentmihalyi in one of his talks.  I spent a week reading and really absorbing the information in the book and now understand that the state of flow doesn’t just happen.  You can construct it and make it repeatable.  Not allowing myself to do that at future jobs will now be a failure completely borne by me.


Failure: Napkins and a completed product are not good enough

Continuing on my journey through failures and the lessons that I’ve learned, we’re going to make a stop at a project that I did when I worked at a very small (but successful) development shop.

I started at this shop and one of the first tasks that they assigned me was building a reporting engine for a POS-styled application that they were re-developing.  This sounded fine by me as I’d just rolled out of a company where I’d done the exact same thing.  I gathered what little info I had on the new and old applications plus the reporting that was being replaced.  People dropped printed copies of existing reports on my desk.  I was getting my bearings with regard to the outputs that might be needed, but I was still completely lost when it came to what the user experience should be.  It wasn’t too long into this project that I realized this (I hadn’t written any code yet, in fact) and decided that I needed a sit-down with whoever was going to act as the client.

After finding out who I should talk to I set up a meeting so that I could gather some requirements.  I sat down and laid it on the table that I wasn’t sure what the expected experience should be when we finished this thing.  The response was “It should be configurable.”  I asked how.  I was told “We want to be able to change things like logos and stuff when we put this out to different clients.”  Okay…this person was obviously thinking of the final reports.  I redirected the conversation back to the user experience leading up to the generation of the reports.  The advice I was given at this time was “We need the same selection criteria as we currently have.”  Nothing more (nothing less, mind you).  When I asked about layout, modality (this was a WinForms app) and other things that every user would see and have to deal with I was met with blankness.  After an hour of probing and questioning in different ways I left the meeting holding onto one detail…the criteria must be the same as before.

I left the meeting believing that the user experience was firmly in my hands and that anything I did (within reason and practicality) would be acceptable.  Oh my was I wrong.  I worked for about a month and a half on the product and had some working reports that I could show so I asked for a desk review of what I’d done.  None of the user experience was acceptable.  It didn’t mimic the green screen style of the system we were replacing.  In the eyes of the reviewers, I had horrifically failed.

For some reason, when trying to initially gather requirements I chose to ignore my past experience, which had taught me that just because a client didn’t state something, it doesn’t mean that they didn’t want it.  I should have known to dig harder to pull those requirements out.  That was a large part of my failure.  One simple assumption quickly transitioned into a mistake which grew into failure.

How could I have fixed it?  Well, the obvious answer is that I should have asked more questions and taken more time before getting started.  Sure, that would work, but I like to get things delivered quickly, so having a large up-front requirements gathering task just isn’t my style.  The not-so-obvious answer is that I should have started, but delivered after the first week instead of waiting one and a half months.  I should have mocked up the user experience in a way that conveyed my intentions and direction and shown that to the parties concerned.  I could have done it by drawing up some forms and linking them together to show navigation and mock content.  I could have taken the decidedly lower-tech route and just drawn up some storyboards.  Either way, after one week I would have known if I was on the right track or not.  If not, I would have had a great opportunity to use the existing mockup as a conversation starter to elicit more requirements, and then I could have started over.

Having to start over after one week is hardly a failure in my mind.  In the absence of guidance it would have acted as a necessary step in gathering requirements.  Better yet, I would have burned only one week’s effort instead of one and a half months’.  That is a more responsible use of the company’s money and a much better way to get exactly what the business needs.  Fail fast and succeed.

Failure: Interviewing for a position

Experience is simply the name we give our mistakes – Oscar Wilde

Continuing on my series of posts themed around failure, we’re here to look at an interview that I once did.  It resulted in me taking the job.  That was the failure.  Before we get too far into this, let’s step back to the beginning…

When I was just getting started contracting/consulting independently, I was presented with the opportunity to be a team lead for a project.  I was told that the client was very excited to have me as a possible candidate, given my experience.  Feeling good about the fact that I was desired by a company, I went off and did what I thought would be the first in a series of interviews with them for this position.  I arrived at my interview and nothing seemed out of place when we got started.  There was the obligatory HR person and another who was introduced as the Project Manager.  As I’ve often found to be the case, HR took the reins and led us down a touchy-feely path about how I work with people and “tell me about one time that you were in a stressful situation and how you dealt with it” stuff.  As I was expecting that this was simply a starter interview that would determine if HR and the PM thought that I’d fit in their environment and team, I played along for about half an hour.  That’s when it got different…and I didn’t notice enough to act on it.

The PM started to investigate my technical skills, or so he said he was going to.  He led off with a monologue about the team, the project, the client and the importance of all these things.  After about fifteen minutes of me listening to his manager-speak he asked if I thought I could handle this, to which I replied that I could.  He immediately followed that by asking “So you know about design patterns?”  I was relieved.  Finally I was going to be able to show some of the technical skills/knowledge that I’d amassed over the years.

“Yes, I have experience with the theory of them as well as their practical applications.”  I left the door open for the PM to take the design pattern conversation where he wanted to which was….

“Okay, I think you’ll be able to work with our library of them then.” Looks at HR person, “I think we’re done here.”

And the interview ended.  I was so dumbfounded at that being the technical portion of the interview I couldn’t react and left, my mouth agape.

The next day I was offered a contract that saw me leading (at one point) a team of sixteen developers on a project that nearly had an eight digit budget.

Where is the failure in getting a lucrative, long-term contract?  I had no idea what I was getting into.  Zero idea.  I didn’t ask a single question.  Interviews are a two-way process.  The employer needs to ensure that the interviewee isn’t selling a false bill of goods, will fit the culture, etc.  The interviewee has to ensure that the role on offer will fit them, the culture is one they can work in, the project is interesting, etc.  Employers get a detailed CV covering the interviewee’s history.  Usually it’s a couple, or more, pages in length.  The interviewee gets a boilerplate job posting and Google.  The knowledge of each other is firmly slanted in favour of the employer.  As a result, interviewees have to spend more time probing during the interview to try to level the playing field.  I didn’t do that and, as a result, failed myself into a position that was a less than enjoyable experience.

I needed to ask to have a technical person in the interview.  I needed to ask why it was rumoured that this project was nearing 100% developer turnover rate in the last twelve months.  I needed to know what the codebase looked like.  I needed to know what the current developers were bitching about.  I needed to know a lot more than I did.

I’ve since learned not to allow an interview to end until I’m satisfied that I have enough information to make a decision.  If people claim to have to end the interview due to other commitments, I ask to meet with other members of the project or team.  If they won’t produce those members, it counts against them.  If they won’t schedule a follow-up (if required), and one that has more than just HR in it, it counts against them again.  The process isn’t done until you can make a well-informed decision.  If you can’t, you need to tell the employer that, and walk away if they won’t help you get there.

Allowing that interview process to get me into a long term contract was a mistake…and an experience.  Certainly one that I’ve learned from.

Failure: An introspective series on those I’ve created and endured

I recently posted about Hiring Internally and it turned out to be about the failure in leadership that I had on a project.  I’m fine with that.  For those of you that don’t know, I fail.  I do so spectacularly (sometimes) and regularly.  Welcome to my realm.  What I do, however, that may (or may not) separate me, is spend considerable time thinking about the reasons for, actions taken during and result of those failures.

In the spirit of Scott C. Reynolds’ recently started series baring all for criticism, I’m going to do the same.  I’m going to openly document my failures.  I’ll do my best to make them detailed and comprehensive, but I will invariably miss something.  Please offer your thoughts and criticism.  I’m a big boy and I can take them.  Hopefully you can take something from one of these and learn from it.  If not, unleash your hounds and tell me how I did it wrong and why you would have done it differently.  I need to learn too, after all.

I’ll keep updating this post with links to all of the posts in the series.  I’m not sure how long this will be or what content it will cover.  When I get bored with it I’ll probably stop.  When I think of something I’ll probably write.  Either of those might be my first failure in this series.

Hiring Internally
Interviewing for a position
Napkins and a completed product are not good enough
Establishing Flow

Hiring internally

In our Brownfield Application Development book, and in previous posts on this blog, Kyle and I have talked about the different personalities that you can encounter while working on a development project.  Unfortunately many of the types we listed, and that you see and remember, are negative.  Because development is such a social task (whether you like it or not) conflicts in personalities, styles and objectives are easily surfaced.  Add on a heavy dose of introversion and natural social ineptness and you have a cauldron of simmering conflict just waiting to bubble over.

During the hiring process HR and dev teams do their best to eliminate candidates that are obviously going to bring these types of issues with them.  Sometimes it works, occasionally they slip through and you have to deal with them.  Unfortunately, it’s very rare to see the same diligence applied to the transfer of people from one internal team to another.  The results can be disastrous.  About a year ago this happened to my team (I say my because I was the tech lead for the project).

We, as a cohesive and very productive team, were ticking along meeting deadlines, turning out high-quality code and generally getting praise from the clients for these things.  Management decided that there was more work for us to take on and transferred another of the already on-staff developers to our project to assist with the new workload.  Don’t start thinking of the Mythical Man-Month and adding resources to an already late project.  We weren’t late.  In fact, we were ahead of the project schedule at that time.  On top of that, the work being assigned to the new team member was isolated in its own branch with the thought of it being included in a future release, not the one we were currently working to finish.

Knowing that this developer would be working in isolation I decided that the impact to the team was minimal and shed any worries I had at the time.  Things moved ahead for a few weeks before they came to a crashing halt.  I won’t get into details about what happened but the underlying problem was that the developer recently assigned to our team didn’t have the same beliefs or culture as the rest of the team did.  The result was a series of very animated and heated conversations that led to a requirement for serious and significant policing of the development actions of this person.  In the end the original team had to declare to management (above the project management level) that having this individual working on the project was adding significant risk to the current release and had already jeopardized the following release.  The result was that the offending developer was removed from the project and a 100% re-write of his code (four months effort) was required to bring it to the project’s quality standard.

The lesson that I wanted to convey from this incident is that you have to be diligent in the screening of any person joining your team, regardless of whether they are an external or internal hire.  Blindly accepting a new resource because they are already within the organization doesn’t preclude them from introducing risk and conflict to the project.  If you are a team/technical lead, architect or manager for a team, you have a responsibility to protect your team from distractions, morale killers and risks.  In this case flags had been raised months before this individual joined the team, but nothing was done with them.  The statement “It doesn’t matter what the client wants, I’ll give them what I think they need” was boldly made, and I failed to remember it when the speaker was pushed onto our team.  I failed my team by not filtering based on that past interaction.  Don’t let internal recruitment cause you to fail yours.

Apply SRP to your emails….please

I recently got an email that had no fewer than eight significant topics in it. Yes, it was a long email. As a result of this email I was unable to remember and act on all the different topics.  Sound like a big messy class/method to you?  Sure it does. 

I propose that people now send emails as they would write code: singly responsible.  I don’t care if I get eight emails instead of one; it doesn’t cost me anything.  I can, however, flag for follow-up, organize or delete each as I see fit.  Like a good method name, the subject line should reveal the entire intent of the email.  Like a method name, the urge to put “and”, “or”, and “plus” in the subject line should be a smell indicating that you have too many topics.

If your emails come in with more than one topic in them, I’m likely to miss one item.  Heck, I’m likely to delete the email when I’ve acted on only part of what you need me to.  In that case, it’s tough shit on you, I’m afraid.  While it’s convenient for you to type it all up in one window, it’s easier for me to do the job you require of me if I have them separate.

With that, I’m going to start working on an intelligent Outlook filter for this problem.
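
If I do build it, the core heuristic will probably be no smarter than this sketch.  The word list and the whole approach are my own guesses at a starting point, not a real Outlook rule (and the sketch is in Python, not an Outlook add-in):

```python
import re

# Words in a subject line that hint at more than one topic (my assumption).
MULTI_TOPIC_WORDS = {"and", "or", "plus"}

def looks_multi_topic(subject):
    """Flag a subject line that smells like it carries several topics."""
    words = re.findall(r"[a-z]+", subject.lower())
    return any(word in MULTI_TOPIC_WORDS for word in words)
```

A subject like “Status report and vacation request plus parking passes” trips the check; “Status report” does not.  False positives are inevitable (“research and development” is one topic), which is why this is a smell, not a rule.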

Quick take on Bing

I’ve been playing around with Microsoft’s new, yet-to-be-released, search engine for the last couple of days.  Miguel Carrasco has done a pretty detailed review of its capabilities here.  I don’t see the point in me re-hashing his findings, so I’m going to point out some things that I’ve noticed and liked.

Image Search Paging

The best thing about the paging model on image searches is that there isn’t any.  Yah, no longer do you have to click through the page numbers or “Next” button/links on the bottom of the current page.  Instead, with Bing you just scroll down and it will fill in the next set of images for you. Scroll down some more and it will fill in the next batch for you again.  Personally, I think this is a fantastic way to navigate through results.

Web Search Paging

Unfortunately, the people responsible for the web search results didn’t work closely enough with the image search people to get the same paging model implemented.  You still have to click “Next” or a page number to see more results.  This is a fail in my mind.  So close to success, but a usability miss.

Content Preview

Hovering over a search result allows you to move to the far right of it and expand out a preview of the content in the web page that contains the term you’re searching for.  Too often I search for stuff and find that the previews provided in search results don’t let me understand the context of the term’s use.  This looks like it could help with that.

Grouping of Information

If you look at this search for Ferrari you see that there are groupings for content such as “Cars”, “For Sale”, “Dealers” and more.  The more that I use the search engine, the more I wish that this would be expanded to other search terms. For example, searching for “NHibernate WCF” doesn’t bring back results that are grouped.  Why not have them grouped under headings like “Blog”, “Magazine”, “Provider Content”, etc?  I think this would help people to better decide what trust level to assign certain content.  I know it would certainly help me to focus on the areas that I think provided better content value.


It’ll be really interesting to see how Bing turns out.  Maybe Google’s search really isn’t all that.  Maybe it was just the best that we had, but it could be improved a lot.  Maybe Bing does this.  Time will tell.


A little more WCF NHibernate

As part of my recent changes to the WCF-NHibernate code I have, I declared that there wasn’t going to be a way to handle automatic transaction rollbacks when WCF faults were raised.  I wasn’t even sure that I wanted them in the tool.  Andreas Öhlund pointed out that rollback could be handled quite nicely using the IErrorHandler interface within WCF.  After some toying around with the idea and some proof-of-concept implementations, I decided to add this ability in.

Currently you identify a WCF service as using the NHibernate code by adding a [NHibernateContext] attribute to the *.svc.cs file.  I wanted to keep that syntax, and the rollback capability, as clean as possible.  Rather than adding another attribute, I parameterized the existing one.  Now you can indicate that the service should automatically rollback the NHibernate transaction when WCF Faults are being raised simply by attributing the *.svc.cs file with [NHibernateContext(Rollback.Automatically)].  The default [NHibernateContext] requires manual rollbacks and exists as such for backwards compatibility.
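
The idea behind Rollback.Automatically is simple to sketch: commit the unit of work when the service call succeeds, roll it back and re-raise when it faults.  Here is a language-neutral analogue in Python, where the session object and its methods are stand-ins for illustration, not NHibernate’s actual API:

```python
import functools

def rollback_on_fault(func):
    """Commit the unit of work on success; roll it back if the call raises."""
    @functools.wraps(func)
    def wrapper(session, *args, **kwargs):
        try:
            result = func(session, *args, **kwargs)
            session.commit()
            return result
        except Exception:
            session.rollback()
            raise  # let the caller still see the fault
    return wrapper

# A fake session so the sketch is self-contained.
class FakeSession:
    def __init__(self):
        self.committed = False
        self.rolled_back = False

    def commit(self):
        self.committed = True

    def rollback(self):
        self.rolled_back = True
```

IErrorHandler is what gives WCF a hook at exactly this try/except boundary, which is what makes the attribute-driven version possible.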

More information can be found over on the wiki and the code can be grabbed from the svn trunk.

The Possessive Developer

Wrapping up our first pass at Development Project Archetypes we look at a common culprit on brownfield teams.

During your first week on the project you’re assigned a mentor who has written a large portion of the existing application. While working on your first serious defect in the system, you ask the Possessive Developer about an existing piece of code and suggest refactoring. She looks back at you and states “There’s no reason to change that code. It works just as I want it to.” After explaining its deficiencies, the Possessive Developer is sullen and in a less-than-cordial mood. At the end of the conversation she has neither agreed nor disagreed with the suggested changes, but there is tension in the air.

The following morning the Possessive Developer corners you at the water cooler and lashes into a list of reasons that there should be no changes to the code discussed the previous day. At the end of the one-way conversation she walks away with confidence in her stride. You now know. It is her code, her creation and her domain. Only the Possessive Developer can state when her code is to be changed.

Like the “Oooo…Shiny!” Developer, code reviews with peers can be useful. In a more neutral forum, it’s much harder to argue against the majority. But don’t ignore The Possessive Developer. It doesn’t take long to turn her into a Skeptic or a Hero.


The 'Oooo...Shiny!' Developer

For the final few posts in the Development Project Archetypes we'll focus on developers.

An incestuous cousin to the Front of the Magazine Architect, this developer is easily distracted by any new technology. Not only will he want to talk about it endlessly, the ‘Oooo…Shiny!’ Developer will simply add the technology to the project without telling anyone. You will find, scattered through the code base, a number of different techniques, tools or frameworks that are used one time and then abandoned. While adding to your technical debt, the ‘Oooo…Shiny!’ Developer is working feverishly to keep adding new entries to the “Experience Using” section of his resume.

Sometimes it is easy enough to counter his predilection for the new and shiny simply by placing a pretty glass bead on his keyboard every morning. When that fails, it’s time to up the priority of the code reviews for the ‘Oooo…Shiny!’ Developer. And be merciless.

The Enhancing Tester/QA

Still avoiding developers, we continue talking about archetypes...

Usually found in the confines of an organization that has heavily siloed roles and responsibilities, the Enhancing Tester will be assigned responsibility for ensuring the product’s quality. She believes it is her personal responsibility to question and alter any specifications that were used in creating the software. Since she wasn’t involved at the start of the development cycle, the Enhancing Tester will question the design and requirements only after the development team has passed them on for test. The proposed ‘enhancements’ are usually obscure and have far-reaching architectural ramifications. For example, “this really should be an MDI application.”

Since she understands the original requirements, but not the overall business, the Enhancing Tester has no choice but to log these as software bugs so that they will get the attention of the team. After working with her for a few months you will wake up in a cold sweat yelling “It’s not a bug, it’s a feature change!”

Usually, you can counter this by assigning cost estimates to the “bug fixes”. It helps to do so in front of the client.


The Over Protective DBA

Deviating from the developer sphere, we continue the Development Project Archetypes...

A good many applications require access to a database. If you’re lucky, you’ll have free rein over the database to make whatever changes you deem necessary. If you’re unlucky, you’ll need to make those changes through an Over-Protective DBA.

The Over-Protective DBA protects his database with an iron fist. Requests for changes to a stored procedure go through several iterations to ensure they include the standard corporate header and naming conventions. He also challenges every single piece of code in the procedure to see whether you really need it. Only when satisfied that the application can’t be deployed without it will he grace the database with your changes. In the development environment, at least…

If you really want a battle on your hands, suggest to an Over-Protective DBA that you should switch to an object-relational mapper. Be prepared to launch into a prolonged debate on the performance of stored procedures vs. dynamic SQL, the dangers of SQL injection, and the “importance” of being able to deploy changes to business logic without re-compiling the application.

The Over-Protective DBA often has company policy on his side so he will be a challenge. Don’t spend a lot of time confronting him head-to-head. Your database is an important part of your application, it behoves you to get along with him. Instead, arm yourself with knowledge and talk to him in common terms. In our experience, DBAs can often be negotiated with for certain things, such as an automated database deployment.

The Hero Developer

Another in the Archetypes series...

Everyone loves a hero. The PM, the architects and the client relish the long hours he puts into delivering results. When the client is told we don’t have the budget or manpower to add a feature, the hero’s cubicle is his first stop after the meeting. “Old man Baley says we can’t have this. But we NEED it.” The Hero hums and haws and complains how badly the project is being managed, then with a sigh says, “I’ll put it in. But this is the LAST TIME.”

The Hero then proceeds to circumvent your entire development architecture wedging the feature in because he doesn’t understand terms like “budget” and “resources”. All he cares about is getting his ego stroked and being the martyr that saved the project. The long hours he puts in are heralded by the PM and the client who don’t realize his effort is not directly correlated to the value he is delivering.

Project Managers and Clients will scoff at you when you make claims against their Hero. In their mind, he is a cornerstone whose absence will wreak havoc on the success of the project. Regardless of their actual ability, Heroes are often more trouble than they’re worth.
