For the last few months I've been on a steady mission to absorb as much about lean software development as I can. This morning I got up and decided to enjoy the morning air, the slight breeze, my Zune and the start of the Poppendiecks' Lean Software Development: An Agile Toolkit. Having recently read The Toyota Way, it was already clear to me that eliminating waste is a large component of lean ideals. The first chapter of the Poppendiecks' book cemented for me how waste manifests itself in software development, and I immediately started thinking about a recent experience of mine.
A little while ago I was asked to take a look at some modelling that another project was doing. The idea was that I would review it and contribute to a discussion with information on our project's Domain Driven Design movement. I was provided with a forwarded copy of an email thread, spanning a conversation between a couple of architects and a senior-level developer, along with a small set of attached diagrams. Rather than working through the multi-page email thread first, I decided to take a look at the diagrams. My thought process boiled down to this: looking at four diagrams would give me much faster initial insight, and it did.
I immediately realized that the concept of modelling used on this other project was completely different from the model that we created organically as part of our DDD efforts. While the attachments did give me quick insight into the discussion at hand, detailed inspection of them didn't provide any information on the project itself. After reviewing all four 'model' diagrams, I still had no idea what the project was about. It's at this point that I decided I needed to read the email thread.
It was in the email thread that things got interesting for me. I'll say in advance that I still have no idea what business area, let alone problem, the project is addressing, but that wasn't the most interesting thing that I took from the reading. The email thread consisted of a series of five to seven emails spread out over approximately a week. The conversation was triggered by the initial question of "How do the different models you are presenting relate to each other?". Through the remainder of the thread, the relationship between the models was no clearer to me.
My first thought was that the number of different models was creating communication friction. The different model diagrams that I had seen in the attachments seemed as though they were fragmented. While they were cohesive (in the programming sense), they were so loosely coupled that they didn't tie together in any way. Combined, the diagrams didn't create an understandable story for their consumer. This is what I initially thought the problem was.
After some thought, though, I decided the problem wasn't that the project had four different models that weren't tied together in any meaningful way. What I think happened is that these diagrams were created to enhance communication where it wasn't needed. One of the diagrams outlined that the project would create a Semantic Model during business analysis, a Platform-Independent Model during technical analysis and a Platform-Specific Model during technical design. Overall, the project was creating three models that, I'm guessing, were to represent the same business functionality. Three different views on the same thing. Does the business get any increased value from having these three representations created? Probably not. Instead, they're getting more effort required for the initial creation of the models. More effort to maintain the models through the life of the software. More effort to communicate how the models relate.
All of that effort is wasteful since it doesn't provide any additional value to the client. Lean software development is all about eliminating waste. I'd say drop the multiple models in favour of one that accurately represents the business and is written in a way that allows all parties (clients, analysts and developers) to communicate in one language and context.
I know this has been beaten to death in some circles (521,000 results in Google, 2,551 results in Google's blog search), but it bears repeating until people start paying it some attention. Separation of Concerns is the idea that distinct areas of a piece of software should overlap as little as possible. The existence of distinct logical layers within your code is the start of good separation of concerns. One layer exists for the UI, another for Data Access, and on and on. Each layer is concerned with one aspect of the application.
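To make the layering concrete, here's a minimal sketch (function names and data invented for illustration) in which each layer knows only about its own concern:

```python
# Hypothetical three-layer sketch: each function owns exactly one concern.

def fetch_customer_row(customer_id):
    """Data access layer: knows about storage, returns raw data only."""
    fake_table = {42: {"name": "Acme Corp", "region": "West"}}
    return fake_table[customer_id]

def get_customer(customer_id):
    """Business layer: applies rules, knows nothing about storage or screens."""
    row = fetch_customer_row(customer_id)
    return {"display_name": row["name"].upper(), "region": row["region"]}

def render_customer(customer_id):
    """UI layer: decides presentation, knows nothing about tables or SQL."""
    customer = get_customer(customer_id)
    return f"Customer: {customer['display_name']} ({customer['region']})"

print(render_customer(42))
```

Swapping the fake in-memory table for a real database would touch only `fetch_customer_row`; the other two functions never change.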
Today I saw a complete lack of separation of concerns on my project. Note that this project has no concept of a dynamically generated user interface. A stored procedure was presented that had intimate knowledge of the controls that were to be displayed in the user interface. The stored procedure's output provided both the data and the control type that said data should be displayed in. So a return result might have looked something like this:
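A hypothetical sketch (field names and control types invented) of a result set that mixes data with UI decisions:

```
FieldName     FieldValue    ControlType
-----------   -----------   ------------
CustomerName  Acme Corp     TextBox
Region        West          DropDownList
IsActive      1             CheckBox
```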
The blurring of concerns here isn't so much a blurring as a complete erasure. To change the physical display of data on the screen, a stored procedure (a data access component) must be changed. Because there's a significant group of people in the industry who don't seem to be catching on to why this is a problem, let's look at it in a bit more detail.
In this situation there are two distinct concerns in the life of the application: displaying data (UI) and retrieving data (data access). Each has distinctly different requirements. When displaying data, the application is dealing with screen colour, font information, control location, interaction points, etc. All of these things relate to the experience of the user. The standard user's primary concern doesn't include which table in the database the information comes from, if/how logging is occurring, ORM vs stored procs, and so on.
The second identifiable concern in this scenario is data access. DALs are quite well known in the industry. A DAL is just a logical layer within your application. Simply having a DAL is a decent first step towards separating out the concerns surrounding your data access methods, techniques and nuances. It is just a start though. A data access layer will be concerned, in some way tied to the technology you're using, with things like creating connections, queries and transactions. All of those things, plus some others, have to do with the physical interaction between your code and the data storage repository. That physical interaction includes anything that lies between table, column, row and your application. The interaction might use inline parameterized queries or it might use stored procedures. Both of those are part of your DAL, along with the C#, VB, Java or whatever code that is managing the connections, transactions, etc.
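As a sketch of what a thin DAL owns, here's an illustration using Python's sqlite3 module (table and column names invented): the layer handles connections, parameterized queries and transactions, and hands back plain data, nothing else.

```python
import sqlite3

def create_schema(conn):
    # Storage details live here and nowhere else.
    conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")

def insert_customer(conn, customer_id, name):
    # Parameterized query and transaction handling: the DAL's concern,
    # invisible to the layers above. 'with conn' opens a transaction,
    # commits on success, rolls back on error.
    with conn:
        conn.execute("INSERT INTO customer (id, name) VALUES (?, ?)",
                     (customer_id, name))

def get_customer_name(conn, customer_id):
    # Returns plain data; no formatting, no UI knowledge.
    row = conn.execute("SELECT name FROM customer WHERE id = ?",
                       (customer_id,)).fetchone()
    return row[0] if row else None

conn = sqlite3.connect(":memory:")
create_schema(conn)
insert_customer(conn, 1, "Acme Corp")
print(get_customer_name(conn, 1))
```

Whether the queries are inline and parameterized (as here) or stored procedures, everything above the DAL just calls functions like `get_customer_name` and stays ignorant of the rest.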
So what is the problem with what I saw at work today? If the user interface is concerned with the formatting, layout and control rendering of the screen, why does the stored procedure need to know about it? By putting the control type decision into the stored procedure, we've made every layer between the UI and the DAL (and there should be a few different logical layers in there) aware of the UI implementation. Not all, or even any, of those logical layers will actively interact with the control type information, but it is there, which means each layer is aware of it. The bigger issue is that we've now tightly coupled our UI and stored proc. The UI depends on the stored proc to render correctly; if the stored proc changes, the UI will render differently (beyond just the data displayed within the layout/structure of the UI). The coupling also runs in reverse. The stored procedure now has to be intimately knowledgeable about the capabilities of the UI being implemented. If we moved from a webform to a winform implementation, there is a very high probability that the stored procedure would need changes to continue to work.
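One way to unwind that coupling, sketched with invented names: the data side returns values only, and the decision about which control renders each field lives entirely in the UI layer.

```python
def fetch_profile(user_id):
    # Data access: returns data only, no UI knowledge at all.
    return {"email": "a@example.com", "age": 42, "active": True}

# UI-side mapping from data type to control. Moving from a web-form UI
# to a desktop-form UI means changing only this dictionary; the data
# layer never hears about it.
CONTROL_FOR_TYPE = {str: "TextBox", int: "NumericUpDown", bool: "CheckBox"}

def render_profile(user_id):
    # UI layer: pairs each value with the control the UI chose for it.
    profile = fetch_profile(user_id)
    return {field: (CONTROL_FOR_TYPE[type(value)], value)
            for field, value in profile.items()}

print(render_profile(1))
```

The control-type decision the stored procedure was making now sits where it belongs, so neither side has to change when the other does.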
The capability of one logical layer or concern in the application is directly tied to the implementation in another. Separating each concern will allow you to work in a section of the application with less worry about the implications in another layer or concern.
None of this is an exact and quantifiable science, but every developer in the industry should be able to notice where concerns are being blurred or mixed. If you can't notice this, I'd strongly suggest that you look at changing your career to something that involves less abstraction. Yes, I said it. If you can't conceive of software at this level of abstraction, get a different job. You're not cut out to qualify as a software developer. All you are is a hobbyist who is getting paid. Separation of Concerns is a fundamental practice that all developers need to understand. The size of your application is irrelevant. The expected life of your application is usually irrelevant. The technology that you use is irrelevant. Maintainability, changeability and reversibility are relevant though. Those are some of the things that make applications that sustain the trials of time and business changes.