Microservices and isolation

In my first post I made reference to the idea of microservice isolation a number of times. I figured that this is as good a topic as any to start with. The concept of isolation, along with boundaries, is core to how you build your microservices. Let's leave boundaries for another post because it's a complicated and deep concept all by itself.

It doesn't matter who you listen to or read, microservice isolation is going to come up. Why does everyone consistently bring it up? Because isolation is at the heart of microservices. A microservice is meant to be architected, created, deployed, maintained and retired without affecting any other microservice. You can't do any of that, let alone all of it, without good isolation.

Databases

Probably the most common point made when talking about isolation is database sharing, or better stated, the idea that you should avoid it. Traditional monolithic application development usually sees one large codebase working with one large database. Any area of the monolith can access any area of the database. Not only does the monolith's codebase usually end up looking like a plate of spaghetti, so does the monolithic database. I can't tell you the number of times I've worked on brownfield codebases that have issues with data access, deadlocks being the most common. No matter how well factored a monolithic codebase is, the fact remains that the single database is an integration point for all the different moving pieces in that codebase.

To be able to release a microservice without affecting any other microservice we need to eliminate any integration that occurs at the database level. If you isolate the database so that only one microservice has access to it, you've just said that the only thing that will be affected if the database changes is that one microservice. The testing surface area just shrank for any of those changes. Another benefit is that you have fewer pieces of code interacting with the database, so you can, in theory, better control how and when that code does its thing. This makes it easier to write code that avoids deadlocks, row locks, and other performance-killing or error-inducing situations.

If you listen to enough people talk about microservices for long enough you'll hear a common theme: one database per microservice. I'm going to disagree with the masses here and tell you something slightly different. You should have a minimum of one data store for each of your microservices. The difference is subtle but it's important in my mind. There are times when you might want to store data in multiple different ways within one microservice. As an example, you may be writing a Marketing Campaign microservice. An RDBMS or NoSQL database makes a lot of sense for storing the information about campaigns, content, targets, etc. But if you need to do statistical analysis of the campaign feedback (i.e. email bounces, unsubscribes, click-throughs, etc.) an RDBMS or NoSQL database might not make the most sense. You might be better served by a data cube or some other type of data storage.

Is it going to be normal to have multiple data stores for one microservice? No…but you shouldn't be worried if it does happen, as long as you stay true to one thing: the microservice owns its data stores and no other microservice can access them.

Deployment Isolation

One of the primary goals of moving to a microservices architecture is being able to deploy changes to one microservice without affecting any others. Additionally, you want to ensure that if a microservice starts to fail it's not going to bring down the other microservices around it. This means that you're going to need each microservice deployed in complete isolation. If you're on a .NET stack you can't share AppPools between them. If you do, changes to the permissions or availability of that AppPool could (or likely will) affect other microservices. My experience with Apache is quite limited, but I'm sure there are similar concerns there.

One of the big current talking points around microservices is Docker. Building compartmentalized, self-contained, deployable packages seems to address this goal. The only current issue is that Docker builds a smaller fence around the technologies you can choose when solving your problems. Docker, currently, doesn't support Windows-based applications. You can build your .NET apps and run them in a Linux Docker container, but that's as close as you get…which might not be close enough for some "E"nterprise-y places.

Another piece of the deployment puzzle is what is commonly referred to as 'lock-step' deployments. A lock-step deployment is one where deploying one microservice requires a mandatory deployment of a different one. Usually this happens because the two components (or microservices in this case) are tightly coupled. Usually that coupling is related to API signature changes. I'm going to do a whole blog post on this later in the series, but suffice it to say for now that if you are doing lock-step deployments you need to stop and solve that problem before anything else. If you aren't doing lock-step deployments you need to be vigilant for the signs of them and fight them off as they pop up.

Something that makes lock-step deployments harder to notice, but that is going to be mandatory in your deployment situation, is automation. Everything about your deployment process will need to be automated. If you're coming from a mentality, or reality, of deploying single, monolithic applications you're in for a big shock. You're no longer deploying one application. You're deploying many different microservices. There are a lot more deployable parts, and they're all individually deployable. My project had only 4 microservices and we found that manual deployment was worse than onerous. Everything from the installation of the microservice to the provisioning of the environments that the microservice will run in has to be automated. Ideally you're going to have automated verification tests as part of your deployment. The automation process also gives you the ability to easily create frictionless lock-step deployments though…so you're going to have to be vigilant with your automation.

I know that some of you are probably thinking "That's all fine and good, but I will always have to change APIs at some point, which means that I need to deploy both the API and the consumers of the API together". Well, you don't have to…which kind of leads to…

Microservice <-> Microservice communication

At times there will be no way to avoid communication between two microservices. Sometimes this need to communicate is a sign that you have your bounded context wrong (I’ll be going over bounded contexts in a future post). Sometimes the communication is warranted. Let’s assume that the bounded contexts are correct for this discussion.

To keep microservices isolated we need to pay attention to how they communicate with each other. I'm going to do an entire blog post on this because there are so many things that come into play. For now it's safe to say that you need to pay attention to a couple of key pieces. First, take the approach of having very strict standards for communication between microservices, and loose standards for the technology implementations within each microservice. If you're going to use REST and JSON (which seems to be the winning standard) for communication, be strict about how those endpoints are created and exposed. Also, don't be afraid to use messaging and pub/sub patterns to notify subscribing microservices about events that happen in publishing microservices.
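
To make the "strict contract, loose internals" idea a bit more concrete, here's a minimal sketch in C#. Everything in it is hypothetical: the event, the properties and the IMessageBus abstraction are illustrative only, not taken from any particular messaging library.

    using System;

    // A strict, explicitly versioned event contract. The publishing microservice
    // owns this shape; subscribers only ever see the serialized JSON.
    public class CampaignLaunchedV1
    {
        public Guid CampaignId { get; set; }
        public string CampaignName { get; set; }
        public DateTime LaunchedAtUtc { get; set; }
    }

    // A thin abstraction over whatever transport you pick (RabbitMQ, Azure
    // Service Bus, etc.). The contract stays strict; the transport stays loose.
    public interface IMessageBus
    {
        void Publish<TEvent>(TEvent @event);
    }

    public class CampaignService
    {
        private readonly IMessageBus _bus;

        public CampaignService(IMessageBus bus)
        {
            _bus = bus;
        }

        public void LaunchCampaign(Guid campaignId, string name)
        {
            // ...do the domain work, then tell the world what happened...
            _bus.Publish(new CampaignLaunchedV1
            {
                CampaignId = campaignId,
                CampaignName = name,
                LaunchedAtUtc = DateTime.UtcNow
            });
        }
    }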

The second thing that you need to do, which is related to the first, is spend some time up front deciding what your API versioning story is going to be. API versioning is going to play a big part in maintaining deployment isolation. Your solution will probably require more than simply 'adding a version header' to the communication. You're probably going to need infrastructure to route communications to different deployed versions of the APIs. Remember that each microservice is an isolated deployable, so there is no reason that you couldn't have two or more instances (say v1 and v2) up and running at any one time. Consuming microservices can continue to make uninterrupted use of the v1 endpoints and, as they have the time or need, they can migrate to v2.
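
Here's a minimal sketch of what versioned endpoints living side by side can look like, assuming ASP.NET Web API attribute routing. The controllers and routes are illustrative only, and in practice the two versions might just as easily be two separately deployed instances sitting behind a router; the consumer-facing idea is the same either way.

    using System.Web.Http;

    // v1 and v2 of the same resource live side by side. Existing consumers keep
    // calling /api/v1/campaigns while newer consumers move to /api/v2/campaigns
    // on their own schedule. (Names are illustrative.)
    [RoutePrefix("api/v1/campaigns")]
    public class CampaignsV1Controller : ApiController
    {
        [Route("")]
        public IHttpActionResult Get()
        {
            // The original, unchanged v1 contract
            return Ok(new[] { new { id = 1, name = "Spring promo" } });
        }
    }

    [RoutePrefix("api/v2/campaigns")]
    public class CampaignsV2Controller : ApiController
    {
        [Route("")]
        public IHttpActionResult Get()
        {
            // v2 adds data without touching anything a v1 consumer relies on
            return Ok(new[] { new { id = 1, name = "Spring promo", channel = "email" } });
        }
    }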

Source code

Now that we've talked about the architecture of your microservices, let's take a look at the source code. So far we've been talking about isolating the microservice data stores, the deployment strategy and cross-microservice communication. We didn't come right out and say it, but in all of those areas each microservice is a separate application. How do you currently organize separate applications in your source control tool? Probably as separate repositories. Keep doing this. Every microservice gets its own repository. Full stop. The result is that you're going to see an increase in the number of repositories that you have to manage. Instead of one repository (as you'd probably have with a monolith) you're going to have 1+N, where N is the ever-growing number of microservices that you have.

How you organize the pieces of the puzzle within that application is going to depend on many things. It's going to depend on the components the microservice needs. The more "things" (service buses, web services, background processing, etc.) it has, the more complicated the application's source code structure is likely to be. You might have four or five assemblies, a couple of executables and other deliverables in a single microservice. As long as you have the minimum required to deliver the required functionality then I think you'll be okay. More moving pieces can mean that you're doing too much in the microservice, though. It could be creeping towards monolith territory. So carefully watch how each microservice evolves into its final deliverable.

Another thing to consider is how you perform development isolation in your VCS. I’m not going to get into a branches vs feature-toggles discussion here, but you have to do something like that in your development practices. Things are going to move a lot faster when you’re developing, enhancing and maintaining microservices. Being able to rapidly turn around bug fixes, feature additions or changes becomes a competitive advantage. You need to work within your VCS in a manner that supports this advantage.

Continuous Integration

Following in the steps of the source code is the continuous integration process. Because every microservice is an independent application, you're going to need CI processes to support each and every microservice. This was one of the things that caught my project off guard. We didn't see it coming but we sure noticed it when it happened. As we created new microservices we needed to create all of the supporting CI infrastructure too. In our case we needed to create a build project for every deployment environment that we had to support. We didn't have this automated and we felt the pain. TeamCity helped us a lot but it still took time to set everything up. This was the first hint to us that we needed to automate everything.

Teams

There is a lot of talk about sizing microservices (which, again, I'll cover in a future post). One of the things that continually seems to come up in that discussion is the size of teams. What is often lost is that teams developing microservices should be both independent and isolated…just like the applications that they're building. Conway's law usually makes this a difficult prospect in an 'E'nterprise development environment. The change to developing isolated and compartmentalized microservices is going to require team reorganization to better align the teams with the products being produced.

Teams need to be fully responsible for the entirety of the microservice that they're developing. They can't rely on "the database guys to make those changes" or "infrastructure to create that VM for us". All of those capabilities have to be enabled for and entrusted to the team doing the development. Of course these independent teams will need to communicate with each other, especially if one of the teams is consuming the other's microservice. I think I'll have a future post talking about Consumer Driven Contracts and how they enhance the communication between the teams.

Summary

When talking about isolation and microservices many conversations tend to stop at the "one microservice, one database" level. There are so many other isolation concerns that will appear during the process of building, deploying and maintaining those microservices. The more I've researched and worked on microservices, the more I've become of the opinion that there are a bunch of things we used to get away with not doing on monolithic projects that we absolutely can't ignore anymore. You can't put off figuring out an API versioning scheme. You're going to need it sooner than you think. You can't "figure out your branching strategy when the time comes" because you're going to be working on v2 much sooner than you think.

Isolation is going to save you a lot of headaches. In the case of microservices I'd lean towards what feels like 'too much' isolation when making decisions, rather than taking what will likely be the easier way out of the problem at hand.

Microservices; A Gentle Introduction

This past winter I started working on a project that was being architected with a mind towards using microservices. Prior to this I’d only seen the term ‘microservices’ floating around in the ether and really hadn’t paid much attention to it. I wanted to share what we did and what I learned through the process and my subsequent research. That experience and research has led me to one belief: the microservice topic is massive. This post is going to be a kick-off to a series that will cover that material. With that, let’s dig in.

What are microservices?

Sometimes it's easier to start by describing what something isn't. Microservices are not monolithic applications. That is to say, our traditional application was one unit of code. It is:

  • developed together (even when split into multiple assemblies, it's developed as one codebase)
  • deployed together (the vague promise of "just deploy one DLL from the application by itself" has never been practical or practiced)
  • taken through the full application lifecycle as one contiguous unit (it is built as one, maintained as one and dies as one application)

So a microservice is none of these things. In fact, to start defining what a microservice is, you'd be safe in saying that, at a high level, microservices are the opposite of all these things. Microservices are a bunch of small applications that represent the functionality we once considered to be one application. They are:

  • developed in isolation (contracts between microservices are established, but this is the extent of their knowledge of each other)
  • deployed in isolation (each microservice is its own encapsulated application)
  • living and dying with no relationship to any other microservice (each microservice's application lifecycle is managed independently)

There's a pretty common comparison between microservices and SOA. If you missed the whole SOA bandwagon then 1) you're younger than me, 2) I'm envious of you, and 3) it had its merits. Some people will say "microservices are just SOA done right". I'm not sure that I fully agree with that statement. I don't know that I disagree with it either. As an introduction to microservices you should understand that there are many parallels between them and SOA. Applications that aren't monolithic tend to be distributed. Both SOA and microservice architectures are decidedly distributed. Probably the biggest difference between SOA and microservices is that SOA was quickly co-opted by the big software manufacturing companies. According to them, doing SOA right meant using an Enterprise Service Bus (most likely an Enterprise Service Broker being marketed and pitched as a service bus)…and preferably their ESB, not a competitor's. Gradually those SOA implementations became ESB implementations driven by software vendors and licensing, and the architectural drive of SOA was lost. Instead of a distributed system, business logic was moved out of application codebases and into centralized ESB implementations. If you've ever had to debug a system that had business logic in BizTalk then you know what the result of vendors taking over SOA was.

Thus far (and microservices are older as an architecture than you probably think) the microservice architecture hasn’t been turned into a piece of software that companies are flogging licenses for. It’s questionable whether the hype (justified or not) around Docker as a core component of a microservices implementation is just the start of microservices heading the same direction as SOA. It might be, it might not be.

Each microservice is a well encapsulated, independent and isolated application. If someone pitches microservices at you and there's a shared database, or two microservices must be deployed in unison, or changes to one microservice force changes on another, then they're pitching you something that isn't a true microservice. These concepts are going to lead to a bunch of the items I will discuss in future blog posts.

What does all of this mean?

Good question. The short story, from what I've been able to collate, is that microservices promise a bunch of things that we all want in our applications: ease of deployment, isolation, technology freedom, strong encapsulation, extensibility and more. The thing that people don't always immediately see is the pain that they can bring. Most organizations are used to building and managing large monolithic codebases. They struggle in many different ways with these applications, but the way that they work is driven by the monolithic application. As we covered earlier, microservices are nothing like monoliths. The processes, organizational structures and even technologies used for monolithic applications are not going to work when you make the move to microservices.

Think about this: how does your team/organization deal with deploying your monolithic application to a development environment? What about to Dev and Test? Dev, Test and UAT? Dev, Test, UAT and Prod? I'm sure there's some friction there. I'm sure each one of those environment deployments takes time, is possibly done manually, and requires verification before it's declared "ready". Now think about all the time you spend working through those environment releases and imagine doing it for 5 applications. Now 10. What would happen if you used the same processes but had 25 applications to deploy? This is what your world will be like with microservices.

Part of what I hope to convey in this series of blog posts is a sense of the pain you're going to see. There's going to be technical pain as well as organizational pain. And, as I know from experience over the last 6 months, there is going to be a learning curve. To help you with that learning curve I've created a list of resources as part of a github repository (https://github.com/dbelcham/microservice-material) that you can go through. It's not complete. It's not finished. If you find something you think should be added please submit a pull request and I'll look at getting it added. As of the moment I'm writing this it's probably weakest in the Tooling, Techniques and Platforms section. I'm hoping to give that some love soon as it will be important to my writing.

In closing

Microservices are a topic that is gaining traction and there's a lot of information that needs to be disseminated. I don't think I'm going to post any earth-shattering new concepts on the topic, but I want one location where I can put my thoughts and where, hopefully, others can come for a cohesive read on the topic. I am, by no means, an expert on the topic. Feel free to disagree with what I say. Ask questions, engage in thoughtful conversation and tell me if you think there's an area that needs to be covered in more depth.

SaaS and Commodities

I'm doing some work right now that requires us to send SMS messages. The organization I'm working with has never had this capability before, so we are starting at ground level when it comes to looking at options. As part of our process we evaluated a number of different criteria on about four different SaaS options: twilio, plivo, nexmo and sendinblue. For reasons not relevant to this post, plivo was the initial choice of the client. We moved from analysis to writing a proof of concept.

The first stage of doing a proof of concept is getting a user account set up. When I tried registering a new account with plivo I got the following message:

[Screenshot: the error message returned by the plivo registration page]

I did, however, receive an account confirmation email. I clicked on the link in the email and received the same message. Thinking that this might just be a UI issue and that the confirmation was accepted, I decided to try to log in. Click the login button and wham…same error message…before you even get to see the username and password screen. This, obviously, is starting to become an issue for our proof of concept. I decide, as a last resort, to reach out to plivo to get a conversation going. I navigate to the Contact Us page to, once again, see the same error message before I see the screen. At this point the website is obviously not healthy, so I navigate to the status page and see this:

[Screenshot: the plivo status page reporting all systems healthy]

So everything is healthy…but it's not. A quick glance at their twitter account shows that plivo attempted to do some database maintenance the day prior to this effort and they claimed it was successful. Related to the database maintenance or not, I needed to move on.

This is the part that gets interesting (for me anyways). Our choice in SMS provider was a commodity selection. We went to the store, looked on the shelf, read the boxes and picked one. It very well could have been any box that we selected. But the fact that making that selection was so simple means that changing the selection was equally simple. We weren't (in this specific case, which may not always be the case) heavily invested in the original provider so the cost of change was minimal (zero in our case). All we had to do was make a different selection.

This highlighted something that hadn’t clicked for me before. Software as commodities makes our developer experience both more dynamic and more resilient. We are able to change our minds quickly and avoid unwanted interruptions easily. The flip side of the coin is that if you’re a SaaS provider you need to have your A game on all the time. Any outage, error or friction means that your potential (or current) customer will quickly and easily move to a competitor.

Sharpening chisels

I'm working on a cedar garden gate for our back yard. It's all mortise and tenon joinery, which means I make a lot of use of my Narex bench and mortise chisels. The more you use chisels the duller they get. Dull chisels cause you two problems: you can't be as precise with them, and you run the very real risk of amputating a finger. As much as I have two of each finger I really do want to keep all eleven of them. Getting tight-fitting tenons requires fine-tuning their thickness by the thousandth of an inch. Both of those fly directly in the face of what dull chisels are good at…so tonight was all about sharpening them up.

There are a number of different ways that you can sharpen edged tools (chisels and hand planes). There are machines, water stones, Arkansas stones, diamond stones or, my personal choice, the "Scary Sharp Technique". For those of you that couldn't be bothered to click that link and read through the original usenet posting on the topic in detail (and who can blame you, this is a software blog after all), here's the TL;DR for Scary Sharp.

  • Sharpening is done with wet/dry automotive sandpaper, not stones
  • Progression is made to finer and finer grits as the edge gets sharper, e.g. 400 –> 800 –> 1200 –> 2000 grit
  • Sandpaper is glued down to a perfectly flat surface such as float glass, a tile, or a granite countertop (ask the missus first if you're planning on using the kitchen)

My station for tonight looked like this (400 grit to the left, 2000 grit on the far right):

[Photo: my sharpening station, grits laid out from 400 to 2000]

So, why am I boring you with all this detail about sharpening my chisels? There's a story to be told about software tools and woodworking tools. The part of it that I'm going to tell in this post is the part about maintaining and fine-tuning them.

Maintaining your tools

For me to be able to effectively, and safely, create my next woodworking project I need to constantly maintain my chisels (amongst other tools). I have to stop my project work and take the time to perform this maintenance. Yes, it's time that I could be using to get closer to being finished, but at what cost? Poor-fitting joinery? Avoidable gouges? Self-amputation? The trade-off is project velocity for project quality.

Now think about a development tool that you use on a regular basis for your coding work. Maybe it's Visual Studio, or IntelliJ, or ReSharper, or PowerShell, or…or…or. You get the point. You open these tools on an almost daily basis. You're (hopefully) adept at using them. But do you ever stop and take the time to maintain them? If it weren't for auto-updating/reminder services, would you even keep as close to the most recent release version as you currently do? Why don't we do more? I currently have an install of ReSharper that I use almost daily that doesn't correctly perform the clean-up command when I hit Ctrl-Shift-F. I know this. Every time I hit Ctrl-Shift-F I cuss about the fact that it doesn't work. But I haven't taken the time to go fix it. I've numbed myself to it.

Alternatively, imagine if you knew that once a week/month/sprint/<timeframe of your choosing> you were going to set aside time to go into the deep settings of your tool (e.g. ReSharper | Options) and perform maintenance on it. What if you smoke-tested the shortcuts, cleaned up the templates, updated to the latest bits? Would your (or my, in the above example) development experience be better? Would you perform better work as a result? Possibly. Probably.

Tools for tools

I have tools for my woodworking tools. I can't own and use my chisels without having the necessary tools to keep them sharp. I need a honing guide, sandpaper, and granite just to be able to maintain my primary woodworking tools. None of those things directly contribute to the production of the final project. All their contributions are indirect at best. The same goes for any number of tools that I have on my shelves. Tools for tools is a necessity, not a luxury. Like your first-level tools, your second level of tools needs to be maintained and cared for too. Before I moved to the Scary Sharp system I used water stones. As you repetitively stroke the edge over the stone it naturally creates a hollow in the middle of the stone. This rounded surface makes it impossible to create a proper bevel angle. To get the bevel angle dead on I needed to constantly flatten my stone. Sharpen a tool for a while, flatten the stone surface…repeat. Tools to maintain tools that are used to maintain tools.

Now think about your software development tools. How many of you have tools for those tools? Granted, some of them don't require tools to maintain them…or do they? Say you make some configuration changes to git bash's .config file. Maybe you open it with Notepad++. Now you have a tool (Notepad++) for your tool (git bash). How do you install Notepad++? With Chocolatey, you say? Have you been maintaining your Chocolatey install? Tools to maintain tools that are used to maintain tools.

Sadly, we developers don't put much importance on the secondary and tertiary tools in our toolboxes. We should. We need to. If we don't, our primary tools will never be in optimal working condition and, as a result, we will never perform at our peak.

Make time

Find the time in your day/week/sprint/month to pay a little bit of attention to your secondary and tertiary tools. Don't forget to spend some quality time with your primary tools. Understand them, tweak them, optimize them, keep them healthy. Yes, it will take time away from delivering your project/product. Consider that working with untuned tools will take time away as well.

Choosing AOP technologies

I get asked how to pick an AOP technology on a pretty regular basis. The other day while I was answering the question I got to thinking that there is a pretty logical flow to selecting which technology to use on a project. It seemed like a pretty good opportunity to document my thoughts so…here you go: a PDF of a flowchart outlining the decision process I use when picking AOP tools. I've also put it up in a github repo so that you can submit pull requests with alterations and/or additions. Make your case in the pull request and I'll look at modifying the document. And, yes, I know…it's in Visio…whatever.

Arduino and logging to the cloud

I participated in a lunch and learn today that demo'd the capabilities of logentries.com. It was impressive how easily you are able to parse, analyse and digest logging information. Once you have the log data being pushed to logentries.com there are any number of different ways that you can play with it. Seeing that, and knowing that we were going to push for it on my current project, I decided to take a look at it tonight from a slightly different angle. Instead of importing a nuget package and pumping data into it from that direction, I figured I'd try to feed the service data at a much more raw level. Over the last few weeks of working with my Arduino I've come to appreciate how raw network communications can be…so why not just go there.

The first thing I had to do was set up an account at logentries.com. It's easy to do and it gives you 30 trial days of the full suite of features before reverting back to a free tier. There are a lot of different options for setting up logs once you're in the system. At first I wanted to try sending log entries via a REST endpoint since that was what I knew best from my previous work with the Arduino. logentries offers a REST endpoint (HTTP PUT), but it's being deprecated. So I looked at the other raw API options: Plain TCP/UDP and Token-based.

Plain TCP/UDP

This type of endpoint is all about sending TCP and/or UDP packets containing your log entries to an endpoint. The tricky thing with it is that it ties you to a specific IP address as the source of your log entries: there is a 15-minute window during which incoming messages and their source IP address are linked to your account's log. Not a horrible thing, but a setup restriction nonetheless. More information on how it works can be found here.

Token-based

Like the Plain TCP/UDP option, all log traffic is sent to an endpoint via TCP and/or UDP. The difference is that your log data will contain a token and that will be used to tie the messages you send to the account you’re using. This is much easier than worrying about getting the right IP address linked to the account in the first 15 minutes of the log’s life. I chose this option because of the simplicity. More info on it here.

Arduino code

I've been playing with sending data from my Arduino to Azure Mobile Services (more on that another time) over the last few weeks. As a result I had some pretty good code written (mostly borrowed from Adafruit examples) to hook my Arduino up to WiFi. I bought a CC3000 shield from Adafruit for my Uno and made use of it for this project too. There's a lot of boilerplate code to get the WiFi up and running, but that's not the interesting part. What you're here to see is how the data is sent to logentries.com after a connection has been established.

   1: void Log(char logEntry[100]){
   2:   t = millis();
   3:   if (!logger.connected()){ 
   4:     Serial.println("Connecting logger");
   5:     do {
   6:       logger = cc3000.connectTCP(log_ip, 80);
   7:     } while ((!logger.connected()) && ((millis() - t) < connectTimeout));
   8:   }
   9:   Serial.println("logger connected");
  10:   
  11:   if (logger.connected()) {
  12:     Serial.println("logging");
  13:     logger.fastrprint(LOG_TOKEN);
  14:     Serial.print(LOG_TOKEN);
  15:     logger.fastrprint(" ");
  16:     logger.fastrprintln(logEntry);
  17:     Serial.println(logEntry);
  18:     logEntry[0] = 0;
  19:   }
  20: }

There are a couple of things going on in this method.

  1. On line 6 we establish a TCP connection to the logentries.com endpoint (data.logentries.com).
  2. If the connection to the endpoint is made successfully we move on to sending the log entry. Lines 13, 15 and 16 make this happen. "logger" represents the connection to the endpoint and we call .fastrprint(…) to send data and .fastrprintln(…) to send data with a line ending.

All we need to send out in our stream to the endpoint is the log data we want to include and the token we got when we created the log on the logentries.com website. By sending the token as a .fastrprint, a space as a .fastrprint, and the message as a .fastrprintln we're essentially sending all three of these pieces of information as one line to the endpoint.

The output

Here’s what things look like when you look on the website.

[Screenshot: the log entries appearing in the logentries.com UI]

Now you can start making use of logentries’ tagging and reporting functionality to understand how your Arduino code is working.

Source code for the sample project is available on github: https://github.com/dbelcham/logentries_arduino

Summer is over…

…and I can start looking at home automation again. So my first bit of research post-warmth has been to look at setting up a browser-based interface for opening, closing and displaying the current state of the garage door. Yah, I hear you…it's not that sexy a project. It fills a need though. You see, we've had a couple of incidents where a resident of this house (who will remain unnamed) has left the garage door open for extended periods of time. By extended, I mean all night. So wouldn't it be nice to have something where I can tap my phone, have a scheduled 'closing' event, or just be able to look and see if it is closed?

Enter our garage door system…the LiftMaster 8355

We have a keypad outside for entry to the garage, and by the door into the house there is a multi-function control panel that allows us to turn on the system's lights and open/close/stop the door. From what I gather, most garage door systems traditionally have had a 'short-to-trigger' style system which allowed you to easily interface with them and kick off the movement of the door. Chamberlain (the underlying manufacturer of this system) did away with that and has implemented a proprietary communication between the keypad and multi-function control. From what I can figure out, the communication across the wire is serialized so you can't simulate another device.

That’s okay though because Chamberlain has created something that they call MyQ. This is a system that requires you to purchase a ‘hub’ that enables wireless communication between phone apps and a website (hosted by Chamberlain) and your garage door. Here’s what the system looks like conceptually.

[Diagram: conceptual layout of the Chamberlain MyQ system]

So yah…there are no moving parts in there whatsoever. There is no architectural way around this. The wireless protocol from the LiftMaster to the MyQ hub is proprietary and unknown. There is some speculation that it is a modified implementation of Z-Wave, but nobody seems to know more than that. The communication between the MyQ Hub and the Chamberlain servers is also proprietary, although I think I could sniff the wires to see what is being sent. Certainly the calls made by the phone apps and your browser can be sniffed, and someone has created an unofficial API based on some of that work. That API doesn't help the overall architecture any. The two main points in the architecture are the MyQ hub (which is an additional $129 US investment that you must make) and the Chamberlain servers.

If the Chamberlain servers go offline (for maintenance or otherwise) the whole system goes offline. You can be sitting 5 feet from the MyQ hub clicking furiously on your phone app and the door will not move. Granted, you could walk over and push the multi-function panel button, but that's not the point here. If you can't get data service on your smart phone, neither the phone app nor the website will be of use to you since you can't get to either. The problem is that both of these are known points of failure that you and I have no control over. The success of our "smart home" experience is being entrusted to someone else.

Since you're here at my blog, you're likely a developer, so let me put this to you another way. Today nuget.org experienced an outage. Not a "server is down" outage. Not an "internet is unavailable" outage. It was an outage caused by a bad DNS record. Remember when Azure had that happen? Not cool, right? Nuget.org being down today was one of those things that probably really annoyed a whole bunch of developers. It likely didn't stop them from doing some work, but it was another burr under the saddle. The whole architecture implemented by Chamberlain has the same potential. DNS issues, routing issues, maintenance, etc…they can all become a burr under the saddle of our smart home experience. I've had my share of burr under the saddle moments and as I get older I'd like to have fewer.

To me that means eliminating moving parts. The first thing I want to do is drop the reliance on the Chamberlain servers. That takes a lot of the possible fail points out of the equation. Ideally I'd also like to drop the MyQ Hub. It's a $129 expense and it doesn't add any additional value to the smart-home experience beyond enabling communication to the LiftMaster. That leaves two options:

  1. direct access to the wireless protocol used to communicate between the LiftMaster 8355 and the MyQ Hub
  2. direct connectivity (short-to-trigger) at the LiftMaster

Today I took to Twitter to ask Chamberlain why they don't want to enable option #1 as a direct API. Ironically, their Twitter handle is @chamberlaindiy…yet you can't DIY anything with this system.

So they think that keeping things proprietary and in a tight circle of integrators is the best thing since "…safety & secruity are our top priority". Essentially they're promoting security through obscurity, which is commonly accepted as one of the most dangerous things you can do with a security system. Think about the security issue a bit more practically though. Is the garage door the biggest attack surface that your house offers a criminal? Is it the easiest? I'd answer no to both of those questions and argue that the windows and exterior doors offer more options for entry, and that those options are easier than hacking your garage door opener. Even if you did hack the garage door, and get it open, an attached garage will usually have an exterior-grade door between it and the house. Maybe I'm a chronic door locker, but I treat that door like any other exterior door when it comes to locking it. So really, at a practical level, the only thing that you're trying to protect by having proprietary communications is whatever is in the garage. The worst-case scenario is a poorly implemented device/system using the API (much like was exposed at Black Hat 2013 on the proprietary Z-Wave home automation protocol). Heck, that can happen right now if one of Chamberlain's integration partners does a poor job of implementing the proprietary MyQ protocol.

@chamberlaindiy reached out to me to have a conversation offline about my concerns, so hopefully sometime in the near future that will happen and maybe…just maybe…I can convince them that opening up a well-written communications API would be in their best interest.

PrairieDevCon 2014 content

We’re just wrapping up at the conference and it’s time to put up our materials. Thanks to everyone who attended my talks. Feel free to contact me if you have any questions.

If you’re interested in my Circuit Breaker implementation you can find it at https://github.com/dbelcham/dervish

A solid foundation

As I started going through the ideas I had for automation and connectivity in the house, one thing became very obvious: everything required some sort of connectivity. It could be coax for cable/satellite TV, or Cat5 for phones, or Cat6 for network and AV distribution. The common denominator was that there needed to be some kind of connectivity.

I've dabbled on the IT Pro side of the fence in the past, so I know that cobbling together a wiring solution was likely to end up in a world of pain. Rather than do that, I decided that one of the first things that needed to be done to the house was to add some wiring. As with many "stock" houses these days, the only wiring done was coax to 3 locations for TV and some Cat5e run for phones. Everything terminated at the power panel with hardly any extra cable to use. The way that our house is set up, the power panel is in an awkward location that didn't lend itself well to much of anything. There was no way I was building an accessible wiring closet in that location without completely blocking the power panel, so I had to come up with another option (plus, the area near the power panel, awkward as it is, was quickly designated the future wine cellar…priorities, right?).

The location that I ended up settling on was about 15ft away from where all the builder-installed wiring was terminating. So I had a couple of problems on my hands. First, I needed to get more wiring run, and second, I needed to extend the current cabling to this new location.

Extending cabling

As I said earlier, I had Coax and Cat5e that needed to be extended. The Coax isn't so hard. There are plenty of male-to-male connectors available. All you need to do is buy/build some cables to get from the current endpoint to the desired endpoint. Because I hate sloppy cabling I decided to custom make my cables so that they fit precisely to length. A few tools, 30 minutes in Home Depot watching a YouTube video on how to crimp ends on, and I was good to go. Because I didn't want the cables running willy-nilly under our stairs, I spent some time with a speed bore and punched holes in the studs so that I could run the Coax through the walls and have it pop out right where I needed it.

[Photo: Cat5e 110 punchdown junction box]

The Cat5e extension was a bit more of an issue. There really aren’t that many ways to extend network cable. I did manage to find a set of 110 punchdown boxes though. Wire goes in one side and punches down; wire comes out the other side where it was punched down. A small PCB board in the box makes all the connections for you. So, like the Coax, I custom cut some Cat5e, ran it through the studs and ended it where my new termination location was going to be.

Running more wire

Most new houses are built with no data wiring in them. It seems that the common belief is that WiFi is ubiquitous and convenient enough that there's no value in doing so. I disagree. WiFi is convenient. It is easy to procure and set up. It doesn't, however, offer a very good data transfer rate for most devices. Yes, 802.11n offers decent speeds, but it's nothing compared to gigabit, and if the WiFi signal is weak in an area of the house, the data connection will be slower than advertised. On top of that, not all the devices in our house are WiFi-enabled, so they either have to sit near the router or be off the network. Neither of those options will work for us here. And don't get me wrong, there will be WiFi in the house.

So to fill my need for gigabit speed I got some electricians to stop by and run a bunch of Cat6 cables for me. Each run ends in the basement termination location. Here’s what I ended up getting:

  • 4 runs to the upstairs TV area (more on why 4 in a future post)
  • 4 runs to the main floor TV area
  • 1 run to each of two different office areas
  • 1 run to the laundry room (again, more in a future post)

Unlike the home builder, I had the electricians leave 20+ feet of extra cable on the termination end just in case I changed my mind about the termination location. Most of that extra cable, once trimmed, has gone to building patch cables so it wasn’t a waste at all.

If you have the chance, do get data cabling done before the walls in a house are closed up…or get lots of empty conduit run. We were lucky that all that cabling only required one hole in an inner closet wall. Not much damage, but the time spent getting the cables pulled sure added to the cost.

Termination point

Once I had cables run to a centralized location I needed to figure out what I was going to do to manage the termination of these feeds. After some googling around I found out about media enclosures. A media enclosure is just a metal box that gets installed between two wall studs and gives you a solid platform to mount different devices to. I had a bunch of small devices that I wanted to house, so this seemed like a good idea. In the end it is home to my 8-way coax splitter, 8-way phone junction point, cable modem, cable-to-phone box and a WiFi router (more on that in a later post too).

I waffled on the idea of terminating the long runs in this box. I knew it wasn't the cleanest or most flexible solution, but for something like the coax lines I likely wasn't ever going to change their configuration, so in the end simplicity won out. All of the Coax runs terminate in the enclosure. None of the data cables or phone runs go to the media enclosure. The connection between the cable modem and the WiFi router stays contained in the enclosure, and a single data run leaves the WiFi router and the enclosure to connect to a gigabit switch. The same is true for the cable-to-phone box; all of its connections are kept in the enclosure and only 4 cables from the phone junction point exit the enclosure. In the end there is 1 Coax cable into the enclosure and 1 Cat6, 4 Cat5e and 3 Coax out of the enclosure.

Now I needed to manage the termination of the 12 or so data lines and the 4 phone lines that I had coming to this central location. Unlike with the coax lines, flexibility in configuration was going to be a huge benefit here. To that end I ran all of those cables into a rack and terminated them in a patch panel. I also terminated the line from the WiFi router in the patch panel. This gives me the ability to directly connect any data line wall jack in the house to the internet connection. I can also create a physically separate network if I need/want to. Right now all required data lines are patched from the patch panel to the switch, giving full gigabit connectivity within the house.

Extending WiFi

I hate weak WiFi signals. To combat this I put a WiFi router (configured as an access point) on each floor and connected it via one of the gigabit data runs back to the main switch. With that I actually killed two birds with one stone: I was able to get stronger WiFi everywhere, and I got a 4-port switch in those locations. The 4-port switch actually turned out to be very useful. At one TV location all of the following devices can be connected:

  • Xbox
  • WDTV Live Hub
  • TV
  • HD cable box
  • DVD player (this one lost out as we rarely use it)

 

The end configuration

In the end it all logically looks something like this

[Diagram: the logical network layout of the finished installation]

And, amazingly enough, it all works as desired and with all of this I now have a foundation to add home automation components onto.

Home Automation

Recently we moved into a new house. One of the things that I have always wanted to do was wire up a house and automate as much of it as possible. So here’s my chance!

This isn't going to be something that happens overnight, and as proof you only need to look at how long it's taken to get the first parts done. Because I'm taking an incremental approach to adding functionality, that is going to be one of my primary concerns when choosing technology and hardware.

Overall I have an idea, but no big detailed plan. I'm taking an agile approach to things. I'm going to wait until the absolute last responsible moment to decide on everything. That said, I do have that big-picture idea, and here it is.

The idea


  • Everything that gets automated must be reachable via some form of communication. I want to build my own central automation platform/software to integrate any/all of the different technologies that end up in the house, and if I can't program against them then I can't do that.
  • Automation should be focused on providing value and usefulness. For example, automated blinds are nice. But once you see the 20ft entryway in our house and the 15ft climb to the window in it, and then combine that with my fear of heights, you can make the case that automated blinds would be both valuable and useful.
  • Automation should not be intrusive to install. I do not want to have to rip open walls just to add an automated item. I understand that there will be a small number of situations where walls will have to be opened up. If there are two options for a given need and one requires no drywall work, then it shall be the choice.
  • While preferred, off-the-shelf solutions will not be the sole way to accomplish an automation task. I have dabbled enough in Arduino and embedded coding to know that if I can make something that better fits my needs then I will.

With those concepts in mind I have started researching and, at the time of this writing, have started my first project, which I'll cover in the next post. Until then, here's a list of some of the ideas (some good, some bad) that are in the current backlog.

  • blinds
  • zone/scene controlled lighting
  • HVAC control
  • irrigation (probably should get a lawn first)
  • whole house audio
  • centralized audio/visual components
  • water leak detection
  • utility shut off

Let the games begin…

The myth of “Best Practices”

TL;DR – When you see a "Best Practices" article or conference session, read or attend with caution. It's likely not going to help you with your current problems.

Today I read a very informative blog post about passwords and the security that they don’t provide. The one thing that stood out in that post more than anything else was the following sentence:

“Best practice” is intended as a default policy for those who don’t have the necessary data or training to do a reasonable risk assessment.

You see, I've always cringed when I've read blog posts or seen conference sessions that claim to provide "Best Practices" for a given technology or platform. People read or attend feeling that they will leave with a profound solution to all of their problems within that sphere. In the end these sessions don't help the attendee/reader. They lack two important things: context and backing data.

"Best practices" are an attempt to commoditize solutions to what usually are very complex problems in even more complex environments. I have seen "always use HTTPS for communication between browser and web server when authenticating onto websites" as a best practice. I'm sure you have too. But does that make any sense when the traffic between the browser and web server only exists on an internal network? Possibly, if the system needs to be hardened against attack from internal company users, but this is a pretty rare scenario. So what benefit do we get from blindly following this "Best Practice" for our internal system? We have to purchase, manage, renew, deploy and maintain SSL certs (amongst other things). And to what benefit, if the risk of attack from our internal users is deemed to be low (which is how most organizations I've experienced would categorize it for their internal apps)?

The "Best practice" of always using HTTPS is a broadly painted practice intended to cover more situations than necessary. Why? Well, these "Best practices" are intended for organizations and people that "…don't have the necessary data or training…" These organizations and people need solutions that err on the side of caution instead of being focused on their needs. In essence, "Best Practices" are intended for audiences that are either too lazy or too uninformed about their scenarios, tools or platforms to make decisions on their own.

I know that I'm using a security scenario and referencing a security-related blog post. On top of that I used phrases like "side of caution". Don't mistake this as a condemnation only of "Best Practices" for security-related matters. I intend to condemn all "Best Practices" using the same arguments. Regardless of whether those "Best Practices" are for MVC, IIS hardening, network security, design patterns, database technologies or anything else that we deal with in software development, I opine that they are worthless. Well, they have one use: to identify organizations that have neither the interest nor the capability to assess their specific situations and needs.

The *Specified anti-pattern

I've spent far too much energy on this already. But it needs to be said to a broader audience. When your WCF service code creates *Specified properties for struct properties, you're doing more harm than good.

WCF allows you to define certain output elements as optional. This means that they won't be sent in the message body. When the message is deserialized, the message object will contain a property for that element, but somehow there must be an indication that the element wasn't received. If the type is nullable (like a string), no problem. But if the type is a struct, which isn't nullable by definition, how do you indicate the lack of a value? Welcome the *Specified boolean property.

So if our message has a DateTime element that has been marked as optional on transmission, and there is no data, that element won't be included in the message. When WCF deserializes that message into the object, the property for the element isn't nullable, so it has to put a value into it. In the case of DateTime it will put in DateTime.MinValue, which is an actual valid date. So for you to know that there wasn't a value (or element for that matter) you have to check the correlated *Specified property. Now the consumer of the WCF service has to write if…else statements in their code to translate the lack of value into something meaningful, like a Nullable<DateTime>.
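
Here's a minimal sketch of what that consumer-side translation tends to look like. The property names are made up for illustration; they aren't from any particular generated proxy.

    using System;

    // Hypothetical shape of a generated message type with an optional DateTime
    // element. The names are illustrative.
    public class CustomerResponse
    {
        public DateTime LastContactedOn { get; set; }
        public bool LastContactedOnSpecified { get; set; }
    }

    public static class CustomerTranslator
    {
        // The consumer has to consult the *Specified flag just to know whether
        // the real value means anything - the leaky abstraction in action.
        public static DateTime? GetLastContacted(CustomerResponse response)
        {
            if (response.LastContactedOnSpecified)
            {
                return response.LastContactedOn;
            }

            return null; // the absence of data, expressed as data
        }
    }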

As soon as you see if…else statements like this you can be assured that you have a leaky abstraction. The consumer of the WCF service has too much knowledge of the WCF service's implementation details. The consumer shouldn't have to look at one value to know whether another value should be empty or not. It should just look at the value and be able to say "Oh, this is empty". That's why we have nullable types. In a lot of situations having no value is a valid state for an object or property. Worded another way, the absence of data is data in itself.

If we have to deal with checking the *Specified properties, we've just introduced a piece of conditional logic into our application for every pair of properties that uses this pattern. Conditional logic implementations are some of the easiest code to get wrong. In this case you may get your true and false if…else conditions reversed. You may simply forget to do the conditional check. The use of a pattern that requires the implementation of conditional logic immediately makes your code more brittle and error-prone.

On top of that, patterns like the *Specified one are not normally seen in the wild. The inexperience that people have with it means that they will make mistakes, like forgetting to check the *Specified property before working with its partner property. Again, we see the pattern introducing possible points of failure.

All of these problems could be alleviated if we adhered to two ideas: good encapsulation and null data being valid data. Until then, avoid the *Specified pattern like the plague.

PhoneAnnotations

Recently I was trying to find a good DataAnnotation extension that would provide validation for phone numbers. I stumbled on this blog post from AppHarbor and decided that I should take that idea and make something of it. With that, I introduce Igloocoder Phone Annotation. You can grab it from github or nuget.
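
For context, the general shape of a phone number validator built on DataAnnotations looks roughly like the sketch below. This is illustrative only and not necessarily how Igloocoder Phone Annotation is implemented; check the repository for the real code.

    using System.ComponentModel.DataAnnotations;
    using System.Text.RegularExpressions;

    // Illustrative only: a bare-bones phone number ValidationAttribute in the
    // DataAnnotations style. The real package may differ.
    public class NorthAmericanPhoneAttribute : ValidationAttribute
    {
        private static readonly Regex Pattern =
            new Regex(@"^\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}$", RegexOptions.Compiled);

        public override bool IsValid(object value)
        {
            if (value == null)
            {
                return true; // leave "is it required?" to [Required]
            }

            return Pattern.IsMatch(value.ToString());
        }
    }

    // Usage on a model:
    public class Contact
    {
        [NorthAmericanPhone(ErrorMessage = "Please enter a valid phone number.")]
        public string HomePhone { get; set; }
    }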

Support lessons learned

It's been quiet around here for the last while. I've been spending my time and energy on some side projects that don't really pertain directly to the art of programming. They do, however, have a degree of interaction with a programming community…and that's where it all goes sour.

I've had two run-ins with this programming community over the last few months, both times related to support. I've been playing around with creating things for a flight simulator (www.flightgear.org to be precise). One thing I have been working on is modelling an aircraft's flight and autopilot systems. Again, I hoped that some of the elements of programming a simulated computer system might lead to some greater development enlightenment, but the ultimate desire was to have a great simulation of this specific aircraft.

In just the last couple of weeks I ran into a problem with the graphics functionality of the software. I'm still not sure what it is. FlightGear is open source, so I could dive into the codebase and figure out what is going on, but I have absolutely zero desire to look at a bunch of C++ graphics code. Since this feature is a fundamental piece of framework support for what I am trying to do, what I really want is for it to just work, or for a fix to be made. My mentality is that of your mother and mine: I don't care how the sausage is made, I just want to eat it.

With that in mind I log a defect with all of the details that the project asks for in a defect entry…and then I wait. A few weeks, and several nightly builds, later I update to the latest build and the problem still persists. So I update my defect. I'm already thinking about my options for moving to another flight sim when someone finally responds to the defect.

Lesson 1: The longer you wait to respond to defects, the more likely the defect reporter is to go find another tool/system to replace the problem system.

The first response to the defect is a request for me to turn some stuff, which I only vaguely recognize the meaning of, on and off. So I go looking through menu items, the project wiki and anything else I can find to see if I can figure out how to trigger these changes. Nothing. I'm stumped, so I report back as such.

Lesson 2: No matter how close you are to walking away, communication is always helpful.

The next messages follow a similar pattern but also include statements like "see the wiki", without links or specific topics that I should be referring to. So, again, I stumble around looking for the mysterious piece of information that is supposedly in the wiki. I find nothing related to the topic. I end up digging around and eventually finding something that works the same as what I was originally told to use.

Lesson 3: If your support technique is to send people to documentation to help gather debugging information, that documentation had better actually be comprehensive.

I'm told to increase the logging verbosity and "send the console output to a file". I get the verbosity jacked up without any problem, but I cannot get the output into a file. After a fair bit of research I find out that sending the output to a file isn't supported on Windows for this application. Well, isn't that just nice.

Lesson 4: If you know you’re going to be supporting different platforms/environments you need to be 100% sure that your support tools and techniques work equally well for all of them.

I have been told that I should reconfigure my hardware to ensure a known baseline. My hardware is not abnormal. In fact, it is using up to date drivers and out of the box configurations.

Lesson 5: Asking for hardware configuration changes makes you look like you’re flailing and unprofessional.

So after 2 weeks, and about 6-8 hours of effort, I'm absolutely no closer to a solution than I was before.

Lesson 6: If it takes the user more than a few minutes to start the process of gathering debug information for you, then it is you (the project/developer/team) that has failed, not the user.

Before anyone says "Oh, it's open source so you could just go fix it yourself", everyone needs to remember one thing: you'd never do this on a project for your employer. If you did, you'd probably not be long for employment with them. So why is it any different in OSS?

Lesson 7: An OSS codebase is not justification to throw good development and/or support practices out the window.

.NET Developer presentation resources

There are a lot of people who have lists of tools that developers should use. There are also a lot of people that have suggestions of how to make presentations ‘better’. Rather than duplicate all of that good information, I’m just going to supplement it with a few things.

Coding

I've been in a tonne of presentations (in person and screencast training) where the presenter (myself included) has insisted on typing out every single character of code manually. It's tedious to watch and error prone to do. Unless each keystroke has some invaluable purpose that your audience must see and hear about, don't do it. There are a couple of tools that can help you out, and you probably already have one or both of them: Visual Studio Snippets and ReSharper Live Templates.

If you're going to type out long lines of code, entire methods, method structures or more, consider adding most or all of that into a Snippet or Live Template. Here's a Live Template I used recently.

[Screenshot: a ReSharper Live Template defined for the three-letter shortcut "pcw"]

As you can see, all I need to type is three letters (pcw) and an entire line is filled in for me. In this case all I wanted to do was talk over the addition of the line with something like "…so let's just add a console write line here to output that the code executed…". The attendees didn't need to know the details of the whole line of code (what value would watching me write string.Format(…..) bring?).
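
If it helps to picture it, the expansion of a template like that is nothing more than an ordinary line of C# dropped into whatever method you're demoing. The exact text of my template isn't shown above, so treat this as a stand-in:

// Roughly what typing "pcw" and hitting Tab might expand to inside the current method:
Console.WriteLine(string.Format("{0} executed at {1}", "CreateCustomer", DateTime.Now));

The point isn't the line itself; it's that the audience gets to the interesting part of the demo without watching you type boilerplate.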

Source Control

Put the code that supports your presentation in source control…please. Put it on GitHub, Bitbucket, or Google Code. I don't care about the platform it's stored on, just put it somewhere that I can easily get to it. Then don't forget to tell us about it. Include a slide at the end of the presentation that has the repository URL on it. Nothing special is needed, just "Source code: www.github.com/myname/myrepo".

Your Details

Put your contact details at the end of your presentation. Set up a custom email account for this purpose if you're worried about being inundated afterwards. Even if you just put up your Twitter handle, at least you've given someone the opportunity to reach out to you for clarification on your presentation. Not everyone comes up to ask questions after the presentation ends.

On a related note, please, for the love of all things, don’t include an “About Me” slide at the start of your presentation. That’s what your bio on the conference website is for.

Slide Delivery

Make your slides available if the conference isn't going to. I know that the slides without the accompanying presentation aren't worth a crap for most people, but for those that did attend the session, did pay attention and do want to refer back to what they saw, the slides are an invaluable resource. Even if only for the URL that links to your sample code's repository or your contact info, having the slides available is valuable to attendees. There are a number of different sites out there that you can use to make your slides available while still preventing people from downloading the raw PowerPoint (or whatever you use) and turning your presentation into their presentation. Try SlideShare, authorSTREAM, or SlideBoom.

Branching for success

I've always struggled while trying to explain to teams and organizations how to set up their version control system so that the project can succeed. Ask almost any business person and they'll tell you they want the dev team to be able to release both at a steady, scheduled pace (versioned releases) and spontaneously when needed (hotfixes). Most organizations that I've dealt with have a very naïve version control plan and struggle immensely once projects head to production. Usually they flail around removing partially completed, or untested, features so that a hotfix can be released quickly. After they get the hotfix pushed to production they flail away trying to ensure (and not always succeeding) that the hotfix will be included in the next scheduled release.

For anyone that has seen this, it's an unmaintainable work pattern. As the application becomes more complex, ensuring that the product is stable when a hotfix is required becomes a game of chance and luck. Too many organizations pour endless time (and ultimately money) into maintaining this situation in a (perceived) working state.

A solid branching pattern can help ease the pain. Don't get me wrong, branching won't solve the problem for you by itself. You have to have a plan. How and when do we branch? Where do we branch from and where do we merge back to? What conventions do we have in place to identify the different branch purposes? All of these things need to be standardized on a team (at a minimum) before branching has a hope of enabling a team to do what every business seems to want.

I won't try to explain my branching policies on projects here. All I'd be doing is duplicating the work of Vincent Driessen over at nvie.com: http://nvie.com/posts/a-successful-git-branching-model/

AOP Training

A couple of announcements to make here. First, I’ve hooked up with the fine folks at SharpCrafters to become one of their training partners for Aspect Oriented Programming with PostSharp. We’re now offering a 2 day Deep Dive training course for the product. I’m currently working on writing the materials and every day I’m finding more interesting little corners of the tool. I’m really looking forward to some of the things that Gael has in store for v3 of it. Contact me (training@igloocoder.com) if you want more information about it.

Also, I've started working with the fine folks at SkillsMatter to offer an AOP course. This one is a much more general 2 day course that talks about AOP's purpose, uses and different techniques for implementing it. The first offering is coming up quickly (May 24-25) in London and I'm quite excited for that.

I'm kicking around a few other course and location ideas. If you would like to see an AOP or PostSharp course in your city, let me know and we'll see if we can make something happen.

Migration

I’m sure if you subscribe to the RSS feed of this blog you’ve probably been flooded with old posts in the last couple of days. That’s because I’ve changed blog engines and migrated to a different hosting scheme. The old blog, my wiki and an SVN server were all hosted on a Virtual Private Server. Now VPSs tend to get expensive when all they’re doing is exposing a website. So I was looking for a way to replace all of them. Here’s what I did.

The blog

My old blog was being hosted on an archaic version of SubText. I can't remember the last time I upgraded it or even looked at modifying the theme. It just worked. Going forward I wanted to make sure that would continue, but I also wanted something where I could easily modify the codebase to add features. Hopefully you'll see some of them on this site over the next few months. As always there was an underlying thread of learning to my desires, so I was looking to use something that I could pick up new skills from. I settled on RaccoonBlog. Ayende built it as a demo app for RavenDB, but it also just flat out works.

After some testing I dived into the migration project. There were three big things I needed to do. First, migrate the data. This was handled by the migration tool included in the RaccoonBlog codebase. Change the users that it creates and voila.

Second, create a good theme that met my corporate standards. This was harder. Oren never built a theming engine for RaccoonBlog, and I can’t blame him. Why would you when there’s CSS? So I had to dive into the CSS and make some changes. My first skillset is not as a web dev so this was a bit challenging. Add to that some HTML5 stuff that I’d never seen before and you have the recipe for a lot of cussing.

The third thing was getting it deployed to a new hosting site. I made the choice to go with AppHarbor since it seemed to have the right price point, and I liked the continuous deployment option that is built into it. Deployment was pretty easy: follow the instructions for setting up the application and add the RavenHQ add-on. Since I had imported the data into a local RavenDB instance earlier, all I needed to do was export that and import it into the RavenHQ instance. Connecting the application to the RavenHQ instance was a bit trickier. AppHarbor has the ability to replace connection strings during its deployment cycle. To make sure that happened I had to consolidate the connection string out of its own config file and into the main web.config. I also had to change the RavenDB initialization code in the Application_Start for RaccoonBlog so that it constructed the connection string correctly. It's a bloody pain since the default development connection string doesn't work with the new code. I'll have to figure that out and post more about the overall process. The other thing I needed to do was add a NuGet package to mute the AppHarbor load balancer's tendency to add port numbers to URLs. That was easy as well. Commit the changes to GitHub and, voila, I have a working blog. Change the DNS settings, add a custom HostHeader to AppHarbor ($10…the only monthly cost for the blog) and it's like I never had the old one.
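
For the curious, the initialization I ended up with was roughly the following shape. This is a from-memory sketch rather than the exact RaccoonBlog code, and the connection string name ("RavenDB") is an assumption; use whatever name the RavenHQ add-on injects into web.config in your own setup.

using Raven.Client;
using Raven.Client.Document;

public static class RavenConfig
{
    public static IDocumentStore CreateDocumentStore()
    {
        // AppHarbor rewrites the named connection string at deploy time, so the
        // store is built from the name rather than a hard-coded Url/ApiKey.
        var store = new DocumentStore { ConnectionStringName = "RavenDB" };
        store.Initialize();
        return store;
    }
}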

The custom app

I have a custom time tracking and invoicing application that I show little love to. It just works…not well, but it works. ASP.NET, MonoRail and SQL Server. Again, off to AppHarbor to create an app and add the SQL Server extension. My biggest problem here was moving from dependencies stored in source control to using NuGet. After I got that sorted out it was easy.

The other big hassle with a SQL Server backed system on AppHarbor is that there’s no way to do backups or restores. So you have to use SQL Management Studio to script the DDL and the data from your existing database and run those scripts against your AppHarbor hosted database.

Source control

This was easy. For my public facing repositories GitHub works just fine. I did, however, have a number of private repositories. They weren’t active enough to warrant buying private repos from GitHub, so I looked for alternatives. I figured that if I was going the git route, I might as well go in wholesale. BitBucket by Atlassian offers private Git repositories for free. So there’s that problem solved. The best part of both repository providers is that they integrate seamlessly with AppHarbor for continuous deployment.

Wiki

I had a small ScrewTurn wiki site that I hosted for documentation of my public facing repositories. Both GitHub and BitBucket offer wiki functionality that is directly tied to the repositories themselves so there was no effort required to make this migration.

Conclusion

Getting from a VPS to a fully cloud-hosted situation was less painful than I thought it would be. I had been putting it off for a long time because I didn't think my situation was going to be easy to handle. I always complain about clients who think "we have the hardest/biggest application of anyone anywhere". Guess what I was doing? After about 5-10 hours of research I had a plan, and I was able to implement it with ease. By far the most work I did was creating a theme/style for the blog. In the end I went from about $150/month in outlay to $10. Not bad for 20 hours of work.

Eagerness to fail

If your developers are eagerly taking blame for failures on your project, they're either:

a) buying into the concept of collective code ownership and committing to quality
     or
b) trying to get blamed for everything so that they can be fired and be rid of your place of employ.

PostSharp Training

I’ve hooked up with the fine folks over at SharpCrafters to build some training materials for their AOP product PostSharp. Starting in January of 2012 we will be offering training on the use of PostSharp for all your Aspect Oriented Programming needs. I’m currently working on writing the materials and every day I’m finding more interesting little corners of the tool. I’m really looking forward to some of the things that Gael has in store for v3 of it.

If you’re interested in getting some training on PostSharp, shoot me an email at training@igloocoder.com.

Professional Neglect and Clear Text Passwords

For the past few years I've been the recipient of a monthly reminder email from Emug (the Edmonton Microsoft User Group). The contents of that email are where the problems lie. Every month that email comes in and it contains 3 pieces of information (plus a lot of boilerplate):

  1. A link to the Emug mailing list admin site
  2. My username
  3. My password in clear text

It doesn't take much thought to know that storing clear text passwords is a prime security issue. Sending those passwords in emails doesn't make it any better. Emails can be intercepted. Systems can be hacked. It's happened before. Just read about the hack of PlentyOfFish.com, or the hack of HB Gary. Two things stand out in these attacks. First, PlentyOfFish stored its passwords in clear text, which made it easy to compromise the entire user base once access was achieved. Second, HB Gary (an IT security consulting firm, no less) had many users who reused the same password across different systems, which made it easy to hop from system to system gaining different levels of access.

Most web users don't heed advice to have a different password for every account they create. First, it seems unreasonable to try to remember them all. Second, most people believe that using their dog's name combined with their birth date is never going to be hackable. As system designers and operators (which the Emug membership is a professional community of) we should know that we can't do much to prevent users from choosing bad passwords. We can, however, take steps to ensure that those passwords are adequately protected.
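
To put something concrete behind "adequately protected": the bar is a per-user random salt and an iterated hash, so that a leaked database doesn't hand over every password directly. Here's a minimal sketch using the framework's PBKDF2 implementation (Rfc2898DeriveBytes); the sizes and iteration count are illustrative, not a tuned recommendation for any particular system.

using System;
using System.Security.Cryptography;

public static class PasswordStorage
{
    private const int SaltSize = 16;      // bytes of random salt per user
    private const int HashSize = 32;      // bytes of derived key to store
    private const int Iterations = 10000; // deliberately slow; tune upward over time

    public static string Hash(string password)
    {
        var salt = new byte[SaltSize];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(salt);
        }

        using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations))
        {
            var hash = kdf.GetBytes(HashSize);
            // Store the salt and hash; the clear text password never touches the data store.
            return Convert.ToBase64String(salt) + ":" + Convert.ToBase64String(hash);
        }
    }
}

A system storing passwords this way is physically incapable of emailing them back to you, which is exactly the property you want.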

So with all of that in mind I decided to call the Emug people out on their password practices. I sent them an email of concern along with a request that they take the time to do the correct, professional thing with regard to their members' passwords. The response I received back included…

I know what you're saying about the passwords though, the first one you get is randomly generated and if you ever did go on and change it to a common one then it is there within all the options you can also set it to the option of the password reminder. The option "Get password reminder email for this list?" is a user based control option and you can set that to your liking. It's in with all the digest options.

That’s great. So basically the Emug response was “You don’t have to see that we store your password in clear text if you just go uncheck this one box”. Jeez guys, thanks. So you’re suggesting that I should feel that my password is secure just because I’m not seeing it in an email anymore? Security through naiveté?

Most places / sites/ subscriptions now have an automated email reminder method. It does make you ponder its value but I think the focus on that this is a very low level security setting.

Okay…so because you think "most places/sites/subscriptions now have an automated email reminder" it's okay for you to follow the same bad practices? Really? What happened to professionalism? Or integrity? Yah, I know, that takes effort and you're just a volunteer running a community group. Except for one little thing: the members of that community entrusted you with their passwords. There was an implied belief that you would protect those passwords in an acceptable manner. Clearly you're not.

I also ask you to enumerate "most places / sites / subscriptions", please. I don't get an email from Google Groups, StackOverflow, etc. that contains my password in clear text. I know that those are professional companies and you're not, but remember that professionalism has nothing to do with the size or revenue of your organization.

The piece of the email that really rubbed me the wrong way was this:

The mailman list serve server and application is maintained centrally not by us for the record. It is more of a self-service model and is intentionally designed for little to no maintenance or requirement to assist an end user.

So you don't administer the system. That's fine. Yes, the current system may have been designed and implemented to require as little end user support as possible. That's fine too. Here's my beef: you have the choice to change what tooling you're using. I'm pretty sure that you're able to use Google to find replacement options. It will take some time and effort to see the change through, but don't you think the integrity of your members' passwords is worth it?

So to Brett, Colin, Ron and Simon: Please show a modicum of professionalism and take care of this issue. Since you chose not to continue the conversation with me via email, I’ve resorted to blogging. I’m submitting your mailing list email to www.plaintextoffenders.com. I’ll be contacting other community members in the hopes that they can get through to you. I suspect they won’t be able to, but I feel that I have a professional obligation to at least try.

UI Workflow is business logic

Over my years as a programmer I’ve focussed a lot of attention and energy on business logic.  I’m sure you have too.  Business logic is, after all, a huge part of what our clients/end users want to see as an output from our development efforts.  But what is included in business logic?  Usually we think of all the conditionals, looping, data mangle-ment, reporting and other similar things.  In my past experiences I’ve poured immense effort into ensuring that this business logic was correct (automated and manual testing), documented (ubiquitous language, automated testing and, yes, comments when appropriate) and centralized (DDD).  While I’ve had intense focus on these needs and practices, I’ve usually neglected to recognize the business logic that is buried in the UI workflow within the application.

On my current project I’ve been presented with an opportunity to explore this area a bit more in depth.  We don’t have the volume of what I have traditionally considered business logic.  Instead the application is very UI intensive.  As a result I’ve been spending a lot more time worrying about things like “What happens when the user clicks XYZ?”  It became obvious to us very early on that this was the heart of our application’s business logic.

Once I realized this we were able to focus our attention on the correctness, discoverability, centralization and documentation of the UI workflow.  How did we accomplish this then?  I remember reading somewhere (written by Jeremy Miller I think, although I can’t find a link now) the assertion that “Every application will require the command pattern at some point.” I did some research and found a post by Derick Bailey explaining how he was using an Application Controller to handle both an Event Aggregator and workflow services.  To quote him:

Workflow services are the 1,000 foot view of how things get done. They are the direct modeling of a flowchart diagram in code.

I focused on the first part of his assertion and applied it to the flow of user interfaces.  Basically it has amounted to each user workflow (or sequence of UI concepts) being defined, and executed, in one location.  As an example we have a CreateNewCustomerWorkflowCommand that is executed when the user clicks on the File | Create Customer menu.  It might look something like this:

public class CreateNewCustomerWorkflowCommand : ICommand<CreateNewCustomerWorkflow>
{
    private readonly ISaveChangesPresenter _saveChangesPresenter;
    private readonly ICustomerService _customerService;
    private readonly ICreateNewCustomerPresenter _createNewCustomerPresenter;

    public CreateNewCustomerWorkflowCommand(ISaveChangesPresenter saveChangesPresenter,
                                            ICustomerService customerService,
                                            ICreateNewCustomerPresenter createNewCustomerPresenter)
    {
        _saveChangesPresenter = saveChangesPresenter;
        _customerService = customerService;
        _createNewCustomerPresenter = createNewCustomerPresenter;
    }

    public void Execute(CreateNewCustomerWorkflow commandParameter)
    {
        // If the current screen has unsaved work, ask the user whether to save it first.
        if (commandParameter.CurrentScreenIsDirty)
        {
            var saveChangesResults = _saveChangesPresenter.Run();
            if (saveChangesResults.ResultState == ResultState.Cancelled) return;
            if (saveChangesResults.ResultState == ResultState.Yes)
            {
                _customerService.Save(commandParameter.CurrentScreenCustomerSaveDto);
            }
        }

        // Then run the create-new-customer dialog and save if the user confirms.
        var newCustomerResults = _createNewCustomerPresenter.Run();
        if (newCustomerResults.ResultState == ResultState.Cancelled) return;
        if (newCustomerResults.ResultState == ResultState.Save)
        {
            _customerService.Save(newCustomerResults.Data);
        }
    }
}

As you can see, the high-level design of the user interaction, and the service interaction, is clearly defined here. Make no mistake, this is business logic. It answers the question of how the business expects the creation of a new customer to occur. We've clearly defined this situation in one encapsulated piece of code. By doing this we have now laid out a pattern whereby any developer looking for a business action can look through these workflows. They clearly document the expected behaviour for the situation. Since we're using Dependency Injection, we can also write clear tests to continuously validate these expected behaviours. Those tests, when done in specific ways, can also enhance the documentation surrounding the system. For example, using BDD-style naming and a small utility to retrieve and format the TestFixture and Test names, we can generate something like the following:

public class When_the_current_screen_has_pending_changes
{
    public void the_user_should_be_prompted_with_the_option_to_save_those_changes() { }
}

public class When_the_user_chooses_to_cancel_when_asked_to_save_pending_changes
{
    public void the_pending_changes_should_not_be_saved() { }
    public void the_create_new_customer_dialog_should_not_be_displayed() { }
}

public class When_the_user_chooses_not_to_save_pending_changes
{
    public void the_pending_changes_should_not_be_saved() { }
    public void the_create_new_customer_dialog_should_be_displayed() { }
}

public class When_the_user_chooses_to_save_pending_changes
{
    public void the_pending_changes_should_be_saved() { }
    public void the_create_new_customer_dialog_should_be_displayed() { }
}

public class When_the_user_chooses_to_cancel_from_creating_a_new_customer
{
    public void the_new_customer_should_not_be_saved() { }
}

public class When_the_user_chooses_to_create_a_new_customer
{
    public void the_new_customer_should_be_saved() { }
}

As you can see, this technique allows us to create a rich set of documentation outlining how the application should interact with the user when they are creating a new customer.
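
The "small utility" that pulls those names out is nothing fancy: reflection plus a string replace. Here's a rough sketch of the idea; the class name, the console output, and the convention of fixtures starting with "When_" are assumptions for illustration, not our actual tool.

using System;
using System.Linq;
using System.Reflection;

public static class SpecificationReport
{
    // Prints fixture and test names as readable sentences, e.g.
    // "When the current screen has pending changes
    //    the user should be prompted with the option to save those changes"
    public static void Write(Assembly testAssembly)
    {
        var fixtures = testAssembly.GetTypes()
                                   .Where(t => t.IsClass && t.Name.StartsWith("When_"))
                                   .OrderBy(t => t.Name);

        foreach (var fixture in fixtures)
        {
            Console.WriteLine(fixture.Name.Replace('_', ' '));

            var tests = fixture.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly);
            foreach (var test in tests)
            {
                Console.WriteLine("   " + test.Name.Replace('_', ' '));
            }
        }
    }
}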

Now that we've finished implementing this pattern a few times, have I seen any drawbacks? Not really. If we didn't use this technique we'd still have to write the code to coordinate the screen sequencing; it would just be spread all over the codebase, most likely in the event handlers for buttons on forms (or their associated Presenter/Controller code). Instead we've introduced a couple more classes per workflow and centralized the sequencing in them. So the trade-off was a couple of extra classes per workflow in exchange for more discoverability, testability and documentation. A no-brainer if you ask me.

Is this solution a panacea? Absolutely not. It works very well for the application that we're building, though. Will I consider using this pattern in the future? Without a doubt. It might morph and change a bit based on the next application's needs, but I think the basic idea is strong and has significant benefits.

A big shout out to Derick Bailey for writing a great post on the Application Controller, Event Aggregator and Workflow Services.  Derick even has a sample app available for reference.  I found it to be great for getting started, but it is a little bit trivial as it only implements one simple workflow.  Equally big kudos to Jeremy Miller and his Build Your Own CAB series which touches all around this type of concept.  Reading both of these sources helped to cement that there was a better way.