Let’s talk about software design and refactoring in LabVIEW. When I first read Martin Fowler’s Refactoring book several years ago, it was very liberating. Before that, I had always felt I had to get things right up front, which was paralyzing. I had the idea that the design had to be perfect before I started coding. It would take me forever to get started on the coding phase, and once I got started, any change would throw me for a loop because it would mess up my “perfect” design.
First let’s talk about some of the various software design philosophies.
Big Design Up Front (BDUF)
What I was practicing is often referred to as Big Design Up Front (BDUF). In general, this is associated with the Waterfall model, which makes sense, since I did work for a company that built nuclear power plants. Our process was mired in bureaucracy. Behind BDUF is the idea that we need to have a complete design up front. That means before we start coding, we sit down and lay out all our classes and all our modules. Taken to an extreme, we would draw up a bunch of UML diagrams and then click a button to generate all of our code. This is actually possible in LabVIEW using the Symbio GOOP toolkit. I liken BDUF to an architect who draws up blueprints and hands them to a builder, who then simply follows them.
Just-in-Time (JIT) Design
In agile circles, you hear a lot about just-in-time design. This is the idea that you wait to design a feature until just before you implement it. You collect user stories about the features your users want, but you don’t actually design those features until right before you implement them. The idea is that requirements change and sometimes even disappear as we write more code and learn more about the problem. Any design work done before we are ready to implement something is viewed as premature optimization and a waste of energy. By waiting until just before implementation, we eliminate that waste.
Emergent design is something that is often talked about in Test Driven Development circles. It takes just-in-time design to an extreme. The idea is that instead of designing up front, or even just-in-time, we start by writing a test. We have no idea what our end code is going to look like; we just know this is the test we have to pass. Then, as quickly as possible, we write some code to make the test pass, even if it is ugly. Then we do a little refactoring and we repeat the process. After a handful of iterations, we look at the soup of code that we have and see if any design patterns or classes or modules fall out of it, and that becomes our design. The design emerges from the bowl of spaghetti we get from simply trying to make the tests pass.
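Since LabVIEW code is graphical, here is a minimal text-based sketch of that test-first cycle in Python. The function and its input format are invented purely for illustration; the point is the order of the steps, not the code itself:

```python
# Emergent design via TDD: the test comes first, then the quickest code
# that passes, then a refactor. parse_reading is a hypothetical example.

def test_parse_reading():
    # Step 1 (red): write the test first -- it defines the behavior we
    # want before any implementation exists.
    assert parse_reading("temp1, 21.5") == {"channel": "temp1", "value": 21.5}

def parse_reading(line):
    # Step 2 (green): the quickest code that makes the test pass.
    # Step 3 (refactor): tidy names and structure while staying green.
    channel, value = line.split(",")
    return {"channel": channel.strip(), "value": float(value)}

test_parse_reading()  # rerun after every small change
```

The same cycle applies in LabVIEW with a unit-test framework: write the failing test VI, make it pass, clean up the diagram, repeat.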
Top Down, Bottom Up
The above techniques all have their strengths and weaknesses. BDUF works well when the requirements are known ahead of time and don’t change. That rarely reflects reality and unfortunately, BDUF does not respond well to change. JIT and Emergent Design both tend to handle change well and work well for the design of small features but tend to fall apart when it comes to the overall system architecture, particularly when dealing with larger systems.
I prefer a compromise on all the above techniques. I call it Top-Down, Bottom-Up. I start at the top and try to understand the scope of the problem. I decompose that into a series of modules, each with a distinct responsibility. I map out the basic flow of messages and data throughout the system. Having a designated framework (I prefer DQMH) helps a lot with this step. I don’t spend a lot of time on this step. I just get a rough idea and build a very loose skeleton.
Then I switch to the bottom. I look to the data sources and sinks. I look at the hardware, network and serial connections, databases, files, etc. Then I use TDD to design and test each of these in isolation. I do this work with some idea of where they are going to end up in the bigger architecture. Then I integrate them into their respective modules and do some integration testing at the module level and then at the system level. As I am doing integration, there is invariably some refactoring required at all levels.
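One common way to test a data source in isolation, sketched here in Python rather than LabVIEW, is to inject a stand-in for the real connection. SerialSensor and FakePort are hypothetical names invented for this example:

```python
# Testing a data source in isolation: inject a fake in place of the real
# serial port so the test needs no hardware.

class FakePort:
    """Stands in for a serial connection; replays a canned response."""
    def readline(self):
        return b"21.5\n"

class SerialSensor:
    def __init__(self, port):
        # The port is injected, so tests can pass a FakePort while
        # production code passes the real connection.
        self.port = port

    def read_celsius(self):
        return float(self.port.readline().decode().strip())

assert SerialSensor(FakePort()).read_celsius() == 21.5
```

In LabVIEW terms, this is the same idea as wiring a simulated hardware class into a driver VI’s class input during unit testing.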
What is refactoring? Well, it is changing the internals of the code without changing the external functionality. We use a cycle of making small changes and rerunning our tests to make sure that we don’t break anything. We are not directly adding functionality when we are refactoring, but we are making changes that then enable us to add functionality. Kent Beck said it best: “for each desired change, make the change easy (warning: this may be hard), then make the easy change.”
The liberating power of refactoring is that it frees us from the BDUF notion that we have to get our design right at the beginning. If we can get it close, then refactoring can get us that last 10%. That leaves us plenty of flexibility in case we need to change direction along the way, which always seems to happen.
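To make the idea concrete, here is a tiny refactoring sketched in Python (scale_readings is an invented example): the test pins down the external behavior, which leaves us free to change the internals:

```python
# A refactoring in miniature: same inputs, same outputs, clearer code.

def scale_readings_before(readings, gain, offset):
    # Original shape: a manual loop with an accumulator list.
    out = []
    for r in readings:
        out.append(r * gain + offset)
    return out

def scale_readings_after(readings, gain, offset):
    # Refactored: a comprehension states the same intent more directly.
    return [r * gain + offset for r in readings]

# The test is the safety net: it must pass before and after the change,
# proving the external behavior did not move.
assert scale_readings_before([1, 2], 10, 0.5) == scale_readings_after([1, 2], 10, 0.5)
```

In LabVIEW the mechanics differ (rewiring a diagram instead of editing text), but the discipline is identical: small change, rerun the tests, repeat.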
If you are having trouble adding new features to your codebase, perhaps some refactoring might help. We’d be happy to talk about how we can help with that. Use the button below to set up a call.