How to start testing when there's little or no testing
Most of the projects I'm currently working on are web applications that are already built, varying in size and scope. Some are simple applications with a handful of pages showing some data. Others have a lot of functionality, ranging from chat services to e-commerce and payment processing.
Unfortunately, the majority of the projects I've worked on have very little or no automated testing. Some don't even have a test framework set up properly (despite their web frameworks providing testing out of the box!). Even worse, some projects do have tests, but after months or years of neglect, most of those tests are failing or no longer valid.
In my years of working with different companies, testing is typically one of the first practices to be abandoned during the software development cycle. Sometimes that decision is made early in development. Often it's made when it's "crunch time" (real or perceived - a whole different topic I could discuss). Either way, testing eventually becomes a second-class (or lesser) citizen in the software world.
I'm not saying we need to be testing all the time (or if you prefer, TATFT). I understand there are cases where it's pragmatic to defer testing when something needs to ship quickly. My issue is that, in my experience, once testing is skipped, it keeps getting skipped over and over again. And that's something I'm noticing more and more with the new projects I get handed.
I think one of my strengths as a developer is being able to jump into an existing codebase, navigate through it, and figure out where I need to begin working. Granted, Ruby on Rails takes a lot of the credit, since the project structure is roughly the same for any project using that framework. But each project is different enough that the experience I've accumulated over the years definitely comes into play. Without the safety net of a passing test suite, though, it's tough to know whether a change in one section broke another part of the site - or all of it.
So, what can a developer do to combat this? Here are a few tips that have worked for me on these types of projects.
If there's no testing framework properly set up, pick one and go with it
When a project I've been working on has no test suite, or an older test suite that no longer runs, I just pick whatever is easiest for me to implement and implement it. It's very easy to fall into the trap of over-analyzing the tools at your disposal - there are plenty, both tried-and-tested and shiny and new. But picking one tool, whichever it is, and starting to use it immediately builds momentum toward the goal of a more stable codebase.
As I mentioned before, Rails ships with testing built in. Personally, I prefer RSpec for all my Ruby projects. For JavaScript, you have tools like Jasmine and Karma. Newer languages, like Go, come with testing tools built in as well. So just pick something now and start.
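As a concrete example, here's roughly what that looks like for RSpec in an existing Rails project. This is a minimal sketch assuming a standard Gemfile and the rspec-rails gem; adjust for however your project is laid out.

```ruby
# Gemfile -- add rspec-rails to the development and test groups
group :development, :test do
  gem "rspec-rails"
end

# Then, from the project root:
#   bundle install
#   bin/rails generate rspec:install
#
# The generator creates .rspec, spec/spec_helper.rb, and spec/rails_helper.rb,
# giving you a place to drop your first spec.
```

The specific commands matter less than the fact that a basic setup takes minutes, not days.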
Start by testing all the changes you're making now
One area where I try to improve every single day is getting over the feeling of being overwhelmed, and it's something I run into consistently on projects with no testing set up. When there's no testing, I start to doubt everything I do, worrying that I'll inevitably break the app and won't be able to get anything done. This is especially true for larger apps where things are only ever tested in production.
Instead of freezing up until all the important parts are covered with tests, I start by writing tests for the code I'm working with at that moment. This works well because I should already know how this specific part of the application works, which makes the tests easy to write. The drawback is that since you're just beginning to build your test suite, it takes extra time to set things up properly. But the time spent now will be saved in the near future, both in hours and in a ton of frustration (and maybe even tears) avoided for yourself and future programmers.
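To make that concrete, here's the kind of first spec I mean. The `Order` model, `LineItem`, and the `#total` method are hypothetical stand-ins for whatever code you happen to be changing today; the shape of the test is what matters, not the names.

```ruby
# spec/models/order_spec.rb
# A first spec for the code I'm already touching. Order, LineItem, and
# #total are hypothetical stand-ins for whatever you're actually changing.
require "rails_helper"

RSpec.describe Order, type: :model do
  describe "#total" do
    it "sums the prices of its line items" do
      order = Order.new(line_items: [
        LineItem.new(price_cents: 10_00),
        LineItem.new(price_cents: 5_50)
      ])

      expect(order.total).to eq(15_50)
    end
  end
end
```

One small spec like this won't transform the codebase, but it gets the suite running and covers the exact code your current change could break.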
Spend the necessary time on the critical parts
I'm not sure if there's a name for this phenomenon, but in most of the projects I've worked on in the past ten years, the most critical parts of the application are the least tested parts of the entire codebase. I've seen chat applications with no tests around the chat engine. I've seen e-commerce applications with no tests around payment processing. These are the parts where, if something breaks, the company loses money, period.
As far as I can tell, this happens for one of two reasons:
- The most critical part of the application was the first thing built and had to be launched quickly, so testing was skipped (as typically happens).
- The most critical part of the application is by far the most complex, so whoever built it didn't want to spend the time writing tests around that complexity.
Arguments can be made for not having written tests in either case. Those arguments become moot, however, when the critical parts of the application start breaking down and hours - sometimes even days - are spent tracking down bugs in those areas. And because these are the critical parts of the application, they typically won't change for years, so the time spent getting them covered with tests pays off over the long haul.
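As a sketch of what that coverage can look like, here's a spec around a hypothetical `PaymentProcessor` service. The class, its `#charge` method, and the gateway double are all made-up names standing in for whatever your critical path actually is; the idea is to exercise the money-moving logic against a fake gateway instead of a live API.

```ruby
# spec/services/payment_processor_spec.rb
# PaymentProcessor, #charge, and the gateway double are hypothetical --
# the point is covering the critical path without hitting a real API.
require "rails_helper"

RSpec.describe PaymentProcessor do
  describe "#charge" do
    it "marks the order as paid when the gateway accepts the charge" do
      gateway = double("gateway", charge: true)
      order   = Order.create!(total_cents: 25_00, status: "pending")

      PaymentProcessor.new(gateway: gateway).charge(order)

      expect(order.reload.status).to eq("paid")
    end

    it "leaves the order pending when the gateway declines" do
      gateway = double("gateway", charge: false)
      order   = Order.create!(total_cents: 25_00, status: "pending")

      PaymentProcessor.new(gateway: gateway).charge(order)

      expect(order.reload.status).to eq("pending")
    end
  end
end
```

Even two or three tests like these around the riskiest code will catch the kind of regressions that cost real money.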
Get others on board
You're just one person, so unless you were hired solely to write tests for an application, you're limited in what you can do by yourself. That's why it's important to get others involved. I'm not saying you should be pushy and yell at others whenever they commit code without tests. But you can certainly lead by example and teach others along the way.
At most of the startups I've worked at, I've helped increase test coverage significantly by being the person who always tried to include tests with my changes, and who gently pointed it out whenever I noticed others pushing code changes that would be well served by a test or two. Okay, sometimes not so gently, but I would at least let them know - and I would write those tests and show them. Very often, this led others to write more tests. If nothing else, I made an impression on at least one person: a co-worker once drew a caricature of me saying "Hi, I'm Dennis, did you test it?", since I used that phrase so often.
I do want to point out that testing is not the be-all and end-all tool in your software development arsenal. There are plenty of ways to write tests that cause more harm than good. For example, I've written tests where I spent twice as much time refactoring the test as the code it was supposed to cover. I'll write more about that another time.
However, when testing is done right, there will inevitably come a time when it saves you time, money, and your sanity - guaranteed.