Preventing Errors
Hopefully some dear readers can help me out with my current quandary.
I have several components that I use on nearly every site that I develop. By using them, I save untold hours of programming and therefore provide cheaper results for my clients. At the same time, I am using solutions that have been tried and proven rather than developing from scratch every time.
There is, of course, a catch to all of this.
I store these components below the CustomTags folder and most of my sites are now on the same set of servers. Consequently, any time I change any of these components, I change code used by nearly every site I develop.
The problem here is that whenever I update any component, I risk introducing a bug on any site even though I am not working on that site. I try to be careful and it is very rare that a change results in a bug. Even so, it does happen.
Recently, in fact, I caused errors in the administrative area of one site on two different occasions within the same month. Although I was notified of the bug in each case and fixed it the same day, their experience is that the site breaks inexplicably, which makes the site seem, to them, unstable. This causes frustration for the end client (the site owner), embarrassment and frustration for my client (the design firm), and embarrassment for me.
In this case, both errors happened on this site because of some structural differences between it and most other sites, a consequence of it being an upgrade from old code that I didn't write (so I am somewhat stuck with the data structure). It isn't that my code couldn't easily be fixed to work with that structure; it's just that I failed to account for it in my change.
I am trying to figure out a way to avoid this.
I have considered moving copies of the components into each site. This, however, presents other problems.
A while back, I did have copies of the components in some sites. I had a few instances of very old bugs showing up in sites. These would be things that I had long ago fixed in the current versions of my components. I would like to think I have since squashed all bugs, but that is probably hubris.
Beyond that, of course, is the time to update the components on the sites when and if I have a new version that includes bug fixes (perish the thought).
Additionally, if I am adding features to a site, I may assume it has the newest version. This is easily remedied by upgrading all of the components whenever I add a new feature; that is easy to do, but it does add one more thing to remember. Errors could still happen at that point, but at least they would occur when work is known to be happening on the site.
I have thought about applying unit tests to my components. The problem here may be my own ignorance of unit testing. I can write unit tests to test business rules, but I have a harder time testing against the myriad of possibilities that some of my more flexible components must account for. Those with XML APIs, for example, seem difficult to write unit tests for.
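One way to make an XML API more testable is to assert against the parsed structure rather than the raw string, so incidental differences in whitespace or attribute order don't cause false failures. Here is a minimal sketch in Python (the actual components are CFML; `render_menu` is an invented stand-in for a component that emits XML):

```python
import unittest
import xml.etree.ElementTree as ET

def render_menu(items):
    """Hypothetical component: renders (label, url) pairs as an XML menu."""
    root = ET.Element("menu")
    for label, url in items:
        item = ET.SubElement(root, "item", href=url)
        item.text = label
    return ET.tostring(root, encoding="unicode")

class MenuXMLTest(unittest.TestCase):
    def test_structure_not_string(self):
        # Parse the output and assert on tags/attributes/text,
        # not on an exact string match.
        tree = ET.fromstring(render_menu([("Home", "/"), ("About", "/about")]))
        self.assertEqual(tree.tag, "menu")
        self.assertEqual([e.text for e in tree], ["Home", "About"])
        self.assertEqual(tree[1].get("href"), "/about")

    def test_empty_input(self):
        # Edge case: no items should still produce a valid, empty root.
        tree = ET.fromstring(render_menu([]))
        self.assertEqual(len(tree), 0)
```

The same structural-comparison idea carries over to any language with an XML parser; the point is that the tests survive cosmetic changes to the serialized output.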
I would really like to come up with a process that dramatically reduces the odds of errors on all of my sites, without causing a massive increase in my development and maintenance cost. I am hoping some dear readers will have some thoughts.
Thanks in advance!
The downside is that if there is a bug, you have to update all the sites individually rather than in one set of CFCs. However, as I think you are seeing, this centralization can cause issues in addition to solving them.
We currently have about 100 apps running the version we have out there, and they are all at various points and have different code in them, which is bad. My thought is to place a skeleton app that extends the CFCs in the base app.
This way the core functionality can be changed and upgraded, but in the odd cases I can override methods of the base CFC, so that when the base changes it won't break those odd cases, or that is the hope anyway.
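The skeleton-extends-base pattern described here might look something like the following, sketched in Python for illustration (the real code would be CFCs; the service names and the single-column legacy table are invented for the sketch):

```python
class BaseUserService:
    """Core functionality shared by every site (the 'base app')."""

    def display_name(self, record):
        # Default behavior assumes separate first/last name columns.
        return f"{record['first']} {record['last']}"

class LegacySiteUserService(BaseUserService):
    """Skeleton for one upgraded site whose legacy table stores the
    full name in a single column (hypothetical odd case)."""

    def display_name(self, record):
        # Override only the method the odd data structure requires;
        # everything else is inherited from the base unchanged.
        return record["fullname"]
```

Because the odd site only touches the base through its override, upgrading the base's `display_name` behavior cannot silently change what that site renders.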
Thoughts on this method would be appreciated also.
I think you are right. That approach does add a bit of extra work, but at least I don't risk breaking sites when I am not even working on them. This will also allow me to test the upgrades on a dev copy of a site.
The one drawback, of course, is if I discover a bug in a component it would be a lot of work to upgrade components on every single site.
Curt,
What you are talking about is similar to what I am doing. It is really efficient, but it leads to the risk of unknowingly introducing bugs into sites - no good.
I don't have a solid answer, but I am leaning towards having copies of components on multiple sites. If I do that, though, should I do the same thing with custom tags?
I actually have decent error tracking. As I said, I had the issues resolved within minutes of the first instance of an error.
I would check out your system, but I typically won't create an account for a site (even a free one) unless I have already seen something pretty compelling.
I like that idea. I haven't used application specific mappings yet (many sites aren't on CF8 yet), but that sounds like a good idea. If that covers CustomTags as well, then it could be a really good solution indeed.
Long time no chat! Hope all goes well? As to your question . . .
. . . welcome to software product line engineering :-) And I suggest you check out some of the literature as it'll help you with these challenges. You can keep a copy of each file in each project, but then you have to leave them running the old versions. If you keep a copy of the file in each site and upgrade them all, you have the same problem as you have today - sites will be broken when you upgrade them.
As you start to work on larger code bases (and a small code base on each of 100 projects is effectively a large code base as the complexity is that of the core framework plus the sum of the customizations on a per project basis), you need to understand there's a fundamental cost. If I want to pitch a shelter, give me some canvas, a string and five minutes. If I want to build a single story house, I can probably do it pretty quickly. But as I keep on adding floors to the house at some time I need to move to a real high rise with the appropriate engineering and foundations - just the cost of trying to go above 5-10 stories - there is a step function.
There are a few general approaches you can consider. One is to version all of your components and to be explicit about what version each project runs. However, to upgrade a project from one version to another you are going to have to test everything in that site. Unless your clients will pay for that every time you choose to upgrade, you're going to have to either leave them on their current version or build a set of automated regression (acceptance/integration) tests that exercise enough scenarios for you to have confidence that if those tests pass, the app is good (or you can continue to just "fix when they notice a bug", which is cheaper and may be a better business solution for now).

Bad news: the effort of building the tests is at least 20-35% of the effort of building the site - with your efficient components it may even be more like 80-100% of the cost. That's just the cost. You can be explicit about it and charge for it upfront, or you can keep on fixing bugs for free, but there is no way you can upgrade code for an existing project without extensively testing the project and still have confidence you won't introduce bugs. Once you're comfortable with frameworks like Selenium, creating the test suite is probably about the same effort as manually running the tests 3-4 times. So the only questions are whether you're going to upgrade a given site more than 3-4 times over its lifetime (including bug fixes) and therefore whether it makes sense to automate the regression testing vs. doing it manually each time.
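One lightweight form of the regression testing described above (cheaper than a full Selenium suite, and only an assumed approach, not something from the comment itself) is golden-file comparison: capture each page's output under the known-good component version, then re-render after an upgrade and diff against the saved copy. A Python sketch:

```python
import difflib

def regression_diff(golden: str, current: str) -> list:
    """Return the unified diff between the golden (known-good) output
    and the freshly rendered output; an empty list means no regression."""
    return list(difflib.unified_diff(
        golden.splitlines(), current.splitlines(),
        fromfile="golden", tofile="current", lineterm=""))

# Example: a page whose rendering changed after a component upgrade.
golden = "<h1>Orders</h1>\n<p>3 open</p>"
after_upgrade = "<h1>Orders</h1>\n<p>3 open orders</p>"

assert regression_diff(golden, golden) == []   # unchanged page passes
assert regression_diff(golden, after_upgrade)  # changed page is flagged
```

The diffs still need a human to decide whether a change is a bug or an intended improvement, but they pinpoint exactly which pages an upgrade touched.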
I'd also say it's time to start thinking more seriously about unit testing and a true OO approach. I think a lot of CF developers have gotten stuck in "local maxima". These are basically points where they have to *decrease* their productivity to eventually increase it above the current level (imagine being at the top of a small hill and wanting to get to the top of a bigger hill for a better view - you're probably going to have to climb down into a valley before you can climb the bigger hill). I don't know whether you're yet hitting a point where the cost of going down the hill will be worth the ROI from getting to the top of a bigger hill, but it wouldn't surprise me if you reached that point eventually as your needs for reusability and maintainability across projects grew. It's not a coincidence that the majority of large systems these days are built using OO best practices, and I can say, after spending most of the last two years climbing down from a local maximum (a procedural concatenating code generator which was great up to a point and then became impractical), it was worth the effort for me to climb down into the valley.
I'd also strongly suggest more model driven development. If you can put most of your code into model statements, you can then just write a reference implementation for the DSLs and their generators, and then, provided that gives good coverage, you can just test your new changes against the reference implementation. You'll still need tests for the custom per-project code, but if you can keep that under 10% of the code, the testing effort is under 10% of what it would otherwise have been.
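To make the reference-implementation idea concrete, here is a toy sketch in Python (everything here - the model shape, the generator, the SQL target - is invented for illustration): the per-project "code" is a declarative model, and one well-exercised generator turns it into the repetitive implementation, so only the generator needs deep testing.

```python
def generate_insert(model: dict) -> str:
    """Generate an INSERT statement from a declarative table model."""
    cols = ", ".join(model["columns"])
    params = ", ".join("?" for _ in model["columns"])
    return f"INSERT INTO {model['table']} ({cols}) VALUES ({params})"

# Reference implementation: a model that exercises the generator's grammar.
# If a generator change still passes for this model, every project model
# written in the same grammar should remain safe without per-site retesting.
reference = {"table": "users", "columns": ["name", "email"]}
assert generate_insert(reference) == \
    "INSERT INTO users (name, email) VALUES (?, ?)"
```

The economics only work if the reference model genuinely covers the grammar; a permutation the reference never exercises is a permutation the generator can silently break.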
Other things that will help are a clear, definitive, strongly typed set of interfaces, so it is very clear whether changes to your components violate those interfaces (this only catches a small subset of the problems, but it's low hanging fruit), and at the very least a test harness (acceptance tests in Selenium or the like against reference implementations may help for stuff that's harder to unit test) that you can run your components against. It doesn't guarantee it'll exercise the components in the same way your custom code will, but again, it's a start.
Also, bear in mind that (for non-provable systems) testing is probabilistic. If you ask "how can I eliminate bugs?", the answer is "you can't". It's then a matter of how much it's worth investing in decreasing the likelihood of bugs.
Apologies in advance if you're doing any/all of the above already!
Just my 20c :-)
20c indeed! As usual, you have provided a lot of good things to think about.
I think right now I am trying to get a low hanging fruit mentality. From my experience, clients can accept if things break rarely (especially on development sites) when they know that you are making changes to the site. That is part of what is expected.
What they seem to find worrisome (as I would) is when things break for no reason they can discern (as I haven't made any recent changes to the system). It makes the site seem unstable.
I think if I transition to pointing to a stable version of my components and custom tags, I can then upgrade versions when I make modifications to the site.
I see the value in what you are saying about "local maxima" and needing to go through a trough to reach improvement. At the same time, my pain point is low right now and I have two new babies and more work than I can handle, so it doesn't seem like a good time to go through a trough (and I am a bit skeptical as to the benefits of "true OO" for most of the ColdFusion work I am doing right now).
I would certainly like to know more about model driven development and how it differs from my current practices (it sounds quite similar so far).
There is always more to learn, which is a really fun thing about programming.
Things are going great, BTW. We have two new babies, as I mentioned, which is a lot of fun. I'll have to try to catch up with you soon as it has been a while.
Congrats on the two new babies! In terms of MDD, the question is how much of the functionality of your applications you write in a 3GL like CF and how much of it is described using DSL statements (whether they're stored in a database, XML, diagrams, or some custom textual format that you write a parser for using Xtext, ANTLR, lex/yacc, or your own custom parser).
The benefit of DSL statements for development is that if you have a well-exercised framework/generator, you don't need to test your DSL statements, so as you change your frameworks, as long as a change doesn't break your reference implementation, it's unlikely to break your live sites. That substantially cuts down on the cost/effort of upgrading sites.
You'll still typically version your DSLs and generator and may have to upgrade them when you make substantial changes to the interface to the framework (i.e. the grammar of the DSLs), but depending on the classes of change you can probably automate the transformation of the statements from one version to another (I wrote a paper on that a couple of years back for the DSM forum at OOPSLA).
Bottom line: code is your enemy, because making it testable and debugging it against changes in the underlying framework costs so much more than the upfront "I can write this in 10 minutes" suggests. DSLs over time save a bunch of money and bugs, giving users a better experience and substantially cutting down on your unfunded costs (bug fixing), making the business more profitable.
Thanks!
Yep. That sounds like what I am doing. My challenge, I think, is how to create a complete reference implementation (one that covers all possible permutations of the DSL). Some of that is XML. I imagine there are good strategies for developing unit tests for an XML-based API, but I don't have any experience with that.
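One assumed strategy for the "all permutations" problem is to enumerate the option space mechanically and assert an invariant that must hold for every combination, rather than hand-writing one test per case. A Python sketch (`build_config` and its options are invented stand-ins for a flexible component's XML output):

```python
import itertools
import xml.etree.ElementTree as ET

def build_config(sortable: bool, paged: bool, fmt: str) -> str:
    """Hypothetical stand-in for a flexible component's XML output."""
    root = ET.Element("grid",
                      sortable=str(sortable).lower(),
                      paged=str(paged).lower(),
                      format=fmt)
    return ET.tostring(root, encoding="unicode")

# Every permutation must at least parse and round-trip its options.
for sortable, paged, fmt in itertools.product(
        [True, False], [True, False], ["html", "xml", "json"]):
    tree = ET.fromstring(build_config(sortable, paged, fmt))
    assert tree.get("format") == fmt
    assert tree.get("sortable") in ("true", "false")
```

Twelve cases are covered by four lines of loop; adding an option to the component means adding one list to `itertools.product`, and the invariants are re-checked across the whole expanded space automatically.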