Attaching a Number to Best Practices

Attaching a number or value to something you can only sort of measure can lead to a bunch of problems. But if you’re like me, you like to see some quantitative way to measure things. I mean, that’s what science is all about, right? If something is better than something else, you should be able to measure that.

Unfortunately, the world doesn’t entirely work this way; how else can you explain anomalies like this album winning a Grammy? Well, you really can’t. At the end of the day there’s some subjective measure lurking somewhere.

It can be this way with software. While you can measure things like performance, stability, and uptime, that kind of measurement is specific: specific to a service, to a deployment, to certain hardware, and so on. And we knew we needed to measure the quality and “awesomeness” of things in the Juju Charm Store. After all, if you want to deploy something in the cloud, you at least want to know how well it works. We handle a good chunk of the quality for you: we do reviews, we run tests, and we do the “science” of making sure that when you run something it will reasonably work. What we don’t really handle right now is the opinionated “and it does all the right things too”. One of the original “you’re doing it right” checklists people know about is the Joel Test.

These aren’t very scientific; they’re just simple questions, and some don’t even make sense in every field. No, my company doesn’t use the best tools money can buy, because the best tools in our space are Free Software. But generally speaking, you can look at that list, then look at a place where you’re applying for a job, and make a reasonable assessment of whether that place is right for you. It can certainly help you avoid the bottom of the barrel. Of course you’ll still find plenty of discussion about which of these questions are appropriate or inappropriate, what’s missing, and so on, but as a quick and dirty view of something, the Joel Test works well enough.

So over the past week we proposed an idea similar to the Joel Test, but for Juju Charms in Ubuntu. These criteria are more of a list of general practices that you’d want to strive for. The number of criteria your charm meets gives you a rough idea of how mature it is.

They cover things like reliability, security, flexibility, data handling, scalability, and so on. While we can measure many of these things with science (for example, if your charm doesn’t pass charm proof, that’s an easy measurement), some are more subjective and more generally described. We’d love to see every charm handle user data in version control like the wordpress charm does, so we made that one of the things we look for. We leave it up to the author to determine the best way to handle that data, but it’s an area we want every charm to cover.
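To make that split between measurable and subjective concrete, here’s a minimal sketch in Python of how a checklist-style score could be tallied. The criteria names are hypothetical and the only automated check wired up is charm proof (from charm-tools); the subjective answers would still come from a human review:

```python
import subprocess

# Hypothetical sketch: tally how many maturity criteria a charm meets.
# The criteria names are invented for illustration; this is not the
# official scoring tool.
CRITERIA = [
    "passes charm proof",
    "handles user data in version control",
    "scales out cleanly",
    "documents its configuration options",
]

def passes_charm_proof(charm_dir):
    """Run `charm proof` against a charm directory; zero exit means a pass."""
    try:
        result = subprocess.run(["charm", "proof", charm_dir],
                                capture_output=True)
    except FileNotFoundError:
        return False  # charm-tools isn't installed
    return result.returncode == 0

def maturity_score(checks):
    """Given {criterion: bool}, return (met, total) as a rough maturity signal."""
    met = sum(1 for passed in checks.values() if passed)
    return met, len(checks)

if __name__ == "__main__":
    # The manual answers would come from a human review; only the
    # charm proof check is automated here.
    checks = {name: False for name in CRITERIA}
    checks["passes charm proof"] = passes_charm_proof("./wordpress")
    met, total = maturity_score(checks)
    print(f"charm meets {met}/{total} criteria")
```

Deliberately, there are no weights or scales in the sketch; a simple met-versus-total count is already enough to tell you roughly where a charm stands.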

Just like I don’t need to read every review of Skyrim: just tell me if it’s a decent enough game and I’ll play it.

So while these categories are really general and we could easily argue over scales, points, and other minutiae, what we’re really looking to accomplish here is a way to see where a charm stands in relation to where we want it to be. So how do our charms do against this new criteria? Pretty terribly. The ideal charm doesn’t exist yet, and there are things in the criteria that we know will take hard work to achieve, but it’s a good high-water mark for seeing how far we have to go to give people something they’d run in their own environments.

And of course, since the Charm Store doesn’t freeze the way, say, a video game release does, the onus is on us to provide the community with resources on how to improve their charms, and their scores, over time. But that’s a topic for another day!
