Wednesday, December 31, 2008

Namespace visibility in C#

Java has package scoping. It allows a class to be visible only within its own package. It's a wonderful thing.

Here's how it works in Java:


package com.pcdoctor.mynamespace;

// No access modifier at all means package-private: Foo is visible only inside this package.
class Foo { ... }


The class Foo is only visible within com.pcdoctor.mynamespace.

Even though I'm not a Java programmer, this immediately strikes me as extremely useful. Frequently, helper classes are only needed by code that lives close by.

There are two reasons to want namespace visibility to be enforced by your compiler:

  1. If you can make those classes invisible outside the namespace, it will make life a lot easier for clients of that namespace. Having only the useful classes appear in IntelliSense is a big win.
  2. Having helper classes be invisible also helps construction of the component that is in the namespace. If the compiler doesn't let anyone make calls to the helper classes, then we can make much stronger assumptions about how our clients use the code.
C# does not offer any support for namespace visibility. However, there are three ways to accomplish it. None of them are perfect, and one of them is a bit bizarre.

The Microsoft Way



Microsoft expects you to make the classes internal. This prevents anyone outside of the assembly from using the class.
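Here's a minimal sketch of what that looks like; the class names are made up:

// Inside the helper assembly: nobody outside this assembly can see the class.
internal class PathScrubber {
  internal static string Scrub(string path) { return path.Trim(); }
}

// A public class in the same assembly can still use it freely.
public class ReportBuilder {
  public string Build(string path) { return PathScrubber.Scrub(path); }
}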

However, you have to make a separate assembly for each namespace that you want to do this with.

Frankly, that's painful enough that few people do it.

The C++ Way



C++ also lacks namespace visibility. The folks over at Boost use a separate namespace underneath the main namespace for helper classes (and functions). They've standardized on the name "detail" for this namespace, and it works fine for problem #1. It doesn't do anything for #2, though.

This is an easy thing to do, but it buys you less in C#. C# programmers lean on using directives far more heavily than C++ programmers do, and C# only lets you put a using directive at the top of a file (or namespace block), while C++ lets you scope one to a single function. This means that, if a single function needs a namespace, the whole file gets it.

The end result is that a lot of detail namespaces end up visible at once. You'll have to add a using directive for the one detail namespace you actually want, and you'll want to avoid reaching elements by typing "detail", since several detail namespaces will be in scope.

Even with those problems, it's probably worth doing in C#. Create a detail namespace under each namespace and put things that you'd like to have package scope in there.
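A minimal sketch of that layout, with invented names:

namespace PcDoctor.Reporting {
  // The public API of the component.
  public class ReportWriter { /* ... */ }
}

namespace PcDoctor.Reporting.Detail {
  // Helpers that clients are asked, by convention only, to stay away from.
  // Nothing stops them, though: this only solves problem #1.
  public class ReportFormatter { /* ... */ }
}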

Getting the C# Compiler to Enforce Namespace Visibility


C# does have a feature that can be used to emulate namespace visibility. This solution is a bit weird.

I should point out that it uses a feature of C# that wasn't designed for what we're going to use it for. In C++, this kind of behavior is encouraged and well supported. In C# (and Java), you're not supposed to deviate from the party line.

I'm sure Microsoft doesn't have any tests written for this behavior; I've already found one bug in the compiler from this technique.

What is a namespace? It allows many classes to be placed in the same scope even when they're stored in different files.

A partial class does the same thing, and this can be used to emulate a namespace. Partial classes were designed to allow Microsoft's code generators to create part of a class and put those portions of the code in a separate file. C#'s designer tool makes heavy use of this.

It's also very close to a namespace!

If you use a partial class as a namespace, then you can put multiple classes in different files and have some of them be invisible outside of the "namespace".

A private class inside the "namespace" is visible to other classes in the namespace, but it is not visible outside of the namespace. Likewise, a public class is visible outside of the partial class.

All of this is enforced by the compiler, too. We get both of the benefits of namespace visibility with this technique.

Unfortunately, it doesn't work perfectly.

For example, using directives don't work for a partial class. There's no way to bring a class's members into scope with a using directive, so, if you're not in the namespace, you will always have to use the partial class's name to access public members of the "namespace". This may be annoying in some cases, but, if you limit this technique's use to cases where there isn't a lot of access to the namespace from outside of it, then it's not serious.

It also looks wrong in the IDE. The IDE has no idea that your class is really a namespace, so it gets colored incorrectly. The severity of this problem is a matter of opinion. It doesn't bother me.

This is all awkward enough to prevent me from completely replacing namespaces. Instead, I use this when I want to expose an extremely simple API and perhaps a type or two. It's really only worth the trouble if you get to hide a lot by using this.

The syntax is also verbose:

namespace pcdoctor {
  public partial class fakeNamespace {
    private class NamespaceVisibilityClass {}
  }
}

There's actually a hidden benefit of this technique. It's possible to make a "free function" in the fake namespace. A static member function behaves a lot like a free function. It's accessible from any of the classes in the namespace. If it's public, then it's accessible from outside the namespace.
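Here's a slightly fuller sketch of how the pattern plays out across two files. The names are invented, and this is just one way to arrange it:

// File: Parsing.Helpers.cs
namespace pcdoctor {
  public partial class Parsing {
    // Invisible outside the "namespace": only other parts of Parsing can see it.
    private class Tokenizer {
      public string[] Split(string text) { return text.Split(' '); }
    }
  }
}

// File: Parsing.Api.cs
namespace pcdoctor {
  public partial class Parsing {
    // Visible everywhere, like a public class in a real namespace.
    public class Document { }

    // A static member acts like a "free function" in the fake namespace.
    public static Document Parse(string text) {
      Tokenizer tokenizer = new Tokenizer();   // legal here...
      string[] words = tokenizer.Split(text);  // ...but not from outside Parsing.
      return new Document();
    }
  }
}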

I mentioned that I found a bug in the compiler. Unfortunately, I didn't track down exactly what the bug was. Instead, I just found a solution and moved on.

However, if you find that some of your code gets executed more than once when the fakeNamespace type is instantiated, then you might want to find another solution.

Good luck! Just remember that the central authority that controls C# programmers doesn't want you to do this. You're on your own here.

Wednesday, August 20, 2008

Internal Training Talks at PC-Doctor

PC-Doctor is trying to start a series of internal training talks. I'm going to give the first one next week.

It looks as though there's a lot of interest from everyone on this project. Management loves the morale boost and the training that people get. Developers are excited to learn something new. QA seems excited, too!

Here's the draft of my talk:



It's probably worth another blog to discuss the dangers of posting raw slides where the whole world can see them. It's probably also worth talking about why I don't waste my time making everything look perfect.

For now, it's exciting that publishing is as easy as copying and pasting some HTML from Google Docs! :)

Tuesday, August 5, 2008

Testing the Untested: World of Warcraft Needs Help!

As you should know, I play World of Warcraft. It's been a great game for several years. Blizzard is making lots of money off of the game, and they are using that to put new content into the game regularly.

There have been a lot of changes to the game since I started playing it. Watching these changes carefully has given a lot of circumstantial evidence to the idea that World of Warcraft is primarily tested by a large quality assurance staff.

This post is a sequel to Testing the Untested. However, it will focus on one example. I think it's interesting to look at the effects that an inadequate testing program has on a major software project like World of Warcraft.

I'll also talk a bit about what they might have to do to fix their problem. I suspect that roughly the same story plays out on any large project that suffers from the problems that World of Warcraft has.

Blizzard's QA Staff: Are They Relevant?


This interview is interesting. There are apparently 135 "developers" working on World of Warcraft.

The Blizzard guy that they're interviewing makes an interesting distinction between developers and non-developers. It seems clear that a "developer" is a more important person to him than non-developers.

I may be reading too much into this, but that's what this sort of analysis is all about. Please read it yourself below the picture of Nova hanging out in front of an alien cave.

Interestingly, artists working on cinematics count as developers to this guy. However, QA staff does not count.

I'm biased by my work here at PC-Doctor. We hire some incredibly talented QA folks. They have a large role in the development of new and existing products, and developers tend to have a lot of respect for them.

We don't call them developers, either. We also don't sneer at non-developers and leave them out when we count who's really on the team.

I'm going to interpret that interview as a statement that Blizzard doesn't believe that QA staff are as important as artists, programmers, and designers. If Blizzard doesn't give QA much responsibility, then that assessment is probably even accurate.

Anyway, if the QA staff isn't given the respect they need to be relevant, then the programmers are the only ones left who can produce automated tests. In fact, it looks as though QA is the only visible source of tests for World of Warcraft. This might contribute to their perceived irrelevance. If they're spending their time doing things that could have been automated, then it may be hard to gain much respect.

I was pretty disappointed to see the lack of respect in that interview. It looks as though Blizzard's QA staff does a great job with the new content. Unfortunately, it's not possible for them to revisit old content. This is the fundamental problem with relying on a staff to do your testing. It costs a lot to run tests, and so you'll end up running them less.


Do They Have Functional Testing?


Blizzard has stated several times in the past that they're unwilling to change dusty old content that people don't run frequently. They've said that this is because the risk of screwing something else up is too great.

A statement like that is pretty much the same as saying that they don't have enough testing for the old content.

Actually, I expected this section to be a bit longer since the conclusion is so important to the rest of this post. However, if you've got a Blizzard employee who says exactly what you're hoping to prove about their project, then you don't really have to do much more!

I do wish I could find some of the other times that this has been said, but having it said once is sufficient for this article.

Adding Tests After the Fact


Here, I'm going to talk about how Blizzard should be adding tests. It's mostly interesting because the story is almost exactly the same for any company that doesn't have a large set of automated tests for their software.

How should Blizzard go about creating tests? This has already been the subject of another post. In fact, I'm going to say many of the same things.

The first thing to worry about is whether or not the corporate culture supports testing. If it doesn't, then this is the most serious problem facing someone trying to add tests. Testing has to be thought about by all of the developers. It really has to be a part of the normal operation of the programmers. It has to be a part of their culture.

World of Warcraft has been in development for almost ten years now. If they still don't have an extensive set of automated tests for the game, then they clearly don't understand what they're missing.

It's pretty hard to imagine how someone might convince them that testing is important if they haven't seen it already. The biggest advantage of automated testing is that you can make changes with some confidence that nothing was broken. However, you don't get to that point until you have relatively thorough tests.

Developing a thorough set of tests for a game as large and old as World of Warcraft would be an enormous undertaking. Therefore, some advantage would have to be found for incrementally adding tests. If tests can be created that verify parts of the game that are difficult to test with a QA staff, then these would be easy to convince people to add.

The easiest example that I can think of is a test to ensure that the floors don't have holes in them. Whenever Blizzard releases new content, there seem to be places where people can fall through the floor into a location that they're not supposed to get to. I have no idea where this problem comes from, but it sounds as though it should be covered by an automated test.

Adding this sort of test allows developers to slowly add real value to the automated test infrastructure. As long as there is value in each step taken, it is easy to convince people that the work is valuable. Eventually, you can hope that you'll end up with enough tests that you can change old content with confidence.

Another, riskier approach could also be taken. Class balance would be extremely difficult to verify, but a test for it would be extremely useful and visible.

There are a large number of different character classes in World of Warcraft. Each class has different capabilities, but those capabilities are supposed to be equally useful under certain circumstances. Getting this correct is extremely important to players, and getting it correct is extremely difficult as well.

Blizzard's players and staff spend a lot of time thinking about it, and it gets tweaked over and over again. If testing this could be partially automated, then they could speed up the process. Customers and developers would both enjoy this a lot.

It's not clear that it's even possible to automate this. A few things can be analyzed easily in a simple spreadsheet. More complicated aspects of balance would require some extremely sophisticated analysis.

However, Blizzard has some really big supercomputers*. If it were valuable enough to them, they could run some fairly sophisticated tests. I can imagine some partially automated tests that could analyze even arena class balance. Input from the QA staff could be used to speed up the tests considerably.

If this worked, then it would go a long way towards convincing the rest of Blizzard to try other problems. Again, this approach would be significantly riskier. If the project failed, it might set back automated tests even further.

World of Warcraft Isn't Alone


World of Warcraft is a huge project that clearly suffers from a lack of automated tests. Everything I've said here is specific to that game, but it comes from my experience on other, smaller projects with the same problem.

A lot of projects have exactly the same problem, and solving it requires a lot of the same tools.


* Actually, we don't know this. However, The9 gets most of their revenue from World of Warcraft and runs the Chinese server clusters for Blizzard. They also have 12 of China's fastest publicly benchmarked supercomputers. It seems safe to assume that Blizzard themselves also have similar servers. While none of those are dedicated to testing, it seems likely that they've got some extra CPUs around that could be used.

Wednesday, July 30, 2008

Fingerprint Readers Don't Work

A while ago, I got annoyed at a friend's computer. It had a fingerprint reader, and I wanted to play a game on it before he woke up.

Fortunately, it turned out that my fingerprint worked just fine. It took a few tries, but I successfully logged in as him.

He did look a bit shocked when he woke up and saw me playing a game on his supposedly secure work computer. Too bad he wasn't in the IT department at his company. :)

How secure are fingerprint readers? I can't say that I'm impressed. Since you leave what is essentially your password on everything you touch, they can't be infallible.

Fingerprint readers are supposed to be intimidating. You're supposed to look at one and think to yourself that you'd have to do some kitchen trickery to defeat it. Intimidation might be most of the security they provide.

That would have worked for me. I don't make a habit of breaking into other people's work computers. Is intimidation all they've got?

It looks as though that might be true. If someone really wants to break in, they can. It's not always as easy as my attempt was, but even the most secure readers can be broken.

However, I'm not going to complain too much about fingerprint readers. It's really easy to login to a computer with one. It took about 5-10 seconds to break into my friend's. Imagine how easy it'd be if it worked the first time? Convenience is much more important than security to me on many of the computers that I use.

Incidentally, there's an interesting ending to this story. The friend whose computer I broke into was a researcher at HP Labs. After seeing me casually playing a game on his computer, he decided to do some research on alternate biometric input devices.

Wednesday, July 23, 2008

High Performance Multithreaded Code

Current CPUs ship with several cores. If a program wants to speed up with new hardware, it has to exploit those extra cores. Using multiple threads is, therefore, becoming extremely popular.

Of course, people who talk a lot about multithreaded programming don't ever mention that most programs don't need to be any faster. While I feel obligated to point that out, this article is written for people who do want their applications to run faster.

In fact, I'm going to go even farther than that. This is for people who want to squeeze every last bit of performance out of their multithreaded code. This isn't for everyone.

Since I don't have a lot of experience with this, I'm going to talk about two books that I've read. They both talk about specific topics that are, I suspect, absolutely essential to some extremely high performance multithreaded code.

Interestingly, neither book addresses my topic directly. The books have absolutely no overlap, either. They both talk about different ends of the same problem without looking at the whole problem.

I enjoyed both of them.

The first, The Art of Multiprocessor Programming, was written by a couple of academics. It's a highly theoretical look at lock free and wait free data structures. It never talks about real hardware. It's also fascinating.

The second, Code Optimization: Effective Memory Usage, is an extremely practical guide to how modern hardware deals with a critical resource, memory. It talks in detail about what the hardware is doing. It doesn't touch algorithms that avoid the many problems that it talks about. It's a bit out of date as well, but it's still worth spending time with.

The Art of Multiprocessor Programming


You wouldn't be able to tell from the cover or the publisher's description of it, but this book is about lock free and wait free algorithms.

A component is lock free when many threads can access its routines and at least one thread is always guaranteed to make progress. If a thread holds a mutex, this is not possible. The thread with the mutex could page fault and be forced to wait. During that wait, no thread that needs the mutex will make progress.

Wait free is an even stronger constraint. A routine is wait free if every thread completes it in a finite number of its own steps, no matter what the other threads are doing. That is, every thread makes progress, not just one.

It's possible for a data structure to have some routines that are wait free and others that are merely lock free. The authors frequently try to make the most critical routines wait free and the less important ones lock free.

Lock free programming is a topic that's always fascinated me. It seems incredibly difficult. Researchers like the authors must agree, because there aren't that many lock free algorithms in the literature, yet. There are a few data structures out there, and a lot of work has been done on critical algorithms like heap management routines. There isn't much else, though.

The book, however, walks you through the techniques that are needed to build these algorithms. They describe and analyze the algorithms in ways that I don't normally bother with. Mathematical proofs appear to be critical to their process. Don't worry too much, though. None of the proofs outlined in the book were difficult to follow. Without the proofs, I would have had a difficult time understanding what they were doing, too.

Here's an example of an interesting theorem in the book. Modern processors have a variety of atomic instructions that are designed to help avoid locks. These instructions are critical to lock free programming. Examples include atomic increment and compare and swap.

Lock free algorithms replace locking a mutex with a number of these atomic instructions. Someone created a theorem that essentially states that most of these instructions are pathetic. (I'm only paraphrasing slightly.) Compare and swap is proven to be useful, however.

Lock free articles talk a lot about compare and swap. It's nice to understand why!
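To make the compare and swap idea concrete, here's the classic retry loop, sketched in C# with Interlocked.CompareExchange. This example is mine, not the book's:

using System.Threading;

static class LockFreeCounter {
  static int count;

  public static int Increment() {
    while (true) {
      int observed = count;          // read the current value
      int desired = observed + 1;    // compute what we want it to become
      // Atomically: if count still equals observed, replace it with desired.
      if (Interlocked.CompareExchange(ref count, desired, observed) == observed) {
        return desired;              // nobody interfered, so we're done
      }
      // Some other thread changed count first; loop and try again.
    }
  }
}

(For a plain counter you'd just call Interlocked.Increment; the loop is only there to show the pattern that more interesting lock free data structures are built from.)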

Incidentally, despite the title of my post, lock free algorithms are not necessarily faster than a conventional algorithm wrapped in a lock.

The Art of Multiprocessor Programming doesn't talk about it, but these atomic operations are expensive. They require a memory barrier. This requires communication with all of the other cores in the computer, and it's slow.

Arch, an Intel developer, puts it nicely here:
My opinion is that non-blocking algorithms are important in situations where lock preemption (arising from oversubscription) is an issue. But when oversubscription is not an issue, classic locked algorithms, combined with designing to avoid lock contention, will generally out perform non-blocking algorithms on current Intel hardware. I don't know about other hardware, but observe that:
  1. Non-blocking algorithms generally use more atomic operations than it takes to acquire a lock.
  2. Atomic operations in non-blocking algorithms generally need a memory fence.
  3. Memory fences incur an inherent penalty on out-of-order processors or processors with caches.
Do keep that in mind when you read the book! If your algorithm uses too many of these atomic operations, there's no point in doing it. Locking a mutex doesn't require many of these operations.

The authors act like typical academics and ignore this problem completely. :)

Code Optimization: Effective Memory Usage


This book is dated 2003. It's several processor generations out of date. Don't panic, though. It turns out that a lot of what Kris Kaspersky says has been true for far longer than that.

There's a good chance that some of his discussion of ways to exploit specific CPU generations isn't useful anymore. However, interleaving memory bank access, N-way associative cache behavior, and many other interesting properties of memory are unlikely to change immediately.

You'd think that memory technology would change enough that the same quirky code optimizations wouldn't work for a whole decade. Apparently, you'd be wrong.

This is, as you might imagine, the exact opposite of the previous book. This is about how the memory systems in a (somewhat) modern PC work. This is about the details of machine architecture and how to use those details to speed up your code.

In fact, translating the information in this book to highly parallel computing will require some thought. It was written without much thought to the behavior of multicore processors.

That's not the point, though. Multithreaded programming is all about memory access. If you poke the memory in the wrong order, your program will slow way, way down. Compilers are not yet smart enough to do all the work for you.
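Here's a tiny example of what "the wrong order" means, using my own C# sketch rather than anything from the book. Both loops do the same work, but the first walks memory sequentially while the second jumps a whole row ahead on every access and wastes most of each cache line:

static long SumGrid() {
  const int N = 4096;
  int[,] grid = new int[N, N];
  long sum = 0;

  // Cache friendly: consecutive iterations touch adjacent memory.
  for (int row = 0; row < N; row++)
    for (int col = 0; col < N; col++)
      sum += grid[row, col];

  // Cache hostile: consecutive iterations are N ints apart in memory.
  for (int col = 0; col < N; col++)
    for (int row = 0; row < N; row++)
      sum += grid[row, col];

  return sum;
}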

It's worth talking a bit about how important memory access is. Here's a slide shamelessly stolen from Robert Harkness at the San Diego Supercomputer Center:



Performance is on a log scale. Memory bandwidth has a dramatically lower slope than CPU speed.

You could say that, eventually, performance will be entirely dominated by the usage of memory. However, we're almost there already. High performance data-parallel programming requires the knowledge in this book.

I do wish he'd write a second edition, though. Some of the chip-specific discussions are interesting, but they aren't necessarily relevant anymore.

Lower Performance Multithreaded Code


I should emphasize that most of the stuff in both of these books is useful for pushing performance a bit faster than you'd thought possible.

If you haven't already gotten to the point where you think your code can't be sped up, then you're likely to have more serious problems that will erase the improvements available from these books.

Actually, there's one significant exception to that. If you're using a low associativity cache poorly, then you could get almost no cache utilization. In some cases, you can make minor changes to your memory usage and go from no cache utilization to good cache utilization. That's probably a more important change than using a good algorithm.

Generally, however, these books are not what you need to make your web page load faster. None of the code that I've written in the last few years needs this. I'll keep looking for applications, though, because optimizing performance in exotic ways is fun!

I enjoyed both books a lot even though they didn't seem directly applicable. I hope you will, too.

Thursday, July 10, 2008

Integration: The Cost of Using Someone Else's Library

I don't do much Ruby on Rails development anymore, but Andy, who sits right next to me at PC-Doctor, does.

Recently, he's run into an interesting problem. I've seen the problem once before in a completely different context.

Once might be coincidence, but if you see the same problem twice, then it must be a real problem. :)

Ruby on Rails


If you've got a product to develop, it's normally better to use someone else's library whenever possible.

Ruby on Rails makes this easy. They've got a fancy system to download, install, and update small modules that can be put together cleanly and elegantly to create your product.

It's a wonderful system. It works, too.


Andy discovered that it might work a bit too well!

He's got a medium sized web application that uses a bunch of external modules. He wrote it fairly quickly because he was able to pick and choose modules from a variety of sources to solve a lot of his problems.

Unfortunately, he had to upgrade to a newer version of Ruby. That means that he's got to look for problems in each of the modules he installed and find a version that works with the new version of Ruby.

Some module maintainers are faster than others, of course. Not all of the modules are ready for the new version of Ruby.

This is a problem that doesn't scale happily. As the number of modules goes up, the chance of one or more modules not being ready goes up.

As Andy discovered, this means that an application can become painful to update.

I phrased my title as though Andy might not have been doing the right thing. I'd better be honest here, though. If one of his modules can't possibly be updated, then he's still better off rewriting just that module. The alternative would have been to write all of the modules himself during application development.

Andy did the right thing. The pain he had while updating was minor compared to the alternative.

Ruby on Rails makes it extremely easy to combine large numbers of modules from different sources. The problem can be duplicated any time you get large numbers of independent developers working together.

Boost


The Boost libraries seem to be suffering from the same problem.

Boost doesn't put a lot of emphasis on stability, either. Changes to libraries are encouraged and frequent. Versions of the library aren't even required to be backwards compatible.

The end result is the same as Andy's problem. One library will change a bit, and that change will have to ripple through a bunch of other libraries. It can take a while to squeeze each contributor into updating their library so that the next version of Boost can go out. (Boost.Threads was the worst case of this. The developer disappeared with his copyright notice still in the source files!)

It's hard to blame either the release manager or the contributors. They're volunteers with paying jobs, after all.

The end result is still unfortunate. It now takes about a year or so to release a new version of the framework. Some libraries end up languishing unreleased for a long, long time because of this.

Boost has gone through a lot of releases. This makes it really tempting to look at this quantitatively. :)

To the right is a chart showing the number of days between major releases. This is, of course, a silly thing to look at. What defines a major release? There were only 5 days between version 1.12.0 and 1.13.0, for example.

The lower numbers on the chart show the number of libraries that changed with each release. There is a slight upward trend to that as well. Clearly, newer releases contain more new stuff in them than the older releases. Furthermore, not all changes to libraries are the same. Some of the more recent changes are substantial.

Despite all of that, I'm going to claim that the release schedule is slowing down over time. There are many reasons for this, but one of them could well be the same problem that Andy has.

Before a release goes out, there is often a plea on the Boost developers' mailing list for assistance on a few libraries. Those calls for help are proof that the size of Boost is slowing it down. If they have more libraries then they'll have more libraries with trouble.

Early versions of Boost had extremely lightweight coupling between the different libraries. More recent versions are significantly more coupled. As developers get familiar with other Boost libraries, they are extremely likely to start using them in newer libraries. It's almost inevitable that the coupling increases over time.

The developers for each library continue to mostly be volunteers who don't always have time to make prompt updates. Getting all updates to all libraries to line up at the same time can't be easy.

Commercial Projects


Both of these examples involve open source projects. Andy isn't building an open source application, but he is relying heavily on open source modules. Boost is entirely open source.


An open source project is going to end up relying on volunteers. It's really hard to manage volunteers! Is it any easier on a large commercial project?

I don't have any direct experience with this. I've never been a part of a big company with hundreds of people all working on the same thing.

Is the problem unique to open source projects? I've got no data, but I'll make some speculations.

Some fraction of open source developers are volunteers with other jobs. This isn't true for commercial projects.

A developer who's spending their free time on a project will have to schedule their time around a higher priority project that's paying them. According to this theory, this dramatically increases the spread in the amount of time required to complete their job.

Conclusions


The problem probably isn't unique to open source projects, but I suspect that it's worse for them.

Ruby on Rails encourages using large numbers of independently developed modules. This model will exacerbate the problem.

I'd love to hear from someone who's got experience with large projects. I suspect the problem gets even worse as projects get bigger, but I don't know much about what happens with them.

Monday, July 7, 2008

Rvalue References Explained

Thomas Becker just sent me a note about an article that he'd just written. Rvalue references aren't in wide use yet, and they aren't part of the official standard, either, so not many people understand them. I'm sure his article will dramatically increase the number of people who do, since Thomas is such a good writer.

If you'd like to play with rvalue references after reading his article, GCC 4.3.1 is what you want. You can access them using the -std=c++0x compiler option.

Doug Gregor's C++0x page can be used to track the progress of that compiler option.

Wednesday, July 2, 2008

Testing the Untested

Test driven development is the cool new way to write software. TDD revolves around writing the tests before or during the development of the software itself. If you do anything like that, then you'll end up with well tested software.

One of the fantastic things about well tested software is that you can change it confidently. Rapid change is the mantra of a large number of the newer software development methods.

I'd like to talk about the other end of the spectrum, though.

What happens if you've got a collection of code that doesn't have a good collection of tests? What happens if the code was written without thinking about testability?

It can become extremely hard to add tests to code like this after the fact.

I carefully phrased the problem so that you don't have to admit to writing the code yourself. I've written code like this, though. I'm guilty.

I've also recovered successfully from poorly tested code. What does it take to make this recovery?

Unit tests

Unit tests drive individual functions within a specific module. A good set of unit tests will help improve your API's reliability.

If you wrote the module with testing in mind, then you'd have a small executable that would load the module and run the tests. This could be done fairly quickly, so you could incorporate the testing into your build process.
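For concreteness, such a test can be as small as this. (NUnit-style attributes shown; PriceFormatter is a made-up module standing in for whatever you actually need to test.)

using NUnit.Framework;

[TestFixture]
public class PriceFormatterTests {
  [Test]
  public void Format_AddsDollarSignAndTwoDecimals() {
    Assert.AreEqual("$3.50", PriceFormatter.Format(3.5m));
  }
}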

If you didn't write the module to be tested, then there's a chance that the module depends heavily on the application's setup and shutdown routines. I hope that's not the case, because, if it is, you'll either have to refactor a lot of the setup code or you'll have to run your unit tests within the application.

The latter option means that, in order to run your unit tests, you've got to startup the entire application and then go into a mode where tests are run. Assuming the application startup time is significant, this will slow down unit testing and make it happen less often. Avoid this if it's possible.

If you can create a light weight process that can load the module to be tested, then you'll only be limited by the amount of time you can spend on tests.

Well... That's not quite true, actually.

In theory, you can write some unit tests and incorporate those tests into your build process, but you originally wrote that module in a corporate culture that didn't require unit tests.

I'll talk about that problem at the end of this post. It's important.

How Many Unit Tests Are Needed?

How many tests should be added right away? This is going to depend on the project's goals. You won't be able to get to the point where you can make changes with confidence until you have a mostly complete set of tests. Getting a complete set of tests will be a lot of work, however. The good thing is that you'll probably fix a bunch of bugs while you create the tests.

If you don't trust your code and you have an infinite amount of time, I'd recommend going all out and creating a complete set of unit tests. This will fix a bunch of problems with the code, and it will force you to understand the code a lot better. In the end, you'll have a lot more confidence in the code.

Of course, time constraints are likely to prevent you from spending that much time writing new unit tests. What's a more realistic number of tests to add?

If you're adding tests late in development, it's probably because there's a problem that needs to be solved. If the problems come from a small number of modules, then these would be a great place to start.

Having a small number of modules with tests is still extremely useful. It will increase your confidence in those modules, and it will allow you to change those modules more easily.

System Testing

System tests use the entire application or almost the entire application to test a variety of use cases. Generally, these are black box tests. This means that you don't worry about what code you're testing. Instead, you test from the user's point of view. A system test will answer questions about what a user can and can't do.

System tests are often easier to insert into an application that wasn't designed for testing.

There are several ways you can create these.

First, you can use a tool that drives your application through its UI. There are a variety of tools out there. They can click on buttons and menu items. I'm not familiar with any of them, so I won't make any recommendations.

The tools that I've seen strongly couple your tests to your user interface. This might be good if you want to test the details of your UI, but generally you'd rather have tests that are easy to maintain.

If you have a set of stable keyboard shortcuts, then you could use a tool like AutoIt or the Win32 keybd_event function to drive the program. I'd prefer this over something that looks for controls on the screen and sends mouse clicks to them, but I may be too conservative here.
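Here's a sketch of the keybd_event route from C#. The Ctrl+S shortcut is just an example, and an AutoIt script would look quite different:

using System;
using System.Runtime.InteropServices;

static class KeyboardDriver {
  [DllImport("user32.dll")]
  static extern void keybd_event(byte bVk, byte bScan, uint dwFlags, UIntPtr dwExtraInfo);

  const uint KEYEVENTF_KEYUP = 0x0002;
  const byte VK_CONTROL = 0x11;
  const byte VK_S = 0x53;

  // Sends Ctrl+S to whatever window currently has the focus.
  public static void SendCtrlS() {
    keybd_event(VK_CONTROL, 0, 0, UIntPtr.Zero);
    keybd_event(VK_S, 0, 0, UIntPtr.Zero);
    keybd_event(VK_S, 0, KEYEVENTF_KEYUP, UIntPtr.Zero);
    keybd_event(VK_CONTROL, 0, KEYEVENTF_KEYUP, UIntPtr.Zero);
  }
}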

A further improvement is to use a macro capability that's already built into the application.

This bypasses the user interface. You might consider that a problem, but a macro language is likely to be a lot more stable than the layout of menus and dialog boxes, so the tests themselves are likely to be significantly easier to maintain.

Besides sending in inputs, you'll also have to verify the program's output. Saved files, the clipboard, or scanning log outputs can all be used effectively.
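For example, scanning a log for evidence that a use case finished can be as simple as this. The log path and message are placeholders for whatever your application actually writes:

using System.IO;
using System.Linq;

static class LogChecks {
  // Returns true if the application logged a successful save at some point.
  public static bool SaveCompleted(string logPath) {
    return File.ReadAllLines(logPath).Any(line => line.Contains("Document saved"));
  }
}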

System tests are significantly more complex to set up than unit tests. Furthermore, they take a lot longer to run.

Because they take so long to run, it is unlikely that you'll ever convince your team to run these as part of their build process. Instead, you're likely to have a separate machine or set of machines set up somewhere to run the tests. That's what we do here at PC-Doctor for unit tests.

It's easy to make system tests take so long to run that they become unwieldy. Try to focus your tests on individual use cases. Decide how long you'd like your test run to take, and keep that budget in mind.

Of course, you can always add more machines to your testing farm, but this might not happen until you convince your workplace that they're useful.

A Culture of Testing

A culture that thinks that unit tests are a waste of time isn't going to want to add tests to their build process. They aren't going to want you to spend the time to build the tests, either.

Changing corporate culture is a complex topic that will have to be in a different post. It's hard. I don't recommend it.

Instead, I'd hope that the project that you're adding testing to needs it badly. In that case, the benefits of testing should be large enough that people start to notice.

Don't expect everyone to suddenly run all of your unit tests when they compile your modules. Instead, use your work to start a discussion about it.

Examples

I'd also love to give some examples. One project that I worked on many years ago went through the whole process. World of Warcraft desperately needs to go through the process. (It annoys me how few tests they've got.) I'd like to talk about both.

I'd also like to keep this post relatively short, however. I'll wait until part 2 to discuss specific examples.

Thursday, June 26, 2008

What's this Blog About?

I just switched from my company's official blog to my own blog. That's kind of exciting, but it means that absolutely no one knows about this blog. (Almost no one knew about our company blog. It was mostly employees.)

That's great! It lets me talk to myself for a bit while I try to organize the blog a bit. :)

I want to answer a simple question first. What the heck is this blog about?

Well, it's about things that interest me. However, I'm the only one who's going to visit a blog about that. (Maybe my wife? Cindy? Are you there?) Okay, so what interests me? Perhaps if people got a clear idea of what I talked about, then they'd know if they wanted to come here.

I like talking about problems that I've had. In the case of this particular post, it's about a problem that I've got right now. Many times, however, I've solved the problem already.

No one's going to read a blog with that little direction, though.

I can say that I'm a big fan of unusual solutions to problems. This is true in my life, but, if I find an unusual solution that works for me, I'm likely to write about it. You probably see a lot more unusual solutions on this blog than I typically come up with in my life. However, not all of my posts have involved unusual solutions to problems. That's okay. There's more than a few posts.

The title of my blog uses the word Programming. I do a lot of that. Programming is a great place to find unusual solutions to a wide variety of problems, too. One of the great things about diverging from our official blog is that I can talk about anything I want to.

I'm guessing that it'll end up being a blog about unusual solutions to problems in programming and game design. That's way too verbose.

Fred on Programming is wonderfully vague. I'll stick with that.

Friday, June 13, 2008

Why I'm Creating an Independent Blog

I'm separating my personal blog from PC-Doctor's blog. This isn't an uncommon thing to do, but I suspect that some people will wonder if something went wrong. Nothing did. Instead, I expect this to benefit everyone. Read on to find out how.

I'll continue to contribute to PC-Doctor's blog, but I will actively market only the new blog. Since neither I nor anyone else currently does any marketing for our official blog, this may actually help out the official blog. Furthermore, I'll be able to talk about more than just programming here.

Why make such a sudden shift? Chris Keller, our online marketing guy, convinced me that a blog should be more than just a fun place to write down your thoughts. In part, he used Avinash Kaushik's great post here to convince me. He also showed me his own blog. However, I've been thinking about doing the same thing for other reasons.

This raises several interesting questions. First, why didn't I post links to my blog posts around the internet to try to get people to notice PC-Doctor's blog? You can rephrase the question slightly and ask why it's better for me to market a blog that's separate from PC-Doctor.

The fundamental problem is that the official blog is not my blog. It has, and has to have, a different purpose than a personal blog. I might occasionally write about things like video game design on the official blog, but I always made it relevant to our software. Running trails around Reno and what I might do if a bunch of 12 year olds start shooting guns at the trail in front of me really doesn't fit at all on a corporate blog. It might fit here just fine.

In addition, because it's a blog about my stories, I might not mind advertising it on forums or other blogs. The reward for doing that marketing is that this blog has, over the course of several years, the chance of getting a following. Except for Andy, the PC-Doctor blog does not have much of a following. An increased number of people posting in my comments would be a large reward for me. This will require me to market my blog, and it's not entirely clear that I'll have the energy to do much of that, but I certainly didn't want to do it when it wasn't owned by me.

Should PC-Doctor change something about their blog to make it easier for people like me to stay there? No, they shouldn't. PC-Doctor's blog right now is mostly a bunch of posts by me. Customers coming to my posts will see that PC-Doctor hires at least one person who thinks on their own, and potential employees who come to my posts will see a confusing assortment of topics that may interest some of them. However, no one will buy our products because of my posts.

In fact, if you dig a bit deeper, there are some valuable posts there by other people. Kim Seymour wrote a great article that customers probably would be interested in. (Actually, I liked it, too. You should click on it if you have any interest at all in what PC-Doctor does.) Aki has written some fantastic articles, mostly about non-PC hardware. They've got interesting stuff to say that's relevant to our customers, and my posts hide what they're saying.

Hopefully, we'll do something to encourage other employees to write more. Currently, employees are supposed to write on their own time with almost no reward. Some incentive, either monetary or otherwise, might help out. When we first started out, PC-Doctor gave away a Wii for the most prolific poster. Chris Hill won it, and I was jealous. :) They've got another prize up for grabs now as well, but it hasn't been as effective.

Writing a blog has turned out to be fun for me. I'm already convinced. It's been a great way to organize my thoughts about a topic, and I've learned quite a bit while doing it. I expect to continue to have fun in its new location.

I should point out that Doug van Aman, our marketing lead, has been nice enough to get me the copyright for the existing posts that I made. (Thanks, Doug!) I copied them all over verbatim. Without my existing library of posts, it would be much harder to do this.

Tuesday, June 3, 2008

Enums in C++ Suck

Like most strongly typed languages, C++ has a way to group a set of constants together as their own type called enums. Enums are extremely useful in a wide variety of circumstances. However, enums in C++ have a lot of problems, and, in fact, they're really a mess. I'm certainly not the only person to complain about this, either.

Enums don't fit in with the rest of the language. They feel like something that was tacked onto the language to me. This is purely an aesthetic issue, and the fact that they're useful in a wide variety of circumstances probably negates this.

More practically, you can't control the conversion of the enum to and from integers. For example, you can use the less than operator to compare an enum and an integer without using a cast. This can result in accidental conversions that don't make sense.

Perhaps the worst problem is the scope of the constants defined by the enum. They are enclosed in the same scope as the enum itself. I've seen a lot of code where people prepend an abbreviation of the enum's type to each of the enum's constants to avoid this problem. Adding the type to the name of a constant is always a good sign that something bad is happening.

In addition, you can't decide ahead of time what the size of your enum's type is. C++ normally tries to give the programmer as much control as possible. In the case of enums, though, the compiler gets to store your enum in whatever underlying type it wants. Frequently, this doesn't matter, but when it does matter, you'll end up copying the value into an integer type that's less expressive than the enum.

After the break, I'll explain what other languages are doing about it, what the next iteration of the C++ standard will do about it, and what you can do about it now.

I'll use a simple example for this discussion:

enum Shape {
Circle, Triangle, Square
};
bool shouldBeAnError = Circle < 0;

C# has enums that behave a lot closer to what I'd like. This is what they look like:

enum Shape {
Circle, Triangle, Square
}
// Note the wonderfully ugly conversion and the need to explicitly
// say that circles are shapes.
bool isALotMoreExplicit = Shape.Circle < (Shape)0;

I'm not the first person to notice this problem, obviously. The next iteration of C++, C++0x, is going to add a much safer enum. This is what it'll look like:

enum class Shape
: int
{
Circle, Triangle, Square
};
// Note that we have to explicitly say that Circle is a Shape. That's great.
// The current standards document doesn't say how I can convert an int
// to the enum, though. I'll see if I can post a comment on that...
bool isALotMoreExplicit = Shape::Circle < (Shape)0;

It's actually not so hard to do a lot of that yourself if you throw out the idea of using enums. For example, at home, I use something that looks a bit like this:

DECLARE_ENUM( Shape, int, (Circle)(Triangle)(Square) );
bool isAsExplicitAsIWant = Shape::Circle < Shape(0);

With only a bit of preprocessor magic, you can do the same thing.

Here at PC-Doctor, we define our enums in an XML file and use a code generation step to create the enum. The end result is the same as the preprocessor method: the enum can do whatever you want it to do.

While standard C++ enums are relics of C, lack most of the safety that C++ programmers are used to, and have a variety of other problems, there are ways around the problem. C++0x will add another alternative, but it lacks the flexibility of a home grown solution. You may decide to stick with your own solution even after your compiler starts supporting the new enums.

This post was originally published on the PC-Doctor blog.

Monday, May 26, 2008

A Theory of Scheduling Low Priority Work

PC-Doctor delivers an enormous number of different products to different customers. Each customer gets a different product, and they get frequent updates to that product as well. Delivering these products requires complex synchronization between dozens of engineers. We've gotten great at scheduling the most important work. Our clients love us for that.

However, the low priority projects get released significantly less reliably. Until recently, I'd assumed that this problem was unique to PC-Doctor. Based on some extremely sketchy evidence from another company, I'm going to release my Theory Of Scheduling Low priOrity Work (TOSLOW).
Let's suppose that we've got a project (L) that is not as important as a set of other projects (H). Here at PC-Doctor, we like to deliver our important projects on time. In order to do that, we often have to drop what we're doing to get something done on a project that needs work now. That means that someone who's in the critical path for completing a project in H will not be able to do any work on L. Things may be somewhat more extreme here at PC-Doctor than they are in a typical company, but I suspect that L will always have trouble causing a delay in H.

Now, it's possible to get work done on L. For example, we could hire someone just to work on a specific project in hopes that it will, in time, start making money. That'd be great. You'd manage to get 100% of a small number of people's time for your low priority project. The trick is that the people working on L do not have any responsibilities that are needed by the high priority projects so they can be scheduled independently of H.

Until discovering TOSLOW, I'd assumed that this would mean that, eventually, the project would reach completion. The people working on L might not be perfect for each task that needs to be accomplished, but they can do each task to some extent. They're devoting all their time and energy to that project, so eventually they'll learn what they need and get to the next step.

If that assumption is correct, then L will get accomplished. Furthermore, it is likely that L can even be scheduled accurately. I've never seen this happen here at PC-Doctor, though.

Here's why. If a project requires interactions with a large number of systems that are being used by the projects in H, then the person working on the low priority project will have to get some resources from the person in charge of each of those systems.

There's a chance that those resources will be obtainable. The low priority project's schedule will be determined by the least obtainable resource. In principle, you'll always get the resource you need eventually. If one person is always the rate limiting step for H, then something should be changed to improve the scheduling of H.

However, even if we can say that L will eventually complete, if we want to schedule L accurately, we will have to be able to predict when we'll get time from each resource being used by H. In order to make this prediction, you'll have to understand the schedule of H. Project L will have to wait until resources being used by H are available. Here at PC-Doctor, this is particularly bad. Engineers working on our main projects tend to work closely together. That means that a large number of them are working on the critical path. In other words, getting an engineer to work on L requires H to be unable to use that engineer. Perfect scheduling is not possible, so this happens frequently. However, this means that L's schedule is coupled to the errors in H's schedule!

It's possible that I'm a project scheduling wimp, but I'm going to claim that, as long as the schedule of L is tightly coupled to errors in an unrelated project's schedule, then you shouldn't even try to schedule L. In the worst case, you should just say that the project will eventually reach completion, but you have no idea when it will be.

TOSLOW can be summarized in this way: If a project is low enough priority that it cannot preempt another project's resources, but it still requires some of those resources, then the error in the low priority project's schedule is going to be large.

An important corollary to TOSLOW is that low priority projects will always be late. Errors in scheduling almost always cause things to be late rather than early!

Okay... If you're working on a low priority project like the ones that I've described, then I haven't really helped you. All I've done is give your boss a reason to kill your project. :-( How can you avoid the effects of TOSLOW?

Just being aware of the problem will put you ahead of where I was on a low priority project that I've done for PC-Doctor. If you're aware of it, then you can start work on the stuff that requires interaction with higher priority projects immediately. In fact, I'd say that, as long as you've proven to yourself that your project is somewhat possible, you should spend all of your time on the interactions. After all, pretty soon you'll be waiting for resources and can spend some time on the meat of your project.

Recognizing the problem helps in another way as well. If your boss is aware of TOSLOW when the project starts, then you may be able to get your project's priority temporarily raised when it needs some help. This is exactly what the Windows kernel does to avoid thread starvation. (The reason for this is actually to avoid priority inversion, but it's got nice side effects for low priority threads as well.) If a thread doesn't get scheduled for a while, then its priority will get a limited boost. That's what you need to ensure your project doesn't end up waiting indefinitely like this one.

This post was originally published on the PC-Doctor blog.

Wednesday, May 21, 2008

Making Regexes Readable

Regular expressions are extremely powerful. They have a tendency, however, to grow and turn into unreadable messes. What have people done to try to tame them?

Perl is often on the forefront of regex technology. It allows multiline regexes with ignored whitespace and comments. That's nice, and it's a great step in the right direction. If your regex grows much more than that example, then you'll still have a mess.

What is it that makes large programs readable? More than anything, subroutines do it. I really want to be able to create something analogous to subroutines in my regex. I'd like to be able to create a small, understandable regex that defines part of a complicated regex. Then I'd like the complex regex to be able to refer to the smaller one by name.

Once again, we can look at Perl. Well, we can almost look to Perl. Perl lets you define something called an overloaded constant. It looks as though these can define things like a new escape sequence that's usable in a regex. I won't claim that I understand it, but this page talks about it some. It seems to do the right thing, but I can't find many people who use it, so it must have problems. I'm going to guess that a new escape sequence ends up visible to all regular expressions in the program. That would make it awkward to use safely.

Python, Ruby, and .NET don't have the features that I'd want. They tend to have fairly conventional regex libraries, however. It looks as though I'll have to look elsewhere.

Boost.Xpressive takes a completely different approach to regular expressions. This is an impressive C++ expression template library written by Eric Niebler. It allows you to create conventional regexes. It also allows a completely different approach, however.

This approach goes a long way towards making complex regexes readable, but it's not without problems.

Here's an example: /\$\d+\.\d\d/ is a Ruby regular expression to match dollar amounts such as "$3.12". It's a very simple regex, and a static xpressive regex gets a lot more verbose:

sregex dollars = '$' >> +_d >> '.' >> _d >> _d;

Remember, this is C++. A lot of the operators that conventional regexes use aren't available. For example, a prefix + operator is used instead of a postfix one. C++ also has no way to express sequencing by simply writing one sub-expression after another, so >> takes its place. The result is a fairly messy syntax.

However, you can do some really great things with this. You can, for example, use a regex inside another regex.

sregex yen = +_d >> '¥';
sregex currency = dollars | yen;

You can start to see that, while simple regexes are worse looking, the ability to combine individual, named regexes together allows complex regexes to look much cleaner.

I'm not convinced that Boost.Xpressive is the answer. C++'s limitations show through the library's API too easily. However, if I ever have to create an extremely complex regex that will require extensive maintenance later, I'm unaware of any viable alternatives.

Ideally, some other language will take this idea and make it cleaner.

This post was originally published on the PC-Doctor blog.

Monday, May 12, 2008

Anonymous Methods in C# Cause Subtle Programming Errors.

Lambda expressions and anonymous methods in C# are more complicated than you probably think. Microsoft points out that an incomplete understanding of them can result in "subtle programming errors". After running into exactly that, I'd agree. While I haven't tried it, lambda expressions in C# 3 are supposed to behave exactly the same way.

Here's some code that didn't do what I'd originally thought it would do:

class Program {
  delegate void TestDelegate();

  static void Test( int v ) {
    System.Console.WriteLine(v.ToString());
  }

  static TestDelegate CreateDelegate() {
    int test = 0;
    TestDelegate a = delegate(){ Test(test); };
    test = 2;
    return a;
  }

  static void Main() {
    CreateDelegate()();
  }
}

This prints 2. This is not because of boxing. In fact, exactly the same thing happens if you replace the int with a reference type.

Instead, the compiler hoists all of the captured variables into a hidden, compiler-generated class. Every reference to those variables, inside and outside the anonymous method, is rewritten to point at an instance of that class, so the second assignment to the test variable actually modifies a member of that instance.

Here's roughly what it looks like:

class Program
{
  delegate void TestDelegate();

  static void Test( int v ) {
    System.Console.WriteLine(v.ToString());
  }

  class __AnonymousClass {
    public int test;
    public void __AnonymousMethod() { Test(this.test); }
  }

  static TestDelegate CreateDelegate() {
    __AnonymousClass __local = new __AnonymousClass();
    __local.test = 0;
    TestDelegate a = __local.__AnonymousMethod;
    __local.test = 2;

    return a;
  }

  static void Main() {
    CreateDelegate()();
  }
}

Anything starting with a few underscores is a compiler-generated name; the actual names the compiler uses are different.

Here's the catch. The local variable no longer exists. The variable you thought was local is now located inside an object created to hold your anonymous method.

Interestingly, Microsoft's documentation stops there, but there's more to the story. For example, it's possible for two anonymous methods to reference the same local variable. It looks as though that variable is shared between the two anonymous method objects, although someone willing to disassemble the compiled code should confirm exactly how.
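
Here's a small example of my own that shows the observable behavior. It prints 2, so both delegates are clearly looking at the same storage, however the compiler arranges it:

class SharedCaptureExample {
  delegate void Proc();

  static void Main() {
    int counter = 0;
    Proc increment = delegate() { counter++; };
    Proc print = delegate() { System.Console.WriteLine(counter); };

    increment();
    increment();
    print();   // prints 2: both delegates see the same captured counter
  }
}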

You really do need to know about this behavior. The problem would disappear if anonymous methods could only read local variables; then a copy of the value could be captured instead. In the meantime, the usual workaround is to capture a dedicated copy yourself.
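
Replacing CreateDelegate in the original program with this version makes it print 0 (the extra local is my own addition):

  static TestDelegate CreateDelegate() {
    int test = 0;
    int copy = test;                            // capture a dedicated copy
    TestDelegate a = delegate(){ Test(copy); };
    test = 2;                                   // no longer affects the delegate
    return a;
  }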

This post was originally published on the PC-Doctor blog.

Monday, May 5, 2008

Developing a New Framework

This post is a bit of a change for me. I'm actually going to write about my work for PC-Doctor! I'm a bit embarrassed at how rare that's been.

I want to talk about how to design a brand new framework. It's not something that everyone has to do, and it's not something that anyone does frequently. However, there's very little information on the web about the differences between creating a library and creating a framework.

I've been working on a framework here at PC-Doctor, and I've worked on a few others in a previous job. I'll admit I'm assuming that what these projects had in common will hold for any new framework.

There are three things that I want to see from any new framework project; beyond those, it gets harder to generalize. The first framework I developed had all three mostly by accident. My current framework is on a tighter deadline than the previous one, but it still has all three to some extent. I'm a strong believer in all of them.

Goals


All of the frameworks that I've created try to do something new. This makes requirements gathering extremely difficult. If no one understands what the framework will accomplish, you almost have to decide on your own what it'll accomplish. You'd better be right, though!

Management will typically have a short term goal that they want accomplished with the framework. They might even be able to come up with a set of requirements. Don't get trapped by this. A framework must be much more than that. It gives programmers a common language for describing a class of application, and learning that language requires a significant time investment. If the framework only solves a short term goal, programmers won't be able to justify that investment. Your framework has to solve a significant problem to be worth the investment required.

In another sense, however, requirements gathering for a framework is easy. You can get some details of the requirements wrong without damaging your users' ability to get what they need out of it. After all, they're writing code against the framework; they can always write a bit more code than they should and still get things working. Later iterations can use that experience to refine the design, though that feedback won't arrive until late in the framework's development.

Instead of trying to find formal requirements, I prefer to find something that I call "goals". These are closer to a set of use cases than requirements, but they're rephrased so that they look like requirements.

After developing a framework in my previous job, I saw how critical these goals were in the framework development. A good set of goals can let you make decisions quickly and accurately about design problems. If the goals are relatively small and simple, then they can be applied uniformly and accurately throughout the life of the project. That means that you're likely to fulfill the goals.

As an example, I'll give you a few of the goals for my current framework:

Goal: Relatively inexperienced developers should be able to use the framework to do somewhat sophisticated things.

This goal has driven a large fraction of my decisions on my current project. In my vision of the future, there will be hundreds of mini-applications using this framework. Having an enormous number of these applications would allow us to do some really amazing things, but that's simply not possible if I have to write them all. In fact, that's not possible if PC-Doctor engineers have to write them all.

This goal is designed to allow us to recruit developers who are more interested in the problems that the framework can solve than the techniques required to write code with it.

If this goal were a requirement, it would state something about the usability of the framework. Perhaps it would say how long a typical programmer would take to develop their first application with it. In its current form, it's almost completely useless to our QA department. That's not what it's for.

Goal: The appearance of the UI elements created with this framework should be directly modifiable by Chris Hill, our art department.

Again, this is me recruiting other people to do my work. :-) Stuff that Chris creates looks about a hundred times better than stuff that I create. Looking good is an important goal for us since we want our product to be fun to use rather than merely possible to use.

This is a better requirement than the previous one. This can be verified directly.

However, it turned out not to be that useful a goal. It confirmed some design decisions that were made for other reasons, but it hasn't driven many decisions directly. That might be a good thing, actually: if the goals work well together, they shouldn't have to conflict.

Goal: A future iteration of the framework should be portable to a variety of other platforms.

This is a good example of a goal that mostly gets ignored. The architecture does indeed support Linux, and a lot of the code should port easily to other platforms. However, it's hard to pay attention to a goal that isn't needed in the next release. PC-Doctor has some tight deadlines; we don't get to develop frameworks out of sight of our paying customers.

Not all goals have equal importance, and not all goals are actually useful. I don't consider this a failure, yet. Try to have as small a set of goals as possible. The more you have, the more difficult it will be to accomplish all of them simultaneously.

Okay, I've got a few other goals, too, but that gives you the idea. These are extremely high level goals. You could call them requirements, but that would be stretching things a bit. They really aren't that formal.

These goals are extremely important to the project. Choosing the wrong ones can kill the chance of success. Choosing the right ones will make design decisions extremely easy.

Usability


The next thing to worry about is the users of the framework and its usability for those users. I've talked about this before. The things I say in that post are even more valid for a framework than they are for a library. Go ahead and read that. I'll wait.

In my current project, I've got two types of users to think about. The first are mentioned in my goals. These are the developers who will write mini-applications with it. The second group are my coworkers who will help me maintain it.

Unfortunately, this framework has ended up putting the two groups in conflict. I frequently add complexity to the framework's internals in order to reduce the complexity of the API. In fact, I'll go to great lengths to simplify the API slightly, even when it means adding half a dozen states to the framework's state machine.

It's still too early to tell if this will be a success. However, there are some preliminary indications:

1. Our first client to see the early results of the framework liked it and used it enough to have a lot of feedback for us.

2. Stephen, the product manager in charge of the first product, is currently busy writing a mini-application to test an optical drive. He doesn't complain much anymore. (I need to get him to complain more, actually. He's my only usability tester!)

3. Soumita, the only other programmer to actually dig into the framework so far, complains loudly. While I feel bad for her, making the internals simple wasn't one of my goals. I'm a bit worried now that it should have been, though.

To summarize, the UI of a framework isn't any different from the UI of an application. Use the same techniques to improve it, and above all, take usability seriously. Frameworks are complicated and require a significant investment before anyone can use them seriously. People won't make that investment if the framework isn't easy to use.

Tools


You want to make your framework easy to use. You can do that by making a nice, clean API, or you can do it by making tools that allow users to ignore the ugly parts of your API. Both possibilities should be considered.

Boost and XAML are two frameworks that take this principle to opposite extremes. It's worth looking at both.

Boost has a wonderfully clean API. The tools that they've created suck. (Boost.Jam and BoostBook are horrific messes that make me cry.) The framework itself is a joy to use because you don't frequently touch their tools. This is a valid approach to framework design, but it's not the only approach.

Microsoft's XAML, for example, is the complete opposite. XAML is completely unreadable and extraordinarily difficult to use by itself. XAML data files are as readable as object files. However, Microsoft doesn't want you to use it by itself. They created a set of tools that let you completely bypass the XML obfuscation that XAML requires. The tools themselves are clean and easy to use. Again, this is a valid way to approach framework design.

I prefer something in between, though. Make sure there are some tools to help users deal with the worst parts of your framework. At the same time, make the framework itself clean. Solve all aspects of the usability problem using the most effective tool for the problem.

For my current project, I didn't have time to create any tools. However, I did manage to make a lot of the code that users write editable with standard CSS and XHTML tools. There are a lot of great tools for web development; all I had to do was let my users take advantage of them. The jury is still out on that decision, but I'm optimistic.

This originally appeared on PC-Doctor's blog.

Sunday, April 27, 2008

C++0x: The Lambda Expression Debate

The next C++ standard (C++0x) will include lambda expressions. N2550 introduces them. It's a short document, and it's not too painful to read. Go ahead and click it.

Like many new C++ features, it's not clear yet how this one is going to be used. Michael Feathers has already decided not to use them. At least one other person seems to mostly agree. I, on the other hand, am with Herb Sutter, who seems excited enough about the feature to imply that MSVC10 will support it. This is going to be a great feature. Incidentally, Sutter has mentioned an addition to C++/CLI in the past that would add less sophisticated lambda support for concurrency. I suspect he's serious about adding that support soon.

There have been many times when I've desperately wanted to avoid defining a one-off functor or function in my code. In fact, there have been times when I've been desperate enough to actually use Boost.Lambda! This standard is a clear win over Boost's attempts to deal with the limitations of C++03.

It's worth showing you what's possible. I'm going to steal code from other people's blogs since I don't have a compiler to test code. Here is Sutter's example:

find_if( w.begin(), w.end(),
[]( Widget& w ) { return w.Weight() > 100; } );

This has more punctuation than those earlier bloggers wanted to see. However, I'd call it extremely clean. It's the normal STL find_if algorithm with a lambda expression to create the functor. The initial [] declares the lambda's capture list, in case you want a closure; I'll talk about that later.
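
For comparison, here's my own rough sketch of the same filter in C# 3 syntax (I haven't compiled it, and the Widget class with a Weight property is just an assumption for the example). It needs noticeably less punctuation, which is presumably what those bloggers were hoping for:

using System.Collections.Generic;
using System.Linq;

class Widget {
  public int Weight { get; set; }   // hypothetical Widget with a Weight property
}

class Example {
  static IEnumerable<Widget> HeavyWidgets(IEnumerable<Widget> widgets) {
    // The same filter as the find_if call above.
    return widgets.Where(w => w.Weight > 100);
  }
}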

If you haven't been following the decltype/auto proposals in C++0x, you may be surprised to see that you don't have to declare the return type of the functor. As a side note, the auto keyword will let the compiler deduce the type of a variable from its initializer. Expression template libraries are going to change a lot once that becomes possible:

using namespace boost::xpressive;
auto crazyRegexToMatchDollars = '$' >> +_d >> '.' >> _d >> _d;

Suddenly, you can avoid stuffing your complicated expression types into wrapper types that erase most of the type information.

That's another subject, though. The short version is that the compiler figures out the return type of your lambda expression based on the type of the expression that gets returned.

The capture list in the original lambda example is worth talking about. There are several options here. My favorite is the one Herb Sutter used: an empty list. This means that there are no captured variables in the closure, and this seems like a great default. In this case, the lambda expression cannot access any variables stored on the stack outside of the lambda declaration.

If the empty square brackets are replaced with [&] or [=], then any local variable the lambda body mentions is captured automatically: [=] copies the variable into the functor that the lambda expression creates, while [&] stores a reference to it. This level of automation may make some sense for extremely small lambda expressions.

It's great that the fully automatic behavior is optional. If the original [] is used, then you'll get a compiler error if you accidentally access a variable that may go out of scope before your functor is destroyed. The empty capture list appears to be a great safety mechanism. [=] is a good second choice: all captures are copy constructed into the functor, and because everything gets copied, it should be safe from lifetime issues. [&] stores references to the enclosing scope's variables in the functor and should probably be used with caution.

If you really need to write to a captured variable, you can list it explicitly in the capture list with a & in front of it. That way, anyone reading the code knows exactly which variables have to outlast the functor created by the lambda expression. Ironically, I'd go with the opposite of some of those earlier blogs and guess that [&] should be avoided as much as possible. (They seemed to think that storing references to unspecified captures should be done by everyone, all the time. C# does this, but it also has garbage collection to help out.)

So far, this extension looks great! I can imagine a lot of different ways to use different capture sets. It looks like a succinct way to make a variety of commonly needed functors, and I'm a fan of the safety that it allows. It should get people to make a lot more functors.

I'm really looking forward to n2550 going into compilers. I'll be watching GCC closely to see when they add reliable support for it, and I may stop using MSVC at home as soon as they do.

This originally appeared on PC-Doctor's blog.

Wednesday, April 23, 2008

The Next JavaScript...

ECMAScript 4.0 (ES4) is on its way. This will be the next standard for JavaScript. It's not going to be usable on web pages for a while, though. In fact, I suspect I won't be using it on my web page for at least 5 years. The problem is simple: as long as people still use older browsers, you won't be able to assume that people have it.

However, the features in it are an interesting look at what the standards committee thinks is wrong with the current JavaScript. This is not a minor patch release; it's a dramatic overhaul of the current JavaScript (ES3). They have fixed a lot of minor things that are simply broken in ES3, and those changes are certainly interesting, but today I'm going to talk about their major focus: making it easier to develop large applications in JavaScript. Clearly, the committee understands that people are starting to develop large applications for web browsers, and they feel that there are problems with the currently available technologies for this.

I don't have any experience with developing what I'd call a large JavaScript application, but we are starting to develop an extension of PC-Doctor for Windows that uses JavaScript in numerous places to control its behavior. In my dreams, I imagine that it will eventually become a large application made up of plugins that run on JavaScript.

Let's go through the major features that the committee thinks I'll need as our technology gets bigger...

Classes and conventional OOP. I thought this was interesting. They're adding conventional classes as an alternative to the prototype based inheritance that ES3 supports. This comes with a bunch of normal OOP things like virtual functions, getter and setter functions for "virtual properties", interfaces, inheritance, etc.

I can certainly understand why they wanted this. Type safety is greatly enhanced if you can know exactly what type something has. ES4 makes it possible to create a "conventional" class whose members look a lot like Java's and C#'s. You can't write or delete arbitrary properties on one of these classes. With ES4 and a properly written program, the language can detect that you're doing something unintended to such an object and fail at runtime.
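
For comparison, here's a toy C# class of my own that shows what "conventional" buys you: the member list is fixed at compile time, and a property's setter can validate writes instead of letting callers attach or scribble on arbitrary fields.

class Account {
  // The set of members is fixed when the class is compiled. You can't
  // add or delete properties on an instance the way you can with an
  // ES3 object.
  private decimal balance;

  public decimal Balance {
    get { return balance; }          // getter for a "virtual property"
    set {
      if (value < 0) {
        throw new System.ArgumentOutOfRangeException("value");
      }
      balance = value;               // setter can validate writes
    }
  }
}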

Type safety. The new class system allows values to actually have a strict type. Most user created types in ES3 are simply Objects with certain properties and a specific value of the prototype member. That's close to a type, but it's only an approximation since it's possible to change it with some member assignments. The new classes will have a strict type that cannot be changed.

This is bigger than it sounds. It means that code can require certain types in certain variables, and that a member function will remain a member function for the lifetime of its object. The runtime engine can enforce this, too. The standard doesn't hide the fact that these extra runtime checks will slow things down; they bring this up clearly, and they don't apologize for it.

This is called type safety, and it's something that I've been advocating for a while. The committee wanted it badly enough to be willing to slow programs down for it. They go beyond even that, though.

They've introduced another mode of execution called "strict mode". In this optional mode, the execution engine is supposed to perform some static analysis of the program before it begins. The engine is supposed to fail when a variety of illegal things are possible in the code. For example:

1. Writing to a const property.
2. Calling functions with the wrong number of arguments.
3. Referencing properties of class objects that don't exist.
4. Failing a type check.

The great thing is that all of this happens before the code is executed. It's done entirely with static analysis. If anyone ever bothers to implement this mode (it's optional), then JavaScript is going to become my new favorite scripting language!

I should point out that there are few guarantees about strict mode. Different browsers are going to check for different things in strict mode. Therefore, it is only useful in development. I predict that developers will run the most strict web browser on their development machines and publish without strict mode turned on.

I'm going to claim that this is strong support for my work on a Lua static analysis program. They are coming extremely close to saying that writing large applications in a scripting language requires strong type checking, and that static analysis can relieve the runtime of some of that burden. Since Lua has no static type checking at all, it needs external static analysis just as badly.

Yeah. I'm going to claim that many of my previous rants are backed up by this committee!

Function overloading. Function overloading is a trick that's only available to languages that know the type of a value. ES4's version is slower than Java's because the dispatch has to happen at runtime. The important thing, however, is that it's possible at all.

Functions are values in JavaScript, just like strings or numbers. You might ask how it's possible to assign more than one function to a single variable name. ES4 does it in two ways.

The first is something they call a generic function object. This is an object that can hold more than one function in it. It exists as a single value. It's a bit clunky, but I couldn't come up with anything better.

Speaking of ugly, there's another way to overload functions. They added a switch statement that works on the type of the value. Here's the example from their overview paper:

switch type (v) {
case (s: string) { ... }
case (d: Date) { ... }
}

That looks like a pathetic attempt at overloading to me, but it could well be how generic function objects are implemented internally. It may be that this could be used to create a more flexible generic function object. However, I predict that consultants 5 years from now will make lots of money going to companies to give talks condemning the use of it.

Packages and namespaces. These are used to hide the activities of different components in an application. This is clearly lacking in ES3, even for small scale libraries and applications: library authors go to great lengths to hide as much of their code as possible from the global namespace. For the small applications I've written, those workarounds have been fine, but I can see their limitations.

Overall, I'm pleased with where ES4 is trying to go. They're trying to make it safe to build large applications with JavaScript. Given where we're seeing web applications going, this is a great thing. However, I can't imagine that it'll work anytime soon since Google Web Toolkit gives ES3 programmers compile time type safety right now.

Mostly, however, I'm pleased that the committee is trying to add type safety to an untyped language in almost exactly the same way that I would. :-)

This originally appeared on PC-Doctor's blog.

Tuesday, April 15, 2008

The Cost of Complexity

This article is going to have more questions in it than answers. It's about a problem in software development that I'm not sure I've worried about enough. I've certainly thought about it for specific cases, but this is the first time I've tried to think about the problem in general.

My main question revolves around the cost of complexity in software. There is certainly a large cost in making software more complex. Maintenance becomes more difficult. Teaching new employees about the project becomes harder. In the end, you will get fewer engineers who understand a complex project than a simple one.

Unfortunately, almost any non-refactoring work adds to the complexity of a project, and some changes add a lot of complexity in a short period of time. Adding a new library or technique to the code base, for example, means that everyone working on the project will eventually have to understand it.

What I really want to know is how much this cost of complexity can be mitigated. Aside from picking a different library in the first place, what can be done to decrease the cost? My question assumes that some complexity is essential. So, given that you're going to add a new library to the code base, what can you do to reduce what it costs you?

Make it fun?


Some types of complexity are actually fun for programmers to deal with. After all, if you didn't like learning new things, you wouldn't last long as a programmer. For example, a new library may be complicated, but it may also do some really nice things. If it's a fun library to use, is its cost of introduction reduced?

Unfortunately, fun is difficult to predict. Certainly a hot technology is going to generate more interest among engineers. For example, I'd much rather develop a new mobile application using Android than WAP. WAP is probably a lot simpler, but Android is breaking news and WAP is dying. Would I choose Android over WAP just for that? Probably not, but I suspect the cost of Android's complexity is reduced a bit by its novelty.

I looked reasonably hard for some discussion of this. I couldn't find anyone who wanted to admit that this could be a significant factor. Am I all alone here? Somehow, I doubt it. I prefer to believe that everyone ignores this.

Of course, without some measurements, it's just speculation. Hmmm... Maybe you could make some sort of measurement based on fake job ads and resume counts?

Ease of use


No one seems to think of libraries as something with usability considerations. However, an API is really just another interface that happens to be used only by programmers. If a library is easy to use, its cost will be reduced. What can be done to make a new technology easier to use?

Microsoft is pretty good at this, actually. Lots of people will complain about that statement, because Microsoft still has a long way to go. However, they put a lot of effort into making high quality tools and documentation. Heck, they even created a publishing company to produce how-to books for their libraries. They put a lot of energy and time into things that aren't the API itself. (It's too bad their APIs frequently make me want to kick kittens.)

Libraries that do this well will make themselves cheaper to use. Once a library is chosen, though, what can be done about making it easier?

When I first came to work for PC-Doctor, I was told to create a new web application using Ruby on Rails. None of us knew how to use either Ruby or Rails. It turns out that the online documentation for Rails was pretty close to miserable. (I sure hope they've fixed that!) We bought many copies of some O'Reilly books on the subject, and this helped a lot.

Spend a bit of money to educate engineers. Different people learn in different ways, though. I love to read. Andy, our current Rails expert, is a big fan of some Rails videos that he found. Find the right support for each developer and buy whatever training is required. The cost of a book is trivial compared to the cost of a developer who is forced to stumble around the internet looking for answers.

Genuine need


If everyone understands that a library really is needed, then people will be a lot more interested in learning it. This may be related to the fun factor mentioned above.

Complexity that seems like a bad idea in hindsight is frustrating. Poorly written code is a great example of unneeded complexity. I've seen firsthand what large amounts of horrible, old code can do to someone's morale. Andy, I'm thinking of you...

What did I miss?


Let me know. As I mentioned, I'm way behind in thinking seriously about this subject.

This originally appeared on PC-Doctor's blog.