Wednesday, December 31, 2008

Namespace visibility in C#

Java has package scoping. It allows an element to be visible only within the same package. It's a wonderful thing.

Here's how it works in Java:


package com.pcdoctor.mynamespace;

class Foo { ... }  // no access modifier: package-private by default


The class Foo is only visible within mynamespace.

Even though I'm not a Java programmer, this immediately strikes me as extremely useful. Frequently, helper classes are only needed by code that lives close by.

There are two reasons to want namespace visibility to be enforced by your compiler:

  1. If you can make those classes invisible outside the namespace, it will make life a lot easier for clients of that namespace. Having only the useful classes appear in IntelliSense is a big win.
  2. Having helper classes be invisible also helps construction of the component that is in the namespace. If the compiler doesn't let anyone make calls to the helper classes, then we can make much stronger assumptions about how our clients use the code.

C# does not offer any support for namespace visibility. However, there are three ways to accomplish it. None of them are perfect, and one of them is a bit bizarre.

The Microsoft Way



Microsoft expects you to make the classes internal. This prevents anyone outside of the assembly from using the class.

However, you have to make a separate assembly for each namespace that you want to do this with.

Frankly, that's painful enough that few people do it.
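
Here's a minimal sketch of what Microsoft has in mind, assuming a hypothetical assembly (call it MyNamespace.dll) that holds one public class and one helper; all of the names are made up:

// Both classes compile into the same assembly, say MyNamespace.dll.
namespace PcDoctor.MyNamespace {
  public class UsefulApi {
    public static string Describe() {
      return HelperFormatter.Format("useful");
    }
  }

  // internal: callable from anywhere inside this assembly, invisible to every other assembly.
  internal static class HelperFormatter {
    internal static string Format(string text) {
      return "[" + text + "]";
    }
  }
}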

The C++ Way



C++ also lacks namespace visibility. The folks over at Boost use a separate namespace underneath the main namespace for helper classes (and functions). They've standardized on the name "detail" for this namespace, and it works fine for problem #1. It doesn't do anything for #2, though.

This is an easy thing to do, but it doesn't buy you much in C#. C# programmers lean on using directives far more heavily than C++ programmers should lean on using declarations, and the only place to put a using directive in C# is at the top of the file. This means that, if a single function needs a namespace, the whole file gets it.

The end result is that a lot of detail namespaces end up visible at once. You'll want to avoid typing "detail" to get to an element, and that means adding a using directive for the one detail namespace you actually care about.

Even with those problems, it's probably worth doing in C#. Create a detail namespace under each namespace and put things that you'd like to have package scope in there.
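
Here's a rough sketch of that convention in C#; the Reports and Detail names are hypothetical, and nothing but convention keeps clients out of the detail namespace:

namespace PcDoctor.Reports.Detail {
  // A helper intended only for PcDoctor.Reports.  The compiler won't stop anyone from using it.
  public class ReportFormatter {
    public string Format(string body) {
      return "== " + body + " ==";
    }
  }
}

namespace PcDoctor.Reports {
  public class Report {
    public string Render() {
      // Inside PcDoctor.Reports, the sub-namespace can be reached as just "Detail".
      return new Detail.ReportFormatter().Format("quarterly numbers");
    }
  }
}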

Getting the C# Compiler to Enforce Namespace Visibility


C# does have a feature that can be used to emulate namespace visibility. This solution is a bit weird.

I should point out that it uses a feature of C# that wasn't designed for what we're going to use it for. In C++, this kind of behavior is encouraged and well supported. In C# (and Java), you're not supposed to deviate from the party line.

I'm sure Microsoft doesn't have any tests written for this behavior; I've already found one bug in the compiler from this technique.

What is a namespace? It allows many classes to be placed in the same scope even when they're stored in different files.

A partial class does the same thing, and this can be used to emulate a namespace. Partial classes were designed to allow Microsoft's code generators to create part of a class and put those portions of the code in a separate file. C#'s designer tool makes heavy use of this.

It's also very close to a namespace!

If you use a partial class as a namespace, then you can put multiple classes in different files and have some of them be invisible outside of the "namespace".

A private class inside the "namespace" is visible to other classes in the namespace, but it is not visible outside of it. A public nested class, on the other hand, is visible outside of the partial class.

All of this is enforced by the compiler, too. We get both of the benefits of namespace visibility with this technique.
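
As a sketch, here's what that looks like split across two hypothetical files; the Formatter and Report classes are invented for illustration:

// File Formatter.cs
namespace pcdoctor {
  public partial class fakeNamespace {
    // private: only code inside fakeNamespace can see this helper.
    private class Formatter {
      public string Decorate(string text) { return "[" + text + "]"; }
    }
  }
}

// File Report.cs
namespace pcdoctor {
  public partial class fakeNamespace {
    // public: visible to the whole world as pcdoctor.fakeNamespace.Report.
    public class Report {
      public string Render() { return new Formatter().Decorate("report"); }
    }
  }
}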

Unfortunately, it doesn't work perfectly.

For example, using directives don't work for a partial class. If you're not in the namespace, you will always have to spell out the partial class's name to access its public members. This may be annoying in some cases, but if you limit this technique to namespaces that aren't accessed much from the outside, it's not serious.

It also looks wrong in the IDE. The IDE has no idea that your class is really a namespace, so it gets colored incorrectly. The severity of this problem is a matter of opinion. It doesn't bother me.

This is all awkward enough that I don't use it to replace namespaces across the board. Instead, I use it when I want to expose an extremely simple API and perhaps a type or two. It's really only worth the trouble if it lets you hide a lot.

The syntax is also verbose:

namespace pcdoctor {
  // The fake namespace itself.  Note that partial has to come right before class.
  public partial class fakeNamespace {
    // Visible only to other members of fakeNamespace.
    private class NamespaceVisibilityClass {}
  }
}

There's actually a hidden benefit of this technique. It's possible to make a "free function" in the fake namespace. A static member function behaves a lot like a free function. It's accessible from any of the classes in the namespace. If it's public, then it's accessible from outside the namespace.
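
Here's a hedged sketch of such a "free function"; the Log method and the Worker class are invented for illustration:

namespace pcdoctor {
  public partial class fakeNamespace {
    // A static member that behaves like a free function inside the fake namespace.
    public static void Log(string message) {
      System.Console.WriteLine("[pcdoctor] " + message);
    }

    private class Worker {
      public void Run() { Log("called with no qualification at all"); }
    }
  }
}

// Outside the namespace it's still reachable, just with the class name attached:
//   pcdoctor.fakeNamespace.Log("called from a client");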

I mentioned that I found a bug in the compiler. Unfortunately, I didn't track down exactly what the bug was. Instead, I just found a solution and moved on.

However, if you find that some of your code gets executed more than once when the fakeNamespace type is instantiated, then you might want to find another solution.

Good luck! Just remember that the central authority that controls C# programmers doesn't want you to do this. You're on your own here.

Wednesday, August 20, 2008

Internal Training Talks at PC-Doctor

PC-Doctor is trying to start a series of internal training talks. I'm going to give the first one next week.

It looks as though there's a lot of interest from everyone on this project. Management loves the morale boost and the training that people get. Developers are excited to learn something new. QA seems excited, too!

Here's the draft of my talk:



It's probably worth another blog post to discuss the dangers of posting raw slides where the whole world can see them. It's probably also worth talking about why I don't waste my time making everything look perfect.

For now, it's exciting that publishing is as easy as copying and pasting some HTML from Google Docs! :)

Tuesday, August 5, 2008

Testing the Untested: World of Warcraft Needs Help!

As you should know, I play World of Warcraft. It's been a great game for several years. Blizzard is making lots of money off of the game, and they are using that to put new content into the game regularly.

There have been a lot of changes to the game since I started playing it. Watching these changes carefully has given a lot of circumstantial evidence to the idea that World of Warcraft is primarily tested by a large quality assurance staff.

This post is a sequel to Testing the Untested. However, it will focus on one example. I think it's interesting to look at the effects that an inadequate testing program has on a major software project like World of Warcraft.

I'll also talk a bit about what they might have to do to fix their problem. I suspect that roughly the same story applies to any large project that has the problems World of Warcraft has.

Blizzard's QA Staff: Are They Relevant?


This interview is interesting. There are apparently 135 "developers" working on World of Warcraft.

The Blizzard guy that they're interviewing makes an interesting distinction between developers and non-developers. It seems clear that a "developer" is a more important person to him than non-developers.

I may be reading too much into this, but that's what this sort of analysis is all about. Please read it yourself below the picture of Nova hanging out in front of an alien cave.

Interestingly, artists working on cinematics count as developers to this guy. However, QA staff does not count.

I'm biased by my work here at PC-Doctor. We hire some incredibly talented QA folks. They have a large role in the development of new and existing products, and developers tend to have a lot of respect for them.

We don't call them developers, either. We also don't sneer at non-developers and pretend they don't count in the real employee count.

I'm going to interpret that interview as a statement that Blizzard doesn't believe that QA staff are as important as artists, programmers, and designers. If Blizzard doesn't give them much responsibility, then they are probably correct.

Anyway, if the QA staff isn't given the respect they need to be relevant, then the programmers are the only ones left who can produce automated tests. In fact, it looks as though QA is the only visible source of tests for World of Warcraft. This might contribute to their perceived irrelevance. If they're spending their time doing things that could have been automated, then it may be hard to gain much respect.

I was pretty disappointed to see the lack of respect in that interview. It looks as though Blizzard's QA staff does a great job with the new content. Unfortunately, it's not possible for them to revisit old content. This is the fundamental problem with relying on a staff to do your testing. It costs a lot to run tests, and so you'll end up running them less.


Do They Have Functional Testing?


Blizzard has stated several times in the past that they're unwilling to change dusty old content that people don't run frequently. They've said that this is because the risk of screwing something else up is too great.

A statement like that is pretty much the same as saying that they don't have enough testing for the old content.

Actually, I expected this section to be a bit longer since the conclusion is so important to the rest of this post. However, if you've got a Blizzard employee who says exactly what you're hoping to prove about their project, then you don't really have to do much more!

I do wish I could find some of the other times that this has been said, but having it said once is sufficient for this article.

Adding Tests After the Fact


Here, I'm going to talk about how Blizzard should be adding tests. It's mostly interesting because the story is almost exactly the same for any company that doesn't have a large set of automated tests for its software.

How should Blizzard go about creating tests? This has already been the subject of another post. In fact, I'm going to say many of the same things.

The first thing to worry about is whether or not the corporate culture supports testing. If it doesn't, then that is the most serious problem facing someone trying to add tests. Testing has to be thought about by all of the developers. It really has to be a part of the normal operation of the programmers. It has to be a part of their culture.

World of Warcraft has been in development for almost ten years now. If they still don't have an extensive set of automated tests for the game, then they clearly don't understand what they're missing.

It's pretty hard to imagine how someone might convince them that testing is important if they haven't seen it already. The biggest advantage of automated testing is that you can make changes with some confidence that nothing was broken. However, you don't get to that point until you have relatively thorough tests.

Developing a thorough set of tests for a game as large and old as World of Warcraft would be an enormous undertaking. Therefore, some advantage would have to be found for incrementally adding tests. If tests can be created that verify parts of the game that are difficult to test with a QA staff, then these would be easy to convince people to add.

The easiest example that I can think of is a test to ensure that the floors don't have holes in them. Whenever Blizzard releases new content, there seem to be places where people can fall through the floor into a location that they're not supposed to reach. I have no idea where this problem comes from, but it sounds as though it should be covered by an automated test.

Adding this sort of test allows developers to slowly add real value to the automated test infrastructure. As long as there is value in each step taken, it is easy to convince people that the work is worthwhile. Eventually, you can hope that you'll end up with enough tests that you can make changes with some confidence that nothing else broke.

Another, riskier approach could also be taken. Class balance would be extremely difficult to verify, but a test for it would be extremely useful and visible.

There are a large number of different character classes in World of Warcraft. Each class has different capabilities, but those capabilities are supposed to be equally useful under certain circumstances. Getting this correct is extremely important to players, and getting it correct is extremely difficult as well.

Blizzard's players and staff spend a lot of time thinking about it, and it gets tweaked over and over again. If testing this could be partially automated, then they could speed up the process. Customers and developers would both enjoy this a lot.

It's not clear that it's even possible to automate this. A few things can be analyzed easily in a simple spreadsheet. More complicated aspects of balance would require some extremely sophisticated analysis.

However, Blizzard has some really big supercomputers*. If it were valuable enough to them, they could run some fairly sophisticated tests. I can imagine some partially automated tests that could analyze even arena class balance. Input from the QA staff could be used to speed up the tests considerably.

If this worked, then it would go a long way towards convincing the rest of Blizzard to try other problems. Again, this approach would be significantly riskier. If the project failed, it might set back automated tests even further.

World of Warcraft Isn't Alone


World of Warcraft is a huge project that clearly suffers from a lack of automated tests. Everything I've said here is specific to that game, but it comes from my experience on other, smaller projects with the same problem.

A lot of projects have exactly the same problem, and solving them requires a lot of the same tools.


* Actually, we don't know this. However, The9 gets most of their revenue from World of Warcraft and runs the Chinese server clusters for Blizzard. They also have 12 of China's fastest publicly benchmarked supercomputers. It seems safe to assume that Blizzard themselves also have similar servers. While none of those are dedicated to testing, it seems likely that they've got some extra CPUs around that could be used.

Wednesday, July 30, 2008

Fingerprint Readers Don't Work

A while ago, I got annoyed at a friend's computer. It had a fingerprint reader, and I wanted to play a game on it before he woke up.

Fortunately, it turned out that my fingerprint worked just fine. It took a few tries, but I successfully logged in as him.

He did look a bit shocked when he woke up and saw me playing a game on his supposedly secure work computer. Too bad he wasn't in the IT department at his company. :)

How secure are fingerprint readers? I can't say that I'm impressed. Since you leave what is essentially your password on everything you touch, they can't be infallible.

Fingerprint readers are supposed to be intimidating. You're supposed to look at one and think to yourself that you'd have to do some kitchen trickery to defeat it. Intimidation might be most of the security they provide.

That would have worked for me. I don't make a habit of breaking into other people's work computers. Is intimidation all they've got?

It looks as though that might be true. If someone really wants to break in, they can. It's not always as easy as my attempt was, but even the most secure readers can be broken.

However, I'm not going to complain too much about fingerprint readers. It's really easy to login to a computer with one. It took about 5-10 seconds to break into my friend's. Imagine how easy it'd be if it worked the first time? Convenience is much more important than security to me on many of the computers that I use.

Incidentally, there's an interesting ending to this story. The friend whose computer I broke into was a researcher at HP Labs. After seeing me casually playing a game on his computer, he decided to do some research on alternate biometric input devices.

Wednesday, July 23, 2008

High Performance Multithreaded Code

Current CPUs have several cores. If a program wants to get faster on new hardware, it has to exploit those extra cores. Using multiple threads is, therefore, becoming extremely popular.

Of course, people who talk a lot about multithreaded programming don't ever mention that most programs don't need to be any faster. While I feel obligated to point that out, this article is written for people who do want their applications to run faster.

In fact, I'm going to go even farther than that. This is for people who want to squeeze every last bit of performance out of their multithreaded code. This isn't for everyone.

Since I don't have a lot of experience with this, I'm going to talk about two books that I've read. They both talk about specific topics that are, I suspect, absolutely essential to some extremely high performance multithreaded code.

Interestingly, neither book addresses my topic directly. The books have absolutely no overlap, either. They each look at a different end of the same problem without looking at the whole thing.

I enjoyed both of them.

The first, The Art of Multiprocessor Programming, was written by a couple of academics. It's a highly theoretical look at lock free and wait free data structures. It never talks about real hardware. It's also fascinating.

The second, Code Optimization: Effective Memory Usage, is an extremely practical guide to how modern hardware deals with a critical resource, memory. It talks in detail about what the hardware is doing. It doesn't touch algorithms that avoid the many problems that it talks about. It's a bit out of date as well, but it's still worth spending time with.

The Art of Multiprocessor Programming


You wouldn't be able to tell from the cover or the publisher's description of it, but this book is about lock free and wait free algorithms.

A component is lock free when many threads can access its routines and at least one thread always makes progress. If a thread holds a mutex, this is not possible. The thread with the mutex could page fault and be forced to wait. During that wait, no thread that needs the mutex will make progress.

Wait free is an even stronger constraint. A routine is wait free if it will complete in a finite number of steps. That is, all threads will simultaneously make progress.

It's possible for a data structure to have some routines that are wait free and others that are merely lock free. The authors frequently try to make the most critical routines wait free and the less important ones lock free.

Lock free programming is a topic that's always fascinated me. It seems incredibly difficult. Researchers like the authors must agree, because there aren't that many lock free algorithms in the literature, yet. There are a few data structures out there, and a lot of work has been done on critical algorithms like heap management routines. There isn't much else, though.

The book, however, walks you through the techniques that are needed to build these algorithms. They describe and analyze the algorithms in ways that I don't normally bother with. Mathematical proofs appear to be critical to their process. Don't worry too much, though. None of the proofs outlined in the book were difficult to follow. Without the proofs, I would have had a difficult time understanding what they were doing, too.

Here's an example of an interesting theorem in the book. Modern processors have a variety of atomic instructions that are designed to help avoid locks. These instructions are critical to lock free programming. Examples include atomic increment and compare and swap.

Lock free algorithms replace locking a mutex with a number of these atomic instructions. One of the book's theorems essentially states that most of these instructions are pathetic: they can't be used to build wait free implementations of arbitrary objects for more than a couple of threads. (I'm only paraphrasing slightly.) Compare and swap, however, is proven to be universal.

Lock free articles talk a lot about compare and swap. It's nice to understand why!
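
To make the compare and swap retry loop concrete, here's a minimal sketch in C# using .NET's Interlocked.CompareExchange; the LockFreeCounter class is my own illustration, not something from the book:

using System.Threading;

// A lock free counter built on compare and swap.  A thread only has to retry when some
// other thread's CAS succeeded, so the system as a whole always makes progress.
class LockFreeCounter {
  private int value;

  public int Increment() {
    while (true) {
      int observed = value;
      int desired = observed + 1;
      // Store desired only if value still equals observed; otherwise another thread won the race, so retry.
      if (Interlocked.CompareExchange(ref value, desired, observed) == observed) {
        return desired;
      }
    }
  }
}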

Incidentally, despite the title of my post, lock free algorithms are not necessarily faster than a conventional algorithm wrapped in a lock.

The Art of Multiprocessor Programming doesn't talk about it, but these atomic operations are expensive. They require a memory barrier. This requires communication with all of the other cores in the computer, and it's slow.

Arch, an Intel developer, puts it nicely here:
My opinion is that non-blocking algorithms are important in situations where lock preemption (arising from oversubscription) is an issue. But when oversubscription is not an issue, classic locked algorithms, combined with designing to avoid lock contention, will generally out perform non-blocking algorithms on current Intel hardware. I don't know about other hardware, but observe that:
  1. Non-blocking algorithms generally use more atomic operations than it takes to acquire a lock.
  2. Atomic operations in non-blocking algorithms generally need a memory fence.
  3. Memory fences incur an inherent penalty on out-of-order processors or processors with caches.
Do keep that in mind when you read the book! If your algorithm uses too many of these atomic operations, there's no point in doing it. Locking a mutex doesn't require many of these operations.

The authors act like typical academics and ignore this problem completely. :)

Code Optimization: Effective Memory Usage


This book is dated 2003. It's several processor generations out of date. Don't panic, though. It turns out that a lot of what Kris Kaspersky says has been true for far longer than that.

There's a good chance that some of his discussion of ways to exploit specific CPU generations isn't useful anymore. However, interleaving memory bank access, N-way associative cache behavior, and many other interesting properties of memory are unlikely to change immediately.

You'd think that memory technology would change enough that the same quirky code optimizations wouldn't work for a whole decade. Apparently, you'd be wrong.

This is, as you might imagine, the exact opposite of the previous book. This is about how the memory systems in a (somewhat) modern PC work. This is about the details of machine architecture and how to use those details to speed up your code.

In fact, translating the information in this book to highly parallel computing will require some thought. It was written without much thought to the behavior of multicore processors.

That's not the point, though. Multithreaded programming is all about memory access. If you poke the memory in the wrong order, your program will slow way, way down. Compilers are not yet smart enough to do all the work for you.

It's worth talking a bit about how important memory access is. Here's a slide shamelessly stolen from Robert Harkness at the San Diego Supercomputer Center:



Performance is on a log scale. Memory bandwidth has a dramatically lower slope than CPU speed.

You could say that, eventually, performance will be entirely dominated by the usage of memory. However, we're almost there already. High performance data-parallel programming requires the knowledge in this book.

I do wish he'd write a second edition, though. Some of the chip-specific discussions are interesting, but they aren't necessarily relevant anymore.

Lower Performance Multithreaded Code


I should emphasize that most of the stuff in both of these books is useful for pushing performance a bit faster than you'd thought possible.

If you haven't already gotten to the point where you think your code can't be sped up, then you're likely to have more serious problems that will erase the improvements available from these books.

Actually, there's one significant exception to that. If you're using a low associativity cache poorly, then you could get almost no cache utilization. In some cases, you can make minor changes to your memory usage and go from no cache utilization to good cache utilization. That's probably a more important change than using a good algorithm.
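
Here's a hedged sketch of the kind of layout change I mean (C#, with invented sizes): with a power-of-two row length, a column-order walk keeps landing in the same few cache sets, and padding each row by a handful of elements spreads the accesses back out.

using System;

class CacheAssociativityDemo {
  const int Rows = 1024;
  const int Cols = 1024;          // power-of-two row length: every column maps to the same few cache sets
  // const int Cols = 1024 + 16;  // a little padding per row spreads the columns across the sets

  static void Main() {
    double[] matrix = new double[Rows * Cols];
    double sum = 0;
    // Column-order walk over a row-major array: each access jumps Cols * 8 bytes.
    for (int c = 0; c < Cols; c++) {
      for (int r = 0; r < Rows; r++) {
        sum += matrix[r * Cols + c];
      }
    }
    Console.WriteLine(sum);
  }
}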

Generally, however, these books are not what you need to make your web page load faster. None of the code that I've written in the last few years needs this. I'll keep looking for applications, though, because optimizing performance in exotic ways is fun!

I enjoyed both books a lot even though they didn't seem directly applicable. I hope you will, too.

Thursday, July 10, 2008

Integration: The Cost of Using Someone Else's Library

I don't do much Ruby on Rails development anymore, but Andy, who sits right next to me at PC-Doctor, does.

Recently, he's run into an interesting problem. I've seen the problem once before in a completely different context.

Once might be coincidence, but if you see the same problem twice, then it must be a real problem. :)

Ruby on Rails


If you've got a product to develop, it's normally better to use someone else's library whenever possible.

Ruby on Rails makes this easy. They've got a fancy system to download, install, and update small modules that can be put together cleanly and elegantly to create your product.

It's a wonderful system. It works, too.


Andy discovered that it might work a bit too well!

He's got a medium sized web application that uses a bunch of external modules. He wrote it fairly quickly because he was able to pick and choose modules from a variety of sources to solve a lot of his problems.

Unfortunately, he had to upgrade to a newer version of Ruby. That means that he's got to look for problems in each of the modules he installed and find a version that works with the new version of Ruby.

Some module maintainers are faster than others, of course. Not all of the modules are ready for the new version of Ruby.

This is a problem that doesn't scale happily. As the number of modules goes up, the chance of one or more modules not being ready goes up.

As Andy discovered, this means that an application can become painful to update.

I phrased my title as though Andy might not have been doing the right thing. I'd better be honest here, though. Even if one of his modules can't possibly be updated, he's still better off rewriting just that module. The alternative would have been to write all of the modules himself during application development.

Andy did the right thing. The pain he had while updating was minor compared to the alternative.

Ruby on Rails makes it extremely easy to combine large numbers of modules from different sources. The problem can be duplicated any time you get large numbers of independent developers working together.

Boost


The Boost libraries seem to be suffering from the same problem.

Boost doesn't put a lot of emphasis on stability, either. Changes to libraries are encouraged and frequent. Versions of the library aren't even required to be backwards compatible.

The end result is the same as Andy's problem. One library will change a bit, and that change will have to ripple through a bunch of other libraries. It can take a while to squeeze each contributor hard enough to get their library updated for the next version of Boost. (Boost.Threads was the worst case of this. The developer disappeared with his copyright notice still in the source files!)

It's hard to blame either the release manager or the contributors. They're volunteers with paying jobs, after all.

The end result is still unfortunate. It now takes about a year or so to release a new version of the framework. Some libraries end up languishing unreleased for a long, long time because of this.

Boost has gone through a lot of releases. This makes it really tempting to look at this quantitatively. :)

To the right is a chart showing the number of days between major releases. This is, of course, a silly thing to look at. What defines a major release? There were only 5 days between version 1.12.0 and 1.13.0, for example.

The lower numbers on the chart show the number of libraries that changed with each release. There is a slight upward trend to that as well. Clearly, newer releases contain more new stuff in them than the older releases. Furthermore, not all changes to libraries are the same. Some of the more recent changes are substantial.

Despite all of that, I'm going to claim that the release schedule is slowing down over time. There are many reasons for this, but one of them could well be the same problem that Andy has.

Before a release goes out, there is often a plea on the Boost developers' mailing list for help with a few libraries. Those calls for help are evidence that the size of Boost is slowing it down: the more libraries there are, the more libraries will be in trouble at any given time.

Early versions of Boost had extremely lightweight coupling between the different libraries. More recent versions are significantly more coupled. As developers get familiar with other Boost libraries, they are extremely likely to start using them in newer libraries. It's almost inevitable that the coupling increases over time.

The developers for each library continue to mostly be volunteers who don't always have time to make prompt updates. Getting all updates to all libraries to line up at the same time can't be easy.

Commercial Projects


Both of these examples involve open source projects. Andy isn't building an open source application, but he is relying heavily on open source modules. Boost is entirely open source.


An open source project is going to end up relying on volunteers. It's really hard to manage volunteers! Is it any easier on a large commercial project?

I don't have any direct experience with this. I've never been a part of a big company with hundreds of people all working on the same thing.

Is the problem unique to open source projects? I've got no data, but I'll make some speculations.

Some fraction of open source developers are volunteers with other jobs. This isn't true for commercial projects.

A developer who's spending their free time on a project will have to schedule their time around a higher priority project that's paying them. According to this theory, this dramatically increases the spread in the amount of time required to complete their job.

Conclusions


The problem probably isn't unique to open source projects, but I suspect that it's worse for them.

Ruby on Rails encourages using large numbers of independently developed modules. This model will exacerbate the problem.

I'd love to hear from someone who's got experience with large projects. I suspect the problem gets worse as projects get bigger, but I don't know much about what happens with them.

Monday, July 7, 2008

Rvalue References Explained

Thomas Becker just sent me a note about an article that he'd just written. Rvalue references aren't in wide use yet, and they aren't part of the official standard either, so not many people understand them. I'm sure his article will dramatically increase the number of people who do, since Thomas is such a good writer.

If you'd like to play with rvalue references after reading his article, GCC 4.3.1 is what you want. You can access them using the -std=c++0x compiler option.

Doug Gregor's C++0x page can be used to track the progress of that compiler option.