Tuesday, July 31, 2007

Choosing a programming language: Is type safety worth it?

I'm a big fan of the strongly typed language vs weakly typed language debate. It's old, but it's also important.

I'm revisiting this topic because I'm trying to decide exactly that question on a project I'm working on. In my case, I'm torn between client-side JavaScript in the web browser and Google Web Toolkit (GWT) compiled to JavaScript. I mention this only because I want to take the quality of the libraries out of the debate. GWT's libraries are fairly rudimentary, but they're growing, and you can pull in JavaScript libraries if you really have to.

I'm going to try to have a debate with myself about it. I can't claim that I'm unbiased, and I'm going to ignore arguments that I don't consider significant. I'd love to hear further arguments from you in the comments.

Argument for weak typing

Weak typing allows you to write code faster. That's what its proponents claim, anyway; I haven't seen any good measurements of it. There's certainly less typing to do, since you don't have to declare the types of your variables. (In JavaScript, you do still have to declare your local variables, however.) It's worth pointing out, though, that I have never been limited by my typing speed while writing software, so I'm skeptical of this argument.

If you're designing a scripting language, then weak typing is easier to explain to inexperienced programmers. There's a whole step that doesn't have to be done. In some circumstances, this is a clear win. (In my case, this is irrelevant.)

Argument for strong typing

Strong typing allows the compiler to do more work for you. Essentially, you're constraining how variables can be used, and the compiler can use those constraints to detect errors or perform optimizations.

The optimization part is important for a few people. The error detection is great for everyone.
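To make the error-detection point concrete, here's a sketch in Ruby (circle_area is a made-up name for illustration) of the kind of mistake a strongly typed compiler rejects before the program ever runs, but a weakly typed language only reports when the bad call actually executes:

```ruby
# In a weakly typed language, a type error hides until the offending
# line executes. A strongly typed compiler would have rejected the
# bad call at compile time. (circle_area is a hypothetical example.)
def circle_area(radius)
  3.14159 * radius * radius
end

puts circle_area(2.0)   # fine: prints 12.56636
begin
  circle_area("2.0")    # a String slips in; nothing complains until now
rescue TypeError => e
  puts "Caught only at runtime: #{e.message}"
end
```

If that second call sits on a rarely executed path, it can lurk in shipped code for a long time.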

Counterargument from weak typing

Unit testing is the weakly typed solution to error checking. (It's also the strongly typed solution, but for a different class of error.) It's hard or impossible to write a unit test that checks that a function's arguments have the correct properties on the way in. However, you can write a unit test that ensures the function behaves correctly in a variety of circumstances. That's what programmers really care about, anyway.
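As a sketch of what that looks like in practice, here's a tiny hand-rolled Ruby unit test (discount and assert_close are invented names): it checks behavior across several circumstances, which is something no type declaration could express.

```ruby
# A minimal unit test: it verifies behavior, not argument types.
def discount(price, rate)
  price * (1.0 - rate)
end

def assert_close(expected, actual, label)
  raise "FAIL: #{label}" unless (expected - actual).abs < 1e-9
  puts "ok: #{label}"
end

assert_close(90.0, discount(100.0, 0.1), "10% off of 100")
assert_close(0.0,  discount(100.0, 1.0), "everything free")
```

No type checker can tell you that a 100% discount should make the price zero; only a test can.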

Also, suppose we want a function that supports multiple types. In a strongly typed language like Java, we can only support different types if they share an interface. In a weakly typed language, we can support any types that support the same operations. If I want a function that adds all the elements of a collection together, I can write one function that handles both strings and complex numbers correctly. (This is possible with generic programming in a strongly typed language as well, but that's not really what we're arguing about here.)
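That summing function is easy to sketch in Ruby (sum_all is a made-up name): the same code handles integers, strings, and complex numbers because each type supports the + operation, with no shared interface declared anywhere.

```ruby
# Duck typing in action: one function sums anything whose elements
# respond to +, with no interface or type declaration in sight.
def sum_all(collection)
  collection.reduce { |total, element| total + element }
end

puts sum_all([1, 2, 3])                       # 6
puts sum_all(["con", "cat", "enate"])         # concatenate
puts sum_all([Complex(1, 2), Complex(3, 4)])  # 4+6i
```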

Rebuttal from strong typing

Unit testing provides a good guarantee, and strongly typed languages need it just as much. However, it's a fundamentally different guarantee than the one you get by having the compiler check the types of variables as the code uses them. First, a compiler checks every line of code, even code that's hard to reach or that you forgot to cover with a unit test. Second, the combination of tests and strong typing is a stronger guarantee than tests alone.

Here's a Ruby example of some dangerous code:

def foo( a, b = nil )
  if a < 0.0
    print( b )
  end
end

foo( -12, 'This string is printed' )
foo( 5 )

This code has two "overloads". One takes a string and a negative number. The other takes a non-negative number and might take a string. The problem is, there is no good way to tell this based on the function signature. You can only tell by looking at the implementation. This means that you can't safely change the implementation of your function.

Unit testing partially guarantees that functions have working implementations. Strong typing partially guarantees that they are used correctly.


I'm going to go with Google Web Toolkit, which imposes strong typing on top of JavaScript. The mediocre library support may bite me later, but I'll be happier knowing that my compiler knows a bit about what my code will do.

I'm hoping I get flamed by someone, so please post your comments!

This originally appeared on PC-Doctor's blog.

Friday, July 27, 2007

ActiveX is still around!

ActiveX has been around a while. When Microsoft was battling Netscape, they needed a way to put custom, active content on web pages. Java was being used by Netscape, and people thought it was great. Microsoft needed something they could develop quickly that would let programmers put new types of content on the web browser. ActiveX was born.

The basic idea behind ActiveX is really simple. A programmer creates a DLL that can be accessed by anyone. Some introspection is added, and now a web browser can call native code! Once you're in native code, you can do whatever the heck you want, so Microsoft's work was done. Of course, Microsoft added a bunch of ways to make it complicated, but the basic architecture is extremely simple.

That was back when Bill Gates thought that no one would pay money for security. The security model Microsoft used was also extremely simple: All ActiveX DLLs are signed. If someone hijacks thousands of computers using your DLL, then Microsoft will know who's responsible!

Of course, Microsoft signed some DLLs that had some big holes in them. In fact, lots of legitimate companies did. For the next decade and a half, a whole team of Microsoft employees dealt with the consequences of these design decisions.

ActiveX is still here, though.

Even in Vista, ActiveX is still available. It's still possible to run whatever code you want in Internet Explorer under Windows Vista. If you don't believe me, go to your Vista machine and visit this URL: Microsoft Windows Update. This page can replace your drivers and reboot your computer!

So, what has Microsoft changed? Is it still business as usual in the land of ActiveX? No, it's not. A lot has changed.

First of all, enough warnings pop up around an ActiveX control that both programmers and users avoid them like the plague. Back in the early days, programmers were supposed to put UI widgets on the browser window because Microsoft said it was easier to do it that way than by using HTML. (This conveniently prevented the page from loading under Netscape, so no one actually took this advice.) Now, almost no one makes ActiveX controls. Once you've got some video players, Flash, and a few others, you're done. No one else has to write them anymore! Certainly no one has to write one that requires administrative access to the computer. Once Windows Update was finished, the designers probably concluded that that was all you needed.

There's very close to no documentation on the subject, but it's still possible to have your ActiveX control run as an administrator. The strange part is that now, instead of being a mainstream programmer, you have to put on your dark sunglasses and visit some very murky areas. Microsoft won't tell you exactly what to do, but they do put clues in a variety of blog postings and tech notes. Bugs will haunt you as you make your way toward what you need, and you'll never really know if you're exploiting the OS or doing it correctly.

It's amazing what's changed. It's even more amazing how little has changed.

This originally appeared on PC-Doctor's blog.

Wednesday, July 18, 2007

What is usability testing all about?

The phrase usability testing gets thrown around a lot. It sounds great when you're planning a project. If you say you'll do some usability testing, then people get a warm feeling about your project plan.

After discussing it with a few people, I've concluded that there are a lot of myths out there about usability testing. I'll outline the ones I've either heard from someone or thought myself.

First let me explain what it is.

No. I lied. That's a huge topic, and I'm going to bypass it here. Instead, I'll refer you to the best summary I've read on the subject: Dumas and Redish, A Practical Guide to Usability Testing. It's a bit old, and there are likely better summaries out there, but linking to it nicely lets me avoid explaining what usability testing is.

I will, however, explain what the goals of usability testing are:
The idea is to watch your users interact with your product (or something similar to your product) in a way that allows you to see how well the product works for the users. In addition to finding problems, usability testing also tries to gather data that the testers can use to figure out how the problems should be corrected.
That description is designed to shoot down several of the bigger myths that I've run into.

Myth: Usability testing gathers statistical evidence that you can use to make decisions about solving a usability problem.

Running a good usability test on a single user is expensive and time consuming. There's a lot of data that gets analyzed, and I've found it to be a heck of a lot more useful to get more data about a single user than it is to test multiple users.

You end up running tests on very small numbers of people. I generally spend a fair amount of time before and after a single test preparing it and analyzing the results. The test will find some problems, and you correct them rather than measuring how bad they were. If a problem is bad, you're likely to run into it. If a problem is small, you're not likely to be bothered by it again. It's much better to assume that every problem you run into is large enough to be worth solving.

All forms of usability testing that I've done follow this formula: Watch a single user use your product. For every problem that user runs into, figure out what caused the problem, and decide if and how you need to fix it. All problems a user runs into are considered real problems until proven otherwise.

Myth: Usability testing requires a lot of fancy equipment.

I hear this sometimes after people read about what large software companies use when they do usability testing. I've never seen it, but I've heard rumors about rooms filled with hidden cameras, one-way mirrors, and eye tracking devices. It sounds like fun, but it's really not needed.

When I do usability testing, I try to understand the user as well as possible. I want to know what they're thinking when they click the wrong menu item. I tend to be in their face a lot more than someone standing behind a one-way mirror would be, but for a lot of problems, an artificial environment with a couple of engineers breathing down the user's neck isn't as bad as it sounds. Certainly, a lot can be done this way.

Myth: Any old user will work.

The background of your user has a huge impact on how they view your product.

In the early stages of testing, I like to use coworkers as much as possible. They're easy to sucker into these tests; the first few times you do it, they even think it's fun! However, the data you generate from these tests requires so much interpretation that you can quickly get to the point where it's not worth your time. (They're great for the early tests, though. If nothing else, they can help test your testing procedure!)

Myth: Usability testing has to take a long time.
Myth: Usability testing can be done very quickly.

These myths are different, but the answer is the same. Usability testing can take as long as you need it to. It's fairly difficult to anticipate how long you'll need, however. Unexpected usability problems frequently come up. They need to be fixed in code, and if you're not lucky, that might take time.

If the dialog box or web page that you're testing isn't all that important, then it's possible to run tests that will fail to catch smaller problems. This can greatly speed things up, and it is possible to test something quickly and get away with it.

One of my favorite easy tests to run is one that tries to decide between a small number of completely different approaches. It can be quick, but it's also possible that none of the approaches works all that well. I haven't been great at predicting the time required for the usability tests I've done, and I claim that's a problem with usability testing rather than a shortcoming of my own.

One of the more dangerous forms of this myth is the belief that a large, complex product can be tested comprehensively in an amount of time that management will be happy with. Usability testing of a significant amount of UI code is a major project, and it really needs to be done continuously over the entire lifecycle of the product.

Myth: There aren't any other myths.

As I find some more time, I'll come back and address some more issues that I've run into. I think I've gotten the biggest ones that I've seen, though.

This originally appeared on PC-Doctor's blog.