Monday, August 27, 2007

Google Web Toolkit: First Impressions

I've started converting a complex web app from being purely JavaScript and browser plugin based to using the Google Web Toolkit for the JavaScript part of the project. See my previous blog post about this.

Unfortunately for me, this project isn't a high enough priority for me to play with it for long stretches at a time. However, I have managed to start work and get some basic functionality working. It was certainly interesting!

I'm going to comment on a few of my first impressions. I'll state a strong opinion now, but I reserve the right to change it completely as I get more experience with the library and tools.

Speaking of Tools...


Wow. This is really slick. You get to debug the Java code directly without looking at the JavaScript that it compiles to! Eclipse is a great editor for Java code, and GWT integrates nicely with it.

The debugger is nice, the compiled code looks good, and the warnings and errors that the development web server generates are extremely helpful. This is probably the strongest selling point for me with GWT.

Don't touch the DOM!


This was a shock to me. The web browser's DOM is an extremely efficient place to store information about the document you're displaying. A heck of a lot of effort has gone into making this efficient and safe. CSS is an extremely powerful tool that is closely coupled to the document's DOM.

GWT's designers don't want you to touch the DOM, though.

GWT gives programmers only limited abilities to write to the DOM, and they work extremely hard to make it difficult to read the DOM. For example, the DOM.setAttribute call can be used to assign an id attribute to an HTML tag. That sounds like a really useful call, doesn't it? I could use that to make it easy for our graphic artist to design some CSS to describe the appearance of the application. It's deprecated, though. You're not supposed to use it. :-(
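
For illustration, here's a minimal sketch of the approach GWT seems to prefer: instead of writing an id into the DOM yourself, you give a widget a style name and let an external stylesheet control its appearance. The class name and style name below are made up for the example.

    import com.google.gwt.core.client.EntryPoint;
    import com.google.gwt.user.client.ui.Button;
    import com.google.gwt.user.client.ui.RootPanel;

    public class StyledEntryPoint implements EntryPoint {
        public void onModuleLoad() {
            Button save = new Button("Save");
            // The widget renders with class="myapp-save-button", so the
            // graphic artist can target it from a plain external CSS file.
            save.setStyleName("myapp-save-button");
            RootPanel.get().add(save);
        }
    }

That gets me the CSS hook I wanted, even if it's not quite the same as handing the artist a document full of ids.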

GWT is not for documents.


I shouldn't be too harsh to GWT about the DOM, though. This might be obvious, but GWT is for complex, full featured applications that happen to reside in your web browser. It's not for displaying documents. There are some corners of the library that allow you to put HTML directly wherever you want to, and I suspect they work great. They should even work with an external CSS file.
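
One of those corners is the HTML widget, which lets you drop a chunk of markup into the page and style it from an external stylesheet. A tiny sketch, with made-up content and style name:

    import com.google.gwt.user.client.ui.HTML;
    import com.google.gwt.user.client.ui.RootPanel;

    public class DocumentCorner {
        public void showNotice() {
            HTML notice = new HTML("<p>Scan complete. <em>No problems found.</em></p>");
            notice.addStyleName("myapp-notice");   // external CSS can style this block
            RootPanel.get().add(notice);
        }
    }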

An application is not a document, and I might be having trouble getting over that. I'll get back to you later when I form a stronger opinion.

Writing JavaScript directly?


If you want to do something that simply cannot be done through GWT, you're in luck. GWT has a JavaScript Native Interface that is analogous to Sun's JNI. It works great! I use it extensively to talk to my browser plugin, for example. It's easy to use, and it doesn't screw up the tools any more than it has to.
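
To give a flavor of it, here's a hedged sketch of the kind of JSNI method I mean. The plugin object name and its version property are placeholders for whatever your plugin actually exposes, not a real API.

    public class PluginBridge {
        // JSNI: the body between /*-{ and }-*/ is ordinary JavaScript.
        // $wnd is GWT's alias for the host page's window object.
        public native String getPluginVersion() /*-{
            if ($wnd.myPlugin) {
                return String($wnd.myPlugin.version);
            }
            return "plugin not found";
        }-*/;
    }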

This, of course, allows you to create whatever back door you want for manipulating the DOM. However, until I get a better understanding of the GWT techniques, I'm going to try to do things the GWT way. I may have more to say on this subject later. ;-)
In other words, the back doors are there, but it's not really a GWT application if you fight the toolkit to do things your way. Instead, you do things the Swing way.

GWT == Swing


Yes, that's right. GWT has more in common with a Swing application than with a JavaScript/DOM/CSS application. Is that bad? It's certainly a great thing if you like Swing. I don't have much experience with it, so I don't yet have a strong opinion.

However, I do think it's a great idea to emulate a well-used framework instead of writing a completely new one. There are a lot of Swing developers and ex-Swing developers out there, and I'd guess that they'll feel pretty comfortable in this framework.

Documentation?


The documentation is fine, and I shouldn't complain about it. However, I could really have used this blog post before I started. If you start out as a native JavaScript/Prototype/Ruby on Rails developer like I did, then what I've said here is not completely obvious.

The Google group is great, though. I've gotten responses to my queries immediately.

This originally appeared on PC-Doctor's blog.

Monday, August 20, 2007

Why Aren't PCs as Pretty as Macs?

If you walk into an Apple store, and you manage to find the corner that sells Macs, you'll find some fairly attractive looking machines. A lot of them aren't big black boxes, for example. Why can't I buy a PC that looks like this?

You probably want me to say that gaming PCs are designed to be attractive. That's true only if you believe that any money spent on appearance automatically makes a machine look good. Alienware right now is selling desktops that look like alien insects. That's not what I want, though. I want a machine that tries to blend in and look nice at the same time. The current round of iMacs do this admirably.

Maybe a gaming PC isn't the answer? I can imagine that. Gamers who want to slaughter thousands of cartoons might not want something attractive. How about a high end home theater system? Voodoo PC makes some right now that are designed for exactly this. Are they attractive? Well, they're still big boxes, but, since you're paying $5k for them, you get to choose any color you want, and they've got a cute display on the front.

No, I'm not looking for this. I want a PC that was designed from the ground up to look good. I want a piece of furniture that does what it has to do and doesn't take up any more of my attention than it deserves. It's got an on button and a DVD drive. It doesn't deserve my attention.

Are cost and performance a problem? If a desktop manufacturer has to use notebook technologies to cram everything they want into an unusual form factor, perhaps things become either too slow or too expensive to sell.

I doubt this is realistic. Voodoo PC manages to sell $5000 home theater PCs. I'm going to guess that they could use some notebook technology and not lose much of their profit margin. Performance isn't that important for the PC that I want. All my PC does is surf the web, play DVDs, and occasionally edit photos. I'm even willing to pay extra!

Is there a technical problem with this? Perhaps the ATX motherboard specs force PC manufacturers to have a large rectangle available for the board, and they can't make the crazy shapes that Apple can get away with. While this is a reasonable cop out for a small shop, what about large manufacturers? Dell might be screwed because people don't normally get to see Dells before they buy them. A PC like this would be centered around its appearance. Then what about HP? They could afford to make a motherboard in whatever shape they wanted. Users would get to see the machine in the store before they buy it. It'd be just like a piece of furniture.

Could design patents or other intellectual property restrictions be the problem? Apple is extremely possessive when it comes to its designs. Is it possible that I'm too picky, and Apple has prevented anyone from making anything remotely similar to their machines? I sure hope not!

So far, I'm not coming up with any answers. I hope someone out there can shed some light on this.

Tuesday, August 14, 2007

Computer: Heal Yourself!

The Autonomic Computing Initiative at IBM tries to do some really interesting things. The goal for IBM is to make server hardware run without much human intervention. IBM breaks the problem down into four different parts:

1. Automatically install and configure software
2. Automatically find and correct hardware faults
3. Automatically tweak software and hardware for optimal performance
4. Automatically defend itself from potentially unknown attacks

This is an ambitious goal, of course. They don't intend to complete the project right away. #2 is the interesting one from the point of view of PC-Doctor. However, I'd like to try to look at it from IBM's point of view. They (unlike PC-Doctor) have a lot of influence on hardware standards. The question they should be asking is "What sensors can be added to existing hardware technologies to enable us to predict faults before they happen?". Fault prediction isn't the whole story, but it's an interesting one.

I'd better admit right away that I don't know much about computer hardware. Uh... "That's not my department" sounds like a good excuse. However, I hang out with some experts, so it's possible that a bit has rubbed off. We'll find out from the comments in a week or two! :-)

Hard drives:


This is an easy one. The SMART (http://www.seagate.com/support/kb/disc/smart.html) standard already allows software to look at correctable errors on a hard drive. If you look at these errors over time, you may be able to make a guess about when the drive will fail.

This is nice because the hardware already sends the necessary information all the way up to application software running on the computer.
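
As a rough sketch of how accessible this information is, here's a little Java program that shells out to smartmontools' smartctl and pulls one attribute that tends to creep upward before a drive dies. The device path, attribute name, and column layout are assumptions about a particular system, and real monitoring code would parse the output far more carefully.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class SmartPeek {
        public static void main(String[] args) throws Exception {
            // Ask smartctl for the vendor attribute table of the first disk.
            Process p = new ProcessBuilder("smartctl", "-A", "/dev/sda").start();
            BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                // Watching this raw value grow over time is one crude way to
                // guess that a drive is headed toward failure.
                if (line.contains("Reallocated_Sector_Ct")) {
                    String[] cols = line.trim().split("\\s+");
                    System.out.println("Reallocated sectors (raw): " + cols[cols.length - 1]);
                }
            }
            p.waitFor();
        }
    }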

Flash memory:


Flash memory can also fail slowly over time. I don't know of any effort to standardize the reporting of this information, but, at least on the lowest level, some information is available. There are two things that could be looked at.

First, blocks of flash memory will fail periodically. This is similar to a hard drive's sector getting marked as bad, and backup blocks will be available to replace them. Blocks that fail during fabrication will also be marked as bad and replaced before the device ever ends up in a computer. Device manufacturers probably don't want to admit how many blocks were bad from the beginning, but a company like IBM might have a chance to convince them otherwise.

Second, you could count the number of times that you write to the memory. Manufacturers expect the device to start failing after a certain number of writes, but I don't know how good this measurement would be at predicting failure.

Fans:


A lot of servers these days can tell you when a fan has failed. They might send some email to IT staff about it and turn on a backup fan. It'd be more impressive if you could predict failures.

Bearing failure is one common failure mode for fans, and a failing bearing frequently makes noise before it gives out completely. A vibration sensor mounted on a fan might be able to predict an imminent failure. You could also look at either bearing temperature or the current required to maintain fan speed. Both would provide some indication of increased friction in the bearing.

Network card:


Some Marvell network cards can test the cable that's plugged into them. The idea is to send a pulse down the cable and time the reflections that come back. The Marvell cards look for failures in the cable, but you could do a more sensitive test and measure when someone kinks the cable or even rolls an office chair over it. If you constantly took measurements like this, and you kept track of changes in the reflections, you might get some interesting info about the cable between the switch and the computer.

Printed wiring boards:


You could do similar measurements on the traces of the printed wiring board (PWB) that makes up the motherboard. This might help you learn about some problems that develop over time on a PWB, but I have to admit that I have no idea what sorts of problems might be common.

Shock, vibration, and theft


Can you get some useful information from accelerometers scattered throughout a computer? Notebooks already do. An accelerometer placed anywhere in a notebook can detect if it's in free fall and park the hard drive heads before the notebook lands on the floor.

A typical server doesn't enter free fall frequently, though. One thing you could look for is large vibrations. Presumably, large vibrations could, over time, damage a server. Shock would also damage a server, but it's not obvious when that would happen.

Security might be another interesting application of accelerometers. If you can tell that a hard drive has moved, then you could assume that it has been taken out of its server enclosure and disable it. This might be a good defense against someone stealing an unencrypted hard drive to read data off of it. This would require long term battery backup for the accelerometer system. It would also require a pretty good accelerometer.

IBM sounds as though they want to make some progress on this. It would be really nice to be able to measure the health of a server. Most of my suggestions would add some cost to a computer, so it may only be worthwhile for a critical server.

Now, after I've written the whole thing, I'll have to ask around PC-Doctor and see if anyone here knows what IBM is actually doing!

This originally appeared on PC-Doctor's blog.

Wednesday, August 8, 2007

Multithreaded Programming for the Masses

Writing software on multicore CPUs is a hard problem. The chip designers have told us that they're not going to do all the work for us programmers anymore. Now we have to do something. (Here's a good description of the problem from Herb Sutter.)

Writing multithreaded apps is not easy. I've done a lot of it in C++, and the tools, the libraries, and the design patterns just don't make it trivial. I don't mean to say that it's impossible, however. It's certainly possible if you're careful, you plan for it from the beginning, and you know what you're doing. The real problem is that 99% of programmers aren't all that good. A defect in multithreaded code can be really hard to track down, so it's expensive if you get it wrong.

A lot of companies and research groups have been spending a lot of time trying to figure out how to make it easier. I haven't been doing any research on the topic, but, like a good blogger, I'm going to comment on it anyway.

First of all, do we have to speed up every application by the number of cores in the CPU? Your typical Visual Basic programmer isn't any good at threading, but do they have to be? They can create dialog boxes, display data, and let you edit that data. Do they have to know anything about threads for this?

Probably not.

It's still possible to speed up that dialog box, too. If there's some text entered by the user, the grammar/spelling checker might be running on a different core from the UI widget. This wasn't done by the Visual Basic programmer. The dialog might have been drawn using a graphics engine that used threads or even offloaded some computations to the video card. Again, this wasn't done by the programmer who's never worried about threads.

So, we don't always have to know about threads. Some substantial fraction of the programmers in the world don't have to speed up their code using multiple cores. That's good news. We really need to admit that not all programmers need to make their programs faster. The libraries they use can be made faster without compromising the reliability of the program itself.

That's not the whole story, though. Let's look at the other end of the spectrum. How about a video game?

Graphical processing is astonishingly easy to parallelize. GPUs will continue to get faster over time because of this. A lot of calculations can be done on individual pixels. It's relatively rare that two pixels that are far apart will affect each other substantially. (There are some techniques that defy this statement, but a lot of them rely heavily on expensive matrix calculations which may be parallelizable. I know nothing about these techniques, but I'm going to guess that they're easy to parallelize, too.)

If video cards are going to continue to get faster, then the CPU had better keep up. The CPU is going to have to generate enough data to avoid starving the GPU. As the GPU gets infinitely fast compared to a single core of the CPU, this will require multiple cores.

Uh, oh. Does this mean that all game designers will need to use multiple threads?

I'm sure that it doesn't. Couldn't the game engine programmers do all of the hard work and let the average game programmer use a safe API that avoids threads? Certainly a level designer who's scripting an in-game event in Lua would not be forced to worry about threads!

What about an AI programmer? I don't know enough about game engine design to say for sure, but I'd be willing to bet that a cleverly designed game engine could avoid exposing a lot of the multithreaded mess to most programmers. At some point, though, AI programmers will have to do things on multiple cores at the same time.

While the AI programmer might be fairly smart, and they might be trained in multithreaded programming techniques, that does not mean it's a good idea to subject them to mutexes, condition variables, thread creation, deadlock avoidance, etc. That's not a good use of a programmer's time.

What can be done to help a programmer who really does have to run code on multiple cores?

Functional programming languages are a potential solution. The creators of languages like ML, Haskell, and Erlang realized that your compiler can do a lot of fancy things if the programmer isn't ever allowed to modify a value. If things can't be modified, then they can be easily parallelized. Once you create an object, as many cores as you want can read it without even bothering to lock it. Of course, this will frustrate a Visual Basic programmer who is used to changing a variable as the object that the variable represents changes. It requires some significantly different programming techniques.
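
Here's a tiny sketch, in Java rather than one of those languages, of why immutability helps: once an object like this is constructed, any number of cores can read it with no locking at all, because there's nothing anyone can change.

    public final class Point {
        private final double x;
        private final double y;

        public Point(double x, double y) {
            this.x = x;
            this.y = y;
        }

        public double getX() { return x; }
        public double getY() { return y; }

        // No setters: the object can never change after construction, so
        // concurrent readers can never observe a half-updated value.
    }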

Once again, this is not for everyone.

Futures are a pretty slick way to abstract away threads and some of the locks. They don't eliminate the headaches of multithreaded programming, but they have the potential to make things simpler.

A future is a chunk of code that a programmer wants to execute. Instead of calling the function that contains the code and waiting for the return value immediately afterwards, the programmer separates the calling and the waiting. First you tell your library that you want your future to run. The library can do whatever it wants at this point. It could execute it immediately. It could put it in a queue to be run by a thread pool. It could ignore it completely. The program can then do whatever it wants. At some point, it may want the result of the future. Then the program will ask the library for the result of the future. The library may have to wait for the result to be ready, but eventually the result will be returned.
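
Here's a minimal sketch of that pattern using Java's java.util.concurrent library, which has had futures built in since Java 5. The work being done (summing some numbers) is just a stand-in for whatever expensive chunk of code you actually care about.

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class FutureSketch {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(4);

            // Hand the library the chunk of code. It decides when and where to run it.
            Future<Long> sum = pool.submit(new Callable<Long>() {
                public Long call() {
                    long total = 0;
                    for (long i = 0; i < 100000000L; i++) {
                        total += i;
                    }
                    return total;
                }
            });

            // ... the program is free to do other work here ...

            // Only now do we wait for the result, if it isn't ready yet.
            System.out.println("sum = " + sum.get());
            pool.shutdown();
        }
    }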

The great part about futures is that threads don't have to be created. If you can divide your program's logic into N bits, you can let your library and compiler figure out how to use that to speed up the program using multiple cores.

Unfortunately, futures don't eliminate the need to worry about locks. Since that's probably a more difficult problem than thread management, futures are not a panacea.

There are some other ways to easily parallelize your code. SQL statements are a great example. Databases are really good at optimizing SQL for this sort of thing.

Intel has a strong interest in making it easier for programmers to use their new chips. Their Threading Building Blocks library uses STL-style algorithms and protected iterator ranges to express parallelism. It also has a "task scheduler" that behaves a bit more like a futures library. This seems like a really great idea. An STL-style algorithm that describes the parallelism to the library is not always flexible enough to describe a problem. However, when it is sufficient, it's extremely easy to use. The task scheduler is more conventional and would probably be much nicer with a good lambda library.

OpenMP is another attempt to abstract away many of the details of parallelizing code. It's not strictly a library; it requires compiler support as well. The programmer uses compiler directives that behave much like a Unix fork() call, and OpenMP then manages several threads to handle the different branches of the fork. While I'm certainly not an expert, this doesn't seem like either a clean solution or a sufficiently flexible one.

I'm sure there are other research projects out there. If you know of any interesting ones, please post a comment about them below.

This originally appeared on PC-Doctor's blog.

Tuesday, August 7, 2007

Exploiting an Industry's Culture

Sometimes, an industry becomes uncreative or stops taking risks. This lets an outsider come in and gain market share by exploiting the mistakes made by an entire industry. It's fun to look for these industries and understand what they're doing wrong.

My favorite example is the video game market. (Roger Ehrenberg has a good summary of the Xbox side of this.) Ten years ago, everyone in the video game industry was happy thinking that all gamers were young males. The fact that other people spent enormous amounts of time playing Solitaire and Minesweeper didn't seem to bother anyone.

Then The Sims came out. This should have been a wakeup call. To a young male such as myself, it was a complete waste of time. To Electronic Arts stockholders, it was gold. That happened in early 2000. Believe it or not, nothing much happened for a long time. People at least talked about why nothing happened, though. It was apparently pretty hard to convince a publisher to risk large amounts of money on something that wasn't a clone of a successful game. All successful games involved things that young guys like to do. (Solitaire didn't make any money for Microsoft.)

You can tell where this is going.

The Wii was the next big one. Now you can play bowling and have enough fun doing it that your grandmother will join you. (Trust me, she doesn't like Halo.) Even so, the industry sat around for a few months saying that the Wii was just a fad and it would be back to young males sometime soon. (The definition of "young" seems to change, though. 35-year-olds play Halo 3.)

I think the video game industry has finally woken up. They've come up with a term for gamers who aren't young males! That could well be what was missing before. "Casual gamers" don't like to play first person shooters, and now even Microsoft wants to lure them to the Xbox.

Okay, that's a story of an industry's culture having major problems and some companies exploiting that. That story is almost complete. There's another one that's happening right now.

Of course, I'm talking about cell phones and the iPhone. Many years ago, when I bought my last cell phone, I really wanted a good user interface. Despite spending a lot of time designing user interfaces myself, I didn't want to figure out someone else's bad one. There were a lot of them back when I bought mine. The Sony Ericsson collaboration seemed to be doing okay, so I got one.

Apparently my $200 purchase towards a decent UI didn't motivate the entire industry to work on improving, however. Apple has figured out how to make and sell fashionable and easy-to-use consumer electronics, and they've exploited Sharp's inability to do the same.

Now, it's still not obvious that the iPhone is a runaway success. It should be noted, however, that everyone at least knows what an iPhone is. I'm a bit jealous of my friends who have them, too. That's going to help attract buyers to a cheaper phone if Apple comes out with one.

The cell phone story isn't complete, but I'm betting that it'll end up with Apple doing well. They didn't do well at first with the iPod, either.

Alright, now we've talked about the story that's mostly complete and the story that's happening right now. What about an industry that has a cultural problem right now but hasn't yet been exploited? One of the best places to look is a small industry that doesn't have a lot of players in it. If one company is dominating a market, then any cultural issues that company has could be exploited by a complete newcomer.

A few of you have probably figured out where I'm going with this one, too.

Is the hardware diagnostics field vulnerable? PC-Doctor is the big player here, and we might have some problems. There are several potential vulnerabilities.

First, are our diagnostics any good? Well, I happen to know something about our diagnostics. I find it really hard to believe that a newcomer (or even a current player) can do as well here. This is what we do, and we do it well.

Are really good diagnostics what people want, though? Well, it is what companies like HP, Lenovo, Dell, or Apple want on the machines that they ship to their customers. They want to be able to trust those diagnostics, and they can when they run PC-Doctor.

Here's a potential exploit, though: What do the customers of those big companies want? Do they want fast and trustworthy diagnostics? I don't think they do. They want something that says that their machine is broken, why it's broken, and how they should fix it. They don't care if it works 95% of the time or 99% of the time. They just need it to work this one time.

Furthermore, they don't care if the problem is hardware or software. That's a critical question to a PC manufacturer who only warranties hardware defects. That's not the right question for a lawyer in Florida who wants to know if he should download a new driver or buy a new hard drive. Right now, PC-Doctor doesn't deal with software issues.

How could this be exploited? Well, suppose a company made some really good software diagnostics. Then they could add some fairly bad hardware diagnostics to it. The big companies might not be impressed by these hardware diagnostics, but the end users might be since the software solves the problem that they want to solve. It would take a new entrant to the diagnostics industry a while to build up a complete set of hardware diagnostics, but they might be able to do it by focusing on what the consumer needs instead of what the big companies need.

I could also be part of the culture that's screwing up. If that's the case, then I wouldn't even know that there was something else wrong. Is our user interface so bad that no consumer would ever use it without a tech support guy telling them what to do? Are we completely unaware of this?

Should I be worried about this? I'd love to hear what you think.

This originally appeared on PC-Doctor's blog.

Monday, August 6, 2007

Explanations in user interfaces are bad!

When I was doing the design for the BTO Support Center website, I had some trouble explaining to some coworkers why helpful text shouldn't be added to explain the interface. At the time, I couldn't explain it well, but now that I've thought about it for a while, I think I have a better way to describe it.

My new argument assumes that the interface is explorable. Let's start with that.

If a user is comfortable with a user interface, they will happily play with it until they get it to do what they want. This is called an explorable interface, and it's required in any good interface. For example, I'm typing this on the blogging software's built in editor. I've never used this editor before, but the cost of just pushing buttons randomly on it is low. I can undo them easily, so I'm not worried about pushing the wrong thing. The editor doesn't normally pop up a useless dialog box that I have to get rid of, so pushing the wrong button is unlikely to waste much of my time. It is laid out in a way that explains what buttons are relevant, so I don't even have to scan most of the web page. The cost of not knowing what I'm doing is low, so I've never read most of the text on the page.

The economist Herbert Simon described this behavior as "satisficing". The word is a combination of satisfy and suffice, and Simon used it to describe people's behavior when confronted with a choice that is expensive to resolve. If reading an entire web page to find the optimal solution is expensive, then people are perfectly happy to use the first thing they find that might work. The interface designer has to do a lot of work to make sure that the first thing they find is the correct one.

In an explorable user interface, users don't have to figure out exactly what the designer was thinking. They don't have to read every bit of text on the dialog. They can just pick a button that might do what they want and push it. This, it turns out, is often substantially cheaper than trying to understand the interface completely. After all, you can just undo it afterwards.

Once you understand this, a bit of text to describe your interface is clearly the wrong answer. A user will not actually read the text on the web page if exploring might work. If exploring doesn't work, the user will become frustrated rather than resorting to your helpful bit of text. In some cases, they will be perfectly happy to assume that your interface doesn't support the option they were hoping to find.

Don't explain your interface. Make it easy to explore.

This originally appeared on PC-Doctor's blog.