Software and Health Care

February 24th, 2010

Recently, at work I was given the task of improving a piece of software that ran slowly, consumed far too many resources, and was far too rigid to perform any more than the single task for which it was originally created. As I began looking into the code, I discovered that, worse than all that, the code was an utter mess, and there was absolutely no way I could make even the most marginal of improvements without starting from scratch.

So I did. While I worked on recreating one part of the software, a colleague of mine worked on another. Eventually I finished my parts; testing and tuning his would come later. As I went, I continually tested my new code and ideas against the existing system to ensure the new product was better than the old one. With few exceptions, the new product ran at least twice as fast as the old, consumed fewer resources, and was flexible enough to handle the original task as well as a wide range of current and potential future tasks. I was pleased.

Then I got to the code my colleague had worked on. There didn’t seem to be anything wrong with it on the surface. Under both the ideal and the least ideal circumstances we imagined the software might face, it appeared it would run faster than the old code, perhaps even two or three times faster. Yes, we had to admit it would consume much more memory, but the speed improvement would be worth it, as the old software was far too slow.

But once I got into testing against the old software (as I had done for the parts I worked on), I came to a stunning and altogether disappointing realization. First, we were consuming far too many resources, which was bringing the hardware to its knees, even more so than the old software had. Second, and far worse, we were running as much as seven to ten times slower.

So what happened? Even in far-from-ideal circumstances, the software should easily have outperformed the old software. The answer was simple: circumstances were even farther from ideal than we had thought. Our software (which was smart enough to deal with even the worst case) spent far more time and resources coping with the terrible cases it faced. Unfortunately, the terrible case turned out to be the standard case, and our beautifully crafted software was far from adequate for the task.

So I worked tirelessly (then tiredly) for weeks trying to squeeze the performance we needed out of the new code my colleague had written. After all, we couldn’t go back to the old software, since it was a nightmare to maintain and too rigid for even current tasks. But finally, I had to admit there was nothing I could do.

Nothing, that is, short of rewriting it yet again myself. So I finally threw away the code my colleague wrote and started from scratch for a second time. Carefully weighing how the old software did the task against the flexibility our attempted solution was designed for, I wrote, in the space of just two days, a solution that not only consumed fewer resources but ran two to three times faster than the old software.

Yay for me, right? Sort of. If it hadn’t been for the original implementation, I wouldn’t have known what works well. If it hadn’t been for the second implementation, I wouldn’t have known how bad the circumstances the software faces really are, and I wouldn’t have seen how much they can affect performance. While I’d like to claim a stroke of genius here, I pretty much have to call it a process of elimination: we had already done everything that wouldn’t work, and then I did what would.

This, it seems to me, is like the health care debate and bill in the United States Congress today. Clearly, the current system isn’t working for everyone, and increasingly isn’t working for anyone. It is slow, cumbersome, expensive, and getting more expensive at a rate no one can afford in the long term. The federal budget already devotes a huge percentage to health-care-related programs and will have to devote ever more money to them, or else abolish or diminish them. So President Obama rightly made it his agenda to “fix” the system, asking Congress to create a piece of legislation that would somehow make things better.

There are probably a lot of good things in the bill Congress eventually came up with. I hear there are tax incentives, and rules against insurance companies dropping patients when they actually become sick. However, there’s enough that is – let’s call it “less good” – about the bill that public opinion has turned sharply against it. One example of its “less good” qualities is throwing money at states wholesale just to win votes from their senators and representatives.

Does that mean we should keep the current system as is? Absolutely not. Many people still want reform, just not this reform. Like my software system, the original health care system isn’t working the way we want and consumes far too many resources. But if the new system is approved only by a slim margin (if indeed it gets that much), and only because votes were effectively purchased for it, how much better can we hope it will actually be? Reform, yes; for the better? Maybe not.

I don’t expect us (by “us” I mean Congress) to just throw that bill away and let it go to waste. I expect us to start from scratch again, taking the lessons learned from the current system as well as from the bill, and try to make something beautiful. I don’t pretend to know what that new bill will look like, but whatever it is, it had better not resort to pork, special interests, or any other means of purchasing votes to get through either house. Health care is far too important a subject for any bill to be voted for or accepted on anything other than its merits.


Game developers long suffered the stigma of the lone, smelly, poorly groomed, overweight, pizza-eating bachelor who spends countless late-night hours in his (yes, his, not his/her) basement cranking out the next big thing in games. While that reputation may have been well earned in the 1980s, and even a little in the 1990s, the industry has since pulled away from it.

Still, game developers, like most software engineers, tend to be male, and most are required to work long hours under tight schedules with little if any additional compensation. As a result, many tend to have little or no family life (though that is far from all of them). Some who won’t sacrifice their personal or family lives for whatever project they’re on at the time find themselves first in line for layoffs when the inevitable budget crunch comes around.

But game developers have something that many software engineers lack – passion. It is their passion that calls them into game development, and it is their passion that sees them through the hard times and long work hours. But most of all, their passion helps them learn more about software engineering from a single project than many software engineers learn in a career.

Why? Because games are among the hardest software problems in existence. They must make use of the latest hardware for graphics, sound, and user input, and squeeze every drop of performance out of memory, CPU, file system, network, and other resources. They deal with concurrency, complex algorithms, artificial intelligence, and security. They are often distributed, run intricate simulations, predict user actions, and involve dozens of developers working cooperatively.
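To make one of those concerns concrete, here is a minimal sketch of the classic fixed-timestep game loop, one common way games keep their simulations deterministic no matter how fast the hardware renders. It’s in Python purely for illustration (real engines are usually C or C++), and the 60 Hz tick rate and stub functions are my own inventions:

```python
import time

TICK = 1.0 / 60.0  # hypothetical simulation rate: 60 updates per second

def update(dt):
    # Advance physics, AI, and game state by dt seconds (stub for illustration).
    pass

def render(alpha):
    # Draw the world; alpha in [0, 1) says how far between ticks we are (stub).
    pass

def run(duration=2.0):
    previous = time.perf_counter()
    lag = 0.0
    end = previous + duration
    while time.perf_counter() < end:
        now = time.perf_counter()
        lag += now - previous
        previous = now
        # Simulate in fixed steps so the game behaves identically on fast
        # and slow machines; rendering simply runs as often as it can.
        while lag >= TICK:
            update(TICK)
            lag -= TICK
        render(lag / TICK)

if __name__ == "__main__":
    run()
```

Even this toy loop has to juggle time, determinism, and frame rate all at once, and a real game layers graphics, audio, input, and networking on top of it.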

But on top of being extremely hard to create (relative to many types of software), games are fun, exciting, and sometimes even educational. Think of the last game you played. Do you remember how fun it was to play? Now imagine creating that game. I find my enjoyment of writing a game is at least tenfold my enjoyment of playing one.

That is why I would recommend to everyone out there who is attempting to teach or learn computer science, software engineering, or any form of computer programming: create games. Students will learn more, they’ll enjoy it more, and they’ll be more driven to keep learning if they are making games. The best part is that games can be simple – or complex. Basic games can involve only a little knowledge, take less than a day to write, and be tons of fun both to write and to play. More complex games will obviously take more time to create, but they will also teach the student more software principles. The learning here won’t just be an act of regurgitation, as so much homework and so many exams are; it will be a true learning experience.
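As a taste of how little a basic game requires, here is a rough sketch of a complete guess-the-number game in Python. Every detail in it (the range, the messages, the function name) is invented for illustration:

```python
import random

def guessing_game(low=1, high=100):
    # A complete, playable game: loops, branching, I/O, and state.
    secret = random.randint(low, high)
    tries = 0
    while True:
        tries += 1
        guess = int(input(f"Guess a number between {low} and {high}: "))
        if guess < secret:
            print("Too low!")
        elif guess > secret:
            print("Too high!")
        else:
            print(f"Got it in {tries} tries!")
            return tries

if __name__ == "__main__":
    guessing_game()
```

A student who writes this in an afternoon has touched random numbers, console I/O, loops, and control flow, and has something fun to show for it.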

So, go play a game and appreciate the hard work of those who made it. Software people: go write a game. You’ll learn more than you think, even if you’ve been in the industry your whole career.


Software Should be so Lucky

February 19th, 2010

It never fails to elicit an eye roll and a heavy sigh. The days of deities descending in a chariot to save the protagonist may be over, but “Deus Ex Machina” has taken on a new (if you can call it new) twist and become the cliché that refuses to die.

I speak, of course, of how computers (and in particular software in any form) are portrayed in television and movies. You know what I’m talking about: the whodunit crime shows where the right computer software can enhance an image, clean it up, and add almost infinite detail, provided the right protagonist is manipulating it. Other examples include hackers and viruses that can break into any system, even the most secure, even those that have no external connectivity, and sometimes those of other times, races, worlds, and even galaxies. Yes, these computer programs are almost deified in their own right by screenwriters (and other writers too, I’m sure).

One “classic” example comes from Independence Day, when the lowly Earthlings uploaded a virus to the computers of a never-before-encountered alien race, dropping their impenetrable shields and allowing them to be destroyed by mere missiles and nuclear weapons. One might ask: how did this little virus some guy wrote know how to affect the alien systems? Is there any reason to believe the virus wouldn’t look like static noise to the alien system, if it registered at all? How did they get the alien ship to even interface with human technology? Well, fear not – it was a Mac!

The problem here is this: as a software engineer, programmer, and computer scientist myself, I find these plots and devices so horrifically fantastical that they completely destroy my ability to suspend disbelief. Whether the genre is basic fiction, science fiction, mystery, or anything else, I see how poorly the writers understand computers, and it makes me wonder how badly they must understand law, police and military protocol, the other sciences, and anything else that may be in the story.

If a computer-centric episode that gets it wrong occurs only occasionally, I can usually still enjoy the series (e.g., Stargate SG-1 and Atlantis seem to throw these in now and then), but when the dependence on utterly wrong and impossible computing shows up once or twice per episode (e.g., CSI and similar shows), I lose all ability to watch. The cringe factor becomes too large.

For all of you writers out there, I have only one plea: PLEASE STOP. Stop writing super viruses, hackers, and otherwise magical programs that can’t possibly exist. Unless you’re writing science fiction (and even then, only if you have a good explanation), just say no. Just as you can’t extract blood from a stone (for the simple reason that a stone has no blood), you can’t extract additional detail from a digital photo! The detail simply isn’t there beyond the original pixels. Yes, you can enlarge the image, blur it a bit, and then sharpen it, but that’s not more detail; that’s less, with increased uncertainty, and you certainly can’t do it to the extent done on TV on a weekly basis to solve crimes. All of these things are blatantly wrong. They show ignorance. They mislead and misinform the public. But worst of all – they’re cliché.
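For the record, here is roughly what the real-world version of “enhance” amounts to, sketched in Python with the Pillow imaging library (the file names are made up):

```python
from PIL import Image, ImageFilter  # Pillow imaging library

img = Image.open("photo.jpg")       # "photo.jpg" is a hypothetical input file
w, h = img.size

# Resizing interpolates new pixels from the ones that already exist; it
# invents plausible values, it does not recover detail that was never captured.
big = img.resize((w * 4, h * 4), Image.BICUBIC)

# Sharpening only exaggerates edges already present in the interpolated data.
enhanced = big.filter(ImageFilter.SHARPEN)
enhanced.save("photo_enhanced.jpg")
```

The result may look crisper, but every “new” pixel is a guess computed from the old ones, which is exactly why it can never reveal a license plate that was two pixels wide.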


I regularly find myself facing peers (ever since my college days) who are decidedly misguided, if not moronic, about the practical application of object-oriented programming. For those of you lay folks out there who don’t know what that means, it basically means: programming where you model objects. Instead of keeping data and functionality separate in the code, you can simplify many things by putting them together. It sounds simple on the surface (trust me, it is simple), but some people get it so terribly wrong it makes me sad.

The premise, while an easy one for me, seems to elude many. You see, this coupling of data and functionality has several implications and gives us programmer types some powerful tools. For one, it allows us to hide the actual data and algorithms we’re using, so we can change things in one piece of code without hurting other parts of the software. That alone is HUGE. We can also reuse the capabilities of existing objects (we’ll call them “classes” from here on) through something known as inheritance. Inheritance is a mechanism through which a class can get the capabilities and data of another class completely for free! It can then specialize something in order to do a specific job better. There are lots of reasons for wanting to do this.
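To ground both ideas, here is a minimal Python sketch. The bank-account classes are my own toy example, not anything from the discussion above:

```python
class Account:
    """Couples data (a balance) with the operations on that data."""

    def __init__(self, balance=0.0):
        self._balance = balance  # hidden detail: callers never touch it directly

    def deposit(self, amount):
        self._balance += amount

    def balance(self):
        return self._balance


class SavingsAccount(Account):
    """Inherits deposit() and balance() from Account for free, then specializes."""

    def __init__(self, balance=0.0, rate=0.02):
        super().__init__(balance)
        self._rate = rate

    def add_interest(self):
        self.deposit(self.balance() * self._rate)


acct = SavingsAccount(100.0)
acct.add_interest()
print(acct.balance())  # 102.0
```

Notice that if Account later changes how it stores the balance, every caller keeps working untouched; that is the hiding at work.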

Let’s look at the example that makes me sad. It makes me sad because it comes sooooo close to describing how to use inheritance and why you’d want it, but instead of driving the point home, it kind of drives it to the neighbor’s house, where they have loud parties and sleep till noon.

The example is of cars, or automobiles. We’ll often start with “suppose we have a class called ‘Automobile’ that describes all the generic capabilities of an automobile.” With me so far? “Then suppose we want to describe a ‘Car’ as inheriting from ‘Automobile’ all those attributes, plus the idea that it’s small, has four wheels, and…” So far, so good. “Then we want a class for ‘Truck’…” Here’s where I start twitching. If you follow this example to its logical conclusion, you basically end up having a class for every type of thing out there.

So, what’s wrong with that? Well, for one thing, code tends to be easy for students to develop, so they’ll take this line of thought off the bridge, right past the signs that say “warning,” without realizing they’re doing it wrong. They don’t realize that once code is deployed, you can’t so easily go in and change it. Instead, for this car example, we should pretty much just parameterize the class “Automobile” and leave it there, without any inheritance at all. That way, when Ford invents a new model of car, we don’t have to redeploy our software. If instead of “Automobile” we had “Vehicle,” we could do something else. For example, we could have a class “Boat” that describes any kind of seafaring vehicle in seafaring terms, a “Car” that does the same for land-bound vehicles, a “Plane” for airborne ones, and so on. It’s still not a clean-cut example, however, since we may have vehicles capable of land and water use, or water and air, or all three. See? Examples are not so easy.
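For contrast, here is a rough Python sketch of the parameterized approach; all the attribute names and models are invented for illustration:

```python
class Automobile:
    """One parameterized class; new models are new data, not new subclasses."""

    def __init__(self, make, model, wheels=4, doors=4, top_speed_kph=180):
        self.make = make
        self.model = model
        self.wheels = wheels
        self.doors = doors
        self.top_speed_kph = top_speed_kph


# When Ford invents a new model, we construct it from data;
# no new class, no redeploying the software.
sedan = Automobile("Ford", "Fusion")
pickup = Automobile("Ford", "F-150", doors=2, top_speed_kph=160)
semi = Automobile("Kenworth", "T680", wheels=18, doors=2, top_speed_kph=120)
```

Inheritance then earns its keep one level up, between genuinely different kinds of things (Boat, Car, Plane under Vehicle), not between one car model and the next.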

This is where the student of object-oriented programming needs to make the intuitive leap from swallowing and regurgitating facts and ideas to actually understanding the intent behind them. The intent of these examples isn’t to show you where to use inheritance; it’s to show you how. And unless you can learn the how, you’ll never understand the where. Unfortunately, so many of the people I’ve come in contact with are stuck in regurgitate mode and have yet to make that leap.
