Unraveling Obfuscation

ob·fus·cate – 1. to confuse, bewilder, or stupefy 2. to make obscure or unclear

Archive for February, 2007

Performance Question – JavaScript Execution

Posted by Todd McKinney on February 25, 2007

As more and more of the applications we use every day run in a browser, and increasingly rely on JavaScript for an enhanced user experience, we developers should spend some time thinking about the implications of JavaScript code performance.

Scott Isaacs shared lessons learned building live.com some time back, and his post touches on a couple of issues related to client-side performance. One observation is that parsing XML is slow. Another is that script downloads can negatively impact the user experience unless they are carefully managed.
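To make that second point concrete, here is a minimal sketch (mine, not from Scott's post) of one common way to defer a script download until it is actually needed, rather than paying for it during the initial page load. The URL and function names below are made up for illustration.

    // Defer loading a script until something actually requires it.
    function loadScriptWhenNeeded(url, onLoaded) {
      var script = document.createElement("script");
      script.type = "text/javascript";
      script.src = url;
      // Firefox and Opera fire onload; IE of this era fires onreadystatechange.
      // (Simplified: a production version would guard against firing twice.)
      script.onload = onLoaded;
      script.onreadystatechange = function () {
        if (this.readyState === "loaded" || this.readyState === "complete") {
          onLoaded();
        }
      };
      document.getElementsByTagName("head")[0].appendChild(script);
    }

    // Example: only pull down the heavy charting code when the user asks for it.
    // loadScriptWhenNeeded("/scripts/charting.js", function () { drawChart(); });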

The IE blog describes some of the work done to improve JavaScript performance in IE7 (JavaScript performance Part 1, JavaScript performance Part 2, and JavaScript performance Part 3), and also makes recommendations about coding practices that increase client-side performance. Countless other resources exist to help make JavaScript applications run well.
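As one example of the kind of coding practice these resources tend to cover (this particular one is a widely repeated recommendation of the era, not necessarily drawn from those specific posts): building a large string by repeated concatenation is much slower in older script engines than collecting the pieces in an array and joining them once.

    // Slow in older script engines: each += allocates a new intermediate string.
    function buildListSlow(items) {
      var html = "";
      for (var i = 0, len = items.length; i < len; i++) {
        html += "<li>" + items[i] + "</li>";
      }
      return html;
    }

    // Cheaper for large results: collect the pieces and join once at the end.
    // (Caching items.length in the loop is another of the usual micro-tweaks.)
    function buildListFast(items) {
      var parts = [];
      for (var i = 0, len = items.length; i < len; i++) {
        parts[parts.length] = "<li>" + items[i] + "</li>";
      }
      return parts.join("");
    }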

My concern with all of this is that we are very quickly moving into a world where poor performance is becoming commonplace because of a combination of factors:
1. We have a ton of code being downloaded into our browsers and executed through an interpreter.
2. Browsers are not the most miserly resource consumers to begin with (see the Firefox memory leak issue, for example).
3. Too often, the performance characteristics of the code are not well understood or thoroughly analyzed by the developers writing it.
4. With ubiquitous tabbed browsing and a proliferation of web-based applications, the end user is increasingly loading many different applications into the same process.

I am less concerned with items one and two than I am with three and four. We have a history of optimizing interpreted execution environments through the magic of JIT compilers. If there’s a speed advantage to be gained, and the performance pain is widespread and visible enough, I have confidence that the browser vendors will solve the interpretation problem. The same goes for garbage collection and general browser application resource consumption. The worse it gets, the more likely it is to be solved.

On the coding practices front, one thing seems obvious to me. This is not a no-brainer. As with most performance optimization, it takes deliberate effort and significant attention to detail to get the coding right. Every time a development effort prioritizes “get cool stuff done quickly” over well-engineered, efficient code, the odds increase that end users will run something at the barely adequate to poor end of the performance spectrum.

To really compound things, loading up multiple sites in a browser instance with tabs means that the end user will often be running with more than just the code from one site in memory. I’m generalizing based on my own behavior here, but it can’t be that unusual. Typically, I have 15 tabs open in a single browser instance. The reason for the magic number 15 is that that’s about how many comfortably fit horizontally across the screen in my usage. Once I’ve gotten the browser instance “full” with 15 or so tabs, I launch a separate window and keep going. Normally, two browser instances is about all I’m willing to tolerate without going back and “recycling” existing tabs for something else. What I’m noticing is that, more and more, if I start hitting the wall on my local machine, the browser is a likely culprit, either using up gobs of memory or keeping the CPU at sustained high levels of activity.

Interestingly enough on Windows XP, with IE 7, the mechanism used to open the browser window determines whether you share the process of the open browser instance, or get a separate process. Right-click a link in IE, select “Open in new window”, and you’re sharing the same process. Launch IE from the start menu, and you get a new process. Firefox 2 shares a single process under both of these scenarios.

I’ve really only just asked the question here. I need to do some measurement and analysis to make any sense out of it, but I do think we have some cause for concern.
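Even something as crude as the sketch below – a Date-based timer wrapped around a suspect block of code – would be a starting point for that measurement. The names here are just placeholders, not anything real.

    // Wrap a suspect chunk of script in a simple timer and report the result.
    // doSomethingExpensive is a stand-in for whatever code is under suspicion.
    function timeIt(label, fn) {
      var start = new Date().getTime();
      fn();
      var elapsed = new Date().getTime() - start;
      // console.log is available in Firefox with Firebug; fall back to alert.
      if (window.console && window.console.log) {
        window.console.log(label + ": " + elapsed + "ms");
      } else {
        alert(label + ": " + elapsed + "ms");
      }
    }

    // timeIt("render results table", function () { doSomethingExpensive(); });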

Posted in Coding, Tech

Back from Hiatus

Posted by Todd McKinney on February 18, 2007

My apologies (to my one reader) for being off the grid for a bit there. We just got back from Costa Rica, and there sure wasn’t a lot of feed reading and blogging going on during the trip.

I came across a post by Brian Bailey about clearly communicating expectations to users. His point, and I think it’s a good one, is that we need to consider whether protecting against any and all possible user actions is a worthwhile approach. In many cases, a good old-fashioned discussion may be a more effective answer, whether we’re trying to enforce some rule in software or a behavior in an organization.

Thanks for the thought-provoking topic, Brian – I think you touched a nerve here :)

Posted in General

 