Posted by Todd McKinney on February 25, 2007
Scott Isaacs shared some lessons learned building live.com a while back, and they touch on a couple of issues related to client-side performance. Notably, one of the observations is that parsing XML is slow. Another is that script downloads can negatively impact the user experience unless they are carefully managed.
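To make the first observation concrete, here is a rough sketch of how one might compare the cost of pulling the same record out of XML and JSON on the client. The data and field names are made up, and the regex-based XML scan is deliberately naive (a real browser uses a full DOM parser, which does considerably more work), so treat this as an illustration of the measurement, not a benchmark result:

```javascript
// The same record expressed two ways (illustrative data).
const xml = "<user><id>42</id><name>Ann</name></user>";
const json = '{"id": 42, "name": "Ann"}';

// Naive XML extraction: pull out <tag>value</tag> pairs. A real DOM
// parser validates, builds a tree, handles attributes, etc.
function parseXmlNaive(s) {
  const out = {};
  for (const [, tag, val] of s.matchAll(/<(\w+)>([^<]+)<\/\1>/g)) {
    out[tag] = isNaN(val) ? val : Number(val);
  }
  return out;
}

// Time many iterations of each; absolute numbers vary by machine
// and script engine, so only the relative cost is interesting.
let t = Date.now();
for (let i = 0; i < 100000; i++) JSON.parse(json);
const jsonMs = Date.now() - t;

t = Date.now();
for (let i = 0; i < 100000; i++) parseXmlNaive(xml);
const xmlMs = Date.now() - t;

console.log(parseXmlNaive(xml), "JSON: " + jsonMs + "ms, XML: " + xmlMs + "ms");
```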
My concern with all of this is that we are very quickly moving into a world where poor performance is becoming commonplace because of a combination of factors:
1. We have a ton of code being downloaded into our browsers and executed through an interpreter.
2. Browsers are not the most miserly resource consumers to begin with (see the Firefox memory-leak reports, for example).
3. Too often, the performance characteristics of the code are not well understood and thoroughly analyzed by the developers writing the code.
4. With ubiquitous tabbed browsing and a proliferation of web-based applications, the end user is increasingly loading many different applications into the same process.
I am less concerned with items one and two than I am with three and four. We have a history of optimizing interpreted execution environments through the magic of JIT compilers. If there’s a speed advantage to be gained, and the performance pain is widespread and visible enough, I have confidence that the browser vendors will solve the interpretation problem. The same goes for garbage collection and general browser application resource consumption. The worse it gets, the more likely it is to be solved.
On the coding-practices front, one thing seems obvious to me: this is not a no-brainer. As with most performance optimization, it takes deliberate effort and significant attention to detail to get the coding done right. Every time a development effort prioritizes “get cool stuff done quickly” over well-engineered, efficient code, the odds increase that end users will end up running something in the barely-adequate-to-poor end of the performance spectrum.
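As a small illustration of the kind of detail involved, consider the classic habit of building a long string with repeated concatenation versus collecting the pieces and joining once. This is a sketch under made-up names and loop bounds, and the relative cost depends heavily on the script engine, but it is exactly the sort of choice that separates "works" from "fast":

```javascript
// Building markup by repeated string concatenation: each += can force
// a new, ever-larger string to be constructed.
function buildSlow(n) {
  let html = "";
  for (let i = 0; i < n; i++) {
    html += "<li>item " + i + "</li>";
  }
  return html;
}

// Collecting the pieces in an array and joining once at the end, a
// common optimization in older script engines.
function buildFast(n) {
  const parts = [];
  for (let i = 0; i < n; i++) {
    parts.push("<li>item " + i + "</li>");
  }
  return parts.join("");
}

console.log(buildSlow(3) === buildFast(3)); // both produce identical markup
```

The point is not this particular trick, but that someone on the team has to know it exists, know when it matters, and take the time to apply it.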
To really compound things, loading up multiple sites in a browser instance with tabs means that the end user is often running the code from more than one site in memory. I’m generalizing based on my own behavior here, but it can’t be that unusual. Typically, I have 15 tabs open in a single browser instance. The reason for the magic number 15 is that that’s about how many comfortably fit horizontally across the screen in my usage. Once I’ve gotten the browser instance “full” with 15 or so tabs, I launch a separate window and keep going. Normally, two browser instances is about all I’m willing to tolerate without going back and “recycling” existing tabs for something else. What I’m noticing is that, more and more, when I start hitting the wall on my local machine, the browser is the likely culprit, using up gobs of memory and/or causing sustained high CPU activity.
Interestingly enough, on Windows XP with IE 7, the mechanism used to open a browser window determines whether you share the process of the open browser instance or get a separate one. Right-click a link in IE, select “Open in new window”, and you’re sharing the same process. Launch IE from the Start menu, and you get a new process. Firefox 2 shares a single process under both scenarios.
I’ve really only just asked the question here. I need to do some measurement and analysis to make any sense out of it, but I do think we have some cause for concern.