I just recently found out that James Robertson had posted a video of my “Share Everything” talk from ESUG 2008 several months ago. I must have missed the announcement:)

An MP4 version of the talk was posted here a month ago.

And here’s a link to the slides.

Ken Treis gives some love to lighttpd in his post: My Favorite GLASS Front-End Server: lighttpd:

lighttpd maintains a running “load” for each FastCGI backend server. When an incoming request hits, lighttpd chooses from among the servers with the lightest load. This is exactly what I was looking for.
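For reference, a minimal lighttpd configuration for a pool of FastCGI backends might look like the sketch below. The URL prefix and ports are hypothetical; consult the mod_fastcgi documentation for your lighttpd version before copying anything.

```
# Load-balance /seaside across three FastCGI gems; lighttpd picks
# the backend with the lightest running load.
server.modules += ( "mod_fastcgi" )

fastcgi.server = (
  "/seaside" => (
    ( "host" => "", "port" => 9001, "check-local" => "disable" ),
    ( "host" => "", "port" => 9002, "check-local" => "disable" ),
    ( "host" => "", "port" => 9003, "check-local" => "disable" )
  )
)
```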

Photo by TW Collins (Creative Commons).


[Update – 3/19/2009: Read The SSD Anthology: Understanding SSDs and New Drives from OCZ (on AnandTech) if you are thinking of buying an SSD. There’s useful information in a discussion of the article over at Hacker News].

Because disk i/o is one of the primary factors affecting the performance of GemStone/S applications, using SSD drives has been an intriguing proposition ever since they popped onto the scene.

About a week ago, Otto Behrens posted a message on the GemStone Mailing list (emphasis mine):

I’m supporting a GS/S 32 bit system at an investment / insurance company. The database is currently 16GB and runs about 40 concurrent sessions.

I recently bought an 80GB Intel® X25-M Mainstream SATA Solid-State Drive (SSD) and installed it on a Windows XP desktop (Pentium 4, 1.2GB RAM). I set up a 512MB SPC and copied (all!) the extent files onto the SSD, leaving tranlogs on the internal IDE. This machine outperforms the production environment significantly! OK, the production environment is a Sun with 2 x 1GHz CPUs and 8 GB RAM; the GemStone DB there is set up with an SPC of 3GB.

Anybody tried this with GS/S? I think it will be significant because the SSD is perfectly suitable for this kind of database. Because of all the random reads done on these databases, the biggest bottleneck has always been the seek time on magnetic disks. The SSD reduces seek time to practically zero! The net result is that we need a much smaller SPC and can simply use internal SATA disks. Well, let’s see first, but I can see much cheaper hardware with much better performance coming…

And yesterday, Hernan Wilkinson wrote in a message (emphasis mine):

Hi Otto, after your mail we decided to give the SSD a try, and what can I
say… thank you very much for sharing your experience with all of us!

We bought a Super Talent Master Drive OX, which has 150 MB/sec (max) sequential read, 100 MB/sec (max) sequential write, and 0.1 ms access time. As you can see, it is slower than the Intel X25-M, and the results are still outstanding.

Here are some numbers from one of the migration steps we have to run in a couple of weeks at one customer. The customer’s extent is about 3.5 GB. The process was run on the same machine with the same GemStone configuration (we just changed the path of the extent and transaction log files to point to the SSD drive). The hard drive is a Maxtor STM3250310AS (300 MB/s data transfer rate and 11 ms average seek time). The machine has an Intel Core 2 Duo at 3.0 GHz and 2 GB of RAM (600 MB SPC), running Windows XP, SP3:

* Looking for all instances of 8 classes:
* With the hard drive: 28 minutes, 47 seconds
* With the SSD drive: just 39 seconds!!

* Migration total time:
* With the hard drive: 35 hours, 32 minutes, 3 seconds
* With the SSD drive: just 58 minutes, 28 seconds!!

So the difference is really big. We are thinking about telling our customers
to switch to this type of disk.

That’s a 36X gain on the total migration time (and 44X on the instance scan), achieved just by installing an SSD drive!

What’s going on?

If the Shared Page Cache (SPC) is too small relative to your working set, performance will be limited by how fast data pages can be read from disk.

If dirty pages are created too fast (high commit rate and/or high data volume), performance will be limited by how fast data pages can be written to disk.

Even if a system runs at an acceptable rate, its equilibrium can be upset by maintenance activities such as:

  • Garbage collection, which involves an increased level of page reads while all objects in the repository are scanned, and an increased level of page writes while the dead objects are disposed of and pages are reclaimed.
  • Data migration, which involves an increased level of page reads while allInstances are collected, and an increased level of page writes as each instance is migrated to its new shape.

The page read problem is generally solved by increasing the size of the SPC (the GS/S 32-bit SPC is limited to ~4GB). With an SSD drive, read rates are significantly faster, so you can see significant performance gains without changing the size of the SPC. This means you should be able to allocate your excess RAM to OS processes (like more and/or larger VMs).

The page write problem is generally solved by spreading the extent files across multiple disk spindles and adding additional AIO Page Servers. With an SSD drive, write rates are significantly faster, so you don’t need multiple AIO Page Servers (or multiple disk spindles) to keep up with the generation of dirty pages. This means that you don’t have to add external drives to supply the needed spindles.

I haven’t gotten my hands on an SSD drive yet, but with the kind of results that Otto and Hernan have seen it looks like SSD drives should receive serious consideration when you are looking to squeeze more performance from your GemStone/S application.

[1] Photo by Éole Wind (Creative Commons).

I know that the ‘A‘ in GLASS stands for Apache, but I have to admit that since last spring, I have been using lighttpd almost exclusively for my performance tests.

It all started last fall, when I noticed that every once in a while I’d get some pretty noticeable flat spots in the performance graphs when running at rates in excess of 100 requests per second. You can easily see the flat spots in the graph at right (running this setup – 4 gems on a 4 core machine with 10 concurrent siege sessions). The run peaks at 200 requests/second, but the graph has big swings.

I originally blamed the flat spots on “contention,” but I didn’t know where they were coming from.

In May I started tracking down the source of those flat spots. Over several days of testing I was able to rule out disk I/O, network I/O, and GemStone internals as the source of the contention. All vital signs on all of the systems involved were flatlined – I used separate machines for siege, Apache, and GLASS.

I finally got around to using tcpdump and I was able to see that the last packet to flow between machines before the flat spot was an HTTP request packet heading into the Apache box. The flat spot ended with a packet heading from the Apache box to the GemStone box. Pretty clear evidence that Apache was the culprit. Without getting into the internals of Apache, I figured that the contention must be an unfortunate interaction between the MPM worker module (which is multi-threaded) and mod_fastcgi.

I asked our IT guys to install lighttpd and you can see the results in the graph at right (32 gems on an 8 core machine with 180 concurrent siege sessions). In this run we’re peaking at 400 requests/second (twice as many cores), but the performance graph is much tighter (standard deviation of 36 for lighttpd versus 60 with Apache) and best of all, no flat spots. Soooo, if you expect to be hitting rates above 100 requests/second, you should be using lighttpd.

Not only is lighttpd performant, it is pretty easy to set up as well. There’s a post, ‘GemStone/S and FastCGI with lighttpd’, that describes how to set up lighttpd for GLASS.

[1] Photo: Grandpa’s Shed, Uploaded by a o k on 3 Apr 07, 11.14PM PDT.

I know, I know, at first blush it sounds like a bad idea, but if you let the idea marinate overnight and then sear over red hot mesquite – it ends up being a pretty tasty idea. No that isn’t the sound of vertebrae popping in the background:)

I blame Ryan Simmons. In a comment to my previous post, he innocently asked the question (emphasis mine):

Would it be possible to not commit session state to the stone and use something like Ramon’s current post on scaling ( ) to redirect users to the same gem?

Avi Bryant read Ryan’s comment and suggested that running Seaside on GLASS using 1 Session per VM could be a viable technique for avoiding commits.

But, 1 Session per VM?

As I’ve detailed elsewhere, there are good reasons for not using a vm to serve multiple concurrent sessions (without doing a commit per request), but if one were to serve a single session per vm, the good reasons are rendered moot.

With 1 Session per VM, session state does not need to be persisted and voila! no commits due to changes in session state. One should be able to approach the same sort of performance achieved with navigation URLs (i.e., 130 pages/second running on commodity hardware: 2.4GHz Intel, with 1 dual core cpu, running SUSE Enterprise 10, with 2GB of RAM and 2 disk drives – no raw partitions) while continuing to use rich, stateful Seaside components.

Isn’t 1 Session per VM wasteful?

It depends upon how you look at it. The only thing that might be wasted is memory/swap space. A couple of hundred extra processes on today’s hardware is not a big deal unless you are swapping.

If you have 100,000 unique visitors per month and a 10 minute session expiry, you end up needing around 20 sessions. Throw in a fudge factor of 5x and you’re looking at 100 sessions.
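The arithmetic behind that estimate is easy to check (a sketch, assuming visitors arrive spread evenly over the month):

```python
# Concurrent sessions ~= average arrival rate x session lifetime.
visitors_per_month = 100_000
session_expiry_min = 10                    # minutes a session stays alive
minutes_per_month = 30 * 24 * 60           # 43,200

arrival_rate = visitors_per_month / minutes_per_month   # ~2.3 visitors/minute
concurrent = arrival_rate * session_expiry_min          # ~23 live sessions

fudge = 5                                  # headroom for bursty traffic
vms_needed = round(concurrent * fudge)

print(round(concurrent), vms_needed)       # roughly 23 sessions, ~116 VMs
```

Even with the 5x fudge factor, the count lands in the same ballpark as the 100 sessions quoted above.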

100 VMs at 100MB per VM (tunable) would consume 10GB of RAM, unless you use mmap (which we do). With mmap, only the memory that is actually used is allocated in RAM. The GemStone/S object manager maps and unmaps chunks of memory on demand, so the full 100MB will only be used if needed, and when the memory is no longer needed it is returned to the system. Without some real benchmarks I can’t tell for sure, but I think it is reasonable to assume that 100 100MB GemStone vms could run comfortably in 5GB or less of real memory.

On the flip side, in-vm garbage collection is much cheaper, because only the working set for a single session needs to be swept. When session state for multiple sessions is colocated in the same vm, there is noticeable overhead, so with 1 Session per VM we trade memory for CPU.


We will run some real benchmarks to characterize the tradeoffs between memory, CPU, and commits, but based on back of the envelope calculations it appears that 1 Session per VM is a viable approach for scaling Seaside applications with GemStone/S.

I’m headed to Amsterdam and ESUG on Thursday, but I intend to get busy on this when I get back into Portland in early September, so keep your eyes peeled.

“Nobody expects the Spanish Inquisition!” With that Cardinal Ximinez spun on his well-shined heel and pulled the heavy, copper strapped cell door closed. Laughter echoed in the tunnel as the meager light drifted away with the torch smoke.

I didn’t set out to commit heresy; I suppose no one really does.

Sprawled on a bug infested pallet, locked in a cell deep beneath the Abbey, remnants from my former life draped in ragged tatters from my bruised limbs, I wondered if my simple act of heresy was worth the heavy price I am doomed to pay. I hold scant hope for my redemption, but you gentle reader, you have a chance to learn from my descent into dissent.

Back in April…

…after resolving the last couple of issues with transparent persistence, I turned towards the scalability and performance of Seaside and GLASS. For focus I set a hypothetical goal of 100,000 requests/second – if you’re going to aim for something you might as well aim high.

Since GLASS performs one commit per HTTP request, I started out by tuning GemStone/S for maximum commit performance. Before long I realized that I was barking up the wrong tree – if our commercial customers can do, say, 5,000 commits/second, then I’d need to come up with a 20x performance boost for commit processing to reach 100,000 requests(commits)/second – an admirable goal but not very realistic, since we’ve spent 20 years tuning commit performance.

For a 20x performance improvement, I’d just have to pack more bang into each commit or somehow avoid state changes altogether. I’d already explored the ‘more bang per commit’ option way back when I first ported Seaside to GemStone, so I knew that there wasn’t going to be much gained by that route. With regards to avoiding commits, the theory is that avoiding commits for 95% of the pages could result in a 20x gain in performance. The more commits avoided the bigger the multiplier. Now we’re talking!
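As a sanity check on that multiplier, here is the back-of-the-envelope model (the 5,000 commits/second ceiling is the hypothetical figure from above):

```python
# When commits are the bottleneck, page throughput is capped by
# commit_ceiling / fraction_of_pages_that_commit.
commit_ceiling = 5_000   # commits/sec

def max_page_rate(commit_fraction):
    """Pages/sec sustainable when only this fraction of pages commit."""
    return commit_ceiling / commit_fraction

print(max_page_rate(1.00))   # every page commits: 5,000 pages/sec
print(max_page_rate(0.05))   # 95% of commits avoided: 100,000 pages/sec
```

The smaller the committing fraction, the bigger the multiplier, which is exactly why avoiding commits on 95% of pages is worth chasing.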

In May, I started exploring the reuse of session state. By the end of July, I had a proof of concept demonstrating the reuse of 99% of the session state, but I wasn’t real happy with the extent of the changes I had to make to Seaside. Nor could I eliminate the decoration chain update used with #call:/#answer:, the primary navigation scheme for Seaside. We aren’t going to avoid commits on 95% of the pages without being able to navigate statelessly. I needed a navigation scheme that doesn’t require Seaside session state.

Last week (end of July) James Foster and I tossed around some ideas for doing navigation in Seaside without using #call:/#answer: and without using Seaside session state. After playing around with a couple of approaches I settled on the following style of URL, which I’ll call a navigation URL – it’s sorta RESTful, but not too RESTful; it is bookmarkable and it was pretty easy to write a component to render pages (the most important part – ha, ha):

I wrote an example Seaside application (Sushi-Example) using the navigation URLs. It is available on GemSource in the GemStone Examples project. As of this writing Sushi-Example-dkh.25 is the version that should be used. The Sushi-Example package can be loaded into either GLASS or Squeak Seaside. There is no styling in the example so don’t expect anything pretty. I encourage you to take a look at the example and give us feedback.

Last Sunday morning I was lying in bed mulling over the problem when it occurred to me that if I dropped the ‘_k’ off a navigation URL and figured out how to render the page, I wouldn’t need to save a continuation for each page hit and, most importantly, I’d avoid a commit for each of those pages as well! After a little more thought, I figured it would be possible to avoid saving sessions, making it possible to drop the ‘_s’, too. By Monday afternoon, I had a prototype running.

I spent Tuesday doing some benchmarks to see if there were noticeable improvements in performance. I knew that by avoiding commits I would greatly improve the scalability of Seaside, but I was also hoping to see some performance gains, too.

I have to admit that advocating the use of navigation URLs and proposing the ‘_k’ and ‘_s’ be optional URL parameters makes me feel like a Seaside heretic…

(JARRING CHORD. The door flies open. In come three evil types in red robes.)

Cardinal Ximinez: NOBODY expects the Spanish Inquisition! Our chief weapon is surprise…surprise and fear…fear and surprise…. Our two weapons are fear and surprise… and ruthless efficiency…. Our three weapons are fear, surprise, and ruthless efficiency… and an almost fanatical devotion to the Pope…. Our four… no… Amongst our weapons… Amongst our weaponry… are such elements as fear, surprise… I’ll come in again.

A Case to make ‘_k’ and ‘_s’ optional using Navigation URLs

… but here goes, the rack be damned.

I have glossed over a couple of details in the following section, so you’ll want to look at the code for the real skinny:)

In a normal Seaside application, URLs are generated for links with parameters that look something like ‘_s=68pqfS&_k=SW7A&_n’. To decode such a URL, Seaside uses the ‘_s=68pqfS’ to look up a session, the ‘_k=SW7A’ to look up a continuation in the session, and the rootComponent associated with the continuation is rendered. The ‘_n’ indicates to Seaside that no redirect is needed for the rendering pass.

Consider a navigation URL that carries ‘pg=Item&id=26’ instead. To decode the navigation URL, Seaside creates a default session (no ‘_s’ present) and renders the rootComponent associated with a default rendering continuation (no ‘_k’ present). The rootComponent uses the ‘pg=Item&id=26’ to look up and render the appropriate item.

When a navigation URL is used to reach a page, Seaside creates and saves a new instance of the session class. If a customer is window shopping (i.e., navigating around the site without logging in or adding an item to a cart), the session state is uninteresting and doesn’t really need to be saved. If a request URL has no ‘_s’ and no state is changed while processing the URL, the ‘_s’ could be dropped from generated navigation URLs with no loss of information. There’d be a definite advantage for web sites with lots of window shoppers as only the interesting sessions would be saved.

If a customer logs in or adds an item to a cart, then the session becomes interesting and it makes sense to propagate the ‘_s’ to all generated navigation URLs. Such a URL carries both ‘_s=68pqfS’ and ‘pg=Item&id=26’. To decode it, I would expect Seaside to use the ‘_s=68pqfS’ to look up the session and render the ‘pg=Item&id=26’ item. The ‘pg=Item&id=26’ is retained in the URL to make it possible to look up and render the target page. Essentially the ‘pg=Item&id=26’ replaces the ‘_k’, which ends up serving two useful purposes:

  • In a window shopping scenario, the customer will continue to navigate around the site. The interesting session state is encoded in the ‘_s’ and without a ‘_k’ there is no need to create and save session state for each page rendered.
  • Such a navigation URL is natively bookmarkable. If the session expires before the bookmarked URL is used, it still contains enough information to allow valid navigation to the bookmarked page.

With navigation URLs, a ‘_k’ is needed only when a #callback: or #call: is associated with a rendered anchor or form.
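The decoding rules above can be summarized in a few lines of Python. This is a sketch of the dispatch logic only; the real implementation lives in Seaside/SushiSession, and the names here are stand-ins, not the actual API:

```python
from urllib.parse import parse_qs

def decode(query, sessions):
    """Dispatch a request: '_s' and '_k' are both optional."""
    params = {k: v[0] for k, v in parse_qs(query).items()}
    # '_s' present: look up the (interesting) saved session;
    # otherwise use a throwaway default that is never persisted.
    session = sessions.get(params['_s']) if '_s' in params else {}
    if '_k' in params:
        # Classic Seaside: render the continuation saved under '_k'.
        return ('continuation', params['_k'])
    # Navigation URL: 'pg' and 'id' alone identify the page to render
    # (`session` would be handed to the root component here).
    return ('page', params.get('pg'), params.get('id'))

print(decode('_s=68pqfS&pg=Item&id=26', {'68pqfS': {}}))
```

The point of the sketch: no branch above creates or saves session state unless the URL explicitly asks for it.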

(Torchlit dungeon. We hear clanging footsteps. Shadows on the grille. The bootsteps stop and keys jangle. The great door creaks open and Ximinez walks in and looks round approvingly. Fang and Biggles enter behind pushing in the dear old lady. They chain her to the wall.)

Ximinez : Ha! Now, old woman! You are accused of heresy on three counts. Heresy by thought, heresy by word, heresy by deed, and heresy by action. …Four counts. Do you confess?

Old Lady : I don’t understand what I’m accused of.

Ximinez : Ha! Then we shall MAKE you understand! Biggles! Fetch… …THE CUSHIONS!

The Benchmark

In the Sushi Example, you can use either the standard WASession or the SushiSession. When you use WASession, you’ll get ‘_s’ and ‘_k’ parameters generated for every link (along with the associated session state), while with SushiSession, the ‘_s’ and ‘_k’ parameters are only included in the URL when needed. Since I wanted to measure the effect of the accumulation of session state (or lack thereof), I decided to run the tests for 20 minutes (double the default sessionExpiry). With a 20 minute test we’ll generate a full complement of session state and presumably settle into a steady state as aged sessions expire.

For all of the tests (except Run5), I used the following siege invocation:

siege -b -d 0 -c 10 -t 20M

The arguments tell siege to hit the URL as hard as possible simulating 10 concurrent users for 20 minutes. For more on siege, see my post Scaling Seaside with GemStone/S.

For Run5, I had to increase the -c argument in order to consume both CPUs on the box. Run5 was made with 300 concurrent sessions.

The Benchmark Results

The following table summarizes the results:

Run   Req/Sec   Cores   Gems   VM   Machine   Session Class
1     11        1       1      S    laptop    WASession
2     14        1       3      G    Foos      WASession
3     67        1       1      S    laptop    SushiSession
4     129       1       3      G    Foos      SushiSession
5     335       2       3      G    Foos      SushiSession

(VM: S = Squeak, G = GemStone)

Run1 (Squeak) and Run2 (GemStone) are baseline runs and if you look at the results from my benchmarks last fall, you’ll see that the figures are comparable (previous Squeak results and previous GemStone results).

Things get interesting when you look at Run3 (Squeak). Not saving session state results in a 6x improvement in performance. Since both runs (Run1 and Run3) were made against the same Sushi-Example the difference in performance can be attributed to the reduced load on the memory management system. While I didn’t measure the size of the VM, I am sure that it is a lot smaller in Run3.

For GemStone Run4 the improvement is even more dramatic – a 9x improvement. For GemStone/S, the improvement is due to the fact that without saving session state, no commit processing overhead is incurred. I should note that on the machine Foos, no effort was made to use raw tranlogs (see Scaling Seaside with GemStone/S and GemStone 101: Transactions for info about raw tranlogs and performance) so the improvement is more dramatic than you’d see on a system that was tuned for commit performance.

For Run5 (GemStone) I added an additional CPU, allowing the stone and 3 gems to utilize both CPUs available on the machine. This time the improvement was 2.5x. You’d expect 2x, so the extra margin of performance is probably due to the fact that there was very little context switching going on with 2 CPUs.


The quest for 100,000 pages/second doesn’t look quite as quixotic now as it looked a week ago. I think that using navigation URLs with Seaside is a valid technique especially if you are interested in a highly scalable Seaside application.

If you are using the Web Edition of GLASS, using navigation URLs will definitely make it possible to sustain rates in excess of 15 pages/second, since you will be able to avoid saving uninteresting session state in the repository.

For low traffic sites, I think navigation URLs hold promise for making it possible to get the most bang for your resource buck.

The prototype for making the ‘_k’ and ‘_s’ optional parameters is less than a week old and I know that there are some weak spots in the implementation, but I’m sure that they can be resolved:

I don’t like how navigationOnly is used in SushiSession. I’m sure that navigationOnly can be eliminated, but it might be necessary to change the logic for generating Form html, since I added navigationOnly so that Forms would work correctly.

I would like to be able to create Forms where a URL could be used as an alternative to using a #callback:. I know I risk being boiled in oil for this, but I would have liked to include a search field on nearly every page, and that would have required a #callback:.

After spending bits and pieces of 3 months working part time towards the reuse of session state, it is pleasing to see how little code it took to make the ‘_k’ and ‘_s’ parameters optional.

Before we point jcrawler at a Seaside application I would like to talk about what you should expect.

To start with, jcrawler is like a bull in a china shop. The algorithm jcrawler uses is not very deterministic, nor is it discriminating, but it is thorough, relentless, and highly parallel. Given an initial set of URLs, jcrawler traverses each page and adds the links it finds to its list. Every so often, jcrawler creates a new thread to process another URL from the list. We can depend upon jcrawler to rattle every piece of china in an application, and it will rattle more than one piece at a time, so we’d better be ready to deal with wreckage.
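In outline, that crawl-and-spawn loop looks like the toy version below, which walks an in-memory “site” instead of making real HTTP requests (jcrawler itself is Java and throttles to a configured hits/sec; none of that is modeled here):

```python
import re
import threading

# Toy site: page name -> HTML containing links to other pages.
site = {
    'home': '<a href="a">A</a> <a href="b">B</a>',
    'a':    '<a href="b">B</a>',
    'b':    '<a href="home">home</a>',
}
seen = set()
lock = threading.Lock()

def fetch(url):
    """Visit a page once; spawn a new thread for every link found."""
    with lock:
        if url in seen:
            return
        seen.add(url)
    for link in re.findall(r'href="([^"]+)"', site.get(url, '')):
        threading.Thread(target=fetch, args=(link,)).start()

fetch('home')
# Wait until every crawler thread (including late-spawned ones) is done.
while True:
    workers = [t for t in threading.enumerate()
               if t is not threading.main_thread()]
    if not workers:
        break
    for t in workers:
        t.join()
print(sorted(seen))
```

Because each discovered link becomes its own thread, overlapping requests (and the wreckage they cause) come for free.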

Jcrawler will help make your application bullet proof, but at potentially 15 errors/second spread across several vms, there can be a lot of wreckage to sift through.

I added an object log to GLASS a couple of weeks ago, and over the last couple of days I’ve added a Seaside application for viewing and manipulating the object log. Take a look at a sample object log (my blog is wide-image challenged). It’s not the purtiest page this side of the Mississippi (I am web-design challenged:), but it does the job.

In the object log, the entries labeled ‘– continuation –’ represent object log entries that can be debugged via the ‘Debug’ button in the GemStone/S Transcript Window. If you take a peek at the pid column, you will notice that the log entries were generated by two different gems. There are three gems serving HTTP requests in the appliance.

The upshot is that after letting jcrawler hammer on your application, not only do you get an overview of the problems uncovered during the run, but you can open a debugger and investigate the issues that resulted in walkbacks.

I generated the sample object log by manually playing with the randomError application (http://localhost/seaside/examples/GemStone/randomError in the appliance) found in the GemStone Examples project. This little gem generates a simple log entry (‘random error’) or walkback (‘– continuation –‘) 12% of the time. You can also generate different kinds of errors by poking the links in the Error tab of the alltests application (http://localhost/seaside/tests/alltests in the appliance).

If you want to play with the object log, load up the latest version of the GLASS package (GLASS-dkh.103 in the GLASS project – it will also load the GemStone Examples). Poke around in the randomError application until you get an error, then head on over to the object log (http://localhost/seaside/tools/objectLog in the appliance). You can also try the remote debugger from your development image.

Next up we’ll talk a little bit about configuring jcrawler.

Avi Bryant has an article up on his blog where he does a very good job of describing GemStone’s architecture. It’s definitely worth a read.

jcrawler is a good tool for load testing Seaside applications – in theory. Unfortunately, it takes a little bit of work to modify jcrawler to get it to the point where it can be used to load test Seaside applications. So here’s my story…

The Story

In the last week I’ve started putting together some examples of persistence for GLASS (I’ll write a post about them when I’m happy with the examples). As I have mentioned before, you need to use different techniques to manage concurrency in GemStone/S, because you will be using multiple vms to serve web pages and Semaphores can’t be used. The examples illustrate several of the techniques that you can use to avoid transaction conflicts. As part of the exercise, I needed to find a way to test for transaction conflicts.

In order to create a transaction conflict, you need to have two web requests hit your web server at exactly the same time. I’ve used siege in the past for load testing, but siege uses constant URLs, not too useful for banging arbitrary URLs buried in the depths of your Seaside application.
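The conflict scenario is easy to reproduce in miniature. The sketch below simulates optimistic transactions with a version check plus an abort/retry loop; it is plain Python, not GemStone’s actual transaction machinery, but it shows why two overlapping read-modify-write cycles need a retry strategy:

```python
import threading

class Cell:
    """Optimistic store: a commit succeeds only if nobody else
    committed between our read and our write."""
    def __init__(self):
        self.value = 0
        self.version = 0
        self._lock = threading.Lock()

    def read(self):
        with self._lock:
            return self.value, self.version

    def commit(self, new_value, read_version):
        with self._lock:
            if self.version != read_version:
                return False        # transaction conflict
            self.value = new_value
            self.version += 1
            return True

cell = Cell()

def increment():
    # Abort/retry loop: re-read and try again after a conflict.
    while True:
        value, version = cell.read()
        if cell.commit(value + 1, version):
            return

threads = [threading.Thread(target=increment) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(cell.value)   # all 50 increments survive despite conflicts
```

A load tester that fires truly concurrent requests is what forces the conflict branch to actually execute.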

To effectively beat on a Seaside application (especially if you want to expose concurrency bugs) you need a load tester that will crawl through your site, pick up the dynamically generated URLs and feed them back into the mix.

I knew that WAPT had been used by several folks for Seaside Load Tests, but I didn’t see site crawling mentioned in the feature list for WAPT. Beside that I’m doing my work on Linux boxes, so a Windows-only tool would not be convenient.

Without trying too hard, I found a site that listed a ton of Web Test Tools, and up near the top of the Load and Performance Test Tools section there was a listing for jcrawler:

An open-source stress-testing tool for web apps; includes crawling/exploratory features. User can give JCrawler a set of starting URLs and it will begin crawling from that point onwards, going through any URLs it can find on its way and generating load on the web application. Load parameters (hits/sec) are configurable via central XML file; fires up as many threads as needed to keep load constant; includes self-testing unit tests. Handles http redirects and cookies; platform independent.

Just the ticket, huh? Well, if it was that easy, I wouldn’t be writing a blog post would I? Haha!

The Work

I grabbed the download from SourceForge and proceeded to build jcrawler.

You need ant, too. But that’s easily fixed.

The build completed and I was ready to slam my Seaside apps – and it had only been a couple of minutes! But the first run failed:

Exception in thread “main” java.lang.UnsupportedClassVersionError: com/jcrawler/Main (Unsupported major.minor version 49.0)

It turns out that you must use JDK 5.0. Another download and some monkey business with my environment variables:

export JAVA_HOME=/home/dhenrich/jdk1.5.0_14
export PATH=$JAVA_HOME/bin:$PATH

and we’re off to the races. I launched jcrawler against rcTally, a variant on WACounter running on my copy of the appliance.

The Problem

Things appeared to be running okay. jcrawler was spinning away, dumping log entries like the following to stdout (sorry about the line wrapping):

2568 [THREAD#93 CREATED 10:40:01::602] INFO com.jcrawler.UrlFetcher – Fetching URL

However, as I interactively poked at rcTally, I noticed that jcrawler wasn’t hitting the ++ or –– links, because the shared value was not getting updated.

After an excruciating amount of debugging, I noticed that the URLs extracted from the web page contained the sequence ‘&amp;’ instead of ‘&’… geez, it has been just about as hard to get WordPress to display the dang ‘&amp;’ string in my post (can’t use the rich editor) as it was to find the problem in jcrawler.

The Fix

  1. Download an HTML Parser from SourceForge.
  2. Copy the jars from the HTML Parser into jcrawler:

    cd /home/dhenrich/htmlparser1_5/lib
    cp *.jar /home/dhenrich/jcrawler/lib

  3. Edit the jcrawler source and insert the following line after line 120 in com/jcrawler/ (in the jcrawler src directory) to convert the encoded HTML:

    content = org.htmlparser.util.Translate.decode(content);

  4. add htmlparser.jar to the list of jars in (in the jcrawler misc directory).
  5. Rebuild jcrawler and you are off to the races!
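For the curious, the Translate.decode call in step 3 is plain HTML entity decoding; Python’s standard library does the same thing, which makes the bug easy to demonstrate (the path in the sample URL is made up):

```python
import html

# jcrawler was extracting hrefs with '&amp;' still encoded, so query
# parameters like '_s' and '_k' never matched. Decoding fixes the URL.
raw = '/seaside/app?_s=68pqfS&amp;_k=SW7A'
print(html.unescape(raw))
```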

The Payoff

At the end of the day, you’ve got yourself a version of jcrawler that can be used to randomly poke around in the nooks and crannies of your Seaside application and give it a pretty thorough workout.

As I work on the GemStone examples, I’ll learn more about jcrawler’s quirks and features, but for now it does pretty much what I need.

If there’s another load tester out there that can crawl through a Seaside website, I’d appreciate hearing about it.

Herb Sutter has just published another article in his series on Effective Concurrency: Use Lock Hierarchies to Avoid Deadlock. Coming on the heels of my post on Transaction Conflicts where I talk about object locks, it is very timely.
