I won’t go into the details of the hardware or OS specs, since there is no reliable way to compare two different systems against each other, but the relative performance on the tested system gives a good indication of how the API will behave on other systems as well.
I also want to stress that the SOAP API is not the same as the web/desktop client and may use a different communication model on the server side. The results are NOT comparable to web or desktop client response times.
We started with a large test case we had defined that only browsed the releases, sprints, backlog items and tasks. However, getting all the tasks for all backlog items in all releases and sprints requires a fairly large number of SOAP calls.
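To get a feel for how quickly those calls multiply, here is a minimal back-of-the-envelope sketch. The counts and the call names in the comments are illustrative assumptions, not the actual test data or the real ScrumWorks API method names:

```python
# Rough arithmetic for walking the release -> sprint -> backlog item -> task
# hierarchy one SOAP call at a time. All counts are made-up examples.
releases = 5
sprints_per_release = 4
items_per_sprint = 20

calls = 1                                  # one call to list releases
calls += releases                          # one "get sprints" call per release
calls += releases * sprints_per_release    # one "get backlog items" per sprint
calls += releases * sprints_per_release * items_per_sprint  # one "get tasks" per item

print(calls)  # 1 + 5 + 20 + 400 = 426
```

Even for this modest hypothetical product, browsing the full hierarchy naively costs hundreds of round-trips, which is why the test case generated so much traffic.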
First we established a baseline to measure the basic speed of the system. Then the load was slowly increased by adding simultaneous users. We found that response times and resource usage did not increase linearly with the number of users, which is a very good sign! We increased the load until we reached the limits of the target server (CPU at 100%) and ran it for a prolonged time to see if there were any delayed effects.
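The ramp-up procedure can be sketched roughly as below. This is a simplified illustration, not the actual test harness: `make_soap_call` is a stand-in stub that sleeps instead of issuing a real SOAP request, and the user counts are arbitrary:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def make_soap_call():
    """Stand-in for one SOAP request; a real test would call the API here."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server-side work
    return time.perf_counter() - start

def run_load_step(users, calls_per_user=5):
    """Run `users` simulated clients concurrently and return the mean response time."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(make_soap_call)
                   for _ in range(users * calls_per_user)]
        times = [f.result() for f in futures]
    return mean(times)

# Baseline with one user, then slowly add simultaneous users,
# watching how the mean response time grows at each step.
for users in (1, 2, 5, 10):
    avg = run_load_step(users)
    print(f"{users:>2} users: mean response {avg * 1000:.1f} ms")
```

In a real run you would keep increasing the user count until the target server saturates (CPU at 100%) and also record server-side resource usage alongside the client-side response times.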
In the end, my initial worries about the data volumes being a performance killer proved unfounded. The product we used in the testing had a fairly reasonable amount of data in terms of backlog items and tasks.
So my personal conclusion is that the ScrumWorks 3.0 SOAP API is well built: it handles a multitude of users and functions, and it does not stall even when CPU resources run low.