The test data we used was considerably larger than that of a typical small project: it was scaled to match the data volumes that a very large enterprise environment, with many users, is expected to accumulate within a few years. Perhaps the most worrying finding was that the amount of data affected the API response times linearly. For example, if a release contained X backlog items and getActiveBacklogItems() took Y seconds to respond, doubling X clearly doubled Y as well.
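The linear relationship described above can be checked mechanically from timing measurements. The sketch below is illustrative only: the (backlog items, seconds) pairs are made-up numbers, not our actual results, and the function simply compares response-time growth against data growth between consecutive runs.

```python
# Illustrative measurements: (number of backlog items, response time in seconds).
# These values are placeholders, not the figures from our tests.
measurements = [
    (1_000, 0.8),
    (2_000, 1.6),
    (4_000, 3.2),
]

def scaling_factor(measurements):
    """Average ratio of response-time growth to data growth between
    consecutive measurements. A value near 1.0 indicates linear scaling:
    doubling the data doubles the response time."""
    ratios = []
    for (x1, y1), (x2, y2) in zip(measurements, measurements[1:]):
        ratios.append((y2 / y1) / (x2 / x1))
    return sum(ratios) / len(ratios)

print(round(scaling_factor(measurements), 2))  # → 1.0
```

A factor well below 1.0 would instead suggest sub-linear (e.g. indexed) behaviour, which is what we would want to see from getActiveBacklogItems() at enterprise data volumes.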
This in itself may not be a problem if the amount of data stays small, but in this enterprise environment the expected data volumes are huge.
During the tests we monitored the Linux server running both ScrumWorks Pro and MySQL. CPU usage split roughly 70% for the Java process and 30% for MySQL, and the Java process was also consuming nearly 50% of the machine's available memory. Only during deletion of BacklogItems did MySQL spike to 80% or more.
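The per-process split can be sampled from `ps aux` output on the server. The sketch below is an assumption about how such monitoring could be scripted (the process names, sample output, and figures are illustrative, not our recorded data); it aggregates the %CPU and %MEM columns per command name.

```python
def usage_by_command(ps_output, commands=("java", "mysqld")):
    """Aggregate %CPU and %MEM per command name from `ps aux`-style text.
    Returns {command: (total_cpu_percent, total_mem_percent)}."""
    totals = {c: [0.0, 0.0] for c in commands}
    for line in ps_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) < 11:
            continue
        for c in commands:
            if c in fields[10]:              # COMMAND column
                totals[c][0] += float(fields[2])  # %CPU column
                totals[c][1] += float(fields[3])  # %MEM column
    return {c: tuple(v) for c, v in totals.items()}

# Live usage on the server would feed in real output, e.g.:
#   subprocess.run(["ps", "aux"], capture_output=True, text=True).stdout
# The sample below is fabricated to mirror the split we observed.
SAMPLE = (
    "USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\n"
    "root 1 70.0 48.5 100 200 ? Sl 0:00 0:01 /usr/bin/java -jar scrumworks.jar\n"
    "mysql 2 30.0 10.0 100 200 ? Sl 0:00 0:01 /usr/sbin/mysqld"
)
print(usage_by_command(SAMPLE))
```

Sampling this in a loop during a test run would let us correlate the MySQL spikes (e.g. during BacklogItem deletion) with specific API calls.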
The tests are not yet conclusive. Before we can really judge the application's performance, the database needs to be moved to a machine separate from the web server, the JVM settings optimized, and other settings (OS and such) tweaked.
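As a starting point for the JVM tuning, a launch script along these lines could be tried; the flag values and the jar name are placeholders for illustration, not measured recommendations.

```shell
# Hypothetical launch command for the next test round.
# Fixed initial/max heap, to keep the Java process from growing
# toward ~50% of the machine's memory unchecked:
JAVA_OPTS="-Xms1024m -Xmx2048m"
# Log GC activity during the load tests, to correlate collector
# pauses with the observed response times:
JAVA_OPTS="$JAVA_OPTS -verbose:gc"
java $JAVA_OPTS -jar scrumworks-pro.jar
```

Re-running the same data-scaling tests with GC logs enabled would tell us whether the linear response times come from the application itself or partly from heap pressure.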
Another very good option would be to split the web tier across two machines behind a load balancer, but I was informed that ScrumWorks Pro does not support this (unverified).
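If clustering support were confirmed, the proposed topology would be straightforward; the sketch below is only an illustration of the idea, with hypothetical host names, and should not be attempted until the vendor confirms support.

```nginx
# Hypothetical nginx front end round-robining two ScrumWorks Pro web servers.
upstream scrumworks {
    server web1.example.internal:8080;
    server web2.example.internal:8080;
}
server {
    listen 80;
    location / {
        proxy_pass http://scrumworks;
    }
}
```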
To view test results click here.