Here are some questions we have answered about dsBenchmark:
"I just ran dsBenchmark on a colocated Mac mini and on a machine with a FM hosting company. In the first instance the test topped out at 100 connections and on the second instance at 25. It both cases I watched my server stats from the server admin dashboard and nothing seemed to be too heavily taxed. Neither test ended itself. The number of connections just flatlined. I suspect that the FileMaker script engine connections are somehow limited on each machine. Both machines are running the same 25 user license (site lic.). Are you familiar with this issue and how might I test these machines to their limits? Thank you."
Yes. dsBenchmark simulates multiple users by creating server-side sessions: a looping script calls Perform Script on Server WITHOUT telling the calling script to pause until the called script has concluded.
The default setup for FileMaker Server limits those Perform Script on Server sessions to 25. This limit can be set to any value up to a maximum of 500. To run dsBenchmark you should set this limit to 500.
In recent versions of FMS you need to use the Terminal (command line) to do this:
fmsadmin set serverconfig scriptsessions=500
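To confirm the new limit you can read the setting back (assuming your version of FMS exposes it to fmsadmin in the same way; otherwise check it in the admin console):
fmsadmin get serverconfig scriptsessions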
In earlier versions the limit can be set in the admin console UI.
There is more information about dsBenchmark here:
https://dcms.deskspace.com/dsbenchmark_faqs.html
"What is the dequeue (red line) for, can you explain what the Token setting is for and how it affects the testing?"
With Tokens set to 0, tokenisation does not operate.
With Tokens set to 1 or more, the solution monitors the current completion speed of each task across all the virtual-client server-side scripts and adds a delay before the next call is made, in order to reduce the pressure on the server.
This is predicated on our view, based on about 4 months of research, that more work is ultimately completed in the same time if the task queue is slowed and the server is encouraged to back away from choking.
The Dequeue line shows the amount of delay being introduced by the Token mechanism.
You will realise that dsBenchmark is open to the community, so you can see exactly how this is done and, no doubt, improve on it if you care to.
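In outline, the idea is something like the following sketch (illustrative Python, not the actual FileMaker implementation; the names, the baseline figure and the way the delay is derived are assumptions made for the purpose of illustration):

import time

TOKENS = 6          # the Token setting chosen in dsBenchmark (assumed name)
BASELINE = 0.5      # seconds a task takes on an unloaded server (assumed figure)

recent_times = []   # completion times reported back by the server-side scripts

def dequeue_delay():
    # The delay shown by the red Dequeue line before the next call is made.
    if TOKENS == 0 or not recent_times:
        return 0.0
    window = recent_times[-TOKENS:]
    average = sum(window) / len(window)
    # The further the average drifts above the baseline, the longer we wait.
    return max(0.0, average - BASELINE)

# In the calling loop, each Perform Script on Server call would be preceded by:
#     time.sleep(dequeue_delay())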
"Strangely, with a WebDirect connection, the most users it’ll go up to is 3 although I selected 10 to test with. The server is set to 500 (max) script sessions and licensed for up to 101 WD users."
Are you opening a series of browsers / browser tabs on your local machine(s) and calling dsBenchmark from each?
To simulate WD sessions you must create one browser window or tab for each user.
This means you need to work hard to keep all the user sessions active so they don't close.
We tested this way with up to 15 users to obtain our figure of 32 MB of RAM being used per WD user session (on OS X).
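If you want to script the opening of those tabs, something along these lines can help (a rough Python sketch; the server address, file name and WebDirect URL format are assumptions, so use whatever URL your own WebDirect home page shows):

import webbrowser

SERVER = "your.server.address"   # assumption: replace with your host
FILE_NAME = "dsBenchmark"        # assumption: the hosted file name
USERS = 10

for _ in range(USERS):
    # FMS 13/14 era WebDirect URL format; adjust for your deployment.
    webbrowser.open_new_tab("https://" + SERVER + "/fmi/webd#" + FILE_NAME)

Keeping each of those sessions active so they do not close is, as above, still down to you.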
"When I set it for 100 intense users with 6 tokens, it got up to 43 users, then stopped. Is there some condition that I didn’t know where the test will stop after a certain amount of time and/or specific conditions?"
Correct, the test is designed to curtail automatically when the server ceases to be performant, which I have set as the point at which the average completion time for each task exceeds (from memory) 3 seconds; a simple sketch of this rule follows at the end of this answer.
This is based on the idea that if the server isn't performant, being able to support more users, who are going nowhere, is meaningless.
Hence the core dsBenchmark task is to establish, on a consistent basis, the normal maximum number of users that any given FMS deployment can support.
This then enables two comparative activities:
(a) adjusting the cache settings on FMS and retesting to find the optimum cache setting (an example command follows this list);
(b) running the same test on other servers in order to compare potential max load capacity.
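For (a), the cache can often be adjusted from the command line as well (assuming your version of FMS exposes the cachesize setting to fmsadmin, with the value in megabytes; otherwise use the admin console), for example:
fmsadmin set serverconfig cachesize=512
Then re-run the same dsBenchmark test and compare the results.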
Please note that this load capacity is not an absolute! Because, as you will know, the actual capacity of any deployment has a lot to do with how active its users actually are, the figure is a comparative value, intended to enable comparisons to be made.
Hence, you can choose to run the less active user tests, and you will see that the number of users who can be supported rises.
If, for example, you judge that you have 25 very active users, 50 fairly active users and 25 relatively inactive users, then set dsBenchmark to run a series of tests: 25 intense users, then 50 busy, then 25 inactive, starting each set of tests in turn.
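For what it's worth, the curtailment rule described above amounts to something like this (a simplified Python sketch; the helper functions are stand-ins for the real measurement plumbing, and the threshold is the approximate figure quoted above):

THRESHOLD = 3.0   # seconds; the approximate average task time at which the test stops

def run_test(max_users, start_virtual_user, average_task_time):
    # start_virtual_user and average_task_time are assumed callables that
    # launch one more server-side session and report the rolling average.
    for users in range(1, max_users + 1):
        start_virtual_user()
        if average_task_time() > THRESHOLD:
            return users      # the server ceased to be performant here
    return max_users          # the test ran to completion without curtailing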
"Is there any significant difference between the performance of Windows and OS X Servers, with vaguely comparable resources?"
The large majority of our testing and development of this tool was on OS X servers. We then had several test users in various parts of the world running the beta on Windows as well as Mac. We have a strong impression that, for any given physical resource, OS X running Yosemite (10.10) was more efficient and faster than Windows.
Of course there is far more physical resource available on Windows servers, but I have so far seen results on Windows that do not compare favourably with OS X equivalents. I cannot establish at present whether this is to do with how dsBenchmark tests, with how OS X and Windows manage resources and calls, or a combination of both.
The point is that whilst FileMaker Server has no control over how its competing calls for resources are handled, the operating system does, and we observed a step change in OS X performance between FMS 13 on OS X 10.9 and FMS 14 on OS X 10.10.
It would be brilliant to be able to shed some light on the reasons for this difference in the future.