To prove that, under realistic data, noise and load, the Coach continuous-operation and multi-tenancy solution can adequately meet the demands of a specific mission-critical use case.
Create an evaluation from a recording listed in the IntelliSearch media page
Ensure the Coach solution meets capacity demands
Load testing the Coach solution
Each test iteration starts with 80 Managers logging in and performing a sequence of actions
Each iteration ends with 80 evaluations being created, or 160 when run from 2 separate locations
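As a rough illustration of that structure (the real test is driven by JMeter, not custom code), an iteration can be thought of as 80 concurrent scripted Manager sessions, each ending with one evaluation. A minimal F# sketch, with runManagerScenario standing in for the scripted steps:

```fsharp
// Minimal sketch of one test iteration: 80 simulated Managers run concurrently
// and each creates one evaluation. runManagerScenario is a placeholder for the
// scripted steps; it is not the actual test implementation.
let runManagerScenario (managerId: int) = async {
    // log in, work through the IntelliSearch steps, create one evaluation ...
    do! Async.Sleep 100
    return 1    // evaluations created by this Manager in this iteration
}

let runIteration () =
    [ 1 .. 80 ]
    |> List.map runManagerScenario
    |> Async.Parallel
    |> Async.RunSynchronously
    |> Array.sum    // 80 per iteration; 160 when run from 2 locations

printfn "Evaluations created this iteration: %d" (runIteration ())
```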
Mission critical use cases
Load balanced across 2 VMs
CPU E5-2673 v3 @ 2.40 GHz
Session cache & data in SQL Server
Capture user actions (HTTP requests)
Improve user experience & application performance
- Record HTTP requests
- Simulate multiple users
- Intercept responses
- Query Db
- Create test scripts
- Rich & Powerful set of capabilities
- APM+, metrics, errors, logs
- Powerful, full 360 picture
- Integrated with JIRA
- Rapid problem resolution
To make the load test as realistic as possible, the appropriate database tables were pre-populated with data that accurately represents an actual month's worth of recordings, as well as the supporting entities required to satisfy this use case. To do this we used a combination of our Administration REST API and a custom F# console application [insert recording records].
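For illustration only, a minimal sketch of how such a seeding tool might call a REST API to insert recording records; the endpoint path, payload shape and tenant/agent identifiers are assumptions, not the actual Administration API contract:

```fsharp
// Sketch of the pre-population step (hypothetical endpoint and payload).
open System
open System.Net.Http
open System.Text

let client = new HttpClient(BaseAddress = Uri "https://coach.example.com/")
let rng = Random()

/// POST one synthetic recording record for a given tenant and agent.
let insertRecording (tenantCode: string) (agentId: int) (recordedAt: DateTime) =
    let json =
        sprintf """{ "tenantCode": "%s", "agentId": %d, "recordedAt": "%s", "durationSeconds": %d }"""
            tenantCode agentId (recordedAt.ToString "o") (rng.Next(60, 600))
    use content = new StringContent(json, Encoding.UTF8, "application/json")
    client.PostAsync("api/admin/recordings", content)
    |> Async.AwaitTask
    |> Async.RunSynchronously

// Seed roughly a month's worth of recordings per agent across the tenants.
for tenantCode in [ "T001"; "T002" ] do            // in practice: all 100 tenants
    for agentId in 1 .. 10 do                      // agents per tenant
        for day in 0 .. 19 do                      // 20 working days
            insertRecording tenantCode agentId (DateTime.UtcNow.AddDays(-(float day)))
            |> ignore
```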
We used the most recent available tools. The approach was always to simulate concurrent usage across multiple tenants, monitor the results and tune where applicable.
Navigate to IntelliSearch page
Expand Tenant to get Units
Expand Team to get Agents
Select Agent to reveal page of media
Select next page of media
Evaluate first media in list
Make multiple hosted service calls to get data for evaluation
Navigate back to IntelliSearch page
HTTP requests and responses were captured and modified via JMeter. Tenant codes and application users were obtained from a JDBC connection to the Db (a duplicate Db in another location). This data and the intercepted responses (JSON) were used to generate the subsequent GETs & POSTs.
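The same correlation pattern (extract a value from one JSON response and feed it into the next request) can be shown in code. A minimal F# sketch, with endpoint paths and field names assumed for illustration rather than taken from the real Coach API:

```fsharp
// Sketch of the correlation pattern the JMeter scripts implement: values
// extracted from one JSON response drive the next request.
open System.Net.Http
open System.Text
open System.Text.Json

let client = new HttpClient()

let createEvaluation (baseUrl: string) (tenantCode: string) = async {
    // GET the page of media for this tenant (analogous to the IntelliSearch page).
    let! listJson =
        client.GetStringAsync(sprintf "%s/api/tenants/%s/media?page=1" baseUrl tenantCode)
        |> Async.AwaitTask

    // Extract the first media id from the intercepted JSON response.
    use doc = JsonDocument.Parse listJson
    let mediaId = doc.RootElement.GetProperty("items").[0].GetProperty("id").GetString()

    // POST an evaluation for that media item (the final step of the scenario).
    let body = sprintf """{ "mediaId": "%s", "tenantCode": "%s" }""" mediaId tenantCode
    use content = new StringContent(body, Encoding.UTF8, "application/json")
    let! response =
        client.PostAsync(sprintf "%s/api/evaluations" baseUrl, content)
        |> Async.AwaitTask
    return response.StatusCode
}
```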
Create an evaluation from IntelliSearch result
- 99.9%+ satisfaction against a 2-second response-time 'Goal'
- Averaging between 6-7k requests per minute
On average close to 100 evaluations created every minute
- During this hour, 5,475 evaluations are created
- Satisfaction is at 99.85% or better
- Db latency is consistently low
- Average request latency is consistently low
Below are the results captured from the tools used to monitor both (a) the impact of the high number of requests and (b) the demand on the system as a whole, where the load comes from multiple computers and Managers across 100 tenants are creating evaluations. The objectives are to achieve a 99% or better satisfaction rating and to keep CPU utilisation below 60% during this period.
160 logins from 2 different locations, averaging around 100 users per minute working through the steps to create an evaluation, with a 30-second ramp-up period.
- CPU averaging around 50%
- Potential for increased capacity
Figures obtained from a 1 hour window
- Sub-second latency across 5-6k requests per minute
- Consistently low Db latency
All figures are rounded down
Here we show the make-up of each Tenant (number of agents, etc.). We use these figures to calculate the total number of evaluations each Tenant is expected to create during a month. We then look at 2 capacity cases: one for a typical contact centre setup and one for 24/7 operation, e.g. a BPO. We've already seen the number of evaluations created during this load test: 5,475 evaluations per hour, which gives us 91 evaluations per minute. Using the figures shown below, the typical setup comes out at 766k evaluations per month, which is 5,475 [evaluations per hour] * 7 [hours in a day] * 20 [days in a month]. If we then divide 766k by the total evaluations needed for each tenant (2.5k) to get the maximum number of supported tenants, we get 306 Tenants. The same calculation for the 24/7 setup (24 hours a day, 30 days a month) comes out at 1,576 Tenants.
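As a check on the arithmetic, the same capacity calculation can be written out as a short F# sketch using the figures quoted above (the 2.5k-per-tenant figure comes from the tenant make-up table):

```fsharp
// Worked version of the capacity calculation. All figures are taken from the
// paragraph above; integer division mirrors the "rounded down" convention.
let evaluationsPerHour = 5_475
let evaluationsPerTenantPerMonth = 2_500

/// Monthly capacity and maximum supported tenants for a given operating pattern.
let capacity hoursPerDay daysPerMonth =
    let perMonth = evaluationsPerHour * hoursPerDay * daysPerMonth
    perMonth, perMonth / evaluationsPerTenantPerMonth

let typicalMonthly, typicalTenants = capacity 7 20    // typical contact centre: 766,500 and 306
let bpoMonthly, bpoTenants = capacity 24 30           // 24/7 e.g. BPO: 3,942,000 and 1,576

printfn "Typical: %d evaluations/month, %d tenants" typicalMonthly typicalTenants
printfn "24/7:    %d evaluations/month, %d tenants" bpoMonthly bpoTenants
```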
An important factor of HA is redundancy: in short, if a server fails or is purposefully taken out of service [think maintenance], there's another to continue processing requests. This contributes to our no-SPoF (no single point of failure) strategy, and the load can also therefore be spread across multiple servers. Our recommended minimum configuration includes 2 web servers. It is best practice to protect your data tier by filtering traffic using an ACL so that only internal systems can gain access. This is exactly how we have configured our load testing environment: only the top subnet (front-end) allows public access, while public access to the bottom subnet (data) is disallowed. This is illustrated below and orchestrated using the Microsoft Azure cloud platform.
Load balanced over 2 servers, requests coming from 2 locations and repeated continuously
This is the primary use case that we are including in our load test. It reflects the principal reason why Coach is used: to create agent Evaluations. We are testing from 2 locations, in parallel, across 100 tenants and continuously.
Qualtrak Solutions Limited
Our TLM (Tenant Licensing and real-time Monitoring) application comes with tools to help you:
- Monitor the health of the Coach platform
- See what components are running and which are not
- Quickly assess whether you need to scale up to manage capacity
- Get alerted to brute force attacks
- Audit who is doing what within Coach
Produced for Cloud-based Contact Centre, Call Recording & Speech Analytics vendors and Service Providers