Connexion Performance Test Environment
Overview
This environment is for performance testing Connexion and Connexion Devices. Because disk space is limited, the data in this environment can, and must, be wiped periodically.
Initial Test Hardware
The initial test configuration will consist of an application server pair (primary and fail-over), a database server pair (primary and fail-over), and a load balancing switch. From a pure performance standpoint, the extra application server will not change the performance characteristics (since it is completely passive), but it will allow us to perform some initial fail-over testing. The secondary database server may have performance implications depending on the mirroring technology being used (e.g. Always On, clustered disks, log shipping). The Data Center team should be consulted as to which SQL Server fail-over technology will be used.
A fiber or other high-speed, low-latency network should be used to connect the various computers.
Primary and fail-over VMs should be hosted on different physical hardware.
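Before a test run it is worth confirming (and recording) which fail-over technology is actually active on the database server, since it affects how results are interpreted. The following is a minimal sketch, assuming Python with the pyodbc package, a trusted connection, and VIEW SERVER STATE permission; it only reports Always On and database mirroring status.

```python
# Sketch: report which SQL Server fail-over technology is active on the
# primary database server. Assumes pyodbc is installed and the login has
# VIEW SERVER STATE; the connection string values are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=phlptwcnxsql010;"
    "DATABASE=master;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Always On availability groups require HADR to be enabled on the instance.
hadr_enabled = cur.execute(
    "SELECT CAST(SERVERPROPERTY('IsHadrEnabled') AS int)"
).fetchval()
print("Always On (HADR) enabled:", bool(hadr_enabled))

# Database mirroring state per user database (only mirrored databases listed).
for name, role in cur.execute(
    "SELECT DB_NAME(database_id), mirroring_role_desc "
    "FROM sys.database_mirroring WHERE mirroring_guid IS NOT NULL"
):
    print(f"{name}: mirrored as {role}")

conn.close()
```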
Connexion Load Generation Server Requirements
Hardware
- 4 Cores
- 4 GB Memory
- 100 GB Disk Space
Operating System/Prerequisites
- Microsoft Windows Server 2008 R2 or higher
- .NET 4.0+
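The workload this server drives depends on the channels under test, so the following is only a rough sketch of the kind of tool it will run: a fixed-rate TCP message pusher written in Python. The target host, port, payload, and framing are placeholders, not the actual Connexion channel protocol.

```python
# Sketch of a very simple fixed-rate TCP message generator. The target
# host/port, payload, and framing are placeholders; the real test should
# use whatever protocol the Connexion channel under test expects.
import socket
import time

TARGET_HOST = "phlptwcnxapp011"   # application server 1
TARGET_PORT = 6661                # hypothetical channel listener port
RATE_PER_SEC = 20                 # ~70K msgs/hour peak is roughly 20 msgs/sec
DURATION_SEC = 60

payload = b"TEST|MESSAGE|0001\r\n"   # hypothetical message body

with socket.create_connection((TARGET_HOST, TARGET_PORT), timeout=5) as sock:
    sent = 0
    start = time.monotonic()
    while time.monotonic() - start < DURATION_SEC:
        sock.sendall(payload)
        sent += 1
        # Sleep just long enough to hold the requested message rate.
        time.sleep(max(0.0, (sent / RATE_PER_SEC) - (time.monotonic() - start)))

print(f"sent {sent} messages in {DURATION_SEC} seconds")
```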
Connexion Application Server Requirements
Brian and David have provided the following rough estimates of volumes as of 12/2/2013:
Volume | FFC | FFT |
---|---|---|
Avg Msgs/Hour | 10K | 30K |
Peak Msgs/Hour | 42K | 70K |
Each customer uses approximately 6 channels (4 inbound, 2 outbound). Smaller customers will most likely be capped at a few hundred channels per VM; only one or two large customers will be hosted per VM.
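For test planning it helps to translate the hourly figures into per-second rates. The short calculation below uses only the numbers from the table; the combined figure assumes both feeds peak at the same time, which is a worst-case assumption rather than an observed number.

```python
# Convert the estimated volumes above into per-second message rates.
volumes_per_hour = {
    "FFC avg": 10_000,
    "FFC peak": 42_000,
    "FFT avg": 30_000,
    "FFT peak": 70_000,
}

for name, per_hour in volumes_per_hour.items():
    print(f"{name}: {per_hour / 3600:.1f} msgs/sec")

# Worst case: both feeds peaking simultaneously.
combined_peak = (42_000 + 70_000) / 3600
print(f"combined peak (worst case): {combined_peak:.1f} msgs/sec")
```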
Based on these volumes, initial performance test hardware should be:
Hardware
- 4 Cores
- 8 GB Memory (may need to be increased for memory-hungry custom devices)
- 100 GB Disk Space (this should provide some wiggle room in case file readers/writers are used)
Operating System/Prerequisites
- Microsoft Windows Server 2008 R2 or higher
- .NET 4.5.1
Database Server Requirements
The disks should be provisioned to provide as much I/O concurrency as possible across the three database files (log, ndf, mdf); this will minimize contention and maintenance issues. The disk sizes used here are arbitrary: the real requirement depends on how large the incoming messages are and how long they will be retained, which are still open questions.
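Once message size and retention are known, a rough disk estimate falls out directly. The example below is only illustrative: the peak rate comes from the volume table above, but the 4 KB average message size, 30-day retention, and 2x overhead factor are hypothetical placeholders.

```python
# Rough database sizing estimate. Message size, retention period, and the
# overhead factor are hypothetical placeholders; only the peak message
# rate comes from the volume table above.
PEAK_MSGS_PER_HOUR = 42_000 + 70_000   # FFC + FFT peaks combined
AVG_MSG_SIZE_KB = 4                    # hypothetical average message size
RETENTION_DAYS = 30                    # hypothetical retention period
OVERHEAD_FACTOR = 2.0                  # indexes, row overhead, slack (guess)

raw_gb = (PEAK_MSGS_PER_HOUR * 24 * RETENTION_DAYS * AVG_MSG_SIZE_KB) / (1024 ** 2)
print(f"raw message data: {raw_gb:,.0f} GB")
print(f"with overhead:    {raw_gb * OVERHEAD_FACTOR:,.0f} GB")
```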
Hardware
- 4 Cores
- 16 GB Memory
- 100 GB Disk - OS Disk (C:\)
- 50 GB Disk - SQLBIN (E:\)
- 50 GB Disk - TEMPDB (F:\)
- 100 GB Disk - Database Log File (G:\)
- 2 TB Disk - Primary Database File (H:\) - revised from 300 GB on Feb 27, 2014
- 2 TB Disk - Backup (I:\) - revised from 300 GB on Feb 24, 2014
Operating System/Prerequisites
- Microsoft Windows Server 2008 R2 or higher
- SQL Server 2012 Standard (or higher). Perhaps this should be Enterprise Edition in the data center?
Considerations
Disk Performance
Database/disk IO is typically the bottleneck for Connexion throughput. The test hardware should use disks with roughly the same performance characteristics as those that will be used in the data center.
In Enterprise Connexion it is possible to host queued message data in separate databases. If the Queue Database becomes the bottleneck, adding additional databases on physically separate disks should help alleviate the IO bottleneck.
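A sketch of that approach is below, assuming Python with pyodbc. The database name, drive letters, and file sizes are illustrative only, and the Connexion-side step of pointing a queue at the new database is product configuration that is not shown here.

```python
# Sketch: create an additional queue database with its files placed on a
# disk other than the primary database disk (H:\). The database name,
# drive letters, and sizes are illustrative only.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=phlptwcnxsql010;"
    "DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,   # CREATE DATABASE cannot run inside a transaction
)
conn.execute("""
CREATE DATABASE ConnexionQueue2
ON PRIMARY
    (NAME = ConnexionQueue2_data,
     FILENAME = 'J:\\SQLData\\ConnexionQueue2.mdf',  -- hypothetical extra disk
     SIZE = 50GB, FILEGROWTH = 1GB)
LOG ON
    (NAME = ConnexionQueue2_log,
     FILENAME = 'G:\\SQLLogs\\ConnexionQueue2_log.ldf',
     SIZE = 10GB, FILEGROWTH = 1GB)
""")
conn.close()
```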
Memory
SQL Server will use free memory to cache much of Connexion's database in memory, reducing the number of relatively slow physical disk reads. The more memory, the better.
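During a test run, SQL Server's Page life expectancy counter gives a quick read on whether the buffer pool (and therefore server memory) is large enough; a steadily falling value under sustained load suggests it is not. A minimal sketch, assuming pyodbc and VIEW SERVER STATE permission:

```python
# Sketch: sample SQL Server's Page life expectancy counter during a test run.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=phlptwcnxsql010;"
    "DATABASE=master;Trusted_Connection=yes;"
)
row = conn.execute("""
    SELECT cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Page life expectancy'
      AND object_name LIKE '%Buffer Manager%'
""").fetchone()
print("Page life expectancy (seconds):", row[0])
conn.close()
```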
CPU
For systems with few channels, higher clock speeds usually provide better performance. For systems with many channels (which will be the case in the data center), higher core counts usually provide better overall performance.
Processes
Connexion can be configured to run one or more channels in a separate process (called an execution group). This provides isolation between different groups of channels; in particular, it prevents the activity of one channel from bringing down another. We have not seen performance degrade from running multiple processes, but we have never had more than 10 in any of our tests.
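If tests with more execution groups are added, per-process CPU and memory can be recorded so any degradation is attributable. The small sketch below assumes Python with the psutil package; the "Connexion" process-name filter is a placeholder, since the actual execution-group executable name is not documented here.

```python
# Sketch: snapshot CPU and memory for every Connexion execution-group
# process during a test run.
import psutil

for proc in psutil.process_iter(["pid", "name"]):
    name = proc.info["name"] or ""
    if "Connexion" not in name:        # placeholder filter; use the real exe name
        continue
    try:
        cpu = proc.cpu_percent(interval=0.5)           # % of one core over 0.5 s
        rss_mb = proc.memory_info().rss / (1024 ** 2)  # working set, MB
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        continue
    print(f"{name} (pid {proc.info['pid']}): cpu={cpu:.1f}% rss={rss_mb:.0f} MB")
```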
Machine list
IP | Name | Purpose |
---|---|---|
10.114.29.30 | phlptwcnxapp010 | Load generator server |
10.114.29.31 | phlptwcnxapp011 | Application server 1 |
10.114.29.32 | phlptwcnxapp012 | Application server 2 |
10.114.18.83 | phlptwcnxsql010 | Database server 1 |
10.114.18.84 | phlptwcnxsql011 | Database server 2 |
F5 Switch Configuration
General Requirement
The purpose of the F5 switch in the PT environment is to provide connection-level fail-over between the two Application Servers configured as a fail-over pair (see the figure at the top of the page).
The F5 switch should be configured to redirect a connection to Application Server 2 if the connection cannot be established with Application Server 1 within a set time period (5 seconds? the correct value here is still to be determined), and vice versa.
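Once the switch is configured, the redirect behaviour can be checked from the load generation server. The sketch below (Python; the virtual-server name, port, and 10-second cap are placeholders) repeatedly opens connections to the F5 virtual server and records how long each attempt takes while Application Server 1 is deliberately stopped. Which application server actually accepted the connection has to be confirmed on the servers themselves, since the client only sees the virtual address.

```python
# Sketch: measure connection times against the F5 virtual server while
# Application Server 1 is deliberately stopped. The virtual-server
# address, port, and 10-second cap are placeholders.
import socket
import time

VIP = "cnx-pt-vip.example.local"   # hypothetical F5 virtual server name
PORT = 6661                        # hypothetical channel listener port

for attempt in range(10):
    start = time.monotonic()
    try:
        with socket.create_connection((VIP, PORT), timeout=10):
            print(f"attempt {attempt}: connected in "
                  f"{time.monotonic() - start:.2f}s")
    except OSError as exc:
        print(f"attempt {attempt}: failed after "
              f"{time.monotonic() - start:.2f}s ({exc})")
    time.sleep(1)
```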
Open issues