Enduro/X benchmarks
===================
:doctype: book

Overview
--------
This document contains an overview of Enduro/X benchmarks performed on different platforms.
Various aspects are tested and the performance results are analyzed. The tests cover
fundamental message exchange between client and server, in both synchronous and asynchronous
modes. The document also covers persistent storage benchmarks.
0011
== Preparation

This section gives notes on how to prepare for the benchmarks and which packages need to
be installed. It assumes that a standard Enduro/X build installation has been performed.

=== Preparing on an Ubuntu/Debian like system
0018
---------------------------------------------------------------------
$ sudo apt-get install r-base
---------------------------------------------------------------------
0024
Asynchronous tpacall()
----------------------
This test uses one-way calls to the server process. At the end of the run it is ensured
that all messages have been processed by the server; only then are the results plotted.
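As an illustration, the one-way call pattern could be sketched with the ATMI API as below. This is not the actual benchmark source; the service name "BENCHSV", the buffer type and the iteration count are hypothetical:

```c
/* Minimal sketch of a fire-and-forget benchmark client.
 * Assumes an Enduro/X build environment and a hypothetical
 * service "BENCHSV" advertised by the benchmark server.
 */
#include <stdio.h>
#include <string.h>
#include <atmi.h>

int main(void)
{
    long i;
    char *buf = tpalloc("CARRAY", NULL, 1024); /* raw byte buffer */

    if (NULL == buf)
    {
        fprintf(stderr, "tpalloc: %s\n", tpstrerror(tperrno));
        return 1;
    }

    memset(buf, 0, 1024);

    for (i = 0; i < 100000; i++)
    {
        /* TPNOREPLY - one-way call, no reply is expected from the server */
        if (-1 == tpacall("BENCHSV", buf, 1024, TPNOREPLY))
        {
            fprintf(stderr, "tpacall: %s\n", tpstrerror(tperrno));
            break;
        }
    }

    tpfree(buf);
    tpterm();
    return 0;
}
```

With TPNOREPLY the client never blocks on a response, which is why this test measures pure enqueue throughput rather than round-trip latency.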
0029
image:benchmark/04_tpacall.png[caption="Figure 1: ", title="tpacall() benchmark", alt="tpacallbench"]
0031
0032
Local tpcall()
--------------
This test includes a locally running ATMI client and ATMI server. The inter-process
communication happens via kernel queues (kq) or shared memory (shm). The polling mechanism
is either epoll() (Linux only, the most efficient way) or the usual poll(), for which the
event chain is a bit longer.
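The synchronous request/response loop measured here could be sketched as follows; again, the service name "BENCHSV" and the buffer size are illustrative assumptions, not the benchmark's actual code:

```c
/* Minimal sketch of a synchronous request/response benchmark loop.
 * The service name "BENCHSV" is hypothetical.
 */
#include <stdio.h>
#include <atmi.h>

int main(void)
{
    long i, rsplen;
    char *buf = tpalloc("CARRAY", NULL, 1024);

    if (NULL == buf)
    {
        fprintf(stderr, "tpalloc: %s\n", tpstrerror(tperrno));
        return 1;
    }

    for (i = 0; i < 100000; i++)
    {
        /* blocks until the server replies; the buffer may be reallocated */
        if (-1 == tpcall("BENCHSV", buf, 1024, &buf, &rsplen, 0))
        {
            fprintf(stderr, "tpcall: %s\n", tpstrerror(tperrno));
            break;
        }
    }

    tpfree(buf);
    tpterm();
    return 0;
}
```

Because each tpcall() waits for the reply, the measured TPS here is bounded by the full client-to-server-and-back round trip over the kernel queue or shared memory transport.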
0038
image:benchmark/01_tpcall.png[caption="Figure 2: ", title="tpcall() benchmark", alt="tpcallbench"]
0040
The results show that performance stays mostly stable across the different data sizes.
0042
Networked tpcall()
------------------
This test does basically the same work as the one above, but two application server
instances are started and interconnected over the network.
0047
image:benchmark/02_tpcall_network.png[caption="Figure 3: ", title="tpcall() network benchmark", alt="tpcall_network"]
0049
Here we see that network performance fluctuates slightly. That could be related to the fact
that a bridge process is now involved in transferring the message over the network, and the
process count is bigger than the CPU core count, so the scheduler comes into play.
0052
Multi-process/multi-thread tpcall()
-----------------------------------
This test case employs five ATMI servers and one ATMI client which internally runs five
threads, sending messages to the servers.
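The multi-threaded client could be sketched as below. The service name "BENCHSV" is hypothetical, and it is an assumption here that each thread initializes its own ATMI context via tpinit():

```c
/* Sketch of a five-thread benchmark client. Service name "BENCHSV"
 * is hypothetical; it is assumed each thread sets up its own ATMI
 * context with tpinit() before issuing calls.
 */
#include <stdio.h>
#include <pthread.h>
#include <atmi.h>

static void *worker(void *arg)
{
    long i, rsplen;
    char *buf;

    (void)arg;

    if (-1 == tpinit(NULL)) /* per-thread ATMI context (assumption) */
    {
        fprintf(stderr, "tpinit: %s\n", tpstrerror(tperrno));
        return NULL;
    }

    buf = tpalloc("CARRAY", NULL, 1024);

    for (i = 0; NULL != buf && i < 10000; i++)
    {
        if (-1 == tpcall("BENCHSV", buf, 1024, &buf, &rsplen, 0))
        {
            fprintf(stderr, "tpcall: %s\n", tpstrerror(tperrno));
            break;
        }
    }

    if (NULL != buf)
    {
        tpfree(buf);
    }
    tpterm();
    return NULL;
}

int main(void)
{
    pthread_t th[5];
    int i;

    for (i = 0; i < 5; i++)
    {
        pthread_create(&th[i], NULL, worker, NULL);
    }

    for (i = 0; i < 5; i++)
    {
        pthread_join(th[i], NULL);
    }

    return 0;
}
```

Running five threads against five servers keeps several requests in flight at once, which is what lifts throughput above the single-threaded case.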
0057
image:benchmark/03_tpcall_threads.png[caption="Figure 4: ", title="tpcall() multiproc", alt="multiprocessing"]
0059
Persisted storage benchmark, tpenqueue()
----------------------------------------
This test gets much lower results, because all messages are saved to disk.
Also note that internally Enduro/X uses a distributed transaction manager to
coordinate the save of the message, thus the processing of the XA transaction takes
some disk resources too. This benchmark uses the default Enduro/X setting for flushing
data to disk, which is the fflush() C library call. fflush() does not guarantee data
consistency in the event of a power outage. For fully guaranteed data consistency,
flags (FSYNC/FDATASYNC/DSYNC) can be set for the XA resource; however, expect much
lower TPS performance.
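An enqueue to persistent storage could be sketched as below; the queue space "SAMPLESPACE" and queue "TESTQ" are hypothetical names that would have to match the configured queue server:

```c
/* Sketch of enqueuing one message to persistent storage.
 * Queue space "SAMPLESPACE" and queue "TESTQ" are hypothetical
 * names; they must match the configured queue server setup.
 */
#include <stdio.h>
#include <string.h>
#include <atmi.h>

int main(void)
{
    TPQCTL qc;
    char *buf = tpalloc("CARRAY", NULL, 1024);

    if (NULL == buf)
    {
        fprintf(stderr, "tpalloc: %s\n", tpstrerror(tperrno));
        return 1;
    }

    memset(&qc, 0, sizeof(qc));

    /* each call persists the message to disk under an XA transaction,
     * which is what dominates the cost in this benchmark */
    if (-1 == tpenqueue("SAMPLESPACE", "TESTQ", &qc, buf, 1024, 0))
    {
        fprintf(stderr, "tpenqueue: %s (diagnostic %ld)\n",
                tpstrerror(tperrno), qc.diagnostic);
    }

    tpfree(buf);
    tpterm();
    return 0;
}
```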
0070
image:benchmark/05_persistent_storage.png[caption="Figure 5: ", title="Persistent storage", alt="persistent_storage"]
0072
0073
Single-threaded tpcall() on cached service
------------------------------------------
This test performs a benchmark over a service for which the results are cached. For
best results the cache database is stored on a RAM drive (on Linux systems).
0078
image:benchmark/06_tpcache.png[caption="Figure 6: ", title="Cache performance", alt="review on cache performance"]
0080
0081
Running the benchmarks
----------------------
It is possible to run the benchmarks on your own system. Note that the R language is used
for chart plotting. If R is not installed, the charts will not be generated, but the results
can still be read from the data files. The benchmark script is located in the 'doc/benchmark'
folder and is invoked as 'build.sh <configuration name>'. The configuration name is an
arbitrary description of the system on which you perform the tests; it shall not contain
spaces. The results are written to text files located in the same directory.
0089
NOTE: You must raise the queue message size limit to 56000 bytes. On a Linux system that would mean that '/etc/rc.local' needs to be set to:
0091
---------------------------------------------------------------------
...
echo 56000 > /proc/sys/fs/mqueue/msgsize_max
...
---------------------------------------------------------------------
0097
In 'setndrx' we shall also enable that message size:
0099
---------------------------------------------------------------------
...
# Max message size (in bytes)
export NDRX_MSGSIZEMAX=56000
...
---------------------------------------------------------------------
0106
For our sample user (user1), running the benchmark could look like this:
0108
---------------------------------------------------------------------
$ cd /home/user1/endurox/doc/benchmark
$ ./build.sh my_system,linux,ssd
$ ls -1
01_tpcall.png
01_tpcall.txt
02_tpcall_dom.txt
02_tpcall_network.png
03_tpcall_threads.png
03_tpcall_threads.txt
04_tpacall.png
04_tpacall.txt
05_persistent_storage.png
05_persistent_storage.txt
build.sh
genchart.r
---------------------------------------------------------------------
0126
0127
////////////////////////////////////////////////////////////////
The index is normally left completely empty, its contents being
generated automatically by the DocBook toolchain.
////////////////////////////////////////////////////////////////