Using cURL for response timing


cURL is a command-line tool for transferring data to and from a uniform resource locator (URL). Although it is commonly used to quickly check whether a website is up and running, it can also time the response of a service. This lets administrators gather data to compare against baseline response times when tuning performance.

cURL has many options, or flags, that adjust its behavior. For the purpose of timing, the following flags are used.

[ -k ] tells cURL to disable peer verification of the SSL/TLS certificate. This is useful for connecting to URLs that have self-signed certificates.

[ -s ] tells cURL to operate in silent mode. This suppresses standard error messages and the progress meter.

[ -o /dev/null ] tells cURL to write the downloaded content to the null device. The response body is effectively discarded and no output file is produced.

[ -w '%{time_total}\n' ] tells cURL to write the value of the named variable, here the total time of the transaction in seconds, to standard output after the transfer completes.
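
Putting these flags together, a single timing sample can be taken with one command. The URL below is just a stand-in for whatever service is being measured; cURL also exposes other timing variables, such as %{time_connect} and %{time_starttransfer}, that can be added to the -w format string in the same way.

user@host: $ curl -k -s -o /dev/null -w '%{time_total}\n' https://localhost/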

The cURL command can be combined with shell scripting elements to automate its behavior. In the examples below, a "for" loop runs the cURL command and then sleeps for two seconds, thirty times in a row, yielding thirty sample points over roughly one minute (30 samples * 2-second sleeps). Written as a script, the loop would span at least four lines, but with semicolons delimiting each statement it collapses into a single line that can be entered directly as one command; the long form is sketched below for comparison.
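
Written out in long form, with https://localhost/ standing in for the target, the same loop looks like this:

for i in `seq 1 30`
do
    curl -k -s -o /dev/null -w '%{time_total}\n' https://localhost/
    sleep 2
done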

The raw data appear on the terminal as a single vertical column, but for easier reading they have been arranged below into a six-by-five matrix. The statistics provide a single, unified snapshot of the raw data that can be compared against other timing benchmarks. They are easily derived by copy/pasting the raw data into a text editor first, then copy/pasting them into a spreadsheet, and applying the following formulas to the highlighted cells.

Sample size: [ =count(cell_begin:cell_end) ]
Mean: [ =average(cell_begin:cell_end) ]
Standard deviation: [ =stdev(cell_begin:cell_end) ]
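
Alternatively, the loop's output can be piped straight into awk to compute the same statistics without a spreadsheet. This is a minimal sketch that prints the sample size, mean, and sample standard deviation; as before, https://localhost/ merely stands in for the target URL.

user@host: $ for i in `seq 1 30`; do curl -k -s -o /dev/null -w '%{time_total}\n' https://localhost/; sleep 2; done | awk '{ s += $1; ss += $1 * $1; n++ } END { m = s / n; printf "n=%d mean=%.6f stdev=%.6f\n", n, m, sqrt((ss - n * m * m) / (n - 1)) }'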

==========

1. INTRANET

1.1. Local Host

user@host: $ for i in `seq 1 30`; do curl -k -s -o /dev/null -w '%{time_total}\n' https://localhost/; sleep 2; done

1.1.1. Raw data (in seconds)

0.009465 0.015725 0.012583 0.010050 0.015269
0.019835 0.008015 0.016222 0.007697 0.007623
0.008868 0.009007 0.007742 0.008687 0.007635
0.011545 0.014929 0.015915 0.013973 0.014573
0.014722 0.013829 0.016262 0.007848 0.014075
0.011344 0.011652 0.009540 0.007880 0.008018

1.1.2. Statistics

sample size = 30
mean = 0.011684 sec = 11.684 milliseconds
standard deviation = 0.003505 sec = 3.505 milliseconds

1.2. Local Area Network

user@host: $ for i in `seq 1 30`; do curl -k -s -o /dev/null -w '%{time_total}\n' https://myothermachine.local/; sleep 2; done

1.2.1. Raw data (in seconds)

0.011505 0.009929 0.009603 0.013288 0.013619
0.010138 0.015637 0.023420 0.012638 0.020766
0.018180 0.016018 0.016636 0.016438 0.015701
0.012581 0.014353 0.011650 0.012753 0.010214
0.013066 0.010586 0.017807 0.013550 0.015060
0.010151 0.011726 0.010942 0.008967 0.010529

1.2.2. Statistics

sample size = 30
mean = 0.013582 sec = 13.582 milliseconds
standard deviation = 0.003461 sec = 3.461 milliseconds

2. INTERNET

2.1. From local machine to GitHub.

user@host: $ for i in `seq 1 30`; do curl -k -s -o /dev/null -w '%{time_total}\n' https://github.com/; sleep 2; done

2.1.1. Raw data (in seconds)

0.068296 0.073524 0.078219 0.081192 0.067825
0.075990 0.069114 0.075555 0.083259 0.076288
0.071967 0.071226 0.099029 0.066621 0.070929
0.075798 0.077149 0.066555 0.083402 0.075499
0.068624 0.071517 0.076407 0.065145 0.070650
0.060023 0.069480 0.063219 0.068301 0.071077

2.1.2. Statistics

sample size = 30
mean = 0.073063 sec = 73.063 milliseconds
standard deviation = 0.007428 sec = 7.428 milliseconds

2.2. From local machine to Google.

user@host: $ for i in `seq 1 30`; do curl -k -s -o /dev/null -w '%{time_total}\n' https://www.google.com/; sleep 2; done

2.2.1. Raw data (in seconds)

0.063755 0.060576 0.059470 0.056462 0.061898
0.054252 0.064097 0.054673 0.069383 0.080088
0.063040 0.062149 0.060885 0.059521 0.059752
0.068037 0.056084 0.067769 0.061787 0.052944
0.061217 0.068257 0.057929 0.058128 0.058128
0.058713 0.064062 0.062538 0.056305 0.056062

2.2.2. Statistics

sample size = 30
mean = 0.061265 sec = 61.265 milliseconds
standard deviation = 0.005557 sec = 5.557 milliseconds

2.3. From local machine to Facebook.

user@host: $ for i in `seq 1 30`; do curl -k -s -o /dev/null -w '%{time_total}\n' https://www.facebook.com/; sleep 2; done

2.3.1. Raw data (in seconds)

0.111131 0.108994 0.092564 0.091414 0.130091
0.087485 0.109982 0.140197 0.078669 0.108650
0.105736 0.081138 0.095722 0.117784 0.081882
0.102547 0.173616 0.383048 0.116085 0.114127
0.107404 0.097023 0.106136 0.086933 0.113889
0.085425 0.099581 0.115600 0.080906 0.081686

2.3.2. Statistics

sample size = 30
mean = 0.113515 sec = 113.515 milliseconds
standard deviation = 0.054720 sec = 54.720 milliseconds

2.4. From local machine to Twitter.

user@host: $ for i in `seq 1 30`; do curl -k -s -o /dev/null -w '%{time_total}\n' https://twitter.com/; sleep 2; done

2.4.1. Raw data (in seconds)

0.158603 0.154103 0.164257 0.152700 0.163216
0.172792 0.184336 0.201901 0.179639 0.201962
0.166857 0.182079 0.158222 0.175940 0.165918
0.159424 0.172223 0.174358 0.151537 0.196903
0.155757 0.169510 0.165049 0.161020 0.196132
0.162587 0.164982 0.164893 0.158949 0.172589

2.4.2. Statistics

sample size = 30
mean = 0.170281 sec = 170.281 milliseconds
standard deviation = 0.014235 sec = 14.235 milliseconds

==========
