The quality of a link can be tested as follows:

- Latency (response time or RTT): can be measured with the Ping command.
- Jitter (latency variation): can be measured with an Iperf UDP test.
- Datagram loss: can be measured with an Iperf UDP test.
Bandwidth is measured with Iperf TCP tests.
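For example, the following commands exercise each of these measurements. This is a sketch: the server address is a placeholder, and only flags that appear in the iperf3 help output quoted below (plus the standard ping -c and iperf3 -t options) are used.

```
# on the server
$ iperf3 -s

# on the client: RTT via ping, then jitter and datagram loss via a UDP test
$ ping -c 10 192.168.50.208
$ iperf3 -c 192.168.50.208 -u -b 1M -t 10

# bandwidth via a plain TCP test
$ iperf3 -c 192.168.50.208 -t 10
```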
```
Server specific:
  -s, --server              run in server mode
  -D, --daemon              run the server as a daemon
  -I, --pidfile file        write PID file
  -1, --one-off             handle one client connection then exit

Client specific:
  -c, --client <host>       run in client mode, connecting to <host>
  -u, --udp                 use UDP rather than TCP
  --connect-timeout #       timeout for control connection setup (ms)
  -b, --bitrate #[KMG][/#]  target bitrate in bits/sec (0 for unlimited)
                            (default 1 Mbit/sec for UDP, unlimited for TCP)
```
interarrival jitter: 32 bits

An estimate of the statistical variance of the RTP data packet interarrival time, measured in timestamp units and expressed as an unsigned integer. The interarrival jitter J is defined to be the mean deviation (smoothed absolute value) of the difference D in packet spacing at the receiver compared to the sender for a pair of packets. As shown in the equation below, this is equivalent to the difference in the "relative transit time" for the two packets; the relative transit time is the difference between a packet's RTP timestamp and the receiver's clock at the time of arrival, measured in the same units.
If Si is the RTP timestamp from packet i, and Ri is the time of arrival in RTP timestamp units for packet i, then for two packets i and j, D may be expressed as

    D(i,j) = (Rj - Ri) - (Sj - Si) = (Rj - Sj) - (Ri - Si)
The interarrival jitter is calculated continuously as each data packet i is received from source SSRC_n, using this difference D for that packet and the previous packet i-1 in order of arrival (not necessarily in sequence), according to the formula

    J = J + (|D(i-1,i)| - J) / 16
Whenever a reception report is issued, the current value of J is sampled.
The jitter calculation is prescribed here to allow profile-independent monitors to make valid interpretations of reports coming from different implementations. This algorithm is the optimal first-order estimator and the gain parameter 1/16 gives a good noise reduction ratio while maintaining a reasonable rate of convergence. A sample implementation is shown in Appendix A.8.

A.8 Estimating the Interarrival Jitter
The code fragments below implement the algorithm given in Section 6.3.1 for calculating an estimate of the statistical variance of the RTP data interarrival time to be inserted in the interarrival jitter field of reception reports. The inputs are r->ts, the timestamp from the incoming packet, and arrival, the current time in the same units. Here s points to state for the source; s->transit holds the relative transit time for the previous packet, and s->jitter holds the estimated jitter. The jitter field of the reception report is measured in timestamp units and expressed as an unsigned integer, but the jitter estimate is kept in a floating point. As each data packet arrives, the jitter estimate is updated:
```
int transit = arrival - r->ts;
int d = transit - s->transit;

s->transit = transit;
if (d < 0)
    d = -d;
s->jitter += (1./16.) * ((double)d - s->jitter);
```
When a reception report block (to which rr points) is generated for this member, the current jitter estimate is returned:
```
rr->jitter = (u_int32) s->jitter;
```
Alternatively, the jitter estimate can be kept as an integer, but scaled to reduce round-off error. The calculation is the same except for the last line:
```
s->jitter += d - ((s->jitter + 8) >> 4);
```
In this case, the estimate is sampled for the reception report as:
```
rr->jitter = s->jitter >> 4;
```
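For reference, the RFC's fragments can be assembled into a small, self-contained program. This is a sketch of the floating-point version: the source_state struct, the function names, and the sample timestamps in main are illustrative assumptions, not RFC or iperf3 code.

```c
#include <stdio.h>
#include <stdint.h>

/* Per-source state, as described in A.8: the relative transit time of
 * the previous packet and the running jitter estimate. */
struct source_state {
    int    transit;  /* relative transit time of previous packet */
    double jitter;   /* current jitter estimate, in timestamp units */
};

/* Update the jitter estimate for one arriving packet.
 * ts      = RTP timestamp carried in the packet
 * arrival = receiver clock at arrival, in the same timestamp units */
static void update_jitter(struct source_state *s, int ts, int arrival)
{
    int transit = arrival - ts;    /* relative transit time */
    int d = transit - s->transit;  /* D(i-1, i) */

    s->transit = transit;
    if (d < 0)
        d = -d;                    /* |D(i-1, i)| */
    s->jitter += (1. / 16.) * ((double)d - s->jitter);
}

int main(void)
{
    /* Illustrative timestamp/arrival pairs (made-up numbers). */
    int ts[]      = { 0, 160, 320, 480, 640 };
    int arrival[] = { 5, 170, 335, 490, 650 };
    struct source_state s = { arrival[0] - ts[0], 0.0 };

    for (int i = 1; i < 5; i++) {
        update_jitter(&s, ts[i], arrival[i]);
        printf("packet %d: jitter = %.4f\n", i, s.jitter);
    }
    /* When a reception report is generated, the estimate is sampled
     * as an unsigned integer: (uint32_t) s.jitter */
    return 0;
}
```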
```
Client specific:
  -c, --client <host>       run in client mode, connecting to <host>
  -u, --udp                 use UDP rather than TCP
  -b, --bitrate #[KMG][/#]  target bitrate in bits/sec (0 for unlimited)
                            (default 1 Mbit/sec for UDP, unlimited for TCP)
                            (optional slash and packet count for burst mode)
```
I looked at the iperf3 source code and also took some tcpdumps. My understanding of iperf3 is as follows.
In iperf UDP packets, a timestamp and a sequence number (which the iperf source code calls pcount) are written into the payload by the sender. When the receiver gets a packet, it extracts the timestamp to compute jitter and the sequence number to count packet loss.
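As a rough illustration of such a payload, here is a sketch assuming three 32-bit network-byte-order fields (seconds, microseconds, pcount); the exact on-wire layout in iperf3 may differ (newer versions can use a 64-bit pcount, for example).

```c
#include <string.h>
#include <stdint.h>
#include <arpa/inet.h>  /* htonl/ntohl */
#include <sys/time.h>   /* gettimeofday */

/* Sender side: stamp the current time and a sequence number (pcount)
 * into the start of the UDP payload. The field layout here is an
 * assumption for illustration, not iperf3's actual wire format. */
static void stamp_payload(char *buf, uint32_t pcount)
{
    struct timeval now;
    uint32_t sec, usec, seq;

    gettimeofday(&now, NULL);
    sec  = htonl((uint32_t)now.tv_sec);
    usec = htonl((uint32_t)now.tv_usec);
    seq  = htonl(pcount);

    memcpy(buf,     &sec,  4);
    memcpy(buf + 4, &usec, 4);
    memcpy(buf + 8, &seq,  4);
}

/* Receiver side: pull the same three fields back out. */
static void parse_payload(const char *buf, struct timeval *sent,
                          uint32_t *pcount)
{
    uint32_t sec, usec, seq;

    memcpy(&sec,  buf,     4);
    memcpy(&usec, buf + 4, 4);
    memcpy(&seq,  buf + 8, 4);

    sent->tv_sec  = ntohl(sec);
    sent->tv_usec = ntohl(usec);
    *pcount       = ntohl(seq);
}

int main(void)
{
    char buf[12];
    struct timeval sent;
    uint32_t seq;

    stamp_payload(buf, 42);
    parse_payload(buf, &sent, &seq);
    return seq == 42 ? 0 : 1;  /* round-trip check */
}
```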
Jitter is calculated by first comparing the packet's timestamp with the current time to find the delay, D_current. Then |D_current - D_previous| is taken (the difference cancels out the clock offset between the sender and the receiver, so the two clocks do not need to be synchronized) and fed into the jitter estimate.
Loss is accumulated as the difference between the current pcount and the expected pcount, which is the previously received pcount plus one. A minimal sketch of this receiver-side bookkeeping follows.
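The names here (recv_stats, on_packet) are hypothetical, not iperf3's internal identifiers; the jitter update mirrors the RFC 1889 estimator quoted above.

```c
#include <stdint.h>

/* Receiver-side counters, mirroring the description above. */
struct recv_stats {
    uint32_t next_pcount;  /* expected sequence number of next packet */
    int64_t  lost;         /* accumulated datagram loss */
    double   prev_delay;   /* D_previous, sender-to-receiver delay */
    double   jitter;       /* running jitter estimate */
    int      first;        /* set until the first packet is seen */
};

/* Account for one received packet.
 * pcount = sequence number from the payload
 * delay  = current time minus the payload timestamp (D_current);
 *          any constant clock offset cancels in the difference below. */
static void on_packet(struct recv_stats *st, uint32_t pcount, double delay)
{
    if (st->first) {
        st->first = 0;
    } else {
        /* Loss: gap between what arrived and what was expected. */
        if (pcount > st->next_pcount)
            st->lost += pcount - st->next_pcount;

        /* Jitter: smoothed |D_current - D_previous|, as in RFC 1889. */
        double d = delay - st->prev_delay;
        if (d < 0)
            d = -d;
        st->jitter += (d - st->jitter) / 16.0;
    }
    st->prev_delay  = delay;
    st->next_pcount = pcount + 1;
}

int main(void)
{
    struct recv_stats st = { 0, 0, 0.0, 0.0, 1 };

    /* packets 0, 1, 3 arrive; packet 2 is lost (made-up delays) */
    on_packet(&st, 0, 5.0);
    on_packet(&st, 1, 7.0);
    on_packet(&st, 3, 6.0);
    /* st.lost == 1, st.jitter holds the smoothed estimate */
    return 0;
}
```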
Whether iperf runs in UDP mode (with the -u option on the client side) or TCP mode, a control TCP connection is established when iperf starts. At the end of the test, this control connection is used to exchange client-side and server-side statistics, including CPU utilization and the jitter and loss values calculated above.
```
################### macOSX client ################
$ iperf3 -c 192.168.50.208 -t 10 -i 1 -w 8000k
Connecting to host 192.168.50.208, port 5201
iperf3: error - unable to set socket buffer size: No buffer space available
```
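The likely cause: the requested window (-w 8000k) translates into SO_SNDBUF/SO_RCVBUF values larger than the kernel permits. On macOS the ceiling is typically governed by the kern.ipc.maxsockbuf sysctl, so raising that limit or requesting a smaller window should avoid the error.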