
Notes on "A Close Examination of Performance and Power Characteristics of 4G LTE Networks"! – Pt1

These are my notes on “A Close Examination of Performance and Power Characteristics of 4G LTE Networks”, which can be found here.

4G is, as the name suggests, the fourth generation of mobile communication standards. There are currently two competing technologies, Mobile WiMAX and LTE. Long Term Evolution (LTE) is the technology that this paper considers.

4G Test Android Application

The team designed an Android application called 4GTest. It is available in the Play Store here. The application measures network performance, including latency, download speed and upload speed.

The application description claims that the results produced by this application are more accurate than those of any other test that uses single-threaded TCP.

I tested the application on an old HTC Magic, but I couldn’t get it to recognize my internet connection. The application is not compatible with the Nexus 7.

Overall Results / Conclusions

The abstract states that “LTE generally has significantly higher downlink and uplink throughput than 3G and even WiFi, with a median value of 13Mbps and 6Mbps, respectively”. I am not sure why the median value has been quoted here instead of the mean value (if you know why, please comment below).

The data set for the study was 20 smartphone users over 5 months, which seems like a fairly limited sample size.

The power model of the LTE network had an error rate of less than 6%. It was found that LTE is less power efficient than WiFi and 3G. Testing of Android applications highlighted that, compared to 3G, device processing power is now becoming the bottleneck rather than network performance.

Long term evolution
LTE aims to enhance the Universal Terrestrial Radio Access Network, more commonly known as 3G. LTE is commercially available but is not yet widespread. The targeted user throughput is 100Mbps for downlink and 50Mbps for uplink. These targets are distinctly different from the median values of 13Mbps and 6Mbps quoted earlier.

User-plane latency is defined as the one-way transit time between the availability of a packet at the IP layer (the network layer) at the source and the availability of this packet at the IP layer (the network layer) at the destination. This definition means that user-plane latency includes the delay introduced by associated protocols. Since the delay is measured from network layer to network layer, we do not need to consider the delay introduced by the application layer or transport layer.

LTE can be compared to other networks such as Wi-Fi and 3G by comparing network statistics such as bit rate, latency, user equipment (UE) power saving, etc.

LTE uses Orthogonal Frequency Division Multiplex (OFDM) technology.
OFDM is based on FDM, but FDM wastes bandwidth because you need to leave bandwidth free between the different carriers to stop the signals from interfering with each other. OFDM allows the carriers to be spaced more closely together, so less bandwidth is wasted.
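To see why the carriers can be packed so tightly, here is a small illustrative check (my own sketch, not from the paper): two sub-carriers whose frequencies differ by a whole multiple of 1/T have zero correlation over one symbol period, so they can overlap in spectrum without interfering.

```python
# Illustrative sketch (not from the paper): sub-carriers spaced at integer
# multiples of 1/T are orthogonal over one OFDM symbol, which is why OFDM can
# pack them tightly without inter-carrier interference.
import numpy as np

N = 1024                       # samples in one OFDM symbol period
n = np.arange(N)
k, m = 5, 6                    # two adjacent sub-carrier indices

sub_k = np.exp(2j * np.pi * k * n / N)
sub_m = np.exp(2j * np.pi * m * n / N)

print(abs(np.vdot(sub_k, sub_k)) / N)  # ~1.0: a carrier correlates with itself
print(abs(np.vdot(sub_k, sub_m)) / N)  # ~0.0: adjacent carriers are orthogonal
```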

However, OFDM is less power efficient as it is more complex and requires linear amplification. To save power, the LTE uplink uses a special implementation of OFDM called SC-FDMA.

Discontinuous reception (DRX) is also employed by some existing mobile technologies to reduce UE power consumption. LTE supports DRX. DRX is configured on a per-UE basis and allows a trade-off to be made between power saving, delay and signalling overhead.

This study differs from previous ones in that it uses neither total on time nor a simplified LTE model. Furthermore, it uses real user traces instead of synthetic packets. UMICH refers to the real user data set of 20 smartphone users over 5 months; this data consists of user traces from Wi-Fi and 3G networks but not from LTE networks. So instead the Wi-Fi traces are fed into the LTE model simulation framework; Wi-Fi traces were chosen over 3G because the RTT of Wi-Fi is more similar to that of LTE than that of 3G.

The study shows that LTE is less power efficient than WiFi and 3G for small data transfers, but for bulk data transfers LTE is more power efficient than 3G, although not WiFi. Because LTE is more efficient than 3G for bulk data transfer, it is important to make use of application tools such as the Application Resource Optimizer (ARO) (MobiSys11 demo here) on LTE.

LTE was found to be less power efficient than WiFi and 3G, even when DRX was used.

In LTE, the tail timer is the key parameter in determining the trade-off between UE energy usage, performance and signalling overhead.

The study identified performance bottlenecks for Android applications caused by device processing power; these were detected by monitoring the CPU usage of applications on the Android devices.

Background on Radio Resource Control (RRC) and Discontinuous Reception (DRX) in LTE

Radio Resource Control (RRC) is a signalling protocol between the user equipment (UE) and the 3G network (or 4G in this case). Long Term Evolution (LTE) has two RRC states: connected and idle. The transition from RRC-connected to RRC-idle is made when no data has been received or sent within a given time period. The transition from RRC-idle to RRC-connected is made when some data is received or sent.
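As a minimal sketch of this promotion/demotion logic (the class, method names and timer value below are my own illustration, not taken from the paper):

```python
# Hedged sketch of the two-state RRC behaviour described above. The 10-second
# inactivity timeout is illustrative only; the paper infers the real value
# from measurements.
class RRCStateMachine:
    IDLE, CONNECTED = "RRC_IDLE", "RRC_CONNECTED"

    def __init__(self, inactivity_timeout=10.0):
        self.state = self.IDLE
        self.inactivity_timeout = inactivity_timeout
        self.last_activity = None

    def on_packet(self, now):
        """Any data sent or received promotes the UE to RRC-connected."""
        self.state = self.CONNECTED
        self.last_activity = now

    def tick(self, now):
        """Demote to RRC-idle once no data has been sent/received for the timeout."""
        if (self.state == self.CONNECTED
                and now - self.last_activity >= self.inactivity_timeout):
            self.state = self.IDLE

rrc = RRCStateMachine()
rrc.on_packet(now=0.0)   # data arrives -> RRC-connected
rrc.tick(now=12.0)       # 12 s of silence -> back to RRC-idle
print(rrc.state)
```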

Discontinuous Reception (DRX) is used in mobile communication to conserve power. The UE and the network (in this case LTE) decide on phases when data transfer occurs; outside of these phases the network receiver on the mobile device is turned off, thus consuming less energy.

For example, in 802.11 wireless networks, polling is used to control DRX. The mobile device is placed into standby for a set time interval and then a message is sent by the access point to indicate whether there is any waiting data; if not, the device is placed into standby again.

When LTE is in the RRC-connected state, the UE can be in continuous reception, short DRX or long DRX. When LTE is in the RRC-idle state, the UE is only in DRX mode. Long DRX and short DRX are cycles of the receiver being on and off. The receiver is off for longer in long DRX, which increases delay but reduces energy consumption. The receiver is on for longer in short DRX, which increases energy consumption but reduces delay. The parameters which dictate the length of time before the various transitions control the trade-off between battery saving and latency.
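As a rough illustration of that trade-off (the cycle lengths below are made up and are not the LTE parameters inferred in the paper): a longer DRX cycle keeps the radio off for a larger fraction of the time, but a packet arriving just after the on-duration has to wait longer for the next one.

```python
# Toy numbers only: a DRX cycle is a short "on duration" followed by an off
# period. Longer cycles save energy but add worst-case delay.
def drx_stats(on_duration_ms, cycle_ms):
    duty_cycle = on_duration_ms / cycle_ms          # fraction of time the radio is on
    worst_case_wait_ms = cycle_ms - on_duration_ms  # packet just misses the on window
    return duty_cycle, worst_case_wait_ms

for name, cycle_ms in [("short DRX", 20), ("long DRX", 320)]:
    duty, wait = drx_stats(on_duration_ms=1, cycle_ms=cycle_ms)
    print(f"{name:9s}: radio on {duty:.1%} of the time, "
          f"worst-case extra delay ~{wait} ms")
```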

Network Measurement
The Android application designed to collect test data was called 4GTest and allows the user to switch between 3G, WiFi and LTE. The application made use of M-Lab support. Measurement Lab (M-Lab) is a distributed server platform for the deployment of internet measurement tools. The server-side tools are open source and the APIs allow researchers to develop client tools such as 4GTest.

When considering RTT for LTE, it is important to consider the latency of the wired parts of the path from client to server, because the latency of LTE is lower than that of 3G, so errors caused by the wired parts become more noticeable. To minimize the part of the client-to-server path which is not LTE, the nearest server to the client is always used.

If GPS was not available then the IP address was used to locate the client. This translation from IP address to GPS coordinates is a rough estimate. The service used for this translation was MaxMind.

To measure RTT and latency variation (the difference in latency between connections, not between packets, so it is not called jitter), the application repeatedly established a new TCP connection with the server and measured the delay between the SYN and SYN-ACK packets. The median RTT and the variation are reported to a central server.
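A hedged sketch of that kind of probe (host, port and repeat count are placeholders; timing connect() approximates the SYN-to-SYN-ACK delay because connect() returns once the handshake completes):

```python
# Rough sketch, not the 4GTest implementation: time repeated TCP handshakes
# and report the median RTT and the spread across connections.
import socket, statistics, time

def probe_rtt(host, port=80, repeats=20):
    samples = []
    for _ in range(repeats):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=5):
            samples.append(time.monotonic() - start)
    median_rtt = statistics.median(samples)
    variation = max(samples) - min(samples)   # latency variation across connections
    return median_rtt, variation

print(probe_rtt("example.org"))
```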

To measure peak channel capacity, 4GTest sets up multi-threaded TCP measurements using the 3 nearest servers in M-Lab. The throughput test lasts for 20 seconds; the first 5 seconds are ignored due to TCP slow start, and the remaining 15 seconds are split into 15 one-second bins. The average throughput for each bin is calculated and the median of all bins is the measured throughput.

Interestingly, the median is used here instead of the average, as the median reduces the impact of abnormal bins.
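A small sketch of this binning scheme, assuming we already have (elapsed-time, bytes-received) samples from the multi-threaded download; the input format is my own assumption, while the 5-second warm-up, one-second bins and median follow the description above.

```python
# Sketch only: aggregate byte counts into one-second bins, drop the slow-start
# warm-up, and take the median of the per-bin throughputs.
import statistics

def measured_throughput_mbps(samples, warmup_s=5, duration_s=20):
    bins = [0.0] * (duration_s - warmup_s)        # 15 one-second bins
    for t, nbytes in samples:                     # t = seconds since test start
        if warmup_s <= t < duration_s:
            bins[int(t) - warmup_s] += nbytes
    bin_mbps = [8 * b / 1e6 for b in bins]        # bytes per second -> Mbps
    return statistics.median(bin_mbps)            # median dampens abnormal bins
```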

To collect packet traces on the Android phone, tcpdump was cross-compiled.

To capture CPU usage history, a C program was written to read /proc/stat on the Android system.
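The authors used C; the sketch below shows the same idea in Python: sample the aggregate cpu line of /proc/stat twice and compute the fraction of jiffies spent outside idle/iowait.

```python
# Sketch of /proc/stat-based CPU usage sampling (not the authors' C program).
import time

def read_cpu_times():
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]          # idle + iowait jiffies
    return idle, sum(fields)

def cpu_usage(interval=1.0):
    idle1, total1 = read_cpu_times()
    time.sleep(interval)
    idle2, total2 = read_cpu_times()
    busy = (total2 - total1) - (idle2 - idle1)
    return busy / (total2 - total1)

print(f"CPU usage: {cpu_usage():.1%}")
```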

To understand the impact of packet size on one-way delay, the delay was measured for a range of packet sizes, with 100 samples taken for each packet size.
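A very rough sketch of such a sweep (the host, port, sizes and pacing are placeholders, and it assumes the sender's and receiver's clocks are synchronised, e.g. via NTP, which is the hard part of any one-way delay measurement):

```python
# Sketch only: UDP probes of increasing size carry their send timestamp; the
# receiver logs one-way delay on arrival, assuming synchronised clocks.
import socket, struct, time

HOST, PORT = "192.0.2.1", 9999          # placeholder receiver address
SIZES = [100, 400, 800, 1200]           # payload sizes in bytes
SAMPLES_PER_SIZE = 100                  # 100 samples per size, as in the paper

def send_probes():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for size in SIZES:
        for _ in range(SAMPLES_PER_SIZE):
            payload = struct.pack("!d", time.time()).ljust(size, b"\0")
            sock.sendto(payload, (HOST, PORT))
            time.sleep(0.05)            # pace the probes

def receive_probes():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    while True:
        data, _ = sock.recvfrom(2048)
        sent = struct.unpack("!d", data[:8])[0]
        delay_ms = (time.time() - sent) * 1000
        print(f"size={len(data):4d} bytes  one-way delay={delay_ms:.1f} ms")
```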

Power measurements were made using a Monsoon power monitor as the power input for the Android phone. The phone screen was turned off where possible.

The data set used for analysis is called UMICH and includes full packet traces in tcpdump format, including both headers and payloads.

The network model simulator takes the binary packet trace files in libpcap format, and preprocessing is required to collect accurate data.
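As an example of the kind of preprocessing involved, the sketch below walks a libpcap trace and pulls out (timestamp, packet length) pairs that a simulator could replay. The paper does not say which tool the authors used; dpkt here is just one convenient third-party library.

```python
# Sketch only: extract per-packet timestamps and sizes from a libpcap file.
import dpkt

def load_trace(path):
    events = []
    with open(path, "rb") as f:
        for ts, buf in dpkt.pcap.Reader(f):
            events.append((ts, len(buf)))   # (epoch seconds, bytes on the wire)
    return events

# events = load_trace("umich_trace.pcap")   # placeholder filename
```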

Application performance 

The sample applications tested were some of the most popular apps. For the browser, a simple website and a content-rich website were both tested. The approach taken involved launching the applications and then monitoring upstream data, downstream data and CPU usage.

LTE Network characterization

These results are from the public deployment of 4GTest. In the US, the coverage of LTE, WiMAX and WiFi was found by 4GTest to be similar.

The public deployment of 4GTest found that the downlink and uplink throughput were both much higher for LTE than for Wi-Fi, WiMAX and 3G. High variation in LTE throughput was observed.

For RTT and jitter, LTE was found to have similar values to WiFi and better values than 3G and WiMAX.

One-way delay and impact of packet size
On WiFi, packet size didn’t seem to influence one-way delay in either direction. On LTE, the uplink delay increases as packet size increases.

Mobility
The requirements for LTE highlight that the network should be optimized for UE at low mobile speed (0 to 15 km/h) whilst still supporting high mobile speed (15 to 120 km/h). It was observed that in the face of speed changes, RTT remains stable and no significant change in throughput was observed.

Time of Day
Time of day was not found to have a significant impact on the observed results.