$ cd ~/.ssh
$ ssh-keygen -t rsa -f <key-name>
Then copy
These are my notes on “A Close Examination of Performance and Power Characteristics of 4G LTE Networks”, which can be found here
4G is, as the name suggests, the fourth generation of mobile communication standards. There are currently two competing technologies, Mobile WiMAX and LTE. Long Term Evolution (LTE) is the technology that this paper considers.
4G Test Android Application
The team designed an Android application called 4GTest. It is available in the Play store here. The application measures network performance, including latency, download speed and upload speed.
The application description claims that the results produced by this application are more accurate than any other test using single-threaded TCP tests.
I tested the application on an old HTC Magic but I couldn’t get the application to recognize my internet connection. The application is not compatible with the Nexus 7.
Overall Results / Conclusions
The abstract states that “LTE generally has significantly higher downlink and uplink throughput than 3G and even WiFi, with a median value of 13Mbps and 6Mbps, respectively”. I am not sure why the median value has been quoted here instead of the mean value (if you know why, please comment below).
The data set for the study was 20 smartphone users over 5 months, which seems like a fairly limited sample size.
The power model of the LTE network had a less than 6% error rate. It was found that LTE was less power efficient than WiFi and 3G. Testing of Android applications highlighted that, compared to 3G, device processing power is now becoming a bottleneck instead of network performance.
Long Term Evolution
LTE aims to enhance the Universal Terrestrial Radio Access Network, more commonly known as 3G. LTE is commercially available but is not yet widespread. The targeted user throughput is 100 Mbps for downlink and 50 Mbps for uplink. These targets are distinctly different from the previously quoted median values of 13 Mbps and 6 Mbps.
User-plane latency is defined as the one-way transit time between the availability of a packet at the IP layer (the network layer) at the source and the availability of this packet at the IP layer (the network layer) at the destination. This definition means that user-plane latency includes the delay introduced by associated protocols. Since the delay is measured from network layer to network layer, we do not need to consider the delay introduced by the application layer or transport layer.
LTE can be compared to other networks such as Wi-Fi and 3G by comparing network statistics such as bit rate, latency, user equipment (UE) power saving, etc.
LTE uses Orthogonal Frequency Division Multiplex (OFDM) technology.
OFDM is based on FDM, but FDM wastes bandwidth: you need to leave bandwidth free between different carriers to stop the signals from interfering with each other. OFDM allows the carriers to be spaced more closely together, so less bandwidth is wasted.
However, OFDM is less power efficient as it is more complex and requires linear amplification. To save power, the LTE uplink uses a special implementation of OFDM called SC-FDMA.
Discontinuous reception (DRX) is also employed by some existing mobile technologies to reduce UE power consumption. LTE supports DRX. DRX is configured on a per-UE basis and allows trade-offs to be made between power saving, delay and signalling overhead.
This study differs from previous ones since it uses neither total on time nor a simplified LTE model. Furthermore, it uses real user traces instead of synthetic packets. UMICH refers to the real user data set of 20 smartphone users over 5 months; this data consists of user traces from Wi-Fi and 3G networks but not from LTE networks. So instead, Wi-Fi traces are fed into the LTE model simulation framework; Wi-Fi traces were chosen over 3G as the RTT of Wi-Fi is closer to that of LTE than the RTT of 3G is.
The study shows that LTE is less power efficient than WiFi and 3G for small data transfers, but for bulk data transfers LTE is more power efficient than 3G, though not Wi-Fi. Because LTE is more efficient than 3G for bulk data transfer, it is important to make use of application tools such as the Application Resource Optimizer (ARO) (MobiSys11 demo here) in LTE.
LTE is less power efficient than WiFi and 3G, even when DRX is used.
In LTE, the tail timer is the key parameter in determining the trade-off between UE energy usage, performance and signalling overhead.
The study identified performance bottlenecks for Android applications caused by device processing power; these were detected by monitoring the CPU usage of applications on the Android devices.
Background on Radio Resource Control (RRC) and Discontinuous Reception (DRX) in LTE
Radio Resource Control (RRC) is a signalling protocol between user equipment (UE) and the 3G network (or 4G in this case). Long Term Evolution (LTE) has two RRC states: connected and idle. The transition from RRC-connected to RRC-idle is made when no data have been received or sent within a time period. The transition from RRC-idle to RRC-connected is made when some data is received or sent.
Discontinuous Reception (DRX) is a technique used in mobile communication to conserve power. The UE and the network (in this case LTE) decide on phases when data transfer occurs; outside of these phases the network receiver on the mobile device is turned off, thus consuming less energy.
For example, in 802.11 wireless networks, polling is used to control DRX. The mobile device is placed into standby for a set time interval, and then a message is sent by the access point to indicate whether there is any waiting data; if not, the device is placed in standby again.
When LTE is in the RRC-connected state, the UE can be in continuous reception, short DRX or long DRX. When LTE is RRC-idle, the UE is only in DRX mode. Long DRX and short DRX are cycles of the receiver being on and off. The receiver is off for longer in long DRX, which increases delay but reduces energy consumption. The receiver is on for longer in short DRX, which increases energy consumption but reduces delay. The parameters which dictate the length of time before the various transitions control the trade-off between battery saving and latency.
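To make the energy/latency trade-off concrete, here is a toy sketch in Python. This is my own illustration, not the paper's simulator: the function, the packet timestamps and the timer values are all made-up assumptions.

```python
# Illustrative sketch of the tail-timer trade-off (my own toy model;
# the timer values and packet timestamps are made up, not from the paper).

def radio_on_time(packet_times, tail_timer):
    """Total seconds the radio stays on: after each packet the radio
    remains on for up to tail_timer seconds waiting for more data."""
    on_time = 0.0
    for prev, nxt in zip(packet_times, packet_times[1:]):
        on_time += min(nxt - prev, tail_timer)
    return on_time + tail_timer  # tail after the final packet

# Hypothetical packet timestamps in seconds.
packets = [0.0, 0.5, 1.0, 8.0, 8.2]

# A longer tail timer keeps the radio on longer (more energy) but avoids
# the delay and signalling overhead of extra idle -> connected transitions.
print(radio_on_time(packets, 1.0))
print(radio_on_time(packets, 10.0))
```

With the short timer the radio sleeps during the long gap between bursts; with the long timer it stays on through the whole trace, which is the trade-off the tail timer controls.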
Network Measurement
The Android application designed to collect test data was called 4GTest and allows the user to switch between 3G, WiFi and LTE. The application made use of M-Lab support. Measurement Lab (M-Lab) is a distributed server platform for the deployment of internet measurement tools. The server-side tools are open source and the APIs allow researchers to develop client tools such as 4GTest.
When considering RTT for LTE, it is important to consider the latency of the wired parts of the path from client to server, because the latency of LTE is lower than that of 3G, so errors caused by the wired parts become more noticeable. To minimize the part of the path from client to server which is not LTE, the nearest server to the client is always used.
If GPS was not available then the IP address was used to locate the client. This translation from IP address to location is a rough estimate. The service used for this translation was MaxMind.
To measure RTT and latency variation (the difference in latency between connections, not packets, so it is not called jitter), the application repeatedly established a new TCP connection with the server and measured the delay between the SYN and SYN-ACK packets. The median of the RTT measurements and the variation are reported to the central server.
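This SYN/SYN-ACK timing maps naturally onto timing a TCP connect() call, which returns once the SYN-ACK arrives. Below is a minimal Python sketch of the idea, not 4GTest's actual code; the local listener merely stands in for a nearby measurement server.

```python
import socket
import statistics
import time

def tcp_rtt_samples(host, port, n=5):
    """Estimate RTT by timing TCP connection establishment: connect()
    returns once the SYN-ACK arrives, so each sample approximates one RTT."""
    samples = []
    for _ in range(n):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        start = time.monotonic()
        s.connect((host, port))
        samples.append(time.monotonic() - start)
        s.close()
    return samples

# Demo against a local listener (a stand-in for the nearest M-Lab server).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(16)
host, port = server.getsockname()

samples = tcp_rtt_samples(host, port)
# Report the median RTT and the spread across connections
# (the "latency variation" described above).
print(statistics.median(samples), max(samples) - min(samples))
server.close()
```

Using a fresh connection per sample is what makes this a per-connection measurement rather than per-packet jitter.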
To measure peak channel capacity, 4GTest sets up multi-threaded TCP measurements using the 3 nearest servers in M-Lab. The throughput test lasts for 20 seconds; the first 5 seconds are ignored due to TCP slow start, and the remaining 15 seconds are split into 15 one-second bins. The average throughput for each bin is calculated and the median of all bins is the measured throughput.
Interestingly, the median is used here instead of the mean, as the median reduces the impact of abnormal bins.
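The binning step can be sketched in a few lines of Python. This is my own illustration with made-up byte counts, chosen to show how one abnormal bin barely moves the median, whereas it would drag down a mean.

```python
from statistics import median

def binned_throughput(byte_counts, bin_seconds=1.0):
    """Given bytes transferred per bin (after discarding the slow-start
    period), return the median per-bin throughput in Mbit/s."""
    rates = [8 * b / bin_seconds / 1e6 for b in byte_counts]
    return median(rates)

# 15 hypothetical one-second bins; the 0.1 MB outlier (an "abnormal bin")
# barely moves the median, whereas it would noticeably lower a mean.
bins_mb = [1.6, 1.7, 1.6, 1.5, 1.7, 1.6, 0.1,
           1.6, 1.7, 1.5, 1.6, 1.7, 1.6, 1.5, 1.6]
bins_bytes = [int(mb * 1e6) for mb in bins_mb]
print(binned_throughput(bins_bytes))  # 12.8 Mbit/s
```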
To collect packet traces on the Android phone, tcpdump was cross-compiled.
To capture CPU usage history, a C program was written to read /proc/stat on the Android system.
To understand the impact of packet size on one-way delay, delay was measured for a range of packet sizes; for each packet size, 100 samples were measured.
Power measurements were made using a Monsoon power monitor as the power input for the Android phone. The phone screen was turned off where possible.
The data set used for analysis is called UMICH and includes full packet traces in tcpdump format, including both headers and payloads.
The network model simulator takes the binary packet trace files in libpcap format, and preprocessing is required to collect accurate data.
Application performance
The sample applications tested were some of the most popular apps. For the browser, a simple website and a content-rich website were both tested. The approach taken involved launching the applications and then monitoring upstream data, downstream data and CPU usage.
LTE Network characterization
These results are from the public deployment of 4GTest. In the US, the coverage of LTE, WiMAX and WiFi was found by 4GTest to be similar.
The public deployment of 4GTest found that the downlink and uplink throughput were both much higher for LTE than for Wi-Fi, WiMAX and 3G. High variation in LTE throughput was observed.
For RTT and jitter, LTE was found to have similar values to WiFi and better values than 3G and WiMAX.
One-way delay and impact of packet size
On WiFi, packet size didn’t seem to influence one-way delay in either direction. On LTE, the uplink delay increases as packet size increases.
Mobility
The requirements for LTE highlight that the network should be optimized for UE at low mobile speed (0 to 15 km/h) whilst still supporting high mobile speed (15 to 120 km/h). It was observed that in the face of speed changes, RTT remains stable and no significant change in throughput was observed.
Time of Day
Time of day was not found to have a significant impact on the observed results.
I’ve finally bitten the bullet and decided to learn Programmer Dvorak. Firstly, what on earth is Dvorak? … Well, Dvorak is an alternative keyboard layout to QWERTY which is designed to make it easier and faster to type, by placing the most common letters near the home position of your fingers. Programmer Dvorak is a particular “sub-layout” of Dvorak which makes it easier to write source code.
Switching to Dvorak in Ubuntu is easy: just change the keyboard layout to “English (Dvorak Programmer)”. I was told by a friend that it is best not to move your keyboard keys or stick stickers over them; you should instead memorize the layout of Programmer Dvorak.
Wireshark wins over TCPtrace on GUI
TCPtrace is a tool designed to analyze the output logs from TCPdump. Previously, in my introduction to TCPdump, I highlighted that the output logs created by TCPdump were not plain text and only special programs could interpret them; TCPdump is one of these programs, as are Wireshark and TCPtrace.
So TCPtrace takes the output file from TCPdump as an input and it then outputs useful information and graphs.
I downloaded it from the Ubuntu repositories using the typical ‘sudo apt-get install tcptrace’. If this is not possible you can download it from here.
You can call TCPtrace with a TCPdump file using ‘tcptrace my-file’, where my-file is the name of the file outputted by TCPdump. For example you could do something like:
$ sudo tcpdump -v -i wlan0 -w my_tcpdump_output -c 100
$ tcptrace my_tcpdump_output
The above will run TCPdump and create an output file called “my_tcpdump_output”; this file is then passed as an argument to the TCPtrace tool.
The structure of the output is (in order from the top) :
This output is TCPtrace’s brief output. Just like TCPdump, you can stop the translation of IP addresses to domain names using the ‘-n’ option.
When using TCPdump, you can see more detailed output using the ‘-v’ option; with TCPtrace, you can see more detailed output using the ‘-l’ option.
When adding options to TCPtrace, you need to ensure that you place the extra options before the name of the input file and after the tool name.
When viewing the output from the long mode (the ‘-l’ option), all information is labelled. I’m now going to explain each label given in the long output (warning… this might take a while):
Packets and ACKS
Retransmissions
Window scaling / Probing
etc… (sorry, I hate leaving things half done, but I really wanted to move on; it’s on my to-do list)
TCPtrace will generate statistics on RTT when used with the options ‘-r’ and ‘-l’. This will give data on RTT including the number of RTT samples found, RTT minimum, RTT maximum, RTT average, RTT standard deviation, and RTT from TCP’s handshake. The same data is then available again for full-sized RTT samples only.
The following are the notes I’ve taken from the lectures and labs on my first day here at Google, London. This is a first draft and they are very brief, taken in quite a rush. The primary reason for placing them here on my blog is so that they can be used by the other people here with me at the camp.
The Android platform launched in October 2008, and it now has over 400 million devices registered. Currently, more than 1 million devices are registered each day. There are over 600,000 applications in the Google Play store; this highlights the quality of the development tools, but it also means that there is a lot of competition, so applications need to be high quality across the supported devices.
This diagram shows the Android Development Architecture
The main layers that I will be focusing on are the application layer and the application framework. The Android platform makes use of the Java Modeling Language (JML).
The latest Android OS is nicknamed Jelly Bean (it’s 4.1).
The Android development that takes place here in London includes YouTube, Play Videos, Voice Search, Voice IME and Chrome.
Chrome is NOT a port of Chromium. It was first released in February 2012. The current Chrome beta is based on Chromium 18.
The reason that applications look different to their web implementations is to make use of different user interaction methods (such as touch) and to work around limitations (such as screen size).
GMS - a set of applications separate from the OS but commonly shipped with the platform.
More information on design for android is available at d.android.com/design.
The primary form of user interaction is touch, so you need to consider factors such as the size of the user’s fingers. The design of mobile applications must be intuitive to a user on the go. The Android OS runs on 1000 different devices, so you need to consider factors like screen size or whether the device has a keyboard.
Key Principles
– Pictures are faster than words
– Only show what you need to
– Make important things fast
Every OS has a different look and feel. The current system theme is called the “Holo” visual language. You can vary Holo to get dark (e.g. media apps), light (e.g. productivity) or light with a dark action bar.
UI Structure
The structure of the android UI is (from the top down) the action bar (required), tabs (optional) and contents.
Action Bar
The action bar has 4 elements (from left to right):
On smaller screens, some action buttons get pushed into the overflow menu.
The action bar is automatically added in modern applications, and ActionBarSherlock can be used to achieve backwards compatibility.
You can customize the action bar with the getActionBar().setDisplayOptions() method.
Tabs
Tabs are available as part of the ActionBar API and can usually also be switched with gestures.
Contents
The layout of the content is defined as a tree consisting of view groups (tree nodes) and views (tree leaves).
The layout of the content is most commonly defined in XML under res/layout
A view is an individual component such as a button, text box or radio button. All views are rectangular.
A view group is an ordered list of views and view groups. View groups can be divided into two categories: layouts and complex view groups. The key layouts are frame, linear, relative and grid. The key complex view groups are list views and scroll views.
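As a sketch of such a tree, here is a minimal hypothetical layout file: a LinearLayout view group (a tree node) holding two views (tree leaves). The file name, ids and string names are illustrative, not from the lectures.

```xml
<!-- res/layout/main.xml: one view group containing two views
     (ids and string names are made up for illustration) -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <TextView
        android:id="@+id/label"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/hello" />

    <Button
        android:id="@+id/submit"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/submit" />
</LinearLayout>
```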
The action bar is not defined in the XML, nor is it included as part of the contents.
The XML files defining the layout of a particular activity are located in res/layout. You can also specify the layout for a particular screen size or orientation. For example, if you wanted to use a different layout on the Nexus 7 compared to a typical Android phone, you can put the XML files in res/layout-large. If there is no particular layout specified in res/layout-large then the layout in res/layout will be used automatically. Other directories include res/layout-land and res/layout-large-land.
The universal application binary will contain all resources, and the system then chooses at run time which resource to use.
A similar situation is true of the drawables, for example: drawable-xhdpi for extra high dots per inch.
A really interesting and useful type of drawable is the “9-patch PNG”, and it is well worth researching.
The res/values/strings.xml file contains all strings that will be displayed to the user; the purpose of this is so that the application can be quickly translated to other languages.
The string in this file can be referenced using R.string.hello in Java or @string/hello in XML, where hello is replaced by the name of the string.
You can access an icon, such as the system edit icon using android.R.drawable.ic_menu_edit in Java.
dip stands for density-independent pixel, which means that if I create a button that is 30 dip wide, then on whatever device the application is run, the physical size remains the same. This is useful, for example, when you want to ensure that a button will always be just large enough for the user to click on.
1 dip = 1 pixel at 160 dpi
The Nexus 7 has 213 dpi and a screen size of 1280 x 800 pixels, or 960 x 600 dip.
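The 1 dip = 1 pixel at 160 dpi rule is just a linear scaling, which can be checked against the Nexus 7 figures. The helper name here is my own, not an Android API.

```python
def dip_to_px(dip, dpi):
    """Convert density-independent pixels to physical pixels:
    1 dip = 1 px at 160 dpi, scaling linearly with screen density."""
    return dip * dpi / 160

# Sanity check against the Nexus 7 figures (213 dpi):
print(dip_to_px(960, 213))  # 1278.0 px, roughly the 1280 px quoted
print(dip_to_px(30, 160))   # 30.0: at 160 dpi, dip and px coincide
```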
The key drawable types are bitmaps (.png), 9-patches (.9.png) and state lists (.xml).
Look at the SDK for more information on how to draw 9-patches. 9-patches are density-specific versions of an image that can be stretched.
State lists are used to specify that different graphics are used in different situations, for example showing one graphic when a button is pressed and another graphic for when the button is not pressed.
ASIDE: it’s vital to always give the user feedback.
Styles can be specified in /res/values/styles.xml. With the file starting with
I’m changing my approach: instead of using tools like Iperf and ping to collect network data and then using my Java program to analyze the output, I am writing the scripts myself and working from the ground up.
It is new territory for me, so it is really exciting but also a bit daunting. Alongside this new work, I now have just 7 days left to prepare for the Google Android Development Camp in London. The next week is looking busy but interesting, and by the end I hope to have overcome a steep learning curve.
PLAN FOR MONDAY
I’ll update you again soon…
I am going to work through my last article, where I explained how to generate the required files to run my Java code, here.
SETUP
I connect my laptop and Android phone to the same Wi-Fi network and get their private IP addresses:
IperfOutput.txt
If ./adb shell returns error:device not found then wait a few seconds before trying again. This is because there can be a slight delay between plugging in an Android device and it being recognised.
Terminal 1
heidi@ubuntu:~$ cd Downloads/android-sdk-linux/platform-tools/
heidi@ubuntu:~/Downloads/android-sdk-linux/platform-tools$ ./adb shell
# iperf -u -c 192.168.14.245 -t 100
————————————————————
Client connecting to 192.168.14.245, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 110 KByte (default)
————————————————————
[ 3] local 192.168.14.47 port 52285 connected with 192.168.14.245 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-100.0 sec 12.5 MBytes 1.05 Mbits/sec
[ 3] Sent 8918 datagrams
[ 3] Server Report:
[ 3] 0.0-100.0 sec 12.5 MBytes 1.05 Mbits/sec 1.783 ms 10/ 8918 (0.11%)
[Ctrl-C]
Terminal 2
heidi@ubuntu:~$ cd TestingSignpostAppOutput/
heidi@ubuntu:~/TestingSignpostAppOutput$ iperf -s -u >> IperfOutput.txt
[Ctrl-C]
Everyone who I have spoken to about my work since yesterday has asked me the same question: why are you writing this in Java? The answer is that I am going to the Google European Android Development Camp in a few weeks, so I am using Java where possible in my work so that I can get familiar with the basics again.
To ensure that the server (my laptop) and the client (my Android phone) can address each other, I ensure that they are behind the same NAT so that private IP addresses can be used. This ensures that both devices can initiate a connection with the other.
My Java code for analyzing and comparing Signpost Diagnostic Application to the results generated by Iperf and Ping, requires the following files:
PingDownstreamOutput.txt
PingUpstreamOutput.txt
Once you have generated all of these files, you can run my program SignpostOutputAnalysis.java, found here, which should output the average true and estimated latency, goodput and jitter.
The code in SignpostOutputAnalysis.java is still incomplete and untested; I also have not yet tested my instructions for generating the correct files at the correct locations for the Java code to be run. I will be doing this testing next…