The Evolution of Mobile and its Impact on IoT – Part 2

In Part One of this article, we looked at the birth of mobile through to the introduction of GPRS. In this second part, we're going to look at the arrival of 3G and what impact the increasing bandwidth of 4G and now 5G has on IoT applications.

5. Coming of Age: 3G makes mobile data mainstream

How did 3G happen?

Work started on 3G in the 1980s within the International Telecommunication Union (ITU), under its work on the “Future Public Land Mobile Telecommunications System”. This led to the release of a technical specification – IMT-2000 – and spectrum between 400 MHz and 3 GHz being allocated for deployment.

However, it took until the formation of the 3rd Generation Partnership Project (3GPP) in December 1998 for practical standards for the deployment of 3G to emerge. 3GPP was the result of collaboration between the European Telecommunications Standards Institute (ETSI) and six equivalent agencies covering the USA (ATIS), Japan (ARIB/TTC), China (CCSA), India (TSDSI) and South Korea (TTA).

The initial standard that enabled the roll out of 3G networks was the Universal Mobile Telecommunications System (UMTS) standard. Released in 1999, it enabled 3G using either Frequency Division Duplex (FDD) or Time Division Duplex (TDD). TDD uses the same frequency for both transmit and receive over the radio link between the device and the network, assigning alternating time slots for transmit and receive. FDD uses two different frequencies, or channels, for transmit and receive, separated by a guard band to minimise interference.
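To make the split concrete, here is a minimal illustrative sketch in Python. It uses the widely deployed UMTS Band I frequencies as an example of FDD pairing and an arbitrary unpaired carrier for TDD; the figures are examples only, not a statement of what any particular network uses.

```python
# Illustrative sketch of FDD vs TDD duplexing.

# FDD: separate, paired frequencies for uplink and downlink (UMTS Band I used as an example).
FDD_UPLINK_MHZ = (1920, 1980)    # device -> network
FDD_DOWNLINK_MHZ = (2110, 2170)  # network -> device
print(f"FDD duplex spacing: {FDD_DOWNLINK_MHZ[0] - FDD_UPLINK_MHZ[0]} MHz")  # 190 MHz apart

# TDD: one frequency shared by both directions, alternating in time.
TDD_CARRIER_MHZ = 2010           # example unpaired carrier
for slot in range(4):
    direction = "transmit" if slot % 2 == 0 else "receive"
    print(f"Slot {slot}: {TDD_CARRIER_MHZ} MHz used by the device to {direction}")
```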

Whilst 3GPP championed UMTS, an alternative standard for 3G was developed by 3GPP2. Composed of standards bodies from Japan (ARIB/TTC), China (CCSA), North America (TIA) and South Korea (TTA), 3GPP2 defined the CDMA2000 standard. Whereas UMTS was an evolution of the GSM standard, CDMA2000 built on the existing cdmaOne (CDMA) network standard.

Unlike 2G/GSM, data was intrinsic to 3G from the start. Picking up from where GPRS left off, 3G allowed significantly increased data rates – 2 Mbps whilst static or walking and 384 Kbps whilst in a car. These were enhanced further with High Speed Packet Access (HSPA), which delivered up to 14 Mbps in the downlink and 5.76 Mbps in the uplink.

3G video call
Kalleboo, CC BY-SA 3.0, via Wikimedia Commons

What did 3G deliver?

In 2001, 3G networks based on the 3GPP-defined UMTS standard started to appear. Following extensive testing and delayed launch dates, FOMA (Freedom of Mobile Multimedia Access) was launched by NTT DoCoMo in Japan in October 2001. Further 3G networks based on UMTS were then rolled out globally.

Whilst initial ITU work allocated spectrum between 400 MHz and 3 GHz for 3G, by the time deployment occurred the range had been narrowed significantly and most 3G networks used 2100 MHz. The propagation characteristics of this higher frequency meant cell footprints shrank compared with GSM/2G, and in-building penetration was reduced. Whilst TDD makes the best use of spectrum, most networks were deployed using FDD. The split approach created specific challenges for device manufacturers as well as for the delivery of roaming.

Coupled with increasingly capable smartphones, 3G delivered video calling, richer messaging and high speed internet access. It was instrumental in the success of social networking – applications that leveraged all of the capabilities 3G offered.

The need to build more cell sites, together with more expensive base stations, slowed the roll out of 3G and limited its coverage. This meant that 3G coverage tended to be a subset of 2G – and the areas that suffered were likely to be the less populated areas where IoT applications could be of real value.

Nonetheless, by 2007 the GSMA estimated that there were 190 3G networks across 40 countries and 154 HSDPA 3G networks operational in 71 countries. 3G also delivered a major milestone – it was the point at which data overtook voice as the main traffic on mobile networks.

What did 3G mean for IoT?

Despite the data focus of 3G, there are still challenges to consider when using it for IoT.

  • Cell Breathing. This is the reduction in the coverage footprint of a mobile cell as the number of users on that cell increases. Whilst not a feature of GSM, it is an issue with 3G (UMTS and CDMA). If the total power of a cell is 50 watts, two devices on that cell will get 25 watts each, five devices will get 10 watts each, and the power per device keeps falling as users increase. The lower the power available to each device, the shorter the distance it can be from the cell and still be in coverage. The implication is that in peripheral coverage areas, an IoT application relying on 3G could lose coverage at times of high cell usage (a rough worked example of this arithmetic is sketched after this list).
  • The remote device needs to set up the data session. A central server connecting via the Internet cannot set up the data session to the device – the device on the 3G network has to do that. This can be an issue when setting up an IoT device remotely, or when it suffers a short loss of mobile coverage. It has been overcome using SMS as a “shoulder tap” to prompt the device to connect; SMS is also used to initiate the delivery of firmware upgrades to the device.
  • The data rate is not guaranteed – it is “best endeavours”. Whilst it was possible to achieve high data speeds, this depended on the capacity and loading of the network. At times of high network loading the data rate could be inconsistent and no better than a contention-free GPRS connection – 144 Kbps. This could be an issue for critical applications in areas of dense usage, such as city centres, and for IoT applications supporting mission critical activities.
  • 3G has no error correction. In common with packet data on digital fixed networks, the assumption is that digital networking eliminates transmission errors. That means any transmission error checking has to be undertaken by the device receiving the data, with any necessary re-sends requested by it (see the integrity-check sketch after this list). Again, this could be an issue for IoT applications supporting mission critical activities.
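To illustrate the cell breathing point, here is a rough sketch of the power-sharing arithmetic in Python. It assumes the cell's transmit power is shared evenly between active devices, which is a simplification, but it shows why a device at the edge of coverage is the first to suffer when a cell gets busy.

```python
# Rough illustration of cell breathing: power available per device as a 3G cell gets busier.
CELL_POWER_W = 50.0  # total cell transmit power, using the example figure from the text

def power_per_device(active_devices: int) -> float:
    """Assume the cell's power is shared evenly between active devices (a simplification)."""
    return CELL_POWER_W / active_devices

for n in (1, 2, 5, 10, 25):
    print(f"{n:>2} devices -> {power_per_device(n):5.1f} W each")

# As the power per device falls, the usable radius of the cell shrinks, so an IoT device
# installed at the cell edge can drop out of coverage at times of high usage.
```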
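And because the network offers no end-to-end error correction, integrity checking and re-send requests sit with the receiving application. Below is a minimal sketch of that pattern, with a hypothetical payload and framing (a CRC32 appended to each message).

```python
import zlib
from typing import Optional

def frame(payload: bytes) -> bytes:
    """Append a CRC32 so the receiver can detect corruption in transit."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def verify(message: bytes) -> Optional[bytes]:
    """Return the payload if the CRC matches; None signals that a re-send should be requested."""
    payload, crc = message[:-4], int.from_bytes(message[-4:], "big")
    return payload if zlib.crc32(payload) == crc else None

msg = frame(b"meter_reading=1234")           # hypothetical IoT payload
assert verify(msg) == b"meter_reading=1234"  # clean transfer
assert verify(msg[:-1] + b"\x00") is None    # corrupted in transit -> request a re-send
```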

Sierra Wireless 3G Module

And the Hardware?

Companies providing IoT modules were ready with 3G products as the networks became available. 3G enabled more IoT applications to be developed, particularly those requiring higher data rates. Initially expensive compared to 2G/GPRS modules, 3G module volumes were driven by both new applications and the sunsetting of 2G networks in the USA. Module sales are a direct, though not exact, indicator of the number and type of IoT connections being deployed. By 2017 3G modules comprised 49% of all module shipments, reaching 90 million units in 2018 before beginning their decline. 3G modules cost in the region of £8 - £30 depending on order volume and other features.

6. Reaching Maturity: LTE delivers more capacity and capability

How did LTE/4G happen?

In 2002 the strategic vision for 4G wireless service was laid out by the ITU as “IMT Advanced”. As a step towards delivery, 3GPP adopted proposals from NTT DoCoMo in 2004 that it had labelled “Long Term Evolution” (LTE). Alongside LTE, other network technologies were also being developed under the same ITU 4G umbrella. Over time these reduced to two: LTE and WiMAX. As with 3G, both Frequency Division Duplex (FDD) and Time Division Duplex (TDD) variants were developed for LTE, but the majority of carriers, around 90%, deployed the FDD variant.

In February 2007 NTT DoCoMo demonstrated a Long Term Evolution prototype network that delivered 100 Mbit/s whilst on the move, and 1 Gbit/s whilst stationary.

A major departure from earlier technologies is the Evolved Packet System (EPS)/Evolved Packet Core (EPC), allowing a completely IP-based network. For GPRS and 3G the IP address is specific to each data session: the device requests an IP address at the start of the session and the network reclaims it when the session closes. Under the EPC an IP address is allocated to the device by the network when it is turned on, and only reclaimed when the device is switched off. Effectively the change means that under LTE/4G the IP address is allocated to the device, not to a specific transaction by the device.

Whilst an IP network is ideal for data, it is a challenge for real-time services such as voice. Data traffic had overtaken voice traffic on mobile networks under 3G, and the balance had continued to shift in favour of data. That meant voice was a secondary consideration, especially with 2G and 3G already capable of supporting it. In fact, until the development and deployment of Voice over Long Term Evolution (VoLTE), voice was not carried on 4G at all; it was delegated to 3G or 2G, relying on handsets able to support multiple network technologies. The focus of LTE/4G was increased support for data, throughput and capacity.

VoLTE is based on the IP Multimedia Subsystem (IMS) framework and, almost ironically, is capable of supporting up to three times the voice capacity of UMTS-based 3G networks. Far from being simply a “band aid” to bring voice to LTE/4G, it is an improvement on the voice capability of the networks it replaces.

A standard that allowed the commercial deployment of LTE/4G was finally defined in December 2008 by 3GPP with Release 8. At its heart is the Evolved Universal Terrestrial Radio Access Network (E-UTRAN), which allowed high data rates, short round-trip times and frequency flexibility. Peak data rates of 300 Mbps downlink and 75 Mbps uplink were defined, and a notable feature was the specification of a permitted maximum latency of 5 ms for the radio access element.

Whilst the heavy lifting was done in Release 8, further refinement was delivered in Release 9 in March 2010. Most notably, Release 9 specified femtocells, multimedia broadcast and additional spectrum options for LTE deployment – 800 MHz and 1500 MHz.

What did 4G deliver?

The first commercial 4G network went live on 14th December 2009, covering Stockholm and Oslo, launched by TeliaSonera. The network was formally branded “4G” and access was provided using dongles – there were no handsets.
Telia branded Samsung 4G dongle

On 21st September 2010 LTE/4G launched in the US with the first LTE phone, the Samsung SCH-R900. The world’s first LTE smartphone followed soon after on 10th February 2011, the Samsung Galaxy Indulge.

4G network roll out globally followed, and by May 2016 the Global Mobile Suppliers Association (GSA) was reporting that over 500 LTE/4G networks had been deployed across 167 countries. The impact of the roll out was a significant increase in the number of devices connected to LTE/4G. According to the GSMA, in 2019 LTE/4G accounted for more than 50% of global mobile connections. This was no doubt helped by the widespread availability of a solution that allowed LTE/4G to support voice.

VoLTE was first rolled out in 2012 by MetroPCS in Dallas. By 2014 VoLTE roaming had been successfully showcased by KT and China Telecom, and AT&T had announced domestic VoLTE interconnect with Verizon. In 2015 SEATEL in Cambodia went VoLTE-only, removing support for voice from 2G/3G. This was a significant step, as VoLTE has effectively called time on 2G and 3G networks: removing the reliance on them for voice calls has in turn paved the way for their retirement.

According to the GSMA, by August 2019 there were 269 operators across 120 countries investing in VoLTE, including 194 carriers in 91 countries with full commercial services. Each deployment enables the sunsetting of previous generations of mobile technology and the re-use of the associated spectrum. In part this accounts for the growing wave of 2G and 3G network closures.

What did LTE/4G mean for IoT?

Whilst delivering major benefits for mainstream mobile internet use, it is unclear that LTE/4G delivered much benefit for IoT. With an average monthly usage of under 10 Mb for most IoT applications, the ability to deliver that in under a second was academic. Also, whilst LTE/4G coverage has grown, it is still not on a par globally with 2G.

The key issues to consider in deploying an IoT application on an LTE/4G network are:

  • No Cell Breathing. Coverage under LTE/4G is not significantly impacted by cell loading, unlike 3G. Improved interference management in LTE/4G means cell breathing is no longer an issue, so the potential coverage problems of deploying static applications at the cell edge under 3G were fixed.
  • Inbound connectivity to the remote IoT device remains an issue. As LTE/4G moved to full IP networking, with IP addresses allocated to devices rather than transactions, it would be reasonable to expect that a device could now be contacted directly from the internet. This is not the case. Most operators use Carrier Grade Network Address Translation (CGNAT), which means the IP address seen externally belongs to the operator’s gateway, not the device, and may be shared by multiple customer connections. Even where a device is given a dedicated IP address by the operator, it is often behind a firewall that blocks all incoming connectivity. This is done to stop customers running servers or hosting services over their 4G connections, as well as to prevent unsolicited attempts to connect to the device. Therefore, the need for SMS as an integral part of IoT connectivity continues (a device-initiated connection pattern is sketched after this list).
  • The data rate is not guaranteed – it is “best endeavours”. Whilst it was possible to achieve high data speeds on LTE/4G, this is still dependent on the capacity and loading of the network, and at times of high loading the data rate could be inconsistent. However, given the underlying requirements of most IoT applications, it is unlikely that congestion would affect their operation, and the defined maximum latency on the air interface (5 ms) also helps keep IoT applications running effectively.
  • LTE/4G has no error correction. In common with packet data on digital fixed networks, the assumption is that digital networking eliminates transmission errors. That means any transmission error checking has to be undertaken by the device receiving the data, with any necessary re-sends requested by it. Again, this could be an issue for IoT applications supporting mission critical activities.
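The inbound-connectivity point above usually leads to a “device dials out” design: the device keeps (or re-establishes) an outbound connection, and an SMS acts as a shoulder tap telling it to do so. The sketch below is a simplified illustration of that pattern; the server address, wake-up keyword and modem integration are hypothetical placeholders, not a specific product's API.

```python
import socket

SERVER = ("iot.example.com", 8883)  # hypothetical endpoint; in practice often an MQTT/TLS broker

def open_outbound_session() -> socket.socket:
    """The device, not the server, initiates the connection, so it passes CGNAT and firewalls."""
    session = socket.create_connection(SERVER, timeout=30)
    session.sendall(b"HELLO device-0001\n")   # placeholder application-level handshake
    return session

def on_sms_received(text: str) -> None:
    """Called by the modem-handling layer when an SMS arrives (integration not shown)."""
    if text.strip() == "WAKE":                # hypothetical shoulder-tap keyword
        with open_outbound_session() as session:
            session.sendall(b"STATUS\n")      # report in and collect any queued commands
```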

There is no doubt that LTE/4G was a significant step in the delivery of the Mobile Internet. The same cannot be said for IoT. Whilst in 2019 50% of mobile connections globally were on LTE/4G, the same was not true for IoT connections on mobile. 2G remained the dominant mobile connectivity technology by far, a testament to the robust simplicity and minimal requirements of most IoT applications – “if it ain’t broke, don’t fix it”. What LTE/4G did do, however, was finally raise the spectre of 2G retirement and the impact that could have on IoT.

And the Hardware?

As with 2G/GPRS and 3G, module makers were ready for 4G/LTE network deployments with a range of IoT modules. In the case of 4G, ‘range’ was the operative word. Leaving aside the IoT-specific low power (LPWA) variants, there are 19 different categories of LTE depending on the upload and download speeds required. Most 4G/LTE modules are ‘Cat 1’ to ‘Cat 4’, although some very high bandwidth IoT applications do require higher ‘Cat’ modules, and these are available, though suitably expensive. 4G module sales grew from 4% of total IoT modules shipped in 2014 to 24% in 2017 and 34% in 2019, becoming the highest revenue earners for module suppliers as prices were four to five times 2G prices.
Typical prices for 4G/LTE modules are in the range £35 - £130, depending on order volumes, the ‘Cat’ required and additional features on the chip.
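As a rough guide to how the ‘Cat’ choice follows from an application's throughput needs, the sketch below uses the peak rates commonly quoted for LTE Cat 1, 3 and 4. Treat the figures as indicative – real module capabilities should always be checked against the datasheet.

```python
# Commonly quoted peak rates (downlink, uplink) in Mbps for popular LTE device categories.
LTE_CATEGORIES = {
    "Cat 1": (10, 5),
    "Cat 3": (100, 50),
    "Cat 4": (150, 50),
}

def pick_category(downlink_mbps: float, uplink_mbps: float) -> str:
    """Return the lowest (cheapest) category that meets the application's peak throughput needs."""
    for cat, (dl, ul) in LTE_CATEGORIES.items():  # dict preserves the ascending order above
        if dl >= downlink_mbps and ul >= uplink_mbps:
            return cat
    raise ValueError("Requirement exceeds the categories listed here")

print(pick_category(2, 1))    # telemetry with occasional image upload -> "Cat 1"
print(pick_category(30, 10))  # video-capable device -> "Cat 3"
```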

The demand for applications that could run autonomously of mains or vehicular power supply led to the development of cellular LPWA standards, the first IoT-specific standards developed by 3GPP. These are included in the LTE family and are known as LTE Cat M1 and NB-IoT (NB1). Module makers were offering both variants from 2017, but sales were mostly in China, where NB1 has grown rapidly. Outside of China, roll out of networks capable of supporting these LPWA standards has been slower than anticipated, but both LTE Cat M1 and NB1 networks began mass rollouts in 2020. Analyst forecasts suggest that by 2023 these LPWA variants will comprise 50% or more of new IoT connections, with LTE Cat M1 taking the mobile applications that would previously have run on 2G and NB-IoT (NB1) taking the static ones, while both technologies expand the scope of applications that require power autonomy. Modules for NB1 connections are in the range of £10 - £25, while for LTE Cat M1 expect to pay £15 - £30.
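The split described above – LTE Cat M1 for mobile (and voice-capable) applications, NB-IoT for static, low-throughput, battery-driven ones – can be summarised as a simple selection rule. The thresholds below are illustrative only, not taken from any standard.

```python
def pick_lpwa(mobile: bool, needs_voice: bool, bytes_per_day: int) -> str:
    """Very rough LPWA technology choice, mirroring the split described in the text."""
    if mobile or needs_voice:
        return "LTE Cat M1"        # supports handover between cells and VoLTE voice
    if bytes_per_day < 50_000:     # illustrative threshold for small, infrequent payloads
        return "NB-IoT (NB1)"      # static, deep coverage, lowest power
    return "LTE Cat M1"

print(pick_lpwa(mobile=True,  needs_voice=False, bytes_per_day=10_000))  # asset tracker -> LTE Cat M1
print(pick_lpwa(mobile=False, needs_voice=False, bytes_per_day=2_000))   # smart meter -> NB-IoT (NB1)
```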

7. Coming of Age: 5G delivering the promise

How did 5G happen?

South Korean research in 2008 looking at "5G mobile communication systems based on beam-division multiple access and relays with group cooperation" – a snappily titled project – was the starting point. In 2012, New York University and the University of Surrey both started research projects looking at 5G, the latter jointly funded by the UK government as well as mobile operators and infrastructure providers.

Also in 2012, EU-funded projects started key definition work on 5G. Most notably, METIS (Mobile and wireless communications Enablers for the Twenty-twenty Information Society) worked with many key stakeholders to build consensus on standards ahead of more formal global standardisation.

Whilst some work had been done by 3GPP on defining radio standards for 5G in 2017, it was not until Release 15, completed in 2019, that a full set of standards was defined. These standards introduced significant innovation, setting out the framework for providing users with a network delivering 150-200 Mbps to their devices – an unprecedented level of access capability, outstripping many fixed networks.

3GPP defined a new air interface for 5G – New Radio (NR) – relying on two frequency ranges: FR1, using frequencies up to 6 GHz, and FR2, using mmWave frequencies from around 24 GHz upwards – more than 24 times higher than the frequencies used for 2G.

FR1 focused on augmenting the traditional mobile radio network with 5G capability.

FR2 was specifically focused on supplementing the existing network with a range of 5G Small Cells. Small Cells complement the traditional network by providing additional capacity, enabling both greater numbers of connections and higher bandwidth connections. The higher frequencies used minimise interference with the traditional network but have a significantly reduced coverage footprint – much like a WiFi Access Point versus a mobile macrocell.

5G also introduced innovation in the antennas used both at the cell site and in the device – Massive MIMO (Multiple Input Multiple Output). Massive MIMO provides a significant increase in the capacity of the cell site – Nokia claims up to 5x – to meet the growing density of devices and their required throughput.

5G’s use of Edge Computing – moving servers as close to customers as possible – reduces latency and also eases data traffic congestion.

Beamforming, as the name suggests, means focusing the radio connection between the mobile network cell and the customer’s device. Using phased array antennas, beamforming helps 5G deliver both improved signal quality and higher data transfer rates.
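For a flavour of the arithmetic behind beam steering, the sketch below applies the textbook uniform-linear-array formula: each antenna element is driven with a progressive phase shift so that the transmissions add constructively in the chosen direction. It is a generic illustration, not an implementation of any vendor's 5G beamformer, and the carrier frequency and element count are example values.

```python
import math

def element_phases(num_elements: int, spacing_m: float, freq_hz: float, steer_deg: float):
    """Phase shift (radians) for each element of a uniform linear array so the
    main beam points steer_deg away from broadside."""
    wavelength = 3e8 / freq_hz               # speed of light / carrier frequency
    k = 2 * math.pi / wavelength             # wavenumber
    return [-k * spacing_m * n * math.sin(math.radians(steer_deg))
            for n in range(num_elements)]

# Example: 8 elements at half-wavelength spacing on a 3.5 GHz (FR1) carrier, steered 20 degrees.
freq = 3.5e9
spacing = (3e8 / freq) / 2
print([round(p, 2) for p in element_phases(8, spacing, freq, 20)])
```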

The higher frequencies used by 5G, even within FR1, do not travel as far as those used by previous generations. That means coverage gaps would open up using the existing architecture of most mobile networks. 5G introduced Small Cells – using FR2 – as a core part of the architecture to address this need for infill; Small Cells can also be used to add capacity where needed. Small Cells had been used in previous mobile generations, but largely as an add-on with associated compromises. 5G was the first time Small Cells had been defined as a core part of the architecture from the start.

A further technology area that 5G formalised was the integration of other types of wireless network into service delivery. The “offloading” of data traffic from mobile networks to WiFi, when available or when mobile network capacity was under pressure, had been piloted under 3G and 4G. However, it had never been intrinsic to the standards. This meant there were problems in continuing the same data session when it transferred between networks – authentication could be lost. Session security was also an issue, as the mobile network had no control or influence over the new host network’s security – also an issue for handback. Overall, whilst limited use of offload was made prior to 5G, the need was established, and 5G standards embrace the concept of seamless interworking with non-mobile wireless networks.

What has 5G delivered?

The first 5G network went live in South Korea on 3rd April 2019 – for six celebrities, simply to claim the “World’s First”. Verizon went live in the US hours later with a full commercial offering and disputed the South Korean claim.

When the three mobile operators in South Korea did finally launch commercially, they added around 40,000 5G customers on the first day.

Global roll out of 5G is still in its early days. By September 2020, according to the GSMA, 106 commercial launches had taken place, covering 7% of the global population.

You can find out which countries have 5G available for your IoT application by using our Global Mobile Network Availability tool. It also tells you which frequencies are being used – possibly key for the hardware you plan to use.

What does 5G mean for IoT?

Whilst much of the focus for 5G has been on the high bandwidth it offers, this in itself is of low value for IoT. Most true IoT applications use under 5 Mb per month, so a 150-200 Mbps network to support that is of dubious additional value.

What is of far greater significance is the underlying improvement in network performance. The increased stability of connections and the reduction in latency – increasing the immediacy of the data and of the actions that follow – are far more important. They mean that far more mission-dependent applications can be safely deployed using a public wireless technology, which probably explains why those working on self-driving cars are so focused on 5G.

There are two other transport capabilities, brought under the 5G umbrella, that are specific to IoT: NB-IoT and LTE-M. Both deliver improved support for IoT, having been designed specifically for it, and they are timely introductions given the increasing retirement of 2G and 3G technologies by network operators around the world. Read more about NB-IoT and LTE-M.

You can find out which countries have either NB-IoT or LTE-M available for your IoT application by using our Global Mobile Network Availability tool.

The key issues to consider in deploying an IoT application on a 5G network are:

  • The data rate is not guaranteed – it is “best endeavours”. Whilst it is possible to achieve very high data speeds on 5G, this is still dependent on the capacity and loading of the network, and at times of high loading the data rate may be inconsistent. However, given the underlying requirements of most IoT applications, it is unlikely that congestion would affect their operation, and the even tighter latency targets on the 5G air interface also help keep IoT applications running effectively.

  • 5G has no error correction. In common with packet data on digital fixed networks, the assumption is that digital networking eliminates transmission errors. That means any transmission error checking has to be undertaken by the device receiving the data, with any necessary re-sends requested by it.