Driving the Internet of Things with carrier aggregation

Internet of Things connectivity must strike a balance between coverage and bandwidth to serve applications with very different requirements.

While it’s true that tracking, measurement, control, or monitoring systems in rural or remote areas generate little traffic and rely on low-bandwidth technologies such as GSM, a different trend is growing. A whole range of M2M and IoT applications using live video, rich media, on-the-go content, and multi-user sharing demands a high network capacity that today can be provided with LTE.

Carrier aggregation (CA), the key concept in LTE-Advanced, allows operators to supply even higher bandwidth than plain LTE to support such connected devices. As its name suggests, carrier aggregation combines two or more carriers in order to offer greater throughput.

Using CA, new transmission channels can be created using the operators’ existing frequency spectrum. It is available in both TDD and FDD systems, and can be achieved by combining carriers from the same frequency band or from different frequency bands, as shown below.

Capacity is essential for IoT, as hundreds of devices are in constant communication with the network. In CA systems, a total bandwidth of up to 100 MHz can be reached, as each component carrier can have a maximum bandwidth of 20 MHz and up to 5 carriers can be aggregated. In practice, though, only two carriers have typically been combined so far, as the quick calculation below shows.
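
To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The 150 Mbps per 20 MHz carrier figure is an illustrative assumption (a Category 4-style downlink peak with 2x2 MIMO); reaching the 1 Gbps theoretical peak additionally relies on higher-order MIMO, which this sketch ignores.

```python
# Back-of-the-envelope carrier aggregation arithmetic. The per-carrier
# peak rate (150 Mbps for a 20 MHz downlink carrier, Category 4-style
# with 2x2 MIMO) is an illustrative assumption, not a measured figure.

MAX_CC_BANDWIDTH_MHZ = 20       # widest LTE component carrier
MAX_COMPONENT_CARRIERS = 5      # aggregation limit in LTE-Advanced
PEAK_RATE_PER_20MHZ_MBPS = 150  # assumed downlink peak per 20 MHz carrier

def aggregated_capacity(carrier_bandwidths_mhz):
    """Return (total bandwidth in MHz, rough downlink peak in Mbps)."""
    if len(carrier_bandwidths_mhz) > MAX_COMPONENT_CARRIERS:
        raise ValueError("LTE-Advanced aggregates at most 5 component carriers")
    total_bw = sum(carrier_bandwidths_mhz)
    # Scale the assumed per-20-MHz peak linearly with total bandwidth.
    peak_mbps = total_bw / MAX_CC_BANDWIDTH_MHZ * PEAK_RATE_PER_20MHZ_MBPS
    return total_bw, peak_mbps

print(aggregated_capacity([20, 20]))   # the common two-carrier case: (40, 300.0)
print(aggregated_capacity([20] * 5))   # the theoretical maximum: (100, 750.0)
```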

Operators may also opt to combine carriers from different spectrum bands, as some are already reported to be doing, and this can be very practical given that LTE networks are currently being deployed on distinct frequency bands.

For carrier aggregation to work on both ends, devices must be able to detect and decode the multiple carrier frequencies sent by the radio network. In theory, a peak speed of 500 Mbps for uplink and 1 Gbps for downlink could be achieved with carrier aggregation.

In commercial deployments so far, as recently reported by the GSA, a maximum downlink speed of 300 Mbps has been achieved on a number of devices, including smartphones and mobile hotspots. According to the same report, only 88 commercial carrier aggregation systems have been launched so far, in 45 countries, but others are underway.

Carrier aggregation can be used to offer increased bandwidth for IoT, and it can also improve coverage by combining low-frequency carriers with high-frequency ones. Trade-offs include reduced device battery life, but we’ll talk more about LTE for IoT next week during IoT Evolution Expo.

The challenges behind VoLTE

In previous blog posts and demos we showed that a simplified approach is the way to obtain clear results in deploying VoLTE and 2G/4G mixed networks. We performed the industry’s first VoLTE call from a GSM mobile phone to an iPhone 6, through a single unified core network, the YateUCN, and we presented our solution for handling SRVCC (Single Radio Voice Call Continuity) as an inter-MSC (Mobile Switching Center) handover from 4G to 2G in the same YateUCN. Read on for our take on why VoLTE hasn’t developed as rapidly as we all expected, along with the insights we’ve gained from the many discussions we’ve had with mobile operators and smartphone producers alike.

Sure, VoLTE is great! Combining the powers of IMS and LTE, VoLTE offers excellent high-definition voice calls. It also guarantees a Quality of Service component, ensuring that customers get an unprecedented quality of voice services. However, VoLTE depends on far too many factors to be fully functional and widely deployed yet, contrary to what optimistic reports predicted in the past.

[Figure: volte_issues]

One of the main issues operators and customers alike are facing is that there’s still a shortage of VoLTE-capable smartphones. By April 2015, Verizon offered around 15 devices supporting VoLTE, while in July 2015 AT&T’s smartphone selection included around 19 devices capable of HD voice, as seen on its online shop. The iPhone 6 is still the only device that supports VoLTE on all the operators that offer it. What’s more, most of these devices come from only about 5 smartphone vendors, giving customers a limited choice when they buy a new phone.

Approximately 97% of VoLTE-capable smartphones use an LTE chipset from the same vendor. According to reports from smartphone producers and operators alike, the VoLTE client is not yet stable enough; this is why some vendors don’t even activate VoLTE in the baseband, and why operators implement VoLTE differently both in the smartphones and in the IMS network itself.

This also contributes to the lack of interoperability between mobile carriers. Currently, VoLTE works only between devices on the same network: for example, a T-Mobile customer using a VoLTE-capable handset cannot complete a VoLTE call to a called party on the AT&T VoLTE network. However, interoperability was one of the main goals when the VoLTE specifications were developed, so we should still expect it to happen at some point.

Lastly, and perhaps most importantly, VoLTE deployments are scarce. A GSA report from July 2015 showed that only 25 operators have commercially launched VoLTE networks in 16 countries, while around 103 operators in 49 countries are planning, trialling or deploying VoLTE. Compared with the total of 422 LTE networks commercially launched in 143 countries, VoLTE deployments are dramatically lower. This is the result of mobile carriers having a difficult time planning and building functional LTE and VoLTE networks, while also developing the essential Single Radio Voice Call Continuity (SRVCC) technology in an effective and performant way.

VoLTE still needs to clear many hurdles before it becomes a technology used worldwide. Operators, network equipment vendors, and smartphone and chipset producers need to cooperate and jointly find technical solutions that will allow for a swifter VoLTE roll-out in most LTE networks.

An introduction to the LTE MAC Scheduler

LTE brought a completely new network architecture and revolutionized the data capabilities achievable on a mobile network. LTE also brought a new type of radio network, much simpler in its organization. In a previous post we discussed OFDM, the main reason behind LTE’s high data speeds. Today we look into an essential component of the LTE radio network: the MAC Scheduler.

Sitting just above the Physical layer, the MAC Scheduler assigns bandwidth resources to user equipment and is responsible for deciding how the uplink and downlink channels are used by the eNodeB and the UEs of a cell. It also enforces the necessary Quality of Service for UE connections. QoS is a set of rules that come from the Policy and Charging Rules Function (PCRF) in the core network. These rules define priority, bit rate and latency requirements for different connections to the UE. They are usually based on the types of applications using the UE connection. For example, the QoS requirements for a VoLTE call are different from those for checking e-mail.
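
As a rough illustration of what such a rule set can look like, here is a minimal Python sketch. The field names and the bit-rate numbers are our own illustrative choices, not the actual PCRF interface; the QCI, priority and delay-budget values follow the commonly cited standardized classes (QCI 1 for conversational voice, QCI 9 for best-effort data).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QosRule:
    """Hypothetical per-bearer QoS rule, loosely modeled on the parameters
    the PCRF signals; the field names are ours, not a real interface."""
    qci: int                                # QoS Class Identifier
    priority: int                           # lower value = higher priority
    guaranteed_bitrate_kbps: Optional[int]  # None for non-GBR bearers
    max_bitrate_kbps: int
    packet_delay_budget_ms: int

# A VoLTE voice bearer (QCI 1) versus a best-effort bearer used for e-mail
# (QCI 9). The bit rates are illustrative; QCI, priority and delay budget
# follow the commonly cited standardized classes.
volte_voice = QosRule(qci=1, priority=2, guaranteed_bitrate_kbps=64,
                      max_bitrate_kbps=128, packet_delay_budget_ms=100)
email = QosRule(qci=9, priority=9, guaranteed_bitrate_kbps=None,
                max_bitrate_kbps=10_000, packet_delay_budget_ms=300)
```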

As seen in the image below, the MAC scheduler has control over the OFDM modulation in the sense that it decides, according to information received from other LTE network components, how much bandwidth each UE receives at any given moment. In this figure, the resource element (sub-carrier) is represented on the frequency axis, while the sub-frames are represented on the time axis.

[Figure: mac_scheduler1] This figure shows downlink scheduling, but the MAC Scheduler controls uplink scheduling in a similar way.

In order to make its resource allocation decisions, the MAC Scheduler receives information such as:

  • QoS data from the PCRF: minimum guaranteed bandwidth, maximum allowed bandwidth, packet loss rates, relative priority of users, etc.
  • messages from the UEs regarding the radio channel quality, the strength or weakness of the signal, etc.
  • measurements from the radio receiver regarding radio channel quality, noise and interference, etc.
  • buffer status from the upper layers about how much data is queued up waiting for transmission

[Figure: mac_scheduler2]

Typically, a MAC Scheduler can be programmed to support one scheduling algorithm with many parameters.

Here are some examples of scheduling algorithms:

  • Round Robin – mainly used for testing purposes; allocates equal bandwidth to all UEs without accounting for channel conditions
  • Proportional Fairness – balances QoS priorities against total throughput; usually preferred in commercial networks (see the sketch after this list)
  • Scheduling for Delay-Limited Capacity – guarantees that the MAC Scheduler will always prioritize applications with specific latency requirements
  • Maximum C/I – guarantees that the MAC Scheduler will always assign resource blocks to the UE with the best channel quality
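
To make the proportional fairness idea more tangible, here is a minimal Python sketch. The CQI-to-rate mapping, the smoothing constant and the decision to hold the metric fixed within one scheduling round are all simplifying assumptions, not values or behavior from any real scheduler.

```python
# Minimal proportional-fair (PF) scheduling sketch, illustrative only.

def cqi_to_rate_mbps(cqi):
    """Very rough mapping from a reported CQI (1..15) to an achievable
    per-resource-block rate; placeholder numbers, not 3GPP values."""
    return 0.05 * cqi

def proportional_fair(ues, n_resource_blocks, forgetting=0.1):
    """ues: dict ue_id -> {'cqi': int, 'avg_mbps': float, 'backlog': bool}.
    Returns how many resource blocks each UE gets this round."""
    allocation = {ue_id: 0 for ue_id in ues}
    backlogged = [u for u, s in ues.items() if s['backlog']]
    for _ in range(n_resource_blocks):
        if not backlogged:
            break
        # PF metric: instantaneous achievable rate / average served rate.
        best = max(backlogged,
                   key=lambda u: cqi_to_rate_mbps(ues[u]['cqi'])
                                 / max(ues[u]['avg_mbps'], 1e-6))
        allocation[best] += 1
    # Update the served-rate averages with exponential smoothing.
    for u, s in ues.items():
        served = allocation[u] * cqi_to_rate_mbps(s['cqi'])
        s['avg_mbps'] = (1 - forgetting) * s['avg_mbps'] + forgetting * served
    return allocation

ues = {1: {'cqi': 14, 'avg_mbps': 5.0, 'backlog': True},  # good channel, well served
       2: {'cqi': 6,  'avg_mbps': 0.5, 'backlog': True}}  # poor channel, starved
print(proportional_fair(ues, n_resource_blocks=50))       # UE 2 wins this round
```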

One of the key features of LTE is the ability to control and prioritize bandwidth across users. It is the MAC scheduler that gives LTE this capability.

Software-defined radio for frequency reuse in LTE

The expansion of 4G LTE challenges operators who have limited spectrum; as some decide to take down existing 2G (and even 3G) deployments in favor of 4G, bandwidth allocation in an area must be carefully planned to match the quality requirements of LTE.

In 4G LTE, spectrum is a crucial resource. Performance is dependent on the proximity between the radio network and the devices. The closer the radio tower, the higher the data throughput. This means that the more cell towers operators build, the better they can cover the area.

Frequency reuse is a widely adopted solution for LTE; essentially, a given area is served by more cell towers using the same frequency. Software-defined radio makes this approach easier and more efficient to implement.

[Figure: Cell edge interference management using YateENB]

Frequency reuse means splitting an area into several new, smaller cells. In LTE, to maintain a high throughput, the same frequency is allocated to all the new cells, at the expense of higher interference at the cell edges. Since all the new cells have equal power, interference occurs around the cell edges wherever two or more cells meet.

Apart from that, building and maintaining the additional infrastructure required by frequency reuse results in high capital and operational expenses.

SDR in the LTE base station (eNodeB) can be a solution to these limitations. The fact that SDR implements the communication protocols in software and uses general-purpose hardware has several benefits.

The most important one is the effect on infrastructure costs. Base stations built on special-purpose hardware need heavier equipment and hence larger towers, which are expensive to install and operate. An eNodeB using general purpose hardware relies on more lightweight equipment, meaning that smaller towers can be deployed more densely in an area and provide better coverage. A lower power consumption associated with SDR-based BTS equipment also contributes to reducing the overall RAN costs.

Another major benefit of SDR is flexibility. SDR-based eNodeBs can be configured more easily to manage spectrum use at the edges of the cells, and thus minimize interference. Frequency sub-carriers at two adjacent cell edges can be selected in such a way that they do not overlap, as they would in conventional systems.
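
As a toy illustration of that idea, the sketch below hands each of three neighboring cells a disjoint slice of resource blocks reserved for its cell-edge users, in the spirit of soft frequency reuse. The block counts and the three-way split are arbitrary assumptions, not a YateENB configuration.

```python
# Toy illustration of coordinating edge sub-bands between neighboring
# cells, in the spirit of soft frequency reuse.

TOTAL_RBS = 100  # resource blocks in a 20 MHz LTE carrier

def edge_subband(cell_index, n_neighbors=3, edge_fraction=0.3):
    """Give each of n_neighbors cells a disjoint slice of resource blocks
    reserved for its cell-edge users; cell-center users use the full band."""
    edge_rbs = int(TOTAL_RBS * edge_fraction)
    per_cell = edge_rbs // n_neighbors
    start = cell_index * per_cell
    return range(start, start + per_cell)

for cell in range(3):
    rbs = edge_subband(cell)
    print(f"cell {cell}: edge users on RBs {rbs.start}-{rbs.stop - 1}")
```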

What’s more, SDR permits adaptable power management, so that different services can be assigned optimal QoS depending on the context.

Another aspect of SDR is the ability to build mixed networks. Base station equipment can be programmed to support different technologies at the same time and using the same hardware, serving more users with virtually no infrastructure investment. You can read more about this topic in this previous blog post.

SRVCC made easy

As promised in our last LTE technology post, we want to tackle a new technology used for voice in 4G: Single Radio Voice Call Continuity. We’ll explain what SRVCC entails and give you an insight into our own approach to this technology: inter-MSC SRVCC from 4G to 2G.

While most voice traffic in LTE is currently provided with CSFB, the next stage involves using VoLTE and a technology called SRVCC to provide seamless voice continuity from LTE to 2G/3G networks in areas not covered by LTE.

One of the main issues operators face with LTE is that deployment is spotty and incomplete. Once the big challenge of deploying VoLTE has been overcome, operators have to use SRVCC to offer subscribers continuous voice service when they reach an area without LTE coverage.

SRVCC allows for inter-Radio Access Technology handover, while also providing handover from a packet data-only network to a CS network. As the name suggests, SRVCC removes the need for two simultaneously active radios in devices, as required by CSFB, preserving battery life, and it maintains continuous QoS for voice calls in progress. SRVCC is also a mandatory technology for maintaining continuity during emergency calls.

Typically, SRVCC enables voice and data handover from LTE to legacy networks and vice versa. To enable SRVCC, operators need to upgrade their legacy MSCs, the LTE RAN and EPC, and the IMS network for VoLTE.

We have a different, simpler approach to offer operators: our YateUCN server handles SRVCC by performing an inter-MSC handover from 4G to 2G. Built to simultaneously be an MME/MSC and the IMS network for VoLTE, YateUCN performs SRVCC without the additional network upgrades (in LTE and 2G) mentioned above.

[Figure: VoLTE_SRVCC_Handover]

With YateUCN, the SRVCC handover is performed as simply as an inter-MSC handover, without the additional investments normally required.

We are committed to innovation and believe in providing software-defined mobile network equipment, designed to cater to both 4G and 2G, while relieving operators from the huge costs of upgrade, maintenance and service. Our resilient and scalable YateUCN embodies this philosophy entirely.

Voice in LTE – CSFB trials and tribulations

As LTE is a packet data-only network, operators use two main solutions to provide voice to their subscribers with LTE devices: VoLTE, which we previously discussed in our blog posts, and Circuit Switched Fallback (CSFB).

CSFB is often seen as a “temporary” solution, until there are enough VoLTE devices on the market, and consists of 4G users being handed over to legacy 2G/3G networks to use voice services. However, CSFB has major challenges of its own, such as the fact that subscribers can lose their 4G data connectivity after their CSFB call has ended. This is also troublesome for operators with LTE-only networks: they have roaming agreements with MNOs for voice services over 2G or 3G, and are forced to pay higher fees when their subscribers don’t return to the 4G network.

CSFB allows operators with newly created LTE networks to exploit their legacy networks or to use MVNO agreements, and to provide voice capabilities without major investments or fundamental changes to their circuit switched (CS) core networks. CSFB moves a subscriber from the LTE core network to the CS core network through the SGs interface during call setup (the SGs interface is added to the LTE architecture and allows mobility management and paging procedures between the MME and the MSC). Normally, one would expect the subscriber to return to the LTE network once the call has ended. The reality, however, is often otherwise, as sketched below.
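
To picture the problem, here is a deliberately simplified state sketch of a CSFB call in Python. The states are conceptual labels of ours, not signalling messages from the 3GPP specifications.

```python
# Deliberately simplified view of a subscriber's radio state through a
# CSFB call. The states are conceptual labels, not 3GPP signalling.
from enum import Enum, auto

class RatState(Enum):
    LTE_ATTACHED = auto()   # camped on LTE, data served by the EPC
    CS_CALL_2G3G = auto()   # fell back over the SGs interface for voice
    STUCK_2G3G = auto()     # call ended, but the UE never returned to LTE

def csfb_call(returns_to_lte):
    """Trace the states of one CSFB call; whether the UE comes back to
    LTE depends on operator-configured reselection/handover rules,
    because the standards do not mandate the return."""
    trace = [RatState.LTE_ATTACHED, RatState.CS_CALL_2G3G]
    trace.append(RatState.LTE_ATTACHED if returns_to_lte
                 else RatState.STUCK_2G3G)
    return trace

print(csfb_call(returns_to_lte=False))
```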

Circuit Switched Fallback

Among CSFB’s main issues, we can name:

  • data traffic is suspended during the handover between networks
  • data rates decrease dramatically during the CSFB call’s answer and hang-up moments
  • mobile apps terminate during the CSFB voice call
  • data transfer is suspended during the call if the 2G/3G networks don’t support dual transfer mode
  • most importantly, once the voice call has ended, the subscriber may not return to the home LTE network; this happens especially under MVNO agreements, rather than when the operator owns both the LTE and CS networks

Studies have shown that behavior patterns such as those listed above depend on the data packet size and the interval between data packets.

Operators with LTE-only networks need roaming agreements with other MNOs to enable CSFB. Therefore, they are the ones who will bear the data traffic costs when their subscribers remain stuck in 2G/3G networks, sometimes even for hours.

The main impediment to proposing a solution that works for all operators and prevents such problems is that the CSFB standards don’t give any insight into how devices are supposed to return to the LTE home network. One solution non-MVNOs typically adopt is to set up rules for the handover back to 4G or for the cell reselection procedure.

Stay tuned for our next blog post in which we’ll cover more on voice solutions for 4G, namely inter-MSC SRVCC from LTE to 2G.

Roaming in LTE – facing challenges with new opportunities

With LTE, operators are now able to offer their subscribers huge bandwidth and significantly improved quality of service, but they also have to face new challenges. LTE drastically changed the mobility architecture and led to the adoption of new interfaces, frequencies and protocols, which ultimately impacted the state of roaming.

A 2013 Informa report on the market status of LTE roaming found that most operators hadn’t even finalized their roaming strategy. By 2015, operators who had deployed LTE networks offered data roaming in only a few countries.

Our answer to the LTE roaming dilemma is YateUCN.

So what’s different?

Roaming allows subscribers to use voice and data services when they are abroad. There are two main aspects to keep in mind when discussing roaming:

  • commercial roaming agreements between operators
  • the technical implementation: the SS7 (CAMEL/MAP) protocols in the case of 2.5G and 3G networks, and Diameter in 4G networks

Each roaming protocol requires new roaming and interconnect agreements, even with existing partners. Therefore, once an operator deploys a 4G network, it needs new roaming agreements for its LTE subscribers.

Operators who want to add an LTE network will have to face two challenges related to:

  • deploying a network with a radically different infrastructure, including new interfaces and protocols
  • setting up new roaming agreements for Diameter, since LTE roaming requires it

Our solution

YateUCN, our core network solution, allows 4G devices to authenticate against a roaming partner’s HLR over existing SS7 roaming agreements.

YateUCN is a mixed 2.5G/4G core network server, capable of replacing all the core network equipment associated with both networks, while supporting both CAMEL/MAP and Diameter for roaming.

[Figure: YateUCN_Network]

YateUCN also makes it possible for 4G devices to be registered to a 2.5G network and a 4G network at the same time, if necessary.

With the innovative YateUCN, MNOs can tap into the great opportunity that LTE roaming represents, while still using their standing roaming agreements with partners. Operators will gain time to set up the right agreements, and at the same time will garner new revenues by encouraging their customers to use data roaming.

OFDM – the science behind LTE

No one wants the kitten video they’re watching on YouTube to stop and reload endlessly. We all want to send big chunks of data very fast, while still keeping the integrity of said data. However, the faster you send data, the more likely it is to experience transmission problems, especially due to interference or weak signal.

OFDM (Orthogonal Frequency Division Multiplexing) is the radio science behind the huge bandwidth capabilities we see in LTE. OFDM splits the data across many narrow sub-carriers, which can be thought of as parallel data streams, on neighboring frequencies within a single channel. It allows sending more data, and at a higher rate, than single-carrier modulation techniques. OFDM also handles phenomena such as interference, noise or multipath significantly more efficiently than other modulation methods.

How it works

The following explanation is for non-engineers and is meant to shed some light on OFDM, so keep in mind that we are leaving out many details that are not critical for understanding what OFDM is or why we use it.

We’ll use a theoretical example: a bandwidth of 1 MHz and round numbers, which are easier to remember and apply to real-life scenarios.

Traditional single carrier modulation uses only one frequency to send the bits, as seen below.

[Figure: SingleCarrier]

In OFDM, the 1 MHz band is split into, say, 1000 sub-carriers of 1 kHz each, and each of them sends one symbol per millisecond.

[Figure: OFDM]

Next, OFDM uses the inverse FFT (fast Fourier transform) at the transmitter to map the data symbols onto the sub-carriers, and the FFT at the receiver to efficiently retrieve the original symbols, and with them the data bits.
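
For the curious, here is a minimal numpy sketch of that round trip for the 1000-sub-carrier example, assuming QPSK symbols and an ideal, noise-free channel:

```python
# Minimal OFDM modulation/demodulation round trip for the
# 1000-sub-carrier example; QPSK mapping and an ideal channel are
# simplifying assumptions for illustration.
import numpy as np

N_SUBCARRIERS = 1000
rng = np.random.default_rng(0)

# One QPSK symbol (2 bits) per sub-carrier for this 1 ms OFDM symbol.
bits = rng.integers(0, 2, size=(N_SUBCARRIERS, 2))
symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)

# Transmitter: the inverse FFT maps the frequency-domain symbols onto
# the time-domain waveform sent over the air.
time_signal = np.fft.ifft(symbols)

# Receiver: the FFT recovers the symbol carried by each sub-carrier.
recovered = np.fft.fft(time_signal)

print(np.allclose(recovered, symbols))  # True on an ideal channel
```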

Last, but not least, OFDM has a special property called orthogonality: the sub-carriers are spaced in such a way that, although their spectra overlap, the peak of each sub-carrier falls exactly on the zero crossings of its neighbors. This characteristic is illustrated in the image below, where you can see 5 of the 1000 1 kHz sub-carriers in the frequency domain, in a single channel.

Orthogonality is what allows us to pack the sub-carriers really tightly, without wasting frequencies between them on the guard bands that traditional systems require.

[Figure: ortho_ofdm]
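
To see what orthogonality means numerically, here is a small numpy check built on the same 1 kHz spacing / 1 ms symbol example (the sampling grid is an arbitrary choice):

```python
# Numerical check of sub-carrier orthogonality: over one 1 ms symbol,
# two distinct 1 kHz-spaced sub-carriers integrate to (almost) zero,
# while a sub-carrier against itself does not.
import numpy as np

symbol_duration = 1e-3           # one OFDM symbol lasts 1 ms
spacing = 1e3                    # 1 kHz sub-carrier spacing (1 / duration)
t = np.linspace(0, symbol_duration, 10_000, endpoint=False)
dt = t[1] - t[0]

def inner_product(k, m):
    """Correlate sub-carriers k and m over one symbol period."""
    a = np.exp(2j * np.pi * k * spacing * t)
    b = np.exp(2j * np.pi * m * spacing * t)
    return abs(np.sum(a * np.conj(b)) * dt)

print(inner_product(3, 3))  # ~0.001: the symbol energy (non-zero)
print(inner_product(3, 4))  # ~0.0:   orthogonal neighboring sub-carriers
```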

Let’s go back to the example used earlier. The data rate obtained using OFDM is the same as in the case of single-carrier modulation, so you might wonder why we use it so enthusiastically in LTE, which is what we’ll explain below.

Effective against multipath propagation and interference

Multipath propagation occurs when a radio signal bounces off obstacles in its path: bodies of water, hills and mountains, buildings, trees, etc. Multipath causes the transmitted signal to reach the receiver over two or more paths, making it difficult for the receiver to interpret what it receives. Only some frequencies are severely affected by multipath, but in single-carrier modulation systems the damage spreads across the whole band and affects all of the data symbols.

[Figure: SingleCarrier_multipath]

Take a look at the image below: only one sub-carrier experiences multipath, and since the coded data is spread redundantly across the sub-carriers, data loss is minimal.

[Figure: OFDM_multipath]

OFDM is also effective against interference because only some of the data streams will be affected by this phenomenon and data can be more easily recovered with redundant error-correction coding.

Spectral efficiency

When using OFDM, LTE can tailor the modulation to make the best possible use of the available radio path to and from the UEs. Because of the OFDM carrier structure, LTE can take advantage of changes in channel conditions and use different modulations depending on how close or far the UEs are from the transmitter.

Because it uses OFDM, LTE can dynamically change the symbol alphabet for each individual sub-carrier, depending on the radio conditions. For example, if you’re sending data close to the transmitter, LTE will apply a 64-QAM modulation scheme, that is, 6 bits/symbol. But if you move further from the transmitter and the radio conditions become unreliable, LTE will dynamically adapt to 16-QAM or QPSK, sending 4 or 2 bits/symbol. In extreme cases, LTE can even use BPSK (1 bit/symbol).
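
A minimal sketch of this link adaptation logic is shown below; the SNR thresholds are made-up round numbers for illustration, not values from the 3GPP specifications, while the bits/symbol are the standard figures for each scheme.

```python
# Illustrative link-adaptation table: the SNR thresholds are made-up
# round numbers; the bits/symbol are the standard figures per scheme.

MODULATION_TABLE = [          # (minimum SNR in dB, scheme, bits per symbol)
    (18.0, "64-QAM", 6),
    (11.0, "16-QAM", 4),
    (4.0,  "QPSK",   2),
    (float("-inf"), "BPSK", 1),
]

def pick_modulation(snr_db):
    """Return (scheme, bits per symbol) for the best scheme the SNR supports."""
    for min_snr, scheme, bits in MODULATION_TABLE:
        if snr_db >= min_snr:
            return scheme, bits

print(pick_modulation(22.0))  # close to the eNodeB  -> ('64-QAM', 6)
print(pick_modulation(7.5))   # towards the cell edge -> ('QPSK', 2)
```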

Some disadvantages

OFDM also has its downsides.

OFDM has a high peak-to-average power ratio (PAPR) and requires a highly linear, oversized power amplifier that usually has low efficiency. The high PAPR occurs because multiple sub-carriers with different phases can combine constructively in the time domain; in the image below, you can see a typical OFDM peak-to-average power ratio.

Typically, to obtain a 5 W average output power, an OFDM system requires a 100 W power amplifier, a factor of 20 (about 13 dB) of headroom above the actual 5 W output. Otherwise, the distortion is far too severe to allow OFDM to function normally.

[Figure: typical OFDM peak-to-average power ratio]
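
For a quick numerical feel for the PAPR, the sketch below reuses the 1000-sub-carrier QPSK example; the exact figure varies with the random data drawn, but is typically on the order of 10 dB.

```python
# Quick numerical look at the peak-to-average power ratio of one OFDM
# symbol, reusing the 1000-sub-carrier QPSK example.
import numpy as np

rng = np.random.default_rng(1)
n_subcarriers = 1000
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n_subcarriers)
waveform = np.fft.ifft(symbols)   # the time-domain signal fed to the amplifier

power = np.abs(waveform) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(f"PAPR of this OFDM symbol: {papr_db:.1f} dB")  # typically around 10 dB
```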

OFDM is also very sensitive to Doppler shift. This phenomenon occurs when the UE is moving, which makes the frequency of the received signal differ from the frequency of the originally transmitted signal. In OFDM, Doppler shift degrades synchronization and data recovery, and destroys the orthogonality of the sub-carriers.