SDN and beyond

Software-defined networking (SDN) and network function virtualization (NFV) are new approaches to designing and operating mobile networks, giving operators better management capabilities and better use of network resources.

NFV is the virtualization of network node roles: the functions typically executed by dedicated hardware components are instead performed by separate software implementations. SDN, in turn, uses virtualization technology to split the control plane (where flexibility is needed) from the data plane (where speed and performance are needed). The price for this, however, is complexity, which translates into high operating costs.

Operators benefit from such frameworks because they increase the network capacity and performance, and allow for better manageability.

The YateUCN approach recognizes the usefulness of separating the control plane from the user plane, but it implements both of them in software. The control plane is implemented in user space for flexibility, while the user plane is implemented in kernel space for speed.
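To picture the split (a purely conceptual sketch in Python, with hypothetical names rather than Yate's actual code or APIs), the control plane can be thought of as an ordinary user-space process that takes per-bearer decisions, while the user plane reduces to a per-packet table lookup that belongs in the kernel fast path:

```python
# Conceptual sketch of a control/user plane split (hypothetical names, not Yate's API).

# User space ("control plane"): slow, flexible decisions taken per session/bearer.
forwarding_table = {}  # stands in for a table shared with the kernel fast path

def on_bearer_setup(teid, enodeb_addr, qci):
    """Called when signaling establishes a bearer; runs in user space."""
    # Policy, authentication, charging and so on would be decided here.
    forwarding_table[teid] = {"next_hop": enodeb_addr, "qci": qci}

# Kernel space ("user plane"): fast, simple per-packet forwarding.
def forward_packet(teid, payload):
    """Per-packet hot path; in a real system this lookup lives in the kernel."""
    entry = forwarding_table.get(teid)
    if entry is None:
        return None  # unknown tunnel: hand the packet to the control plane
    return (entry["next_hop"], payload)

# The control plane installs a rule once; the data path then uses it per packet.
on_bearer_setup(teid=0x1A2B, enodeb_addr="10.0.0.7", qci=9)
print(forward_packet(0x1A2B, b"downlink IP packet"))
```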

As a result, operators who deploy YateUCN networks will benefit from considerably scaled-down equipment and will have better control over network scalability and performance. The image below shows the YateUCN implementation and a common SDN deployment using an OpenFlow switch.

Unified Core Network vs. Common SDN deployment

Common NFV/SDN implementations rely on virtualizing the EPC, so that the functions of the MME (Mobility Management Entity), the SGW (Serving Gateway), and the PGW (Packet Data Network Gateway) are each implemented in software and run on the same hardware. Drawbacks of this approach include:

  • the separation between the control and user planes is achieved by means of a switch, usually hardware-based and external to the virtualized network functions. This is a limitation of software-defined network functions;
  • the switch is designed to replace the PGW and to obtain the IP connection, which it sends to the eNodeB over the user plane. This means it must support both the GTP protocol for the user plane and IP, which drives up the cost of such equipment (a sketch of the GTP-U encapsulation involved follows this list);
  • the complexity of NFV requires additional effort from the network to accommodate it, which increases the overall cost of the solution.
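To see why user-plane equipment has to speak both GTP and IP, here is a minimal sketch of the GTP-U encapsulation used on the S1-U interface (standard 8-byte GTP-U header, no optional fields; the packet contents are placeholders):

```python
import struct

def gtpu_encapsulate(teid, ip_packet):
    """Wrap an IP packet in a minimal GTP-U header (version 1, no optional fields)."""
    flags = 0x30             # version 1, protocol type GTP, no extension/sequence/N-PDU
    msg_type = 0xFF          # G-PDU: the payload is a user (T-PDU) packet
    length = len(ip_packet)  # length of everything after the mandatory 8-byte header
    header = struct.pack(">BBHI", flags, msg_type, length, teid)
    return header + ip_packet

# Example: tunnel a downlink IP packet towards the eNodeB (sent over UDP port 2152).
placeholder_ip_packet = b"\x45\x00" + b"\x00" * 26
udp_payload = gtpu_encapsulate(teid=0x1A2B3C4D, ip_packet=placeholder_ip_packet)
print(udp_payload[:8].hex())
```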

The implementation of YateUCN differs significantly from the above.

First, it uses commodity hardware, so no special-purpose equipment is needed. Simply put, YateUCN is a COTS server, which drastically reduces investment, staff, space, and power requirements.

Secondly, YateUCN differs from a virtualized EPC because it implements a single software stack, based on Yate, that performs all the functions of the MME, SGW, and PGW. The all-software implementation also means that multiple protocols (Diameter, SS7) are implemented in YateUCN itself, and no additional components are required for the core to connect to the Home Subscriber Server or the IMS. This helps operators cut down on highly specialized staff and facilitates inter-working with legacy networks.

Thirdly, instead of using a hardware switch, YateUCN implements the switch in software, in kernel space. Because the Unified Core Network is based on Yate, an extensible Linux-based telephony engine, it was possible to integrate a software switch into the core software, allowing for much faster data processing and eliminating the need to work with multiple vendors.

The YateUCN core network solution removes the barriers to entering the market thanks to its simplicity, scalability, and low cost. The YateUCN features and specifications list can be accessed here.

Definition: MIMO

LTE brought forth a variety of new equipment and technologies. One of them is Multiple Input Multiple Output, also known as MIMO. The use of multiple antennas in wireless communications is one of the main reasons why LTE achieves such high data rates.

It all started with the V-BLAST (Vertical-Bell Laboratories Layered Space-Time) project in 1996, which is, in fact, at the basis of MIMO systems. V-BLAST was a detection algorithm for multiple signals whose main purpose was to reconstruct the multiple received signals into a single, faster stream of transmitted data. This, of course, is precisely what MIMO does.

The principal application of this technology is MIMO antennas, particularly used in LTE mobile networks. As opposed to SISO (Single Input and Single Output), an antenna system with one transmitter and one receiver, a 2×2 MIMO antenna system uses 2 transmitters and 2 receivers to create 4 paths for transmitting and receiving different data at the same time. The two transmitters send different parts of the same data stream simultaneously, while the receivers have to piece them back together. MIMO increases overall performance and range and is able to send more data without additional power or bandwidth requirements.

Typically, radio signals traveling through the air are affected by various phenomena such as fading, interference, and path loss. What’s special about MIMO is that it does wonders in multipath environments, increasing the data throughput and lowering the bit error rate. MIMO is able to tell one signal from another at the receiver side because they have been altered differently by multipath. The receivers can spot the ‘clues’ that multipath left behind and correctly decode the received signals into a single, faster data stream. As opposed to MIMO, SISO systems perform poorly in multipath conditions. Considering that LTE has gained such momentum in urban areas, the home ground of multipath, it’s easy to understand why 4G uses MIMO antennas.
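Here is a toy illustration of that separation (a sketch only: an idealized 2×2 channel assumed perfectly known at the receiver, and a simple zero-forcing detector instead of the more sophisticated algorithms real LTE receivers use):

```python
import numpy as np

# Two transmit antennas send two different symbols at the same time and frequency.
tx = np.array([1 + 1j, -1 + 1j])  # e.g. two QPSK symbols, one per antenna

# Multipath gives each transmit/receive antenna pair a different complex gain.
# H[i, j] = gain from transmit antenna j to receive antenna i (learned from pilots).
H = np.array([[0.9 + 0.2j, 0.3 - 0.5j],
              [0.2 + 0.7j, 1.1 - 0.1j]])

noise = 0.01 * (np.random.randn(2) + 1j * np.random.randn(2))
rx = H @ tx + noise  # each receive antenna sees a different mix of both symbols

# Zero-forcing detection: invert the channel to separate the two streams again.
tx_estimate = np.linalg.inv(H) @ rx
print(np.round(tx_estimate, 2))  # close to the transmitted [1+1j, -1+1j]
```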

As mentioned above, a 2×2 MIMO antenna system will send each data stream through two independent channels to overcome fading. This concept is called ‘diversity’, and it ensures that at least one data stream will be less affected by fading, increasing the chances of the receiver decoding more data correctly. ‘Polarization diversity’ is one flavor of diversity also used in MIMO systems. To give a simple example, polarization diversity translates into using antenna pairs polarized orthogonally, either vertically/horizontally or slanted at ±45º.

To sum up, the MIMO technology used in LTE antenna systems increases overall data throughput, reduces co-channel interference and multipath propagation effects, improves the signal to noise ratio and reduces the bit error rate.

Connecting public transport to the Internet of Things

Matched with contextual traffic data, information about the route and changing traffic conditions can be supplied in real time, so that both passengers and companies improve their planning efficiency.

Offering seamless, highly mobile IoT connectivity requires high bandwidth, which so far makes only a few applications practical.

Real-time location tracking is probably among the most common. Companies already use GPS to track their assets, but the data could also be used to offer riders accurate information about the time to destination, estimated arrival times, or traffic events.

On board entertainment systems offer a more personalized travel experience; location information combined with events information can drive travellers to activities or sites relevant to their itinerary and preferences.

In terms of planning, cameras and sensors installed in public transport vehicles and in their surrounding premises can collect information that helps transport companies estimate traffic flows and better plan and allocate their resources.

Safety can be improved with live video streaming, allowing a more rapid intervention and enabling the prevention of misconduct.

To make these solutions possible, it is essential to provide high-bandwidth connectivity, and that is in itself a challenge. Even with access points installed in vehicles, resources from the mobile network still need to be accessed. Technologies being tried out for this include LTE-A carrier aggregation to increase the bandwidth (as discussed here), MIMO systems to enhance spectral efficiency, and small cell technology to bring the radio cell closer to the device.

At the same time, connectivity on the go needs to be managed at carrier level, in the sense of providing seamless coverage irrespective of the mobile operator. As this 2014 EU report underlines, ubiquitous connectivity for public transport requires ‘terminals to get connected regardless of the operator exploiting the access network’ and to ‘avoid services cut-offs’. Tower infrastructure sharing is the solution adopted today, and it is particularly viable because it also allows operators to reduce their operating costs and provide additional capacity, reports the GSMA.

Internet of Things applications have already started to enable some of these trends in large metropolitan areas all over the world. Transport companies, mobile operators, and platform providers can leverage IoT solutions for real-time tracking and monitoring, improved efficiency and safety, and a better travel experience.

Predictions about the number of IoT/M2M connected devices we’re supposed to see in the very near future are astounding. So we can only imagine what the huge amounts of data collected will lead to once they’re analyzed and turned into ‘actionable’ information.

Driving the Internet of Things with carrier aggregation

Internet of Things connectivity must strike a balance between coverage and bandwidth to serve applications with very different requirements.

While it’s true that tracking, measurement, control, or monitoring systems in rural or remote areas have lower traffic and rely on low-bandwidth technologies such as GSM, a different trend is growing. A whole range of M2M and IoT applications using live video, rich media, on-the-go content, and multi-user sharing demand a high network capacity that can be provided today with LTE.

Carrier aggregation (CA), the key concept in LTE-Advanced, allows operators to supply even higher bandwidth than LTE, to support such connected devices. As its name suggests, carrier aggregation combines two or more carriers in order to offer a greater throughput.

Using CA, new transmission channels can be created using the operators’ existing frequency spectrum. It is available in both TDD and FDD systems, and can be achieved by combining carriers from the same frequency band or from different frequency bands, as shown below.

Capacity is essential for IoT, as hundreds of devices are in constant communication with the network. In CA systems, up to 100 MHz of bandwidth can be reached, as each component carrier can have a maximum bandwidth of 20 MHz and up to 5 carriers can be aggregated. In practice, though, only two carriers have been used so far.
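The arithmetic behind those figures, as a quick sketch (the per-carrier peak rate below is an indicative value for a 20 MHz carrier with 2×2 MIMO, not a guaranteed rate):

```python
MAX_CARRIER_BW_MHZ = 20   # widest single LTE component carrier
MAX_CARRIERS = 5          # maximum number of aggregated component carriers

def aggregated_bandwidth(num_carriers, carrier_bw_mhz=MAX_CARRIER_BW_MHZ):
    """Total bandwidth obtained by aggregating identical component carriers."""
    assert 1 <= num_carriers <= MAX_CARRIERS
    return num_carriers * carrier_bw_mhz

print(aggregated_bandwidth(5))   # 100 MHz: the theoretical maximum
print(aggregated_bandwidth(2))   # 40 MHz: what most commercial deployments use today

# Rough downlink peak, assuming ~150 Mbps per 20 MHz carrier with 2x2 MIMO:
PEAK_PER_CARRIER_MBPS = 150      # indicative figure, not a guaranteed rate
print(2 * PEAK_PER_CARRIER_MBPS) # ~300 Mbps with two aggregated carriers
```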

Operators may also opt to combine carriers from different spectrum bands, as some are already reported to be doing, and this can be very practical given that LTE networks are currently being deployed on distinct frequency bands.

For carrier aggregation to work on both ends, devices must be able to detect and read the multiple frequencies sent by the radio network. In theory, a peak speed of 500Mbps for uplink and 1Gbps for downlink could be achieved with carrier aggregation.

In commercial deployments so far, as reported recently by the GSA, a maximum downlink speed of 300 Mbps has been achieved on a number of devices including smartphones and mobile hotspots. According to the same report, only 88 commercial implementations of carrier aggregation systems have been launched so far in 45 countries, but others are underway.

Carrier aggregation can be used to offer increased bandwidth for IoT, and it can also improve coverage by combining low frequency carriers with high frequency ones. Trade-offs of this system include battery life, but we’ll talk more about LTE for IoT next week during IoT Evolution Expo.

The challenges behind VoLTE

In previous blog posts and demos we showed that a simplified approach is the way to obtain clear results in deploying VoLTE and 2G/4G mixed networks. We performed the industry’s first VoLTE call from a GSM mobile phone to an iPhone 6, through a single unified core network, the YateUCN, and we presented our solution for handling SRVCC (Single Radio Voice Call Continuity) as an inter-MSC (Mobile Switching Center) handover from 4G to 2G in the same YateUCN. Read on for our take on why VoLTE hasn’t developed as rapidly as we all expected it would; we’ll share our insights and what we’ve learned from the many discussions we’ve had with mobile operators and smartphone producers alike.

Sure, VoLTE is great! Combining the powers of IMS and LTE, VoLTE offers excellent high-definition voice calls. It also guarantees a Quality of Service component, ensuring that customers get an unprecedented quality of voice services. However, VoLTE depends on far too many aspects to be fully functional and widely deployed, contrary to what optimistic reports have predicted in the past.


One of the main issues operators and customers alike are facing is that there’s still a shortage of VoLTE-capable smartphones. By April 2015, Verizon offered around 15 devices supporting VoLTE, while AT&T’s smartphone selection included around 19 devices capable of HD voice in July 2015, as seen on their online shop. The iPhone 6 is still the only device capable of supporting VoLTE for all the operators that offer it. What’s more, most of these devices came from about 5 smartphone vendors, giving customers a limited choice when they buy a new phone.

Approximately 97% of VoLTE-capable smartphones have their LTE chipset from the same vendor. According to reports from smartphone producers and operators alike, the VoLTE client is not stable enough; this is the reason why some vendors don’t even activate VoLTE in the baseband, and also why operators implement VoLTE differently, both in the smartphones and in the IMS network itself.

This also leads to a lack of interoperability between mobile carriers. Currently, VoLTE works only between devices belonging to the same network: for example, a T-Mobile customer using a VoLTE-capable handset cannot roam in the AT&T VoLTE network of a called party. However, interoperability was one of the main goals when the VoLTE specifications were developed, and we should still expect it to happen at some point.

Lastly, and perhaps most importantly, VoLTE deployments are scarce. A GSA report from July 2015 showed that only 25 operators have commercially launched VoLTE networks in 16 countries, while around 103 operators in 49 countries are planning, trialling, or deploying VoLTE. Compared with the total of 422 LTE networks commercially launched in 143 countries, the number of VoLTE deployments is dramatically lower. This is the result of mobile carriers having a difficult time planning and building functional LTE and VoLTE networks, while also developing the essential Single Radio Voice Call Continuity (SRVCC) technology in an effective and performant way.

VoLTE still needs to clear many hurdles before it becomes a technology used worldwide. Operators, network equipment vendors, and smartphone and chipset producers need to cooperate and jointly find technical solutions that will allow for a swifter VoLTE roll-out in most LTE networks.

An introduction to the LTE MAC Scheduler

LTE brought a completely new network architecture and revolutionized the data capabilities achievable on a mobile network. LTE also brought a new type of radio network, much simpler in its organization. In a previous post we discussed OFDM as the main reason behind LTE’s high data speeds. Today we look into an essential component of the LTE radio network: the MAC Scheduler.

Sitting just above the Physical layer, the MAC Scheduler assigns bandwidth resources to user equipment and is responsible for deciding how the uplink and downlink channels are used by the eNodeB and the UEs of a cell. It also enforces the necessary Quality of Service for UE connections. QoS is a set of rules that come from the Policy and Charging Rules Function (PCRF) in the core network. These rules define priority, bit rate, and latency requirements for different connections to the UE, and they are usually based on the types of applications using the UE connection. For example, the QoS requirements for a VoLTE call are different from those for checking e-mail.
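As a concrete illustration (a sketch based on the standardized QCI characteristics in 3GPP TS 23.203; the policies an operator actually applies may differ), a VoLTE voice bearer and a default data bearer map to very different QoS profiles:

```python
# Standardized LTE QoS Class Identifier (QCI) characteristics (3GPP TS 23.203),
# shown for two common cases; the values are the spec's reference targets.
QCI_PROFILES = {
    1: {"type": "GBR",     "priority": 2, "delay_budget_ms": 100, "loss_rate": 1e-2,
        "example": "VoLTE conversational voice"},
    9: {"type": "non-GBR", "priority": 9, "delay_budget_ms": 300, "loss_rate": 1e-6,
        "example": "default bearer: e-mail, web browsing"},
}

def compare(qci_a, qci_b):
    """Print the QoS targets the MAC Scheduler has to honor for two bearer types."""
    for qci in (qci_a, qci_b):
        p = QCI_PROFILES[qci]
        print(f"QCI {qci} ({p['example']}): {p['type']}, priority {p['priority']}, "
              f"delay budget {p['delay_budget_ms']} ms, packet loss {p['loss_rate']}")

compare(1, 9)
```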

As seen in the image below, the MAC scheduler has control over the OFDM modulation in the sense that it decides, according to information received from other LTE network components, how much bandwidth each UE receives at any given moment. In this figure, the resource element (sub-carrier) is represented on the frequency axis, while the sub-frames are represented on the time axis.

This figure shows downlink scheduling, but the MAC Scheduler controls uplink scheduling in a similar way.

In order to make its resource allocation decisions, the MAC Scheduler receives information such as:

  • QoS data from the PCRF: minimum guaranteed bandwidth, maximum allowed bandwidth, packet loss rates, relative priority of users, etc.
  • messages from the UEs regarding the radio channel quality, the strength or weakness of the signal, etc.
  • measurements from the radio receiver regarding radio channel quality, noise and interference, etc.
  • buffer status from the upper layers about how much data is queued up waiting for transmission


Typically, a MAC Scheduler can be programmed to support one scheduling algorithm with many parameters.

Here are some examples of scheduling algorithms:

  • Round Robin – used for testing purposes; it assigns equal bandwidth to all UEs without accounting for channel conditions
  • Proportional Fairness – tries to balance the QoS priorities against total throughput and is usually preferred in commercial networks (a simplified sketch follows this list)
  • Scheduling for Delay-Limited Capacity – guarantees that the MAC Scheduler will always prioritize applications with specific latency requirements
  • Maximum C/I – guarantees that the MAC Scheduler will always assign resource blocks to the UE with the best channel quality
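To make the idea concrete, here is a heavily simplified proportional fair scheduler (a sketch only: one scheduled UE per subframe, made-up rate figures, and none of the HARQ, QoS weighting, or frequency-selective details a real eNodeB scheduler handles):

```python
# Simplified proportional fair scheduling: in each subframe, pick the UE with the
# highest ratio of instantaneous achievable rate to its long-term average throughput.
import random

UES = ["ue_near", "ue_mid", "ue_edge"]
# Instantaneous rates (Mbps) a UE could get this subframe, driven by channel quality.
PEAK_RATE = {"ue_near": 40.0, "ue_mid": 20.0, "ue_edge": 5.0}

avg_throughput = {ue: 1e-3 for ue in UES}  # small non-zero start avoids division by zero
ALPHA = 0.05                               # smoothing factor for the running average

for subframe in range(1000):
    # Channel quality fluctuates around each UE's typical rate (fading, interference).
    rate = {ue: PEAK_RATE[ue] * random.uniform(0.5, 1.5) for ue in UES}
    # Proportional fair metric: favors good channels, but not always the same UE.
    scheduled = max(UES, key=lambda ue: rate[ue] / avg_throughput[ue])
    for ue in UES:
        served = rate[ue] if ue == scheduled else 0.0
        avg_throughput[ue] = (1 - ALPHA) * avg_throughput[ue] + ALPHA * served

print({ue: round(tp, 1) for ue, tp in avg_throughput.items()})
# The cell-edge UE still gets scheduled regularly, unlike with a pure Maximum C/I policy.
```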

One of the key features of LTE is the ability to control and prioritize bandwidth across users. It is the MAC scheduler that gives LTE this capability.

Software-defined radio for frequency reuse in LTE

The expansion of 4G LTE challenges operators who have limited spectrum; as some decide to take down existing 2G (and even 3G) deployments in favor of 4G, bandwidth allocation in an area must be carefully planned to match the quality requirements of LTE.

In 4G LTE, spectrum is a crucial resource. Performance is dependent on the proximity between the radio network and the devices. The closer the radio tower, the higher the data throughput. This means that the more cell towers operators build, the better they can cover the area.

Frequency reuse is a widely adopted solution for LTE; essentially, a given area is served by more cell towers using the same frequency. An easier and more efficient approach to this is software-defined radio.

Cell edge interference management using YateENB


Frequency reuse means splitting an area into several new, smaller cells. In LTE, to maintain a high throughput, the same frequency is allocated to all the new cells, at the expense of higher interference at the cell edges. Since all the new cells have equal power, interference occurs around the edges wherever two or more cells meet.

Apart from that, building and maintaining additional infrastructure required by frequency reuse results in high capital and operational expenses.

SDR in the LTE base station (eNodeB) can be a solution to these limitations. The fact that SDR implements the communication protocols in software and uses general-purpose hardware has several benefits.

The most important one is the effect on infrastructure costs. Base stations built on special-purpose hardware need heavier equipment and hence larger towers, which are expensive to install and operate. An eNodeB using general purpose hardware relies on more lightweight equipment, meaning that smaller towers can be deployed more densely in an area and provide better coverage. A lower power consumption associated with SDR-based BTS equipment also contributes to reducing the overall RAN costs.

Another major benefit of SDR is flexibility. SDR-based eNodeBs can be configured more easily to manage spectrum use at the edges of the cells and thus minimize interference. Frequency sub-carriers at two adjacent cell edges can be selected so that they do not overlap, unlike in conventional systems.
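A minimal sketch of this edge partitioning (hypothetical resource-block numbers chosen for illustration, in the spirit of fractional frequency reuse rather than any specific YateENB configuration):

```python
# Sketch: two neighboring cells share the full band for cell-center users, but
# reserve disjoint sub-sets of resource blocks for their cell-edge users.
TOTAL_RBS = 50  # e.g. a 10 MHz LTE carrier has 50 physical resource blocks

FULL_BAND = set(range(TOTAL_RBS))
CELL_A_EDGE = set(range(0, 17))      # RBs cell A may use for its edge users
CELL_B_EDGE = set(range(17, 34))     # RBs cell B may use for its edge users: no overlap

def allowed_rbs(cell_edge_set, user_at_edge):
    """Resource blocks the scheduler may assign, depending on the user's position."""
    return cell_edge_set if user_at_edge else FULL_BAND

# Edge users of neighboring cells never collide on the same sub-carriers:
assert allowed_rbs(CELL_A_EDGE, True).isdisjoint(allowed_rbs(CELL_B_EDGE, True))
print(sorted(allowed_rbs(CELL_A_EDGE, True))[:5], "...")
```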

What’s more, SDR permits an adaptable power management so that different services can be assigned optimal QoS depending on the context.

Another aspect of SDR is the ability to build mixed networks. Base station equipment can be programmed to support different technologies at the same time and using the same hardware, serving more users with virtually no infrastructure investment. You can read more about this topic in this previous blog post.