In the age of digitisation, the telecommunications landscape has changed forever. Revenue from core services such as voice and messaging has declined, and digital modernisation has brought sweeping operational change.
New revenue channels such as apps and entertainment require a high degree of customer interaction, backed by analytics and streamlined networking.
Microservices adoption is rising as more companies realise that interacting better and more frequently with their customers requires a more agile and scalable application infrastructure.
With the abundance of new features and services that consumers now expect comes significant investment in back-end products and solutions. In many cases, telcos find themselves caught between legacy hardware and lightweight cloud services. As the range of services telcos provide continues to grow, scaling hardware architecture proves limiting, offering neither the flexibility nor the scope for additional services.
Adding new customer-facing services requires a simplified, fast and low-risk approach. At the network level, this has traditionally meant adding extra hardware in the form of Application Delivery Controllers (ADCs). These have been around for nearly two decades and have fulfilled a very important role in the load balancing and delivery of applications.
However, as so many functions of a modern IT environment move towards software-defined infrastructure and support tools, older hardware-based solutions are proving to be costly and less flexible. Services take longer to add, speed can be compromised, and there is an element of risk involved each time a device needs to be added to the network, or an existing device reconfigured. All of these factors can have a downstream effect on user satisfaction.
Moving from legacy hardware to a full software environment for load-balancing will allow telecommunications companies to respond to an ultra-competitive and consumer-driven market here in ANZ, offering faster services at a fraction of the cost and time.
Leaders in the field are starting to coin the phrase “bringing infrastructure closer to the app”. Rather than relying on the wider networking team to make load-balancing changes, which slows time-to-market, a virtual load-balancing model moves control closer to DevOps team members, who can make fast, efficient changes for a more flexible model. Adding microservices directly through the software saves time and money, and brings those services to consumers faster.
Further to this, hardware load-balancers are not equipped to cope with containers, so a software solution is required to facilitate the flow of East-West (server-to-server) traffic.
In cases where it is not feasible to end-of-life existing hardware, deploying a cloud-based software solution for application delivery can enhance the existing hardware in a telco’s architecture. This will often make hardware work more efficiently, and allow DevOps staff to add and enhance services more effectively, while hardware ADCs perform a more standard network routing role.
For example, one large telco in Australia has relied for several years on a number of older hardware load-balancers that simply direct layer 4 and layer 7 traffic. The hardware is an integral part of the company’s network, but was proving to be a bottleneck when adding new services. By leaving the hardware in place to direct layer 4 traffic while deploying software load-balancing for layer 7 traffic, the company was able to provide much faster, more efficient services while reducing the amount of hardware it owned and the associated cost of running it.
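To illustrate the kind of decision a software layer-7 tier adds on top of plain layer-4 forwarding, here is a minimal sketch in Python of path-based routing with round-robin selection within each backend pool. The pool names, path prefixes and addresses are purely hypothetical, not the telco’s actual configuration:

```python
from itertools import cycle

# Hypothetical backend pools keyed by URL path prefix -- layer-7 information
# that a layer-4 device forwarding raw TCP connections never inspects.
POOLS = {
    "/api/":   cycle(["10.0.1.10:8080", "10.0.1.11:8080"]),
    "/media/": cycle(["10.0.2.10:8080", "10.0.2.11:8080"]),
}
DEFAULT_POOL = cycle(["10.0.0.10:8080"])

def route(path: str) -> str:
    """Pick a backend by HTTP path (layer 7), round-robin within the chosen pool."""
    for prefix, pool in POOLS.items():
        if path.startswith(prefix):
            return next(pool)
    return next(DEFAULT_POOL)
```

In a model like this, a new microservice is just a new entry in the pool table — a change a DevOps team can make and test in software without touching the layer-4 hardware.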
The first phase of digital modernisation for many telcos has been to cut costs. Replacing a hardware ADC with a software-defined solution can save as much as 75 percent at today’s prices. There is a rule of thumb in networking circles that for every five hardware ADCs a company deploys, it will need to employ one staff member. As discussed, while in many cases a company will still require hardware for routing traffic, that number can be reduced from, say, ten ADCs to two when used in conjunction with a software solution.
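The arithmetic can be sketched as a back-of-envelope model. The unit costs and staff cost below are illustrative assumptions, not vendor pricing; only the 75 percent saving and the one-staff-per-five-hardware-ADCs rule of thumb come from the discussion above:

```python
# Back-of-envelope cost model -- all absolute figures are assumed for illustration.
HW_ADC_COST = 40_000   # assumed annual cost per hardware ADC
SW_SAVING = 0.75       # software reported to cost ~75 percent less than hardware
STAFF_COST = 100_000   # assumed fully loaded annual staff cost

def annual_cost(hw_adcs: int, sw_adcs: int = 0) -> int:
    sw_unit = int(HW_ADC_COST * (1 - SW_SAVING))
    staff = -(-hw_adcs // 5) * STAFF_COST  # rule of thumb: 1 staff per 5 hardware ADCs
    return hw_adcs * HW_ADC_COST + sw_adcs * sw_unit + staff

before = annual_cost(hw_adcs=10)            # all-hardware estate
after = annual_cost(hw_adcs=2, sw_adcs=8)   # hybrid: 2 hardware ADCs plus software
```

Under these assumed figures the hybrid estate costs well under half of the all-hardware one, since both the per-unit cost and the staffing overhead shrink together.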
Other benefits of a software solution include:
● Automation – rapid deployment and continuous integration/continuous delivery (CI/CD) require fully automatable, software-based application delivery.
● Deploy everywhere – telcos can use the same software for bare metal, cloud, containers, and more, making deployment flexible and easy.
● Rapid response – no waiting for special hardware, installation and configuration. This will enable telco companies to scale up, scale out, and respond to customer demands in real time.
● No contractual constraints – full performance without artificial caps or hefty fees for traffic increases.
Above all, moving to a software-defined solution reduces risk. Suppose a company has ten hardware load balancers, each serving 1,000 apps. Making changes to any of those devices carries a fair degree of risk, from human error through to device and configuration failure. If the hardware fails, services will be suspended and customer trust will be lost.
Moving to software load-balancers reduces that risk: a DevOps team can build, test and deploy microservices within the software, ensuring they work before going to market. Again, this moves control closer to the developers and reduces exposure to hardware failure.
In the hyper-virtualised world into which we are moving, it makes sense to have a more nimble, faster and less costly means of responding to customer demand. Software-based solutions for load-balancing – or bringing the infrastructure closer to the app – can meet many of those requirements.