Hybrid Fiber Coax (HFC) is the well-established cable access network architecture. As the name suggests, HFC uses two separate media to connect a cable head-end to the subscriber.
While the coax portion that connects to the subscriber gets most of the attention, the ‘heavy lifting’ is actually done by the fiber portion of the network. It is the fiber that provides the huge capacity needed to deliver content from and between cable head-ends, data centers, Internet PoPs and video server farms.
Why go ‘fiber deep’?
Today, cable MSOs are migrating to a “fiber deep” approach that pushes fiber all the way to the local access node. This approach aims to support the next-generation distributed access architecture (DAA) in which the PHY function is distributed from its traditional location in the head-end, out to the local access point: a process known as Remote PHY, or RPHY.
With RPHY comes the ability to increase capacity in the access network, and since 2016 MSOs both large and small have been doing so using new technologies such as DOCSIS 3.1 and Full Duplex DOCSIS (FDX). These technologies enable higher data rates and, in the case of FDX, symmetric downstream and upstream speeds of up to 10 Gbps.
With these technologies, MSOs can continue to leverage the installed coax network, making it more economical to compete with the high-speed services offered by fiber-based providers. However, these kinds of connections in the access (i.e., coax) network put enormous pressure on the metro core. At these higher rates, a single subscriber can consume an entire wavelength, a situation that quickly exhausts capacity.
Considering that most metro core networks were built using 10G technology, MSOs need to migrate to a scalable, high-capacity infrastructure capable of providing the connectivity from the head-end to the content hubs located deeper in the core of the network.
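To see why the pressure on the metro core mounts so quickly, consider a rough back-of-envelope sketch. The subscriber count and oversubscription ratio below are illustrative assumptions, not figures from the article; only the 10G wavelength and 10 Gbps FDX peak come from the text.

```python
# Back-of-envelope sketch: how quickly legacy 10G metro wavelengths run out
# when FDX pushes access speeds to 10 Gbps. The service-group size and
# oversubscription ratio are assumed values chosen for illustration.

wavelength_gbps = 10          # legacy metro core: 10G per wavelength
fdx_peak_gbps = 10            # FDX symmetric peak per subscriber
subscribers_per_node = 500    # assumed service-group size for a fiber-deep node
oversubscription = 50         # assumed peak-hour oversubscription ratio

# Aggregate demand a single access node can present to the metro core
node_demand_gbps = subscribers_per_node * fdx_peak_gbps / oversubscription
wavelengths_needed = node_demand_gbps / wavelength_gbps

print(node_demand_gbps)     # 100.0 Gb/s of demand from one node
print(wavelengths_needed)   # 10.0 legacy 10G wavelengths, for one node alone
```

Even with heavy oversubscription, one fiber-deep node can saturate ten legacy wavelengths, and a metro ring serves many such nodes.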
In a recent survey published jointly by Light Reading and the SCTE, more than one-third of MSOs saw the need for 100G transport in their network over the next five years, while 18% believe they will need 200G and nearly 30% believe they will need 400G!*
Unfortunately for MSOs, time and experience have shown that bandwidth forecasting is a highly inexact science. Many will likely end up needing more capacity than they anticipate, so they need a way to mitigate the risk of making the wrong choice when selecting their optical transport solution.
Choosing next-gen optical transport, today
Fortunately for MSOs, the optical transport equipment industry has responded to the needs of this market. Today's generation of flexible transport systems can tune their performance to service demand. Built on third-generation digital signal processors (DSPs), these platforms not only let the network operator decide what level of bandwidth to provision, but also adjust that level to a higher (or lower) data rate as demand grows (or shrinks!). With this level of functionality, these next-gen systems provide the risk mitigation MSOs need.
So if the 100G network the MSO forecasts now needs to be a 200G, or even a 400G network further down the line, a simple software adjustment is all that’s required to accommodate changing capacity demands – all with no need to replace cards or overbuild a whole new network!
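The "software adjustment" works because a coherent transceiver's line rate scales with its symbol rate and modulation order, both of which a flexible DSP can change in software. The sketch below is illustrative, not any vendor's actual API, and the baud/modulation pairings are typical textbook combinations rather than figures from the article.

```python
# Illustrative sketch: raw (pre-FEC) line rate of a coherent, dual-polarization
# carrier as a function of symbol rate and modulation format. Changing either
# parameter in software moves a wavelength between rate classes.

MODULATION_BITS = {"QPSK": 2, "8QAM": 3, "16QAM": 4, "64QAM": 6}

def raw_line_rate_gbps(baud_gbaud: float, modulation: str,
                       polarizations: int = 2) -> float:
    """Raw line rate in Gb/s: baud x bits-per-symbol x polarizations."""
    return baud_gbaud * MODULATION_BITS[modulation] * polarizations

# ~100G class: QPSK at 32 Gbaud -> 128 Gb/s raw, ~100G net after FEC overhead
print(raw_line_rate_gbps(32, "QPSK"))    # 128.0
# ~200G class: the same 32 Gbaud carrier, switched in software to 16QAM
print(raw_line_rate_gbps(32, "16QAM"))   # 256.0
# ~400G class: 16QAM at a higher 64 Gbaud symbol rate
print(raw_line_rate_gbps(64, "16QAM"))   # 512.0
```

The trade-off the operator manages is that denser modulation buys capacity at the cost of optical reach, which is exactly the knob these flexible platforms expose.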
With a highly functional, scalable optical transport network providing the backbone infrastructure, MSOs can rest assured their fiber deep strategies are well supported from day one and future-proofed for whatever capacity requirements may come their way!
* Source: Light Reading and SCTE survey.
Guillaume Crenn has more than 20 years of experience in WDM product development and operations in the telecoms industry. Prior to joining EKINOPS in 2010, he served as a WDM System Design Manager for Alcatel-Lucent and CORVIS-Algety (Telecom), and as a Telecommunication Project Manager at SANEF, a French motorway operating company.