PKS Media Works

Why Hyper-scale Data Centers Require Unique Connectivity Solutions, and How It Can Be Done Differently

Deployment of data centers (DCs) is not a new phenomenon in India; the industry has existed for more than two decades. The colocation business model is widely prevalent in the enterprise space, and growth in the early days was driven primarily by enterprises.



Tata Communications, Sify, Netmagic, CTRL-S and GPX were among the pioneers in India, setting up multiple data centers in metro cities such as Mumbai, Delhi, Bangalore, Chennai and Hyderabad.


However, the emergence of technologies such as cloud and virtualization has caused exponential growth in data demand. Along with this, the wide-scale proliferation of 4G, government initiatives for data localization and the high uptake of video content are some of the key drivers of increased capacity demand in DCs. Buoyed by this demand, many global players such as NTT, STT, Equinix, Colt, PDG and Digital Realty have decided to invest significantly in this space. Leading Indian groups such as Adani, Bharti Airtel, Hiranandani and TEECL have also announced substantial investments.


Advantage India


India being a hyper-competitive market, it is very important to design a DC with unique differentiators. Multiple design considerations related to power, space, cooling, security and scalability make a DC efficient and differentiated.


One of the most critical parameters in making a DC compelling is its interconnect design.


Let us take a deep dive into DC interconnect needs!


In the modern hyper-connected world, compute and storage are the two most critical needs.


Additionally, applications and data may be hosted in different clouds, leading to the emergence of multi-cloud applications. This further increases the need for interconnects among DCs deployed by different providers.


These cloud companies have started offering services with custom throughput, scalability, latency and so on. All such services need a very strong network design to cater to the specific needs of any cloud service provider located in any DC. Such customized Quality of Service (QoS) expectations are impossible to meet without a very high-quality connectivity design.


Data Center Interconnect Needs


The different kinds of interconnect are listed below:


1. Inter-DC Connectivity


2. DC Cluster Connectivity


3. Intra-DC Connectivity


4. DC to Gateway Locations


5. Inter-City DC Connectivity (Availability Zone)


Let us study each of the interconnect categories listed above in a little more depth.


1. Inter-DC Connectivity: Connectivity among DCs has become one of the key requirements that decide the attractiveness of a DC. This need has gained prominence because the anchor customers of most hyperscale DCs are cloud companies.


Normally, hyperscale cloud companies lease infrastructure in multiple DCs in a given availability zone and connect them using high-speed, redundant and scalable optical fiber cable (OFC) links.


Additionally, many applications work in a multi-cloud environment. All these requirements make a DC attractive if it has direct connectivity with other DCs located in the same city. The preferred means of providing such connectivity is direct fiber connectivity.


However, direct connectivity may not always be feasible, owing to the unavailability of dark fibers to connect DCs over redundant paths.


In such cases, the legacy way of connecting is through leased bandwidth. However, bandwidth-based connectivity inhibits scalability and may not give the best latency experience. In many cases, frequent link switching (flaps) is also observed, owing to challenges in the underlying transport network infrastructure.


2. DC Cluster Connectivity: In many cases, the total infrastructure available for lease in a facility gets consumed quickly. Normally, hyperscale cloud companies prefer clustering of servers with a typical latency of less than 50 microseconds.


Such extreme latency is feasible only when the fiber distance between two facilities does not exceed about 5 km.
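As a quick back-of-the-envelope check (a sketch that assumes the commonly quoted figure of ~5 microseconds of propagation delay per km of fiber, reads the 50-microsecond figure as a round-trip budget, and ignores equipment latency):

```python
# Rough propagation-delay check for a DC cluster link.
# Assumption: ~5 us of one-way delay per km of fiber (light in glass,
# refractive index ~1.5). Switch/transceiver latency is ignored.

PROP_DELAY_US_PER_KM = 5.0

def round_trip_latency_us(fiber_km: float) -> float:
    """Two-way propagation delay over the given fiber length."""
    return 2 * fiber_km * PROP_DELAY_US_PER_KM

print(round_trip_latency_us(5))   # 50.0 -> right at a 50 us cluster budget
print(round_trip_latency_us(10))  # 100.0 -> budget exceeded
```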


If this distance constraint among closely located facilities is met, an existing hyperscale cloud tenant may prefer to lease more infrastructure from the same DC operator.


However, the single biggest requirement in such cases is the availability of abundant spare dark fiber for direct connectivity between servers located in different DCs.


3. Intra-DC Connectivity: While a lot of emphasis is placed on inter-DC connectivity, the complexity of intra-DC connectivity is also extremely high.


Inside a DC, a typical rack consists of multiple shelves. Each shelf is a unit of compute and storage and connects, in a spine-and-leaf architecture, to higher-level switching layers. The connectivity of each shelf to the higher layer runs over multiple redundant fiber links. A schematic of typical intra-DC connectivity is depicted below.


[Schematic: typical intra-DC spine-and-leaf connectivity]
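To get a feel for the cabling volume such a fabric implies, consider a hypothetical leaf-and-spine build in which every leaf switch connects to every spine switch over redundant fiber pairs (the switch counts below are illustrative, not taken from any specific deployment):

```python
# Illustrative link count for a leaf-and-spine fabric.
# Every leaf connects to every spine; each logical link is provisioned
# over redundant fiber pairs. All counts are hypothetical.

leaves = 32       # leaf (top-of-rack) switches
spines = 8        # spine switches
redundancy = 2    # fiber pairs per leaf-spine link

fiber_pairs = leaves * spines * redundancy
print(f"{fiber_pairs} fiber pairs to route, label and record")  # 512
```

Even this modest example yields hundreds of fiber pairs, which is why the equal-length routing and airflow-friendly bundling discussed next matter so much.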

This is a highly complex design which, if not done properly, may lead to multiple challenges in managing DC operations. The design should ensure that all links are laid to almost equal lengths, while also ensuring that the maze of cables does not inhibit airflow within the DC.


The selection of cable for plenum and riser sections also needs specific consideration. Bundling, clamping and routing of cables should be done in such a way that they do not inhibit smooth airflow.


It is very important to select the right products for cabling, connectivity and termination. Maintaining an up-to-date record of each strand of fiber within the DC is key to the efficient management of connectivity, as sketched below.
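A minimal sketch of what such a record might look like follows; the field names are illustrative (real deployments typically use a DCIM tool, but the data captured is essentially the same):

```python
# A minimal, illustrative per-strand fiber inventory record.
# Field names are hypothetical; any DCIM tool captures similar data.

from dataclasses import dataclass

@dataclass
class FiberStrand:
    strand_id: str    # unique label printed at both ends
    a_end: str        # e.g. "Hall-2 / Rack-14 / Patch-Panel-3 / Port-07"
    b_end: str        # e.g. "MMR-1 / ODF-5 / Port-22"
    length_m: float   # measured (OTDR) length, not design length
    status: str       # "in-service", "spare" or "faulty"

inventory = [
    FiberStrand("F-0001", "Hall-2/R14/PP3/P07", "MMR-1/ODF5/P22", 84.5, "in-service"),
    FiberStrand("F-0002", "Hall-2/R14/PP3/P08", "MMR-1/ODF5/P23", 84.7, "spare"),
]
spares = [s for s in inventory if s.status == "spare"]
print(len(spares), "spare strand(s) available")
```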


4. DC to Gateway Locations: There are a variety of gateway locations, based on the specific upstream connectivity they provide.


Typically, the most prominent gateway locations are international gateways, where international submarine cable systems terminate at Cable Landing Stations (CLS). Another category of gateway location is one that hosts cross-connects such as internet exchanges and DC exchanges.


These exchanges are neutral locations where traffic from different entities hosted in a typical DC environment is exchanged. For example, NIXI Mumbai hosts a neutral internet peering exchange, and internet service providers and telcos normally connect very high-capacity links to this site from their respective DC locations.


Hence, very good connectivity to these gateway locations, over redundant low-latency paths, makes a DC attractive to potential tenants.


5. Inter-City DC Connectivity: In a country like India, massive data consumption occurs in almost every part of the country. This has led to a proliferation of hyperscale data centers in almost all major cities.


However, to guard against potential data loss, copies of information are maintained at multiple locations. Cloud companies also offer DC and disaster recovery (DR) facilities in the cloud environment to many of their clients.


All this necessitates continuous replication of data among multiple DCs in different zones. Cloud companies have created their own availability zones, based on multiple risk factors associated with potential disruption threats.


For example, AWS has created two regions in India, with clusters of DCs located in two cities – Mumbai and Hyderabad. DCs located in different cities need to be connected over scalable, resilient, redundant and best-latency paths. Normally, such links are leased by the tenants located inside the DC. For example, Google, present in various DCs in Mumbai and Delhi, may lease bandwidth from enterprise bandwidth providers such as Bharti Airtel and Tata Communications, over multiple redundant paths with specific Quality of Service SLAs.


Many hyperscale cloud companies have started demanding bandwidth on OPGW (optical ground wire) based fiber links. In India, the dominant provider of OPGW bandwidth is PGCIL, which commands a high premium compared to others due to its extremely high availability.


Based on the details provided above, it is extremely important to give due consideration to the connectivity needs of data centers. DCs have evolved rapidly from hosted to virtualized models, with subsequent adoption of microservice-based containerized models.


The role of connectivity has become critical in this rapidly evolving scenario. Good connectivity has become a unique selling point (USP) for any greenfield DC operator. A best-in-class DC without a commensurate connectivity design is akin to a palace built without a good-quality access road! The legacy approach to designing connectivity will limit the ability of a DC to support emerging and future needs.


Key Pillars of the Quality of Connectivity


In the modern hyper-connected world, the most sought-after tenants in any data center are cloud companies. One of the most critical determinants of the attractiveness of any new DC is the quality of connectivity.


Hence, the next logical question is: what factors determine the quality of connectivity that differentiates one DC from another?


The following are a few factors that determine the quality of connectivity available at any DC location:


1. Redundancy – Most DC tenants expect very high service availability. The typical expectation in most cases is > 99.9%, and some cloud companies expect an availability guarantee of more than 99.985% to support Tier III-grade compliance. Such levels of availability cannot be guaranteed without multiple end-to-end redundant paths connecting the DC to other facilities, as the sketch below illustrates. Though it is the bandwidth provider's responsibility to deliver such SLAs, this degree of redundancy is not achievable without the active involvement of the DC owner.
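To put those figures in perspective, here is a simple sketch of the downtime budgets involved and of why redundant paths matter; it assumes path failures are statistically independent, which shared duct routes in the real world often violate:

```python
# Downtime budgets implied by availability targets, and the effect of
# redundant paths (assuming independent failures, a best-case simplification).

HOURS_PER_YEAR = 365 * 24

def downtime_hours_per_year(availability: float) -> float:
    return (1 - availability) * HOURS_PER_YEAR

print(downtime_hours_per_year(0.999))    # ~8.76 h/year at 99.9%
print(downtime_hours_per_year(0.99985))  # ~1.31 h/year at 99.985%

# Two independent paths, each only 99% available on its own:
combined = 1 - (1 - 0.99) * (1 - 0.99)
print(combined)  # 0.9999 -> 99.99%, better than either single path
```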


2. Scalability – We are witnessing continuously increasing data consumption in both the consumer and enterprise space, enabled by the proliferation of good-quality broadband. Increased data consumption leads to an exponential increase in data flow towards the DC; in many cases, traffic towards a few prominent DCs has been observed to double every quarter (see the projection below). In such a scenario, any DC lacking scalability may fall out of favour with potential tenants. There are ways in which a high degree of scalability can be ensured without a greenfield DC operator incurring too much initial capex during the set-up phase.
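If traffic genuinely doubles every quarter, the compounding is dramatic; a two-line projection makes the point (the starting figure and the assumption that the doubling rate holds are purely illustrative):

```python
# Traffic projection under quarterly doubling (illustrative assumptions).
initial_gbps = 100
for quarter in range(1, 9):
    print(f"Q{quarter}: {initial_gbps * 2 ** quarter} Gbps")
# Q1: 200 Gbps ... Q8: 25600 Gbps -- a 256x increase in two years
```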


3. Latency – Until a few years ago, carrier-grade latency (< 50 ms) was good enough to support all services, except for a few customers such as stockbrokers and media companies. Nowadays, however, numerous cloud-hosted consumer applications such as gaming and automation have started demanding latency near 1 millisecond. Customers are willing to pay several times more for a specific committed latency, with hefty penalties imposed for breach of SLA. Theoretically, it is possible to achieve latency of 5 microseconds per km of optical distance; however, achieving such extreme levels requires many design considerations at the very start of setting up the connectivity.
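Using the same ~5 µs/km propagation figure, the fiber-distance budget implied by a latency target can be estimated as follows (a sketch that ignores equipment and queuing delay, so real budgets are tighter):

```python
# Maximum one-way fiber distance for a given one-way latency budget,
# assuming ~5 us/km propagation and ignoring equipment/queuing delay.

PROP_DELAY_US_PER_KM = 5.0

def max_fiber_km(one_way_budget_ms: float) -> float:
    return one_way_budget_ms * 1000 / PROP_DELAY_US_PER_KM

print(max_fiber_km(1.0))   # 200.0 km -> a 1 ms budget caps the path at ~200 km
print(max_fiber_km(0.05))  # 10.0 km -> a 50 us budget caps it at ~10 km
```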


4. Quality of Service (QoS) – While latency has been touched upon specifically, there are other QoS parameters that determine the quality and stability of DC interconnect links: link switching (flaps), packet drops, jitter and throughput variation. It is extremely important to engage proactively with connectivity service providers to ensure very good quality connectivity at any DC location.
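These parameters are all measurable. As one example, inter-packet jitter can be tracked with the smoothed estimator described in RFC 3550; the sketch below uses synthetic timestamps in place of real probe data:

```python
# Smoothed inter-arrival jitter estimator in the style of RFC 3550.
# send_ts / recv_ts are synthetic here; in practice they would come
# from probe packets exchanged between the two DC endpoints.

def rfc3550_jitter(send_ts, recv_ts):
    jitter, prev_transit = 0.0, None
    for s, r in zip(send_ts, recv_ts):
        transit = r - s                      # one-way transit time
        if prev_transit is not None:
            d = abs(transit - prev_transit)  # inter-arrival variation
            jitter += (d - jitter) / 16      # exponential smoothing (RFC 3550)
        prev_transit = transit
    return jitter

send = [0.000, 0.020, 0.040, 0.060]       # packets sent every 20 ms
recv = [0.0051, 0.0253, 0.0450, 0.0662]   # arrivals with variable delay
print(f"jitter ~ {rfc3550_jitter(send, recv) * 1000:.3f} ms")
```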


5. Optimized Connectivity Capex – While a number of considerations for ensuring good-quality connectivity at any given DC have been listed, it is also important to meet all these expectations without incurring significant capex. In practice, the initial capex for setting up best-in-class connectivity is borne by the DC operator without any direct revenue expectation. Hence, it is very important to ensure that all the above objectives are fulfilled at the best possible total cost of ownership (TCO).


