The cable guys

17 February 2020

If you think of an enterprise as a human body, the data centre is the heart that pumps the lifeblood around it.

To stretch the analogy a little further, numerous ailments can plague the data centre’s anatomy as time goes on.

One of those is improper cabling, which can spell trouble for the entire enterprise, leading to things like “spaghetti” cabinets, difficult equipment installations and extended periods of troubleshooting and maintenance.

Indeed, in the early days of interconnection many managers, for whatever reason, ignored structured cabling. However, as industry standards took hold, quality generally improved.

Quite simply, the failure to properly manage this critical part of data centre infrastructure can cause serious issues, from increased operating costs to more expensive outages. Yet while cabling ostensibly requires technical skill, is that really a defence if things go wrong as a result of bad management? Mike Connaughton, technical sales manager for data centre solutions at Nexans, says that, depending on the data centre, it need not be such a difficult process, provided cable management is not a mere afterthought.

“With cable management in mind during the design phase, it can be made simple,” he says. “But I have also seen cases where cable management was added later, for a variety of reasons, and these were more complicated. For example, I have seen a data centre that had some halls using below-floor cabling pathways and overhead cabling in other halls. The transitions were not always elegant.”

Matt Edgely, commercial director at TeleData UK, agrees, and says the right amount of capacity, route, diversity and structural planning work must be put in at the outset. “Investing that bit more into your day one infrastructure can save a huge amount of retrospective planning and workarounds in the future – which is where a large amount of complexity can come into play,” he says. “Following best practice and doing the simple things right, like auditing, labelling and removing obsolete cables, makes for a far less complex and far more effective in-house cabling system.”

For Cindy Ryborz, marketing manager for data centres EMEA at Corning Optical Communications, there are many factors that can affect the complexity of any data centre project, such as the architecture (ToR, EoR, MoR, the distance between racks and rows), whether it’s copper or fibre, the number of cabinets or ports, possible capacity additions, as well as security and regulations that need to be met.

“Corning’s project to maximise capacity at the 9,717 square metre Telehouse North colocation data centre in London, for example, required a new cable management solution that would provide flexible and future-ready, intrabuilding connectivity to each of the five floors and customer colocation suites – an interesting challenge,” she says.


However, Alberto Zucchinali, data centre solutions and services manager at Siemon, says cable management is one of the most challenging topics in data centres. “Many different cables (data communications, power etc.) should be arranged in a proper way – ideally kept to a limited size, routed properly, flexible, and able to accommodate frequent moves, adds and changes,” he adds. “They should also not interfere with each other, and data centre operations should be implemented in a proper way – this requires good design in order to keep power and data separate and, where possible, make good use of often limited unused space. Moreover, poor or absent labelling and colour coding makes it difficult to locate cables for troubleshooting, testing or repair, to install new equipment, or to remove redundant cables after equipment has been moved or upgraded.”
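To show the sort of discipline Zucchinali is describing, here is a minimal sketch of a structured labelling and colour-coding scheme. The label format and colour mapping are hypothetical examples invented for illustration, not any vendor’s or standard’s convention, although schemes such as ANSI/TIA-606 define similar ideas.

```python
# Illustrative only: a hypothetical labelling scheme for cable runs,
# loosely in the spirit of structured labelling standards such as TIA-606.

# Hypothetical colour code by cable function
COLOUR_CODE = {
    "data": "blue",
    "storage": "green",
    "management": "yellow",
    "power": "red",
}

def cable_label(hall: str, rack: str, panel: str, port: int, function: str) -> str:
    """Build a location-based label for one end of a cable run."""
    colour = COLOUR_CODE.get(function, "white")
    return f"{hall}-{rack}-{panel}-P{port:02d} ({function}/{colour})"

# Example: port 7 on panel B of rack A03 in hall DH1, carrying storage traffic
print(cable_label("DH1", "A03", "B", 7, "storage"))
# -> DH1-A03-B-P07 (storage/green)
```

With every cable carrying a label of this kind at both ends, tracing a cord for troubleshooting or safely removing a redundant run becomes a lookup rather than a manual hunt.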

Airflow (I won’t bore you with another human biology comparison) is another key element of cable management, and Connaughton says there are two specific areas where cabling can have a significant impact. “One is within the rack, where sloppy cabling practices can block the direct airflow from the equipment fans,” he adds. “Both the inlet and exhaust fans need to be kept clear. The use of proper patch cord management, reduced diameter cords and appropriate panels are all ways of mitigating this problem. The other is underfloor cabling, which can create air dams where the cool air is trapped and not allowed to move to the vents below the racks. Since it is below the floor, the ‘out of sight, out of mind’ problem can exist. Reduced cable diameter and proper pathway fill ratios are key strategies to prevent this problem.”
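For readers who want to see the arithmetic behind “proper pathway fill ratios”, the back-of-envelope sketch below compares the total cable cross-section against the tray cross-section. The 40% ceiling used here is only a commonly quoted rule of thumb; the real limit should come from the applicable standard and the tray manufacturer’s data.

```python
import math

def pathway_fill_ratio(cable_diameters_mm: list[float],
                       tray_width_mm: float,
                       tray_depth_mm: float) -> float:
    """Sum of cable cross-sectional areas divided by the tray's cross-sectional area."""
    cable_area = sum(math.pi * (d / 2) ** 2 for d in cable_diameters_mm)
    tray_area = tray_width_mm * tray_depth_mm
    return cable_area / tray_area

# Example: 200 cables of 6 mm diameter in a 300 mm x 100 mm tray
ratio = pathway_fill_ratio([6.0] * 200, 300, 100)
print(f"Fill ratio: {ratio:.0%}")   # roughly 19%
if ratio > 0.40:                    # illustrative ceiling only
    print("Over the rule-of-thumb limit - larger pathway or smaller cables needed")
```

The same sum also makes the case for reduced-diameter cables: halving the diameter cuts each cable’s cross-section to a quarter, which is why slimmer cords free up so much pathway and airflow.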

For Edgely, good cable management and following simple best practice guidelines, such as calculated cable lengths and adhering to routes, can have a huge impact on the effectiveness of under-floor cooling systems. “Bad cable management can cause barriers and uneven airflow, resulting in an imbalance of forced air pressure,” he adds. “It can also limit a data centre’s ability to make changes to cooling systems in the future for fear of disturbing cables.”

So, when it comes to data centres, what is the most common challenge when you’re kitting one out?

“Within a multi-tenanted data centre where you could be cross connecting from any rack to any rack at any time, it’s making sure you have the structure and capability to do so from the outset,” says Edgely. “This takes you back to the importance of planning again. Of course, there is the issue of ongoing quality control which comes down to process, people, training and management.”

Connaughton’s colleague, product manager Michael Wang, says the most common challenge in cable management is always how to provide a scalable management solution for cabling. This splits into management of horizontal/backbone cable and management of patch cords.

“Data centres have different types of infrastructure and installation methods,” he says. “For example, enterprises (as opposed to hyperscale or cloud data centres) prefer to build data centres from day one using structured cabling and pathways to handle horizontal cable and patch cords. On the other hand, many data centres now prefer a modular approach, either with cabinets installed pod by pod or with pre-loaded cabinets.”

Wang says some will only put in power and infrastructure on day one and install cabinets on demand. The selection of pathway is therefore more for backbone than for horizontal cables, which can be run through pathways placed directly on top of cabinets and/or inside them. “The important part of a pathway system today is to allow cable to safely enter cabinets which may not have been available from day one,” he continues. “Examples include the use of ‘waterfalls’ in FIBREROUTE or overhead patching frames. We also need to consider cable diameter. Some of the mega projects now require 144 or much higher fibre count cables, so cable makers are now designing advanced cables with higher fibre counts that are much smaller than in the past.”

Density is also increasing in the cabinets – which requires management – and Wang says his firm now offers Ultra High Density (UHD) cabling, not only for fibre but also for copper. “Slimflex patch cords are 30-50% smaller than before,” he says. “We know that new Ethernet technologies will demand higher density connectors than in the past, so this will only become more challenging for customers. Data centres also require a lot of fan-out cable or patch cords for new Ethernet applications. Solutions to handle fan-out cable and routing for those cables/cords will be important for customers.”

Wang says he has seen some customers “just hang all the fan-out connectors on the cabinets” which will definitely cause a headache in future. “So, we need to provide robust designs for the transition point of the fan-out cable and even consider those as horizontal cables instead of patch cords,” he adds. “The ENSPACE panel is a good example to allow customers to manage patch cords in a scalable way. The concept of an individual drawer allows customers to manage patch cords according to business demand.”

Paul Cave, technical pre-sales manager for Excel Networking Solutions, says achieving the correct level of resilience in the initial design, as well as providing the correct ‘expansion’ space, is the most common problem. “Too many people underestimate the correct level of spare capacity within containment,” Cave adds. “This is then compounded by M&E contractors ‘value engineering’ the containment to reduce costs. Too many projects design for today rather than planning for tomorrow. Some DCs I have been in have been in use for over 10 years, with multiple iterations of technology upgrades, and are now reaching their limit – not due to space or cooling or even power, but because they cannot physically run any more cables to them.”

Although prior planning usually prevents poor performance, there must be a number of situations where slapdash cable management has led to data centres – for want of a better expression – ‘going under’, no?

“It doesn’t happen often, but it has happened,” says Edgely. “Shoddy management of one aspect of the data centre usually flows through into other areas and customers are savvy enough to know this and naturally this affects sales figures.”

Cave concurs, and while he “cannot mention any names”, says one offender is operated by a major high-street supermarket.

“The thing that struck me first when I entered the room was the noise of the CRAC units on the walls – they were working at almost maximum capacity,” he says. “The room wasn’t actually overly hot; the problem was the design, and it highlights a couple of the points already raised. In this particular data centre, all the cables were run underneath the raised floor, which was also the air handling space for the cold air supply – no cables were routed at high level.”

Cave says that when some of the tiles were raised, the culprit “was obvious”. The facility had undergone a number of equipment upgrades over the years, but none of the old redundant cable had ever been removed, as staff hadn’t noted which cables were unused.

“Whilst this data centre did not ultimately fail, it did result in a very expensive transition plan, which involved the lease of an external data hall whilst this particular one was completely redesigned and rebuilt – a very long and expensive process,” he continues. “It must be noted that the original data centre had first been designed and built in the mid-1990s, when computer equipment and connectivity were totally different, and they just kept trying to fit more equipment in.”

Cave is in good company when it comes to witnessing bad practices.

“The worst example that I have ever seen was a case where the pathways were so full of abandoned cable that there were several full-time employees whose only job was to trace and remove abandoned cable from the pathway,” says Connaughton. “It was an agonisingly slow process to watch – most of the cables had no labelling and no consistent pathway, so each cable had to be manually traced from beginning to end and cut out along the way.”

Zucchinali says he once received a phone call from an installer who asked Siemon to “retrofit” a brand-new data centre because cable management had simply been forgotten. “No cabling distribution was considered during the entire design, and in the absence of any structured connectivity this would have quickly created a jungle of flying patch cords all over the room,” he adds.

Although it’s obvious, or it should be, that advances in cable quality play a key role in the job a data centre does, has there been anything new and innovative in cabling to support the cable network in a data centre?

“Not specifically that we can think of,” says Edgely. “For us, we have similar challenges to those faced 15 years ago – although there have been some positive developments in some consumables and tools.”

For Michael Adams, solution design engineer, operations at Interxion, “there are a huge amount of cabling products on the market with a range of different attributes that could support the cable networks” within a data centre. “That said, every data centre has different requirements and not every product can provide a catch-all solution. For example, MPO cabling can vary in usefulness from site to site, so we stick to the splicing method,” he says. “Where we do see value is the high density panels within our MMRs, which support our customers who have a significant demand for cross-connects.”

Connaughton says that, “generally speaking”, a few developments over the past few years stand out.

“Reduced diameter cables – there is a practical limit to how small a cable can be manufactured and still be handled properly, but this reduction does a lot to help manage the cabling in the pathway. Aside from taking up less space, it also can make the cables more flexible to allow for neater bundles.”

In addition, he says, the weight reduction can simplify the pathway requirements.

“Polarity switch connectors – polarity has always been important, but as parallel optics became more popular, the fixed relationship between the fibres in an MPO/MTP connector made polarity critical,” he says. “New connector designs allow for the polarity of a connector to be altered in the field. In the past, it was common to install an additional patch cord to make this correction.” Connaughton also highlights patch panel design. “Integrating cable management into the patch panel has helped keep the panel neater,” he continues. “This has shown up in several forms, including angled panels and sliding trays. This has become especially important as densities have increased.”
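To make the polarity point concrete, the sketch below shows the fibre-position mapping for a 12-fibre MPO trunk: Method A keeps positions straight through, while Method B reverses them, so fibre 1 arrives at position 12 and so on. It is a simplified illustration of the standard polarity methods, not a description of any vendor’s field-changeable connector.

```python
# Simplified view of MPO trunk polarity for a 12-fibre connector.
# Method A: straight through (position 1 -> 1, 2 -> 2, ...)
# Method B: reversed        (position 1 -> 12, 2 -> 11, ...)

FIBRES = 12

def method_a(position: int) -> int:
    return position

def method_b(position: int) -> int:
    return FIBRES + 1 - position

for p in range(1, FIBRES + 1):
    print(f"Tx fibre {p:2d} -> Rx position  A:{method_a(p):2d}  B:{method_b(p):2d}")
```

If the wrong method is cabled in, transmit fibres land on transmit ports at the far end, which is why a field-adjustable connector (or, historically, a corrective patch cord) is needed to swap the mapping.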

Cave says the last development that had a key impact on DCs was the MPO (MTP) connector, which allowed for higher density fibre connectivity; however, that connector has been around for over 20 years. “Most major DCs are based on singlemode fibre, which effectively has unknown bandwidth, and the combination of the two seems to be the way forward. We have had Category 8 and OM5, but the take-up of these has been extremely limited to say the least. Although Category 8 has been around since 2016, there is still no equipment that can use it, and with OM5 cable and connectivity costing more than singlemode, it is seen as a dead-end by some.”

Apart from increased operating costs and expensive outages mentioned at the beginning of the article, there are plenty of other problems that can be faced.

Connaughton says one is “an aesthetics issue”, because a cabling system that is sloppy can lead to other, harder to measure problems. He also says “sloppiness begets sloppiness” – if management allows the cabling to be unkempt, what else will it overlook? “This goes along with the ‘broken windows’ theory of civic management – keeping everything neat and tidy encourages everyone to want to keep it that way,” Connaughton adds.

Last but not least, he cites utilisation rates. “While this is a contributor to operating costs, poor cabling practices can make it extremely difficult to know whether there are available ports for additional connections,” Connaughton says. “This is an area where Automated Infrastructure Management (A.I.M.) systems can play an important role.”
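The port-availability problem Connaughton describes is essentially an inventory question: which panel ports are patched and which are free. Real AIM systems (standardised in ISO/IEC 18598) answer it with intelligent hardware that detects patching automatically; the sketch below is only a toy software model of the record-keeping involved, with made-up panel and rack names.

```python
# Toy model of the record-keeping an AIM system automates:
# track which panel ports are patched so spare capacity is known, not guessed.

from dataclasses import dataclass, field

@dataclass
class PatchPanel:
    name: str
    ports: int
    patched: dict[int, str] = field(default_factory=dict)  # port -> far end

    def connect(self, port: int, far_end: str) -> None:
        if port in self.patched:
            raise ValueError(f"{self.name} port {port} already in use")
        self.patched[port] = far_end

    def free_ports(self) -> list[int]:
        return [p for p in range(1, self.ports + 1) if p not in self.patched]

# Hypothetical example
panel = PatchPanel("MMR1-ODF-A", ports=24)
panel.connect(1, "Suite3-RackB07-P12")
print(f"{panel.name}: {len(panel.free_ports())} of {panel.ports} ports free")
```

When the cabling itself is a tangle, no record like this can be trusted, which is exactly the utilisation problem he is pointing at.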

Ryborz says signal-loss can be a common problem as moves, adds and changes are made within the data centre over time.

“Bend-insensitive fibre is an effective solution here and can exhibit up to a tenfold reduction in loss at the point of the bend when compared to conventional multimode fibre,” she adds. “This protects the system margin, or power budget headroom, and prevents unscheduled downtime. Problems related to connectivity can also cause the failure of a single component or even of the whole network, so it becomes extremely important that clean components (connectors/adaptors) are used in the installation process.”
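The “power budget headroom” Ryborz mentions can be checked with simple arithmetic: add up the expected connector, splice and fibre losses on a channel and compare the total against the insertion-loss allowance of the application. The figures in the sketch below are purely illustrative placeholders; real budgets should be taken from the relevant Ethernet standard and the component data sheets.

```python
def channel_loss_db(length_km: float,
                    fibre_loss_db_per_km: float,
                    connectors: int, loss_per_connector_db: float,
                    splices: int, loss_per_splice_db: float) -> float:
    """Estimated end-to-end insertion loss of an optical channel."""
    return (length_km * fibre_loss_db_per_km
            + connectors * loss_per_connector_db
            + splices * loss_per_splice_db)

# Illustrative figures only - check the standard and the data sheets
loss = channel_loss_db(length_km=0.15, fibre_loss_db_per_km=3.0,
                       connectors=4, loss_per_connector_db=0.3,
                       splices=2, loss_per_splice_db=0.1)
budget = 2.6  # e.g. an application's maximum channel insertion loss, in dB
print(f"Channel loss {loss:.2f} dB, headroom {budget - loss:.2f} dB")
```

A tight bend or a dirty end-face can easily add several tenths of a dB, which is why protecting that last fraction of headroom matters so much on faster links.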

So, it looks as though we are there with cables, but don’t data centres also need to be wary of connectors/adaptors? After all, the cable might be great, but unless the connector is up to standard the end result won’t be.

Well, when it comes to maintenance there’s one thing all the contributors agree on.

“Cleaning, cleaning, cleaning,” says Connaughton. “The single biggest problem in optical connections is cleanliness. Proper cleaning of the installed connector and the test lead is critical, especially since newer and faster network speeds tend to coincide with smaller power budgets.”

Cave has the stats to back it up, too. “Fluke research states that 85% of all fibre faults are end-face contamination, and NTT states in excess of 80%, so this is the number one problem within DC connectivity and one I have experienced on numerous occasions,” he says. “The main recommendation I give anyone handling fibre within a DC is to get the best fibre inspection kit they can afford and then learn how to clean fibre correctly.”

If there’s one thing to remember from this, it’s to keep it clean, guys.