Data Centres

What is a Data Centre?


A data centre is a physical room, building or facility that houses IT infrastructure for building, running and delivering applications and services. It also stores and manages the data associated with those applications and services.

Data centres play a fundamental role in our society and digital economy. Everything that happens online is housed in a data centre. Data centres host many digital applications and thus form part of the foundation of the Internet. In these buildings full of servers and other digital equipment, videos and other files are stored, important software runs, and data is exchanged between different networks, making each facility a data distribution hub. Data centres support a wide range of activities across government, business and society.



Types of Data Centers


There are five common types of data centers:

  1. Enterprise Data Center
  2. Multi-Tenant Data Centre/Colocation Data Centers
  3. Hyperscale Data Centers
  4. Edge/Micro Data Centers
  5. Container/Modular Data Centers


Enterprise Data Center

An enterprise data center is a private data center facility that supports a single organization. These types of data centers are best suited for companies that have unique network needs—or companies that do enough business to take advantage of economies of scale. Enterprise data centers are custom built to be compatible with the organization’s distinctive enterprise apps and processes.

Multi-Tenant Data Centre/Colocation Data Centers

Multi-tenant data centers (also known as colocation data centers) offer data center space to businesses that want to host their computing hardware and servers offsite. These facilities provide the proper data center components—power, cooling, security and networking equipment—needed to do so.

Companies that don’t have the space for their own enterprise data center—or an IT team to dedicate to managing one—often choose a colocation data center. This allows them to redirect financial and personnel resources to other initiatives.

Hyperscale Data Centers

Hyperscale data centers are designed to support very large-scale IT infrastructure. According to Synergy Research Group, there are 700 hyperscale data centers in existence, which is twice as many as there were five years ago. While this may be a small percentage compared to the number of data centers across the globe (there are more than 7 million data centers worldwide), hyperscale data centers are on the rise. Interesting fact: Amazon, Microsoft and Google account for more than half of all hyperscale data centers.

Like enterprise data centers, hyperscale data centers are owned and operated by the company they support—just on a much larger scale for cloud computing platforms and big data storage. A typical hyperscale data center has at least 5,000 servers, 500 cabinets and 10,000 square feet of floor space.

Edge/Micro Data Centers

The demand for instantaneous connectivity, expansion of IoT and need for analytics and automation are driving the growth of edge solutions so computing occurs closer to the actual data.

These types of data centers are small and located near the people they serve to handle real-time data processing, analysis and action, making low-latency communication with smart devices possible. By processing data services as close to end users as possible, edge data centers allow organizations to reduce communication delay and improve the customer experience.

As innovative technologies continue to transform the way we live and work—from robots, telemedicine and 5G to autonomous vehicles, wearable healthcare technology and smart electrical grids—we’ll continue to see more of these types of data centers emerge.

Container/Modular Data Centers

A container data center is usually a module or shipping container that’s packaged with ready-made, plug-and-play data center components: servers, storage, networking gear, UPS, generators, air conditioners, etc.

The concept of a container/modular data center was first introduced only about 15 years ago, and it's now used in both temporary and permanent deployments. You'll often find modular data centers on construction sites or in disaster areas (to support alternate care sites during the pandemic, for example). In permanent environments, they're deployed to free up space inside a building or to allow an organization to scale quickly to accommodate new technology, such as adding IT infrastructure to an educational institution to support digital classrooms.



Data Center Tier Ratings & Redundancy

Companies also rate data centers by tier to highlight their expected uptime and reliability.

Let’s break it down:

•      Tier 1: A Tier 1 data center has a single path for power and cooling and few, if any, redundant and backup components. It has an expected uptime of 99.671% (28.8 hours of downtime annually).

•      Tier 2: A Tier 2 data center has a single path for power and cooling and some redundant and backup components. It has an expected uptime of 99.741% (22 hours of downtime annually).

•      Tier 3: A Tier 3 data center has multiple paths for power and cooling and systems in place to update and maintain it without taking it offline. It has an expected uptime of 99.982% (1.6 hours of downtime annually).

•      Tier 4: A Tier 4 data center is built to be completely fault-tolerant and has redundancy for every component. It has an expected uptime of 99.995% (26.3 minutes of downtime annually).
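
The downtime figures above follow directly from the uptime percentages: annual downtime is simply (1 - uptime) × 8,760 hours. Below is a minimal Python sketch of that arithmetic; note that the Tier 2 percentage works out to roughly 22.7 hours, which is usually rounded to the 22 hours quoted above.

```python
# Annual downtime implied by each tier's expected uptime percentage.
# 8,760 hours in a non-leap year; results approximately match the list above.

HOURS_PER_YEAR = 365 * 24  # 8,760

tiers = {
    "Tier 1": 99.671,
    "Tier 2": 99.741,
    "Tier 3": 99.982,
    "Tier 4": 99.995,
}

for tier, uptime_pct in tiers.items():
    downtime_hours = (1 - uptime_pct / 100) * HOURS_PER_YEAR
    if downtime_hours >= 1:
        print(f"{tier}: {uptime_pct}% uptime -> ~{downtime_hours:.1f} hours of downtime per year")
    else:
        print(f"{tier}: {uptime_pct}% uptime -> ~{downtime_hours * 60:.1f} minutes of downtime per year")
```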

Which tier of data center you need depends on your service SLAs and other factors.

In addition to hardware, where you decide to build your data center can have a big impact on your results.

The four data center tiers are progressive. Data centers can move up and down the ratings, and each level includes the requirements of the lower rankings.

While reliability goes up with higher levels, tier 4 is not always a better option than a data center with a lower rating. Each tier fits different business needs, so tiers 3 or 4 (the most expensive options) are often an over-investment.

Tier 1 Data Center

Tier 1 infrastructure provides the power and cooling capacity to support the full IT load. These facilities have a single path for power and cooling, and there is no redundancy for any critical system.

The staff must shut down operations entirely for regular maintenance or emergency repairs.

The requirements for a tier 1 facility are:

•      An uninterruptible power supply (UPS) for power spikes and outages.

•      A designated space for IT systems.

•      An engine-generator.

•      Dedicated cooling equipment that runs outside office hours.

Tier 1 data centers also require systems, protocols, and equipment that ensure the data center is up and running beyond standard office hours (nights and weekends).

Tier 2 Data Center

Tier 2 infrastructure has all the features of a tier 1 data center but with added backup options. These data centers offer better protection against disruptions with:

•      Extra engine generators.

•      Energy storage.

•      Chillers.

•      Raised floors.

•      UPS modules.

•      Pumps.

•      Heat rejection equipment.

•      Fuel tanks and cells.

•      Extra cooling units.

Like tier 1, tier 2 centers rely on a single distribution path for power and cooling, so these facilities are still vulnerable to unexpected disruptions. The uptime is better than that of a tier 1 facility, but tier 2 clients can still experience up to 22 hours of downtime per year.

Tier 3 Data Center

A tier 3 data center is a concurrently maintainable facility with multiple distribution paths for power and cooling. Unlike tier 1 and 2 data centers, a tier 3 facility does not require a total shutdown during maintenance or equipment replacement.

A tier 3 facility requires all the components present in a tier 2 data center, but these facilities must also have N+1 availability:

•      "N" refers to the necessary capacity to support the full IT load.

•      "+1" stands for an extra component for backup purposes.

N+1 redundancy ensures an additional component starts operating if the primary element runs into a failure or the staff removes the part for planned maintenance.
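
As an illustration of N+1 sizing, here is a minimal sketch; the 900 kW load and 300 kW unit capacity are hypothetical figures, not taken from any real facility.

```python
import math

def units_required(it_load_kw: float, unit_capacity_kw: float, redundancy: str = "N+1") -> int:
    """Number of identical units (e.g. UPS or cooling modules) needed for a given IT load.

    N   = the minimum number of units that covers the full load.
    N+1 = one spare on top of N, so a single unit can fail or be taken
          out for maintenance without dropping below full capacity.
    """
    n = math.ceil(it_load_kw / unit_capacity_kw)
    if redundancy == "N":
        return n
    if redundancy == "N+1":
        return n + 1
    raise ValueError(f"Unknown redundancy model: {redundancy}")

# Hypothetical example: 900 kW of IT load served by 300 kW units.
print(units_required(900, 300, "N"))    # 3 units just cover the load
print(units_required(900, 300, "N+1"))  # 4 units: 3 to carry the load + 1 spare
```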

Tier 3 data centers also require a backup solution that can keep operations running in case of a local or region-wide power outage. The facility must ensure equipment can continue to operate for at least 72 hours following an outage.

Tier 3 setups have a significant jump in availability when compared to lower ratings. Clients that rely on a tier 3 data center experience up to 1.6 hours of downtime per year.

Tier 4 Data Center

Tier 4 data centers add fault tolerance mechanisms to the tier 3 list of requirements. These data centers have multiple physically isolated systems that act as redundant components and distribution paths. Besides all the tier 3 conditions, a tier 4 facility must ensure:

•      All components have the support of two generators, two UPS systems, and two cooling systems.

•      Each distribution path is independent so that a single failure in one does not cause a domino effect with other components.

•      Operations continue to run for a minimum of 96 hours following a local or regional power outage.

•      The power source does not connect to any external source.

The separation between redundant components is vital for a tier 4 data center. Physical separation prevents a local event from compromising both systems.

Tier 4 data centers either have 2N or 2N+1 redundancy:

•      2N redundancy (or N+N) means the facility has a wholly mirrored, independent system on stand-by. If anything happens to a primary component, an identical backup replica starts operating to ensure continued operations.

•      The 2N+1 model provides twice the operational capacity (2N) and an additional backup component (+1) in case a failure happens while a secondary system is active.
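
Extending the same hypothetical sizing sketch from the N+1 section, the redundancy models compare as follows (all figures are illustrative only).

```python
import math

def units_for_model(it_load_kw: float, unit_capacity_kw: float, model: str) -> int:
    """Units needed under the common redundancy models (illustrative sketch only)."""
    n = math.ceil(it_load_kw / unit_capacity_kw)  # N: just enough to carry the load
    return {
        "N": n,             # no spare capacity
        "N+1": n + 1,       # one spare component (tier 3)
        "2N": 2 * n,        # a fully mirrored, independent second system
        "2N+1": 2 * n + 1,  # mirrored system plus one extra spare
    }[model]

# Same hypothetical load as before: 900 kW of IT load, 300 kW units.
for model in ("N", "N+1", "2N", "2N+1"):
    print(f"{model:5} -> {units_for_model(900, 300, model)} units")
```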

A tier 4 facility can ensure clients do not experience more than 26.3 minutes of downtime annually. Tier 4 service level agreements (SLAs) do not guarantee 100% uptime because there is a slight chance a component could run into a problem while its redundant counterpart is undergoing maintenance.



Where are they located?

Data Centers are located all over the world. There are over 8,000 data centers worldwide.

Top 20 Countries


Data Centers by Region




Who is building them?

The very large software companies such as Microsoft, Amazon, Google and Apple have been building their own data centers for years. However, the majority are built by third parties.

These third parties then rent out server space to customers; Equinix, Digital Realty, NTT and Amazon Web Services are just a few examples.



Typical Design


How to design and build a data center

A data center is the technological hub of modern enterprise operations. The data center provides the critical IT infrastructure needed to deliver resources and services to business employees, partners and customers around the world.

Currently, data center architectures include traditional on-premises centers, colocation facilities, cloud data centers, edge computing centers, and centers utilizing modular or containerized designs. Each serves different needs and scales according to demand.

Constructing a data center commonly starts with thorough planning of the layout and infrastructure, followed by site selection, calculating power and cooling requirements, and ensuring compliance with industry standards. The actual construction phase carefully adheres to the predefined architectural design and planning.

Design standards for data centers are guided by industry best practices and certifications such as the Uptime Institute's Tier Standard and ANSI/TIA-942. These standards dictate the specifications for redundancy, fault tolerance and overall reliability.

Data Center Design & Infrastructure Standards

Below are just some of the major data center design and infrastructure standards:

•      Uptime Institute Tier Standard. The Uptime Institute Tier Standard focuses on data center design, construction and commissioning, and it is used to determine the resilience of the facility as related to four levels of redundancy/reliability.

•      ANSI/TIA 942-B. This standard involves planning, design, construction and commissioning of building trades, as well as fire protection, IT and maintenance. It also uses four levels of reliability ratings, implemented by BICSI-certified professionals.

•      EN 50600 series. This series of standards focuses on IT cable and network design and has various infrastructure redundancy and reliability concepts that are loosely based on the Uptime Institute's Tier Standard.

•      ASHRAE. The ASHRAE guidelines -- which are not specific to IT or data centers -- relate to the design and implementation of heating, ventilation, air conditioning and refrigeration systems, and are widely used to guide data center cooling and environmental conditions.

What are the main components of a data center?

There are two principal aspects to any data center: the facility, and the IT infrastructure that resides within the facility.

These aspects coexist and work together, but they can be discussed separately. 

Facility

The facility is the physical building used for the data center. In simplest terms, a data center is just a big open space where infrastructure will be deployed. Although almost any space has the potential to operate some amount of IT infrastructure, a properly designed facility considers the following array of factors:

Space. There must be sufficient floor space, a simple measure of square feet or square meters to hold all the IT infrastructure that the business intends to deploy now and in the future. The space must be located on a well-considered site with affordable taxes and access. The space is often subdivided to accommodate different purposes or use types.

Power. There must be adequate power in watts, often as much as 100 megawatts, to operate all the IT infrastructure. Power must be affordable, clean (free of fluctuation or disruption) and reliable. Renewable and supplemental/auxiliary power sources must also be considered.

Cooling. The enormous amount of power delivered to a data center is converted into computing work and, ultimately, heat, which must be removed from the IT infrastructure using conventional HVAC systems as well as other, less conventional cooling technologies.

Security. Considering the value of the data center and its critical importance to the business, the data center must include controlled access using a variety of tactics, ranging from employee badge access to video surveillance.

Management. Modern data centers typically incorporate a building management system (BMS) designed to help IT and business leaders oversee the data center environment in real time, including oversight of temperature, humidity, power and cooling levels, as well as access and security logging. 
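
As a rough illustration of the kind of real-time check a BMS performs, here is a minimal sketch; the sensor names and limits are hypothetical and not taken from any particular product.

```python
# Minimal sketch of a BMS-style threshold check on environmental readings.
# The sensor names and limits below are hypothetical assumptions.

ASSUMED_LIMITS = {
    "cold_aisle_temp_c": (18.0, 27.0),      # hypothetical acceptable band
    "relative_humidity_pct": (20.0, 80.0),  # hypothetical acceptable band
}

def check_reading(sensor: str, value: float) -> str:
    low, high = ASSUMED_LIMITS[sensor]
    if value < low:
        return f"ALARM: {sensor} = {value} is below the lower limit of {low}"
    if value > high:
        return f"ALARM: {sensor} = {value} is above the upper limit of {high}"
    return f"OK: {sensor} = {value}"

print(check_reading("cold_aisle_temp_c", 24.5))
print(check_reading("relative_humidity_pct", 85.0))
```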

Infrastructure

An infrastructure represents the vast array of IT gear deployed within the facility. This is the equipment that runs applications and provides services to the business and its users. A typical IT infrastructure includes the following components:

Servers. These computers host enterprise applications and perform computing tasks.

Storage. Subsystems, such as disk arrays, are used to store and protect application and business data.

Networking. The gear needed to create a business network includes switches, routers, firewalls and other cybersecurity elements.

Cables and racks. Miles of wires interconnect IT gear, and physical server racks are used to organize servers and other gear within the facility space.

Backup power. Uninterruptible power supply (UPS), flywheel and other emergency power systems are critical to ensure orderly infrastructure behaviour in the event of a main power disruption.

Management platforms. One or more data center infrastructure management (DCIM) platforms are needed to oversee and manage the IT infrastructure.

Reason for Design

Just to be clear, there isn't one typical design for a data center.

Although a number of providers like to duplicate their design across multiple projects, you will usually see differences. The differences between companies can be significant, but the fundamentals are the same. Each company is trying to create a design that maximises power efficiency: the more efficiently the power is used, the cheaper the facility is to run.
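
One widely used industry measure of this efficiency (mentioned here only as an illustration) is power usage effectiveness (PUE), the ratio of total facility power to the power consumed by the IT equipment alone. A minimal sketch with hypothetical figures:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power.

    A value of 1.0 would mean every watt goes to the IT gear; real facilities
    are higher because of cooling, UPS losses, lighting and so on.
    """
    return total_facility_kw / it_load_kw

# Hypothetical figures: the site draws 1,400 kW in total, of which 1,000 kW
# is consumed by the IT equipment itself.
print(pue(1400, 1000))  # 1.4 -> 40% overhead on top of the IT load
```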

A good example of this is the “Hot Aisle” “Cold Aisle” design.

“Hot Aisle” “Cold Aisle” Design

To maximise cooling efficiency, the design team aligns the server racks so that their intakes all face a shared "Cold Aisle" and their exhausts all face a shared "Hot Aisle". Cool air supplied by the CRAC units fills the cold aisle and is drawn through the servers; as it cools them, the air itself heats up and is expelled into the hot aisle. This hot air is then circulated back to the CRAC units to be re-cooled and supplied to the cold aisle again.
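
To get a feel for the airflow involved, here is a rough sensible-heat sketch; the heat load and supply/return temperatures are hypothetical, with standard air properties assumed.

```python
# Rough airflow estimate for a cold aisle / hot aisle arrangement.
# Sensible heat: Q = m_dot * cp * dT, so m_dot = Q / (cp * dT).
# All load and temperature figures are hypothetical assumptions.

heat_load_kw = 300.0    # heat rejected by the servers in one data hall (hypothetical)
supply_temp_c = 22.0    # cold aisle supply temperature (hypothetical)
return_temp_c = 34.0    # hot aisle return temperature (hypothetical)
cp_air = 1.005          # specific heat of air, kJ/(kg*K)
rho_air = 1.2           # density of air, kg/m^3

delta_t = return_temp_c - supply_temp_c
mass_flow = heat_load_kw / (cp_air * delta_t)  # kg of air per second
volume_flow = mass_flow / rho_air              # m^3 of air per second

print(f"Required airflow: ~{mass_flow:.1f} kg/s (~{volume_flow:.1f} m^3/s)")
```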

Typical Layouts




CSA (Civil, Structural & Architectural)

The underground services (civils) are often the most complex part of the CSA packages.

This is a series of trenches, chambers and cable ducting that surrounds the perimeter of the building, with a number of "pop ups" within the building.

The section in the blue circle is particularly difficult, as there are a number of crossover points. If you try to visualise it, they will all be at different levels and need to be installed in the right order. Below is a section taken from that area of the model.

The CSA on a Data Centre is not as dominant as it would be on a commercial or healthcare project.

Data centres are often classified as large warehouses; the structure is typically a steel frame with cladding.

Foundations are dependent upon the local substrata.



Electrical

The Electrical discipline is typically the largest and most costly on a Data Centre Project.

The electrical scope covers all the systems associated with getting power to the equipment. It is made up of, but not limited to:

  • Sub Station
  • Lighting
  • Switch Rooms 
  • Small Power
  • Transformers
  • Fire Detection
  • UPS
  • Security
  • Generator
  • Access Control


The power comes from the national grid to a local substation.


The voltage from the national grid is too high for the equipment, so it needs to be reduced. This happens by passing it through a transformer. The output voltage is now considered Low Voltage, which is where the term LV cable comes from. This LV cable goes to the local Switch Room (EQX call these Switch Rooms "MeWalls"), which is typically inside the building.
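
As a rough illustration of why the voltage is stepped down in stages, the sketch below shows how, for the same power, the current rises as the voltage falls. It uses simplified single-phase arithmetic with hypothetical round numbers; real grid and distribution voltages vary by country and site.

```python
# For the same power, current scales inversely with voltage (P = V * I).
# Simplified single-phase calculation, ignoring power factor and the
# three-phase factor; all voltages and loads are hypothetical round numbers.

power_kw = 1000.0  # hypothetical load on one feed

for label, voltage_v in [("HV grid feed", 33_000), ("LV after transformer", 400)]:
    current_a = (power_kw * 1000) / voltage_v
    print(f"{label}: {voltage_v} V -> ~{current_a:,.0f} A to deliver {power_kw:.0f} kW")
```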

The power then goes from here to the PDUs (Power Distribution Units) inside the data halls, and these in turn feed the servers. We will cover the electrical systems in more detail in another training session.



Mechanical

The mechanical discipline on a Data Centre project is generally the smallest of the three (CSA, Electrical and Mechanical). The main packages are:

•      Cooling

•      Ventilation

•      BMS (Building Management System)

•      Fire Protection (Gas, Sprinkler)

•      DWS (Domestic Water Services)

Cooling

Data Centers produce a huge amount of heat. This mainly comes from the servers in the data halls and the electrical equipment in the Switch Rooms. This equipment needs to be kept cool or it will overheat and shut down.

To cool the servers, the data hall is often surrounded by a cooling corridor. In this corridor there is mechanical equipment that pumps cool air into the data hall through louvred panels. These cooling units go by various names: CRACs, CRAHs, SCUs.

However, on EQX projects they are called XSCALE CRAY Walls.

This mechanical equipment is fed by a "Chiller", often located externally, which produces the chilled water used to cool the air going into the data hall.

The cold water running through the chilled pipework cools the air around the pipework.

This cold air is then sucked in through the XSCALE CRAY Wall and into the data hall where it cools the servers.
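
On the water side, a similar rough sketch estimates the chilled water flow needed for the same hypothetical heat load; the supply and return temperatures below are hypothetical.

```python
# Rough chilled water flow estimate for a hypothetical 300 kW data hall.
# Q = m_dot * cp * dT, with water at cp ~4.186 kJ/(kg*K).

heat_load_kw = 300.0   # hypothetical heat load to be rejected
supply_temp_c = 10.0   # chilled water supply temperature (hypothetical)
return_temp_c = 16.0   # chilled water return temperature (hypothetical)
cp_water = 4.186       # specific heat of water, kJ/(kg*K)

mass_flow = heat_load_kw / (cp_water * (return_temp_c - supply_temp_c))  # kg/s, roughly litres/s
print(f"Required chilled water flow: ~{mass_flow:.1f} kg/s (roughly {mass_flow:.1f} l/s)")
```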

We will cover the mechanical systems in more detail in another training session.

