File Name: virtualized data center and cloud infrastructure planning and design.zip
This advanced EMC Cloud Architect certification course track covers in-depth details and considerations for planning, designing, and migrating to a virtualized data center (VDC) and cloud environment.
- Cloud Computing: Automating the Virtualized Data Center
- Azure Virtual Datacenter
Cloud Computing: Automating the Virtualized Data Center
A data center (American English) or data centre (British English) is a building, a dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems.
Since IT operations are crucial for business continuity, a data center generally includes redundant or backup components and infrastructure for power supply, data communication connections, environmental controls (e.g., air conditioning and fire suppression), and various security devices. A large data center is an industrial-scale operation using as much electricity as a small town. Data centers have their roots in the huge computer rooms of the 1940s, typified by ENIAC, one of the earliest examples of a data center.
Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised, such as standard racks to mount equipment, raised floors, and cable trays installed overhead or under the elevated floor. A single mainframe required a great deal of power and had to be cooled to avoid overheating. During the boom of the microcomputer industry, especially during the 1980s, users started to deploy computers everywhere, in many cases with little or no care about operating requirements.
However, as information technology (IT) operations started to grow in complexity, organizations grew aware of the need to control IT resources. The advent of Unix in the early 1970s led to the subsequent proliferation of freely available Linux-compatible PC operating systems during the 1990s.
These were called "servers", as timesharing operating systems such as Unix rely heavily on the client-server model to facilitate sharing unique resources between multiple users. The availability of inexpensive networking equipment, coupled with new standards for network structured cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company.
The use of the term "data center", as applied to specially designed computer rooms, started to gain popular recognition about this time.
The boom of data centers came during the dot-com bubble of 1997–2000. Installing such equipment was not viable for many smaller companies, so many companies started building very large facilities, called Internet data centers (IDCs), which provide enhanced capabilities, such as crossover backup: "If a Bell Atlantic line is cut, we can transfer them to …". The term cloud data centers (CDCs) has also been used.
Modernization and data center transformation enhance performance and energy efficiency. Information security is also a concern, and for this reason a data center has to offer a secure environment that minimizes the chances of a security breach. A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment.
Focus on modernization is not new: concern about obsolete equipment has been raised for years, and the Uptime Institute has likewise expressed concern about the age of the equipment in data centers. The Telecommunications Industry Association's Telecommunications Infrastructure Standard for Data Centers (TIA-942) specifies the minimum requirements for telecommunications infrastructure of data centers and computer rooms, including single-tenant enterprise data centers and multi-tenant Internet hosting data centers. The topology proposed in this document is intended to be applicable to any size data center.
Telcordia GR-3160, NEBS Requirements for Telecommunications Data Center Equipment and Spaces, provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces.
These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or information technology (IT) equipment, whether that equipment is used to operate the carrier's telecommunications network, to provide applications directly to the carrier's customers, or to host applications for third parties. Data center transformation takes a step-by-step approach through integrated projects carried out over time.
This differs from the traditional method of data center upgrades, which takes a serial and siloed approach. The term "machine room" is at times used to refer to the large room within a data center where the actual central processing unit is located; this may be separate from where high-speed printers are located. Air conditioning is most important in the machine room. Aside from air conditioning, there must be monitoring equipment, one type of which detects water before flood-level situations arise.
A raised floor standards guide named GR-2930 was developed by Telcordia Technologies, a subsidiary of Ericsson. Although the first raised-floor computer room was made by IBM in 1956, and raised floors have "been around since the 1960s", it was the 1970s that made it more common for computer centers to use them, thereby allowing cool air to circulate more efficiently.
The first purpose of the raised floor was to allow access for wiring. The "lights-out" data center, also known as a darkened or dark data center, is a data center that, ideally, has all but eliminated the need for direct access by personnel, except under extraordinary circumstances. Because staff do not need to enter the data center, it can be operated without lighting.
All of the devices are accessed and managed by remote systems, with automation programs used to perform unattended operations. In addition to the energy savings, reduction in staffing costs, and the ability to locate the site farther from population centers, implementing a lights-out data center reduces the threat of malicious attacks upon the infrastructure.
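As a concrete illustration of such unattended operations, the sketch below uses the DMTF Redfish REST API, which many baseboard management controllers implement, to read a server's power state and request a graceful restart. The BMC address and credentials are hypothetical placeholders, and a production script would verify TLS certificates instead of disabling verification.

```python
import requests

# Minimal Redfish sketch: query power state, then request a graceful restart.
# The BMC address and credentials below are hypothetical placeholders.
BASE = "https://bmc.example.com"
AUTH = ("operator", "secret")

system_url = BASE + "/redfish/v1/Systems/1"

# Read the ComputerSystem resource and report its PowerState property.
resp = requests.get(system_url, auth=AUTH, verify=False)
print("Power state:", resp.json()["PowerState"])

# Invoke the standard ComputerSystem.Reset action defined by Redfish.
requests.post(
    system_url + "/Actions/ComputerSystem.Reset",
    json={"ResetType": "GracefulRestart"},
    auth=AUTH,
    verify=False,
).raise_for_status()
```

In a lights-out facility, scripts like this run from a remote operations center, so no one needs to enter the room, or even turn on the lights, for routine power actions.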
The Telecommunications Industry Association's TIA-942 standard for data centers, first published in 2005 and updated four times since, defined four infrastructure levels.
The Uptime Institute standard likewise defines four tiers. The field of data center design has been growing for decades in various directions, including new construction big and small along with the creative re-use of existing facilities, like abandoned retail space, old salt mines, and war-era bunkers. Local building codes may govern minimum ceiling heights and other parameters. Considerations in the design of data centers, discussed below, include modularity, environmental control, electrical power, airflow management, fire protection, and physical security.
Modularity and flexibility are key elements in allowing a data center to grow and change over time. Data center modules are pre-engineered, standardized building blocks that can be easily configured and moved as needed.
A modular data center may consist of data center equipment contained within shipping containers or similar portable containers. Temperature and humidity are controlled via air conditioning and indirect cooling, such as the use of outside air. To prevent single points of failure, all elements of the electrical systems, including backup systems, are typically fully duplicated, and critical servers are connected to both the "A-side" and "B-side" power feeds. Static transfer switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.
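A rough way to see why full A/B duplication matters: if the two feeds fail independently, the chance that both are down at once is the product of their individual unavailabilities. The sketch below works through that arithmetic with an assumed, purely illustrative per-feed availability figure.

```python
# Illustrative reliability arithmetic for duplicated A/B power feeds.
# The 99.9% per-feed availability is an assumption, not a vendor spec.

feed_availability = 0.999                  # assumed availability of one feed
unavail_single = 1 - feed_availability     # probability one feed is down

# With independent A and B feeds, both must fail for power to be lost.
unavail_dual = unavail_single ** 2
avail_dual = 1 - unavail_dual

print(f"Single-feed availability: {feed_availability:.4%}")        # 99.9000%
print(f"Dual-feed availability:   {avail_dual:.6%}")               # 99.999900%

# Expected downtime per year (8,760 hours).
print(f"Single feed: {unavail_single * 8760:.2f} h/yr of downtime")        # ~8.76 h
print(f"Dual feeds:  {unavail_dual * 8760 * 60:.2f} min/yr of downtime")   # ~0.53 min
```

Duplication turns hours of expected annual downtime into minutes, which is why critical servers are dual-corded to both feeds.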
Air flow management addresses the need to improve data center computer cooling efficiency by preventing the recirculation of hot air exhausted from IT equipment and reducing bypass airflow. Cold aisle containment is done by exposing the rear of equipment racks, while the fronts of the servers are enclosed with doors and covers.
Ducting prevents cool and exhaust air from mixing. Rows of cabinets are paired to face each other so that cool air can reach equipment air intakes and warm air can be returned to the chillers without mixing. Alternatively, a range of underfloor panels can create efficient cold air pathways directed to the raised floor vented tiles.
Either the cold aisle or the hot aisle can be contained. Another alternative is fitting cabinets with vertical exhaust ducts ("chimneys"); hot exhaust exits can direct the air into a plenum above a drop ceiling and back to the cooling units or to outside vents.
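To make the airflow-management goal concrete, the sensible-heat relation gives the airflow a contained aisle must deliver to carry away a given IT load. The sketch below applies it with assumed values for the load and the supply-to-exhaust temperature difference.

```python
# Sensible-heat estimate of required cooling airflow.
# P = rho * V_dot * c_p * dT  =>  V_dot = P / (rho * c_p * dT)
# The load and temperature delta below are illustrative assumptions.

rho = 1.2      # air density, kg/m^3 (near sea level, ~20 C)
c_p = 1005.0   # specific heat of air, J/(kg*K)

it_load_w = 10_000.0   # assumed IT load in one contained aisle: 10 kW
delta_t = 12.0         # assumed exhaust-minus-intake temperature rise, K

v_dot = it_load_w / (rho * c_p * delta_t)   # volumetric flow, m^3/s
print(f"Required airflow: {v_dot:.2f} m^3/s "
      f"(~{v_dot * 2118.88:.0f} CFM)")      # ~0.69 m^3/s, ~1464 CFM
```

The same relation shows why containment pays off: preventing hot-exhaust recirculation preserves the full temperature delta, so less air (and less fan energy) moves the same heat.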
Data centers feature fire protection systems, including passive and active design elements, as well as the implementation of fire prevention programs in operations. Smoke detectors are usually installed to provide early warning of a fire at its incipient stage. Two water-based options are sprinkler systems and mist systems. Physical access is usually restricted.
Layered security often starts with fencing, bollards, and mantraps. Mantraps with fingerprint recognition are starting to become commonplace. Logging access is required by some data protection regulations; some organizations tightly link this to access control systems.
Multiple log entries can occur at the main entrance, entrances to internal rooms, and at equipment cabinets. Access control at cabinets can be integrated with intelligent power distribution units, so that locks are networked through the same appliance.

Energy use is a central issue for data centers. Power draw ranges from a few kW for a rack of servers in a closet to several tens of MW for large facilities.
Some facilities have power densities more than 100 times that of a typical office building. Power costs often exceed the cost of the original capital investment. In a business-as-usual scenario, greenhouse gas emissions from data centers were projected to more than double from 2007 levels by 2020. An 18-month investigation by scholars at Rice University's Baker Institute for Public Policy in Houston and the Institute for Sustainable and Applied Infodynamics in Singapore concluded that data center-related emissions would more than triple by 2020. The most commonly used metric of data center energy efficiency is power usage effectiveness (PUE), calculated as the ratio of total power entering the data center divided by the power used by IT equipment; it measures how much of the incoming power is consumed by overhead such as cooling and lighting.
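The preceding definition can be written as a simple ratio; the figures in the worked example below are illustrative, not from any measured facility.

```latex
\mathrm{PUE} = \frac{P_{\text{total facility}}}{P_{\text{IT equipment}}},
\qquad
\text{overhead fraction} = 1 - \frac{1}{\mathrm{PUE}}
```

For example, a facility drawing 1,250 kW at the utility meter while delivering 1,000 kW to IT equipment has a PUE of 1250/1000 = 1.25, so 20% of incoming power goes to cooling, lighting, and other overhead; an ideal facility would approach a PUE of 1.0.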
State-of-the-art PUE is estimated to be roughly 1.2. The U.S. Environmental Protection Agency has an Energy Star rating for standalone or large data centers.
To qualify for the ecolabel, a data center must be within the top quartile of energy efficiency of all reported facilities. Title 24 of the California Code of Regulations mandates that every newly constructed data center must have some form of airflow containment in place to optimize energy efficiency.
The focus of measuring and analyzing energy use goes beyond what is used by IT equipment; facility support hardware such as chillers and fans also uses energy. The energy demand of information storage systems has also been rising. Calculations showed that within two years the cost of powering and cooling a server could equal the cost of purchasing the server hardware.
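That two-year claim is easy to sanity-check. The sketch below compares an assumed server purchase price against its annual power-and-cooling cost; the price, draw, PUE, and electricity rate are all illustrative assumptions rather than measured figures.

```python
# Rough break-even between server purchase price and power/cooling cost.
# All inputs are illustrative assumptions.

server_price = 1500.0   # USD, assumed commodity 1U server
avg_draw_kw = 0.4       # assumed average electrical draw of the server
pue = 2.0               # assumed facility PUE (cooling roughly doubles the bill)
rate = 0.12             # assumed electricity price, USD per kWh

hours_per_year = 8760
annual_cost = avg_draw_kw * pue * hours_per_year * rate
print(f"Annual power + cooling cost: ${annual_cost:,.0f}")                # ~$841
print(f"Break-even vs purchase: {server_price / annual_cost:.1f} years")  # ~1.8
```

Under these assumptions the operating cost overtakes the hardware cost in under two years, which is the economic pressure behind the efficiency efforts described next.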
In 2011, Facebook, Rackspace, and others founded the Open Compute Project (OCP) to develop and publish open standards for greener data center computing technologies. As part of the project, Facebook published the designs of its server, which it had built for its first dedicated data center in Prineville. Making servers taller left space for more effective heat sinks and enabled the use of fans that moved more air with less energy.
By not buying commercial off-the-shelf servers, energy consumption due to unnecessary expansion slots on the motherboard and unneeded components, such as a graphics card, was also avoided. This design had long been part of Google data centers.

Power is the largest recurring cost to the user of a data center.
A power and cooling analysis, also referred to as a thermal assessment, measures the relative temperatures in specific areas as well as the capacity of the cooling systems to handle specific ambient temperatures. Power cooling density is a measure of how much square footage the center can cool at maximum capacity.
An energy efficiency analysis measures the energy use of data center IT and facilities equipment. A typical energy efficiency analysis measures factors such as a data center's power use effectiveness (PUE) against industry standards, identifies mechanical and electrical sources of inefficiency, and identifies air-management metrics.
Case studies have shown that by addressing energy efficiency holistically in a data center, major efficiencies can be achieved that are not possible otherwise. Computational fluid dynamics (CFD) analysis of this kind uses sophisticated tools and techniques to understand the unique thermal conditions present in each data center, predicting the temperature, airflow, and pressure behavior of the facility to assess performance and energy consumption using numerical modeling.
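As a toy illustration of the numerical-modeling idea (not a production CFD tool, which would also solve airflow and pressure fields), the sketch below relaxes a steady-state heat equation on a small 2D room grid with one hot "rack exhaust" region and one cold "CRAC supply" region, then reports the hottest spot. The grid size, boundary values, and source temperatures are arbitrary assumptions.

```python
import numpy as np

# Toy steady-state temperature model on a 2D room grid (Jacobi relaxation).
N = 20
temp = np.full((N, N), 22.0)            # room initialized to 22 C
hot = (slice(8, 12), slice(14, 16))     # assumed rack exhaust region, 45 C
cold = (slice(8, 12), slice(0, 2))      # assumed CRAC supply region, 15 C

for _ in range(2000):
    temp[hot] = 45.0                    # re-pin the fixed-temperature sources
    temp[cold] = 15.0
    # Average each interior cell with its four neighbors.
    temp[1:-1, 1:-1] = 0.25 * (temp[:-2, 1:-1] + temp[2:, 1:-1] +
                               temp[1:-1, :-2] + temp[1:-1, 2:])

temp[hot], temp[cold] = 45.0, 15.0
i, j = np.unravel_index(np.argmax(temp), temp.shape)
print(f"Hottest cell: ({i}, {j}) at {temp[i, j]:.1f} C")
```

Even this crude model shows how a temperature field emerges from sources and sinks; thermal zone mapping, described next, measures the real field with sensors instead.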
Thermal zone mapping uses sensors and computer modeling to create a three-dimensional image of the hot and cool zones in a data center. This information can help identify the optimal positioning of data center equipment; for example, critical servers might be placed in a cool zone serviced by redundant AC units.

Data centers use a lot of power, consumed by two main usages: the power required to run the actual equipment and the power required to cool the equipment.
Power efficiency reduces the first category. Cooling costs can be reduced by natural means, including location decisions: when the priority is not proximity to good fiber connectivity, power-grid connections, and population centers to manage the equipment, a data center can be miles away from the users.
Azure Virtual Datacenter
Solutions Review compiles the 11 essential books that data center directors or managers need to add to their reading lists. Data center directors are saddled with a large responsibility: knowing how to keep your data center secure and operating smoothly is critical. Books, whether hardcover or digital, are an excellent source for professionals looking to learn about a specific field of technology, and data center directors are no exception. By understanding how to manage, secure, and scale apps with vSphere 6.x, administrators can keep their virtual environments running reliably. This Learning Path begins with an overview of the features of the vSphere 6.x suite.
A more robust platform architecture and implementation have been created to build on the prior Azure Virtual Datacenter (VDC) approach. Enterprise-scale landing zones in the Microsoft Cloud Adoption Framework for Azure are now the recommended approach for larger cloud-adoption efforts. The following guidance serves as a significant part of the foundation for the Ready methodology and the Govern methodology in the Cloud Adoption Framework. To support customers making this transition, the following resources are archived and maintained in a separate GitHub repository.
Cloud Computing: Automating the Virtualized Data Center
by Venkata Josyula, Malcolm D. Orr, and Greg Page
What is OpenNebula? It has been a leading open-source virtualization technology for many years, and OpenNebula clouds have seen bigger-than-expected adoption in business. Enterprise cloud computing is the next step in the evolution of data center virtualization.
This catalog displays prices as applicable in the USA. For local pricing and purchase options, contact your EMC account representative.

Virtualized Data Center and Cloud Infrastructure

This new breed of architect brings cross-domain planning and design expertise to deliver virtualization and cloud designs based on business strategies encompassing all key technical domains (systems, storage, networking, security, etc.).