Despite the complexity introduced by virtualization, the data-center environment is evolving in that direction, and Doggart says the security environment should do the same.
In its report “Security Virtualization Within the Next Generation Data Center,” Crossbeam found that 78 percent of IT professionals said their companies were in some phase of planning or implementing an NGDC, but that only 3 percent had fully implemented one. Most survey respondents agreed on the level of virtualization required in a data center and even agreed that security should be a part of it. However, most current virtualization efforts are going toward application servers (62.4 percent) and storage (44.9 percent), with security at 35.3 percent, followed by switches and routers at 29.6 and 27.2 percent, respectively.
Security virtualization can be achieved in more than one way, but Doggart feels not all security virtualization is created equal. One approach is to run security within the server itself, but servers running mission-critical applications have stringent performance metrics to meet, and processor-intensive security workloads cut into that performance.
“The more core data a server processes, the more security it has to process, and it can have a compounding effect on performance,” Doggart said. “There are also many more applications running on virtual machines these days, and they operate in different trust zones with different levels of security. How do you make sure you don’t get cross-contamination?”
His solution is to run security on its own virtual platform, allowing for physical separation from the servers so they can be tuned for performance. In Crossbeam’s survey two-thirds of respondents agreed, saying security should be virtual but separate.
Doggart said the industry is still a long way from realizing the full potential of a fully virtualized data center, and that while strides are being made in virtualizing servers, storage and security, the industry needs to come together to design a roadmap from point A to point B.
In his new book, The Art of the Data Center: A Look Inside the World’s Most Innovative and Compelling Computing Environments, as well as his previous works, Build the Best Data Center Facility for Your Business (2005) and Grow a Greener Data Center (2009), Cisco’s Douglas Alger explores the history and future of the data center. Then and now there are some constants, such as efforts to be more efficient and reduce costs, and perhaps the biggest driver of change for both is power density.
Starting 10 years ago, as hardware got smaller yet more powerful, power density within the same footprint soared. “The good news for data-center managers [was] you could put a lot more hardware into a cabinet. The bad news was you could put a lot more hardware into a cabinet,” Alger said. “On the heels of that you had the goal of making things greener, so a lot of efficiency practices that were exceptions are now the rule.”
It would be a mistake to assume, however, that the drive for greener data centers is motivated entirely by a social conscience. Data centers have captured the attention of CEOs, Alger said, because they feel the need to control their costs, and energy and resource consumption is the biggest operational cost in a data center. So if their conscience doesn’t get them, their CFO will. And if the CFO doesn’t, perhaps the operations manager will, because beyond conscientious environmentalism and cost savings there is another benefit to being efficient. If you need to take a car trip and aren’t swayed by someone else paying for the gas, you might still be convinced by not having to stop for fuel so often.
So even if you peel away the top-layer benefits that people focus on — being green and saving costs — you can still increase productivity and optimize resource consumption, and that is the real business value of a green design, said Alger.