The Raritan Blog

What to Look for in a Smart Card-Enabled KVM Solution

September 13, 2010

With the release of the U.S. federal government’s HSPD-12 directive a few years ago, many CIOs and IT managers found themselves with a key issue to address: how to authenticate both local and remote IT personnel as they access government servers and networks. HSPD-12 mandates secure, authenticated access to all federal information systems and buildings. While smart cards were already in use in several industries worldwide, their use really took off when the U.S. Department of Defense responded to HSPD-12 by using smart card technology as the basis for its Common Access Card (CAC) program. More recently, a new type of smart card, the Personal Identity Verification (PIV) card, was introduced; PIV cards must conform to the FIPS 201 standard.

Using a smart card to access a PC or server that’s within arm’s reach is easy. The challenge is supporting this directive in the data center, or in any application where users must access multiple servers or PCs that may be several feet away or in a separate room entirely. Connecting a smart card reader to each device and inserting the card every time access is needed is inefficient, and in many cases simply impossible: users often need to reach servers in rooms they cannot physically enter, each protected at a different security level.

To meet this need, several smart card-enabled KVM solutions have been introduced by the industry’s primary vendors. Of course, no two are exactly alike, so what should you look for? Choose a solution that not only fulfills the basic requirement of supporting smart card authentication to multiple servers from a single location, but whose features also meet, and ideally exceed, the secure-operation requirements inherent in a smart card environment.


VMs and iLOs and RDP, oh my!

September 9, 2010

So, you’ve built a state-of-the-art lights-out data center with all the latest best-of-breed technologies. You have virtual machines, Windows servers, Linux/Unix servers, assorted service processors, blade systems, and a resilient network and security infrastructure. You also have a heterogeneous assortment of technologies, each of which requires a different set of access and administration tools. How can you give your system admins centralized access to everything from a “single pane of glass” and simplify their daily job responsibilities?

Enter Command Center, a vendor-agnostic data center access and management system. Command Center was originally designed to manage KVM and serial switches, but the IT world has changed, and so has Raritan’s Command Center. It offers a wealth of IP-centric tools that are perfect for providing network-based access and management for today’s modern data centers. Server access tools include RDP, VNC, and SSH for your Windows, Unix, and Linux systems. VMware virtual machines are supported dynamically through tight integration with Virtual Center, with VI Client, RDP, VNC, and SSH access in addition to virtual server system information. Service processors can be integrated using Command Center’s native iLO/RILO, DRAC, IPMI, and RSA interface capabilities. Network and security systems can be accessed via a web browser, SSH, and Telnet. And if you are using KVM and serial switches, they can be centrally managed and accessed too!
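
To make the “single pane of glass” idea concrete, here is a minimal sketch of a unified access launcher that dispatches to the right client for each device type. The inventory, device names, and client commands are illustrative assumptions; this is not Command Center’s actual interface or API.

    # Hypothetical sketch of a "single pane of glass" launcher: one front end
    # dispatching to the appropriate access protocol for each device type.
    # Not Command Center's actual API; inventory and clients are assumptions.
    import subprocess

    # Assumed inventory: device name -> (access method, address)
    INVENTORY = {
        "win-db01":   ("rdp", "10.0.1.15"),
        "linux-web1": ("ssh", "10.0.1.22"),
        "core-sw1":   ("telnet", "10.0.2.1"),
    }

    def connect(name):
        """Launch the appropriate client for a named device."""
        method, addr = INVENTORY[name]
        if method == "ssh":
            subprocess.run(["ssh", f"admin@{addr}"])
        elif method == "rdp":
            subprocess.run(["xfreerdp", f"/v:{addr}"])  # assumes FreeRDP is installed
        elif method == "telnet":
            subprocess.run(["telnet", addr])
        else:
            raise ValueError(f"unknown access method: {method}")

    connect("linux-web1")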


It's NOT about the Aisles, it's about the Inlets!

September 8, 2010

So many people have lost sight of the fact that cooling a data center is not a philosophical activity: it’s about moving heat away from the sensitive electronics inside equipment, keeping them within the temperature range specified by the manufacturer. The smallest element in this world is typically a semiconductor of some type, most commonly a chip buried deep inside a circuit board, inside an enclosure, inside a rack, somewhere inside a row or pod, and ultimately within a room. The trick is: how can one cost-effectively move heat away from these components, and from JUST these components? It is absolutely true that the most cost-effective cooling would use a technology that removed heat directly from only the devices that need it. There would be no need to cool everything else (the circuit boards, the cabinets, the racks, or even the rows) IF we could remove heat from the heat-generating active devices at the chip level. But that is VERY, VERY hard to do cost-effectively.

Various approaches have been attempted over the years, using combinations and exotic applications of air and liquid plumbing and channeling schemes to minimize the amount of collateral cooling. But in the end, the vast majority of us simply focus on moving vast amounts of air through the aisles closest to each piece of equipment’s inlet vents, where it is then sucked into the active devices by still more fans. The simple goal is to surround the equipment with enough cool air that the chips can comfortably bathe in it. Tons of inefficiency (or should that be tonnes?), but years and years of tried-and-true best practices, experience, and products that do just that.

 

A couple of details about moving air are important here.

  • Air FANS account for almost 47% of all energy used to cool a data center. This is huge! We all think about exotic condensers and chillers, Freon and other refrigerants, but the reality is that the fans used to generate airflow and move heat account for almost half of all cooling energy. FANs are VERY important. There has been a quiet revolution over the past five years to deploy variable-speed fan drive controllers to address just this opportunity. A quick fix, so to speak. As a rule of thumb, a fan operated at 80% speed consumes only about 50% of the power! (It’s called the “Rule of Cubes,” because fan power scales with the cube of fan speed; see the sketch after this list.)
  • Most active equipment includes FANS inside the enclosure. Most of these fans operate at two or more speeds, LO and HI. The most common first transition point is an inlet temperature of about 76°F; across manufacturers, they all use roughly the same figure. Inlet air at 76°F or less lets the internal fans run at LO speed; anything above that, and the fans kick into HI. Does it matter? You bet! It’s the variable-speed issue all over again: fans typically run at 60-80% of full speed in LO and at 100% in HI.
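
Here is a quick back-of-the-envelope check of the cube law cited above. The full-speed fan power is an assumed figure chosen only for illustration.

    # Fan affinity ("cube") law: fan power scales with the cube of fan speed,
    # so P = P_full * (speed fraction)^3. Full-speed power is an assumption.
    FULL_POWER_KW = 10.0

    for pct in (100, 90, 80, 70, 60):
        frac = pct / 100.0
        power = FULL_POWER_KW * frac ** 3
        print(f"{pct:3d}% speed -> {power:5.2f} kW ({frac**3:.0%} of full power)")

    # At 80% speed: 10.0 * 0.8**3 = 5.12 kW, i.e. about half of full power,
    # which matches the ~50% rule of thumb in the list above.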

So, what can we all do? Try to keep the INLET temperature for all active devices at 76°F. Period. NOT higher or lower. Treat this as a magic number. Anything lower than 76°F wastes cooling energy; anything above it kicks the internal FANS into HI speed and wastes a ton of fan energy. The core goal today should be to get the inlet temperature for ALL devices to 76°F. (And use active real-time monitoring, such as the Raritan PX-series environmental monitors, to make sure this happens.)

 

Lastly, you will be rewarded! As a rule of thumb, when adjusting data center temperatures within the range specified by ASHRAE TC 9.9, each degree of cooling costs an estimated 4% in energy. Just turning UP the temperature by one degree could save 4% off your energy bill, as in the rough estimate below. BUT remember that magic 76-degree figure, or you may bite off more than you can chew!
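
As a simple worked example, here is what the 4%-per-degree rule of thumb implies for an assumed cooling bill. The annual cost figure and the compounding treatment are illustrative assumptions, not measured results.

    # Rough savings estimate under the 4%-per-degree rule of thumb above.
    # The annual cooling spend is an assumed figure for illustration only.
    ANNUAL_COOLING_COST = 100_000.0  # USD per year, assumed
    SAVINGS_PER_DEGREE = 0.04

    def estimated_savings(degrees_raised):
        """Apply the 4% estimate once per degree raised (compounded)."""
        remaining = (1 - SAVINGS_PER_DEGREE) ** degrees_raised
        return ANNUAL_COOLING_COST * (1 - remaining)

    for d in range(1, 5):
        print(f"raise setpoint {d} degree(s) -> save about ${estimated_savings(d):,.0f} per year")

    # 1 degree saves about $4,000/yr on these assumptions; 4 degrees, about $15,000/yr.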


Raritan will be at the AFCOM Data Center World Fall Conference

September 3, 2010

2010 AFCOM Data Center World Fall Conference

Raritan Booth #609

Oct. 4-5, 2010

Mirage Hotel and Convention Center

Las Vegas, Nevada


Join Raritan at the BICSI Fall Conference

September 3, 2010

2010 BICSI Fall Conference

Raritan Booth #118

September 12-15, 2010 

MGM Grand Hotel & Convention Center

Las Vegas, Nevada

