(www.dell.com), in Austin, Texas.
Server management begins with hardware preparation, followed by OS loading and configuration to get applications running effectively, according to Reynolds. Next up is monitoring the server’s health and status, he says. Once the system is established as performing satisfactorily, Reynolds says end-users can control the entire system and, if necessary, alter it to meet expectations or re-purpose the server to take advantage of higher levels of performance through new technology.
At the most basic level of server management, there are deployment tools from hardware original equipment manufacturers. “Usually, one tool is designed to configure the BIOS (basic input/output system). It may help you update drivers. And frequently, it will help you insert this process of BIOS configuration and driver update into an installation process the server may already have,” Reynolds explains.
Resident agent
Tools may also include an agent that resides on the system and gives end-users command-and-control capability over hardware functions, Reynolds notes. Other functions may include the ability to query sensors—for example, temperature or network traffic sensors—and send alerts if something isn’t operating within specification.
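The sensor-query-and-alert loop Reynolds describes can be sketched roughly as follows. This is a minimal illustration, not any vendor’s agent: the sensor names, thresholds, and readings are all hypothetical stand-ins, and a real agent would poll the hardware through a vendor interface such as IPMI.

```python
# Hypothetical specification limits for two sensors.
SPEC_LIMITS = {"cpu_temp_c": 85.0, "net_errors_per_min": 100.0}

def read_sensors():
    # Placeholder readings; a real agent would query hardware sensors here.
    return {"cpu_temp_c": 62.5, "net_errors_per_min": 3.0}

def check_sensors(readings, limits):
    """Return an alert string for each reading outside specification."""
    alerts = []
    for name, value in readings.items():
        limit = limits.get(name)
        if limit is not None and value > limit:
            alerts.append(f"ALERT: {name}={value} exceeds limit {limit}")
    return alerts

for alert in check_sensors(read_sensors(), SPEC_LIMITS):
    print(alert)
```

In-spec readings produce no output; only out-of-spec values generate alerts, which an agent would then forward to a console or notification channel.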
Software tools also monitor OS health, he remarks. These may include tools that periodically scan the system to determine whether services are stalled or unresponsive. One such tool might be the Microsoft Management Console, Reynolds says. Monitoring functionality could be performed from an external console, he adds. “Typically, this depends on the size of the environment, for hardware and OS. There are tools that can reside on a laptop or some other workstation.” Regardless, the goal is to give users simultaneous control over multiple systems. “The console functions much like a console on a factory floor that uses human-machine-interface software to graphically display process conditions, such as malfunctions,” he states.
The console concept combines reactive and preventive maintenance, Reynolds says. “If anything happens, I have the ability to see that from the console. I can then reconfigure the server, update the software—basically, remediate the problem,” he notes. A key piece of functionality is the ability to pre-program the console so that it pages or e-mails one or more individuals when an event of a given severity occurs, Reynolds says. “That alert can have within it text and the specific system identification tag for which the alert was given.”
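Severity-based alert routing of the kind Reynolds describes might look something like the sketch below. The contact lists are hypothetical, and the `notify` function is a stand-in for a real pager or e-mail gateway; the point is that the alert carries both descriptive text and the system’s identification tag.

```python
# Hypothetical routing table: who gets notified at each severity level.
SEVERITY_ROUTES = {
    "critical": ["oncall-pager@example.com", "admin@example.com"],
    "warning": ["admin@example.com"],
}

def notify(recipient, message):
    # Stand-in for a pager or e-mail gateway call.
    print(f"to {recipient}: {message}")

def raise_alert(severity, text, asset_tag):
    """Build an alert containing the text and the system identification
    tag, and send it to everyone registered for this severity level."""
    message = f"[{severity.upper()}] {text} (asset tag: {asset_tag})"
    for recipient in SEVERITY_ROUTES.get(severity, []):
        notify(recipient, message)
    return message

raise_alert("critical", "Fan failure on node 7", "SVC-1138")
```

An unrecognized severity simply routes to nobody, which a production console would probably treat as a configuration error rather than silence.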
On the horizon for server management is grouping resources and managing the service level. To do that, information technology (IT) groups within companies have begun to commit to groups of internal users a service level of uptime and performance, he says. This commitment leads to better capacity planning and resource management, Reynolds explains. Better management includes dynamic allocation of IT resources, he says—that involves adding or removing resources as the needs of the network change, so that the cost of delivering services is exactly what is needed to meet the service level.
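The dynamic-allocation idea can be illustrated with a toy rebalancing rule. The utilization thresholds here are invented for illustration; real capacity planners weigh cost, demand forecasts, and the committed service level rather than a single metric.

```python
def rebalance(active_servers, utilization, low=0.30, high=0.75):
    """Add a server when the pool runs hot, retire one when it runs
    cold, so delivered capacity tracks demand instead of peak
    provisioning. Thresholds are hypothetical."""
    if utilization > high:
        return active_servers + 1
    if utilization < low and active_servers > 1:
        return active_servers - 1
    return active_servers

print(rebalance(4, 0.90))  # hot pool: grow to 5
print(rebalance(4, 0.10))  # cold pool: shrink to 3
print(rebalance(4, 0.50))  # within band: stay at 4
```

The attraction for IT groups is exactly what the article notes: the cost of delivering the service rises and falls with the need, rather than being fixed at worst-case capacity.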
Also on the horizon are server management standards. One current example is SMASH—Systems Management Architecture for Server Hardware—from the Portland, Ore.-based Distributed Management Task Force Inc. (www.dmtf.org). “It gives common syntax for command and control of servers, storage devices and network devices,” Reynolds says. Such standards give end-users flexibility and choice in managing systems without sacrificing control, he says. “And that increases efficiency.”