Studies/Papers


Watts Water Technologies needed to replace 1,000 old shop-floor terminals with more flexible desktops. They ended up choosing SUSE Linux Enterprise Desktop on Neoware thin-client hardware, along with ZENworks to help manage the environment. You can also check out the Open PR blog entry for more info. From the customer success story…

After evaluating several desktop and thin-client solutions, Watts Water Technologies selected SUSE Linux Enterprise Desktop for use in a thin-client deployment, as well as Novell ZENworks to manage more than 1,000 desktops.

“Linux really shines and Novell has a great Linux strategy,” said Ty Muscat, Data Center Manager for Watts Water Technologies. “We have almost every platform imaginable and are moving more and more to SUSE Linux Enterprise desktops and servers. We like having an open platform with a lot of flexibility.”

The results:

“Without Novell, we would have had to invest far more to get anything similar to what we have with SUSE Linux Enterprise Desktop,” said Muscat. “The ongoing management and maintenance costs of other options would have been overwhelming for us.”

MacGyver knew his stuff when it came to building a flamethrower out of popsicle sticks, chewing gum, dental floss and a styrofoam cup — plus he always had that cool Swiss Army knife. But I bet even he wouldn’t have been able to use eight PlayStation 3s, Linux and some hacker know-how to do scientific supercomputing. Someone’s done it, though!

This interesting blog article from ZDNet talks about how a researcher from the University of Massachusetts built a very low-cost “supercomputer” capable of about 200 GFlops, all running on PS3s. The Linux distro used wasn’t SUSE Linux Enterprise (it was Yellow Dog Linux), and there are several other considerations that keep the PS3 from being the scientific computing platform of choice, but it’s definitely another fine example of how flexible Linux can be compared to other OSes.

So, if you’re looking for an excuse to get approval for a purchase order of equipment for your gaming– er, “supercomputing lab”… look no further.

From the article:

The emergence of global standards for measuring the energy efficiency of datacentres moved a step closer yesterday with the launch of a raft of new research papers from green IT industry consortium The Green Grid.

The consortium has released an updated version of its Datacentre Energy Efficiency Metrics whitepaper that incorporates infrastructure efficiency into the original metrics.

It also said that it expects its Power Usage Effectiveness (PUE) and Datacentre Efficiency metrics, which assess the proportion of power going into a datacentre that is actually used to power the IT kit, to be adopted by the industry and used by all datacentres to report their efficiency.

More here.
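For the curious, the arithmetic behind those two metrics is simple: PUE is total facility power divided by the power that actually reaches the IT equipment, and the datacentre efficiency figure is just its reciprocal expressed as a percentage. Here’s a rough sketch in Python; the 500 kW and 300 kW figures are made up for illustration, not taken from the whitepaper.

    # Rough sketch of The Green Grid's headline metrics (illustrative numbers only)
    def pue(total_facility_kw, it_equipment_kw):
        # Power Usage Effectiveness: total facility power / IT equipment power
        return total_facility_kw / it_equipment_kw

    def dcie(total_facility_kw, it_equipment_kw):
        # Datacentre efficiency: the share of incoming power doing useful IT work
        return it_equipment_kw / total_facility_kw * 100

    # Hypothetical facility drawing 500 kW at the meter, 300 kW of which reaches the IT kit
    print(pue(500, 300))    # ~1.67 (lower is better; 1.0 is the theoretical ideal)
    print(dcie(500, 300))   # 60.0 percent

In other words, in that hypothetical datacentre 40 percent of the power bill goes to cooling, power distribution losses and everything else that isn’t the IT kit.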

Sick of hearing about “Green” yet? Better learn to deal with it. The “Green” drumbeat is really just beginning, and it’s not just a fad: it addresses a real condition we have in IT, and it’s a way for managers to get more money and headcount, so listen up.

What is “Green” computing? Here’s as good a definition as I could find; click through for more from TechTarget.

Green computing is the environmentally responsible use of computers and related resources. Such practices include the implementation of energy-efficient central processing units (CPUs), servers and peripherals as well as reduced resource consumption and proper disposal of electronic waste (e-waste).

One of the earliest initiatives toward green computing in the United States was the voluntary labeling program known as Energy Star. It was conceived by the Environmental Protection Agency (EPA) in 1992 to promote energy efficiency in hardware of all kinds. The Energy Star label became a common sight, especially in notebook computers and displays. Similar programs have been adopted in Europe and Asia.

How “Green” is your office environment? Take the Greening the Cube Farm quiz and see!

Last but not least, is buying “Green” storage for business continuity, disaster recovery and archival enough? Not nearly enough, according to the marketing director of Overland Storage.

RossB

Will Virtualization Doom Server Sales?

From the article:

The promise behind virtualization has long been that one well-equipped server could do the work of several. So what happens once customers begin following that idea — and buying fewer servers?

That scenario is cause for concern, according to industry analyst Infiniti Research. This week, the firm published a study indicating that server sales will trail off in coming years, and even decline, as virtualization reduces the need for physical hardware.

The company’s TechNavio online research unit released the findings to coincide with the upcoming Storage Expo conference in London next week.

The study suggests that sales growth will slow to two percent in 2008 — a marked decline from the 5.9 percent annual growth rate that fellow market researcher IDC saw in 2006, and the 8.9 percent from a recent Gartner study.

Read the rest of the article.

A new Aberdeen Group study reports that as virtualization keeps expanding, both in its role in the datacenter and as a tool for consolidating services and storage and cutting costs, it’s becoming even more vital as a way to provide Business Continuity, High Availability and Disaster Recovery.

For us, virtualization is a given. Our system utilization was low and if there was a peak, it only happened for an hour.

The rest of the time our systems are idle. Our application servers are just not using enough of the physical resources.

— Manager of Portal Operations for a Consumer and Applications Portal Company
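The consolidation arithmetic behind that observation is worth spelling out: if servers idle along at, say, 10 percent utilization, several of them can share one physical host, minus whatever headroom you keep in reserve for those short peaks. A back-of-the-envelope sketch in Python follows; the utilization and headroom numbers are my own illustrative assumptions, not figures from the Aberdeen report.

    import math

    # Naive consolidation estimate: how many hosts do N lightly loaded servers need?
    # All numbers here are illustrative assumptions, not data from the Aberdeen study.
    def hosts_needed(server_count, avg_utilization, headroom=0.25):
        usable = 1.0 - headroom                      # capacity held back for peak hours
        servers_per_host = usable / avg_utilization
        return math.ceil(server_count / servers_per_host)

    print(hosts_needed(40, 0.10))   # 40 servers at ~10% load -> 6 hosts with 25% headroom

A real sizing exercise would also look at memory, I/O and whether peaks overlap, not just average CPU, but the basic point stands: idle capacity is money on the table.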

The report includes a number of case studies and significant findings, such as:

  • 54% of firms use virtualization to support DR plans
  • 48% use virtualization to support HA strategies
  • 50% use virtualization to support BC implementation

For the typical organization that suffers from excess capacity and the associated costs, virtualization is a must. Along with that move to enterprise-level virtualization comes the need for enterprise-level business continuity planning.

Since the use of virtualization for BC, HA, and DR purposes is still emerging, it is imperative that companies implement it with careful planning and testing of systems. This also helps ensure there are no unnecessary redundancies and that data recovery management processes are efficient. The latter issue, which is just starting to take hold in the physical world, is certainly going to be the next big challenge as more companies use virtualization to support BC, HA, and DR processes.

Recovering data generated from virtualized systems will become a crucial discussion in the coming months.

Register for a free copy of the report here.

Enjoy,

RossB

AMD’s Virtual Experience is a pretty cool marketing/virtual trade show where you can view videos and short presentations on a variety of technologies related to AMD — such as SUSE Linux Enterprise in the Novell booth.

Of course, you could visit the other vendors at the virtual trade show, but why not start by checking out the Novell booth and learning how SLE takes advantage of the technologies in the latest generation of AMD Quad-core Opteron processors…
