Because of big data, 100-gigabit Ethernet and virtualization, server payloads are changing, and so is server architecture. I have posted a few pointers in the past to this new reality, driven mainly by Google’s enormous server appetite (and creativity) as well as Facebook’s Open Compute Project. Can Intel compete with ARM licensees on power consumption, cost, and integration (communications, on-board storage, and baseboard management controller)? AMD is making a big bet that could pay off in the very near term – not sure ARM’s organization will be able to fix its mobile myopia in time, though.
[Reproduced from GigaOM]
AMD executive: The data center is changing and ARM will be the compute
[by Stacey Higginbotham 06.19.13]
AMD is betting big on ARM chips in the data center because the demands of client computing have changed the way computing and data centers are built and designed.
There has been a complete transformation of the client side of computing, and because of that the infrastructure on the back end is changing. As part of that change, the new chip architecture inside the servers in the data center will use the ARM architecture, said Andrew Feldman, GM and corporate VP at AMD.
In his presentation at GigaOM’s Structure conference on Wednesday, Feldman explained that the data center is not only the cloud, it’s providing the value for most of the phones, tablets and myriad devices we carry every day.
“The demand for compute has left the client side and moved into the data center,” said Feldman. “Over a three-year period we went from 3 percent to a third of the U.S. population owning a tablet … We now spend hours and hours a day in the cloud where before, we were on the couch.”
This change means we’re not just changing computing, but also networking and storage. He said IT has become software-defined. And the building blocks aren’t the only things changing: the buildings where we house the compute are changing as well. Even where we put those buildings is changing.
“We used to put data centers in urban environments but where do we put them now? In Eastern Washington or along river banks in Oregon to take advantage of lower power,” said Feldman.
“The data center now does the compute for the client side. Millions of millions of users each with parallel work. We don’t ask it to do CAD/CAM … the vast majority of the work we ask it to do is simple parallel work for the client side. And that work is very different.”
It’s not about CPU performance, which means that work requires a different type of processor. “And in the future we believe it’s going to be an ARM processor,” Feldman said.
So Feldman called for the industry to rethink how it designs servers to make them more efficient. The server world should also embrace open source hardware like what the Open Compute Project wants to offer. He left the audience with the thought that in the 60-year history of computing, smaller, higher-volume parts have always won. That used to be x86-based processors. But in 2012 more than 8 billion ARM CPUs were shipped, more than twenty times the x86 volume.
It’s too bad Feldman didn’t spend some time talking about how AMD plans to adapt to the realities of an ARM-based chip world, where dozens of vendors have the ability to design and build ARM-based chips. That’s a big shift from building x86 chips that only two vendors can sell.