Making fast chips usable

One aspect of dealing with big data is accelerating data processing. That is what Prof. Koen Bertels’ group is working on with programmable chips called FPGAs.

Data science solves and creates problems like no other discipline. Data sets keep growing, as do pipeline capacities, server power and our hunger for data. Market analyses, genetic profiles and weather forecasts can never be precise and fast enough. That is why TU researchers in Professor Koen Bertels' computer engineering group at the EEMCS faculty are working on a technology that substantially accelerates computations, sometimes by several orders of magnitude.

A CPU (central processing unit) is flexible in the sense that it can perform any operation on data (addition, subtraction, comparison and so on), but the price of that flexibility is a limited speed of operation.

Dedicated chips, on the other hand, are designed to perform only one specific task, such as image processing, data routing or sound compression. At that task, they are much faster.

An FPGA (field programmable gate array) sits somewhere between a CPU and a fixed-task chip. Its hardware can be programmed to perform a certain task, after which the chip behaves as if it were hard-wired to do so. But, and that is the fun part, within a few milliseconds the chip can be reset and programmed for a totally different task.
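At heart, an FPGA is built from many small lookup tables (LUTs) whose contents define the logic they compute; reprogramming the chip amounts to rewriting those tables. A toy sketch in Python can illustrate the idea (illustrative only; real FPGAs are configured with hardware description languages, and the `LUT2` class here is invented for this example):

```python
# Toy model of an FPGA lookup table (LUT): a two-input logic
# element whose function is defined entirely by a truth table.
# "Reprogramming the chip" means rewriting these tables.

class LUT2:
    def __init__(self, truth_table):
        # truth_table maps (a, b) input pairs to an output bit
        self.table = dict(truth_table)

    def __call__(self, a, b):
        return self.table[(a, b)]

    def reprogram(self, truth_table):
        # Same physical element, completely new function.
        self.table = dict(truth_table)

# Configure the element as an AND gate...
lut = LUT2({(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1})
print(lut(1, 1))  # 1

# ...then reprogram the very same element as a XOR gate.
lut.reprogram({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0})
print(lut(1, 1))  # 0
```

A real FPGA contains hundreds of thousands of such elements plus programmable wiring between them, which is what makes the "melt and reconfigure" metaphor below apt.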

“It’s like you take a dedicated chip, melt its hardware and reconfigure its architecture”, explains Dr. Alexandru Iosup (parallel and distributed systems at EEMCS), who works with Bertels in the context of the Delft Data Initiative.

FPGAs have been on the market since 1985, produced mainly by the chip manufacturers Xilinx and Altera. Telecommunications and networking were the earliest fields of application, later followed by consumer, automotive and industrial applications. But only recently have FPGAs been coupled with microprocessors, making them easier to use. Xilinx’s Zynq processor, combining an FPGA with an ARM chip (90 percent of smartphones have one), is a recent example of such a complete system on a programmable chip.

Bertels works with IBM on a prototype of its Power 8 supercomputer, which will be equipped with FPGAs for extra speed. Bertels’ group develops interfaces that let developers make use of these reprogrammable chips. Iosup explains: “If using these FPGAs is too complicated, people will use other, slower computers.”

Genetic analysis for gene-specific patient treatments, astronomy, geophysical analysis, real-time hurricane path prediction and cloud gaming all require superfast computing. But as Iosup sees it, the need for faster and more energy efficient computing is even more urgent than that.

Today’s supercomputers contain up to 2-3 million cores, each consuming 60-200 watts, pushing data centres’ energy demand into the hundreds of megawatts. Google is already responsible for 2-3 percent of total US energy use, and this figure is only rising.
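A quick back-of-envelope check shows how the quoted figures combine (taking the low end of both ranges stated above; the per-core wattage is the article's own estimate):

```python
# Back-of-envelope estimate using the low end of the figures
# quoted in the text: 2 million cores at 60 watts each.
cores = 2_000_000
watts_per_core = 60

total_mw = cores * watts_per_core / 1e6  # convert watts to megawatts
print(total_mw)  # 120.0 -> already well into "hundreds of megawatts"
```

At the high end (3 million cores at 200 W each) the same arithmetic gives 600 MW, which is why even modest efficiency gains per operation matter at this scale.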

“Unless we become much more energy-efficient or distribute the work, we will lack the energy and the space to process all the data we gather”, says Iosup. That motivates him and his colleagues to make fast and energy-efficient technology, such as FPGAs, accessible to a wider range of users.

–> This is one of more than forty research projects that were presented at the Delft Data Science New Year Event, held at the EEMCS building on 13 January 2014.

Editor Redactie
