Entry Date: January 25, 2017

Moving the Abyss: Database Management on Future 1000-Core Processors

Principal Investigator: Srinivas Devadas

Project Start Date: September 2014

Project End Date: August 2017


There are fundamental obstacles to making CPUs faster than they are today. Because individual transistors are not getting any faster, manufacturers such as Intel and AMD can no longer achieve large performance gains simply by raising clock speeds (e.g., going from 2 GHz to 10 GHz). To overcome this, future CPUs will contain hundreds to thousands of smaller computational cores on a single chip, each running at a speed similar to today's processors (e.g., 2 GHz). This means that each individual core will be only about as powerful as a current CPU, but the aggregate power of all the cores will far exceed what is possible today. An important problem with the advent of these new CPUs is that the database systems used in all aspects of our society are ill-suited for this change. Such database systems store and access data for a variety of applications, including on-line business (e.g., Google, Facebook), scientific instruments (e.g., astronomical telescopes), and medicine (e.g., MRI scanners). They are not ready to handle these new "many-core CPUs" because most of them are built on designs from the 1970s and 1980s, when processors had only a single core. The purpose of this project is therefore to develop both software and hardware technologies that will allow database systems to exploit the full computational power of future CPU architectures. The results of this project will enable organizations to deploy future applications on fewer machines that consume less energy than is required today.

Computer architectures are moving toward an era dominated by many-core machines with hundreds of cores on a single chip. This unprecedented level of on-chip parallelism introduces a new dimension of scalability that current database management systems (DBMSs) were not designed for. In particular, it becomes exceedingly difficult for the DBMS to perform concurrency control, logging, and indexing efficiently. With hundreds of threads running in parallel, the overhead of coordinating competing reads and writes to shared data erodes the benefits of the increased core count. In this project, the PIs therefore propose a software-hardware co-design approach for DBMSs in the many-core era. On the software side, rather than attempting to remove scalability bottlenecks of existing DBMS architectures through incremental improvements, the PIs will pursue a bottom-up approach in which the architecture is designed for many-core systems from its inception. On the hardware side, instead of simply adding more cores to a single chip, the PIs will design new hardware components that offload computationally critical tasks from the software system.
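
To make the coordination bottleneck concrete, the short C++ sketch below (an illustration under assumed parameters, not code from the project) simulates a centralized transaction-timestamp allocator of the kind used by several classical concurrency control schemes. Every worker thread must atomically increment one shared counter, so cache-line contention caps aggregate throughput no matter how many cores are added; the thread counts and variable names are hypothetical.

// Illustrative sketch only: a centralized timestamp allocator that every
// "transaction" must touch, a classic many-core scalability bottleneck.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

std::atomic<unsigned long> global_ts{0};  // single shared word all threads contend on

void worker(unsigned long iters) {
    for (unsigned long i = 0; i < iters; ++i) {
        // Each iteration grabs a unique timestamp; cache-line ping-pong on
        // this one counter serializes the cores.
        global_ts.fetch_add(1, std::memory_order_relaxed);
    }
}

int main() {
    const unsigned long iters = 1000000;          // work per thread (arbitrary)
    for (unsigned nthreads : {1u, 4u, 16u, 64u}) {  // hypothetical core counts
        global_ts = 0;
        auto start = std::chrono::steady_clock::now();
        std::vector<std::thread> threads;
        for (unsigned t = 0; t < nthreads; ++t)
            threads.emplace_back(worker, iters);
        for (auto& th : threads) th.join();
        double secs = std::chrono::duration<double>(
                          std::chrono::steady_clock::now() - start).count();
        std::printf("%3u threads: %.1f M timestamps/sec\n",
                    nthreads, nthreads * iters / secs / 1e6);
    }
    return 0;
}

Compiled with, for example, g++ -O2 -pthread, aggregate throughput typically stops improving (and often degrades) as the thread count grows; it is this kind of hidden serialization point in concurrency control, logging, and indexing that the proposed software-hardware co-design aims to eliminate.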