Assuming a hybrid SCM-DRAM hardware architecture, we propose a novel software architecture for database systems that places primary data in SCM and directly operates on it, eliminating the need for explicit I/O.
As Dear Reader is aware, I myself am not a fan of MVCC semantics for an RDBMS. It is resource-heavy and invites developers to treat transactions as cost-free, so applications end up awash in 'persistent transactions'. Alas, I suspect due to the authors' involvement with SAP/HANA, this paper's PoC is MVCC-based. I don't hold this failing against it.
It's quite clear now that there is a 'spectrum' of invention with regard to the RDBMS (really, any stored data): from direct transactions against SCM at one end, to caching/buffering/logging and only thence to SCM, as we do today with DRAM and SSD/HDD, at the other. What to do? What to do?
Here's what's proposed:
A hybrid SCM-DRAM hardware architecture allows for a radical change and a novel database system architecture in different perspectives:
• There is no need to copy data out of the storage system since directly modifying the data stored in SCM becomes possible;
• The use of persistent data structures in combination with an adequate concurrency scheme allows for completely dropping the logging infrastructure;
• It can be dynamically decided where to store certain database objects as long as their loss can be tolerated;
• It is possible to achieve near-instant recovery.
We realize the above design principles in SOFORT, our prototype storage engine which we implemented from scratch for this purpose. In the following, we give an overview of SOFORT and contrast our design decisions with state-of-the-art.
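To make 'no explicit I/O' concrete, here's a minimal sketch of my own (not the paper's code) of what operating directly on SCM looks like from user space, assuming a file on a DAX-capable pmem filesystem mapped into the address space; the path and sizes are hypothetical:

/* Sketch: operate directly on SCM via a DAX mmap -- no read()/write(),
 * no buffer pool. Assumes /mnt/pmem is a DAX-mounted pmem filesystem
 * and the file is at least 1 MiB. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("/mnt/pmem/table0", O_RDWR);  /* hypothetical path */
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 1 << 20;
    /* MAP_SHARED: stores land in the SCM-backed pages themselves. */
    uint64_t *col = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
    if (col == MAP_FAILED) { perror("mmap"); return 1; }

    col[42] = 7;                 /* update primary data in place */
    /* Persist the dirty range; on real SCM this amounts to cache
     * flushes, not block I/O. */
    msync(col, len, MS_SYNC);

    munmap(col, len);
    close(fd);
    return 0;
}

No log record, no buffer-pool eviction: the store itself is the write to the persistent medium.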
More specifically:
Most previous works have focused solely on improving OLTP performance in row-stores. In comparison, SOFORT is a hybrid SCM-DRAM main-memory transactional storage engine intended for hybrid analytical and transactional workloads. It is a single-level store, i.e., it keeps a single copy of the primary data in SCM, directly operates on it, and persists changes in-place in small increments.
So, not just MVCC, but column-store. No matter. The principles of implementation should be agnostic to both characteristics.
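Since the PoC is MVCC-based, a one-struct sketch of the usual visibility rule shows why the principle is agnostic to row- vs. column-store; this is the textbook scheme, not SOFORT's actual layout, and the names are mine:

/* Textbook MVCC visibility check -- a sketch, not SOFORT's layout.
 * Each version records the transaction that created it and the one
 * that (logically) deleted it; a reader sees the version iff its
 * snapshot falls inside that window. */
#include <stdbool.h>
#include <stdint.h>

typedef struct Version {
    uint64_t begin_ts;       /* commit timestamp of creating txn      */
    uint64_t end_ts;         /* commit ts of deleting txn, else MAX   */
    struct Version *older;   /* chain to the previous version         */
    /* ... payload, row or column fragment, it makes no difference ... */
} Version;

static bool visible(const Version *v, uint64_t snapshot_ts) {
    return v->begin_ts <= snapshot_ts && snapshot_ts < v->end_ts;
}

/* A reader walks the chain newest-to-oldest until visible() holds. */
static const Version *read_version(const Version *newest, uint64_t ts) {
    for (const Version *v = newest; v != NULL; v = v->older)
        if (visible(v, ts))
            return v;
    return NULL;   /* tuple did not exist at this snapshot */
}

Nothing in the visibility rule cares whether the payload is a row or a column fragment, which is the point.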
Once again, from the wiki, IBM OS/400 (now called 'IBM i'):
The system was one of the earliest to be object-based. Unlike traditional operating systems like Unix and Windows NT there are no files, only objects of different types. The objects persist in very large, flat virtual memory, called a single-level store.
And, once again, the machine and the OS and the RDBMS (a version of DB2) are an integrated whole. Here's an overview of the OS under its i5/OS nomenclature.
In the News Biz, it is considered a major flaw to bury the lede. Well, here's the lede, and it's buried at the very end. As these various reports have demonstrated, the potential use of SCM falls into two contradictory patterns: read/write to it directly with no intermediaries, or simply substitute it for legacy files/file systems. What to do?
It turns out that SCM existed many decades ago, in the form of magnetic core memory, but was supplanted by semi-conductor memory and processing.
When not being read or written, the cores maintain the last value they had, even if the power is turned off. Therefore they are a type of non-volatile memory.
Who knew? I did, since I started in the compute world while such machines were still around, barely.
So, why is SCM viewed as such an innovation? I suppose partly because many in the compute industry have no memory of, or study of, the past. Also, core was bulky and expensive, so the notion of even megabyte memories was envelope-pushing. And MRAM was first proposed many years ago, yet is still not 'mainstream'. By the by, the wiki piece mentions the CDC 6600, which was the machine I used while at the UMass econ department grad program. Ah, the memories of pounding on that O59.
Turns out that semi-conductor circuits aren't all equal, and as time went on, cpu speed exceeded not only memory speed but especially disk speed. The RISC approach began to take hold, and with increasing density of fabrication, the core conundrum had to be dealt with: how to get a speedier computer when cpu speed keeps increasing but disk speed is all but static? Enter multi-level cache/buffer, OoO (out-of-order) execution, multi-thread, and multi-processor. IOW, there became many a slip twixt the cup and the lip.

The main report discussed here takes 200 pages to work around the many points of disconnect in modern cpu/memory/storage regimes vis-a-vis SCM. Making best use of SCM simply won't work if all these tiers in modern cpu and memory-management controllers are kept around just to make SCM look like *nix files. The cpu should just see a flat address-space of persistent objects, and manipulate them directly, not some convoluted file system on disk, or even on SSD, mediated by myriad caches and buffers. Managing all those caches/buffers/memory and storage to put the needed data in the 'right place at the right time' becomes a valueless exercise. The worst nightmare up and down the semi-conductor production stack!
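That slip twixt the cup and the lip is concrete on today's x86: a store to an SCM-mapped address sits in the volatile cpu cache until it is explicitly written back and fenced. A sketch of the primitive involved, assuming a cpu with the CLWB instruction (the intrinsics are Intel's, from immintrin.h; compile with -mclwb):

/* Sketch: on x86, a store to SCM is not durable until its cache line
 * is written back and the write-back is ordered. Assumes CLWB support
 * (real code would check CPUID first). */
#include <immintrin.h>
#include <stdint.h>

static void persist_u64(volatile uint64_t *addr, uint64_t val) {
    *addr = val;                 /* store lands in the cpu cache        */
    _mm_clwb((void *)addr);      /* write the cache line back to SCM    */
    _mm_sfence();                /* order it before any later stores    */
}

Every tier between that store and the medium is one more place for a crash to leave the persistent state inconsistent, which is exactly the disconnect the report spends those 200 pages on.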
It will take a bunch of radical EEs to build such a microprocessor, which could look a lot like that TI-990 of yore: a really simple machine built with at least an order of magnitude fewer elements. And there's no guarantee that the resulting machine will be 'better' than today's standard. It's not what Intel or even ARM would welcome; if you don't need billions and billions of transistors to build a cpu, then what becomes of all that money spent building ever-teenier nodes to squeeze more transistors onto a wafer? If that machine doesn't get built, then SCM will go the way of the Sony Walkman: lots of money spent drilling a dry hole, or taking a worthless compound through PIII trials.
More to come, I'm sure. But this one is more than long enough.