
Hardware Provisioning

Creating a new database service

CrystalDB’s AutoDBA and cloud native serverless architecture eliminate the need for up-front resource provisioning decisions. The only information you need to provide when creating a database service is the location. Presently supported locations are the AWS regions us-east-1 and us-west-2, and we are working to support other cloud providers and additional AWS regions.

In contrast, provisioning a database traditionally requires DBAs to do detailed planning. This includes selecting the number and type of processors, amount of memory, amount of storage, and storage performance characteristics, such as bandwidth and operations per second. Assessing these needs accurately often requires testing, and confidence in tests requires realistic workloads. Even with the best testing, provisioning in advance relies on assumptions about future business needs and application evolution.

Ensuring high availability and meeting data protection requirements traditionally requires DBAs to provision redundant resources and configure monitoring and failover mechanisms. CrystalDB’s cloud native architecture builds in high levels of internal redundancy, making additional configuration such as database replication unnecessary. For more detail on how CrystalDB ensures high availability, see the High Availability section.

Operating a database service

CrystalDB’s cloud native architecture adjusts database resources every second while the service is online and processing transactions. This elasticity allows AutoDBA to take an entirely different approach to provisioning: It responds in real time to the workload’s actual resource needs rather than to projections or synthetic benchmarks.

At a high level, AutoDBA repeatedly solves a cost optimization problem: What are the minimum resources that meet the workload’s present requirements? This gets interesting when there are tradeoffs and interactions among various resource types.
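To make the optimization framing concrete, here is a minimal sketch of the idea: enumerate candidate resource configurations and pick the cheapest one that satisfies the workload's current requirements. All option lists, prices, and the `meets_requirements` model are illustrative assumptions, not CrystalDB's actual internals.

```python
# Hypothetical sketch: choose the cheapest configuration that meets the
# workload's present requirements. Prices and options are assumed values.
from itertools import product

CPU_OPTIONS = [2, 4, 8, 16]     # candidate vCPU counts
MEM_OPTIONS = [8, 16, 32, 64]   # candidate memory sizes, GiB
COST_PER_VCPU = 0.04            # $/hour, assumed
COST_PER_GIB = 0.005            # $/hour, assumed

def meets_requirements(vcpus, mem_gib, workload):
    """Stand-in for a workload model: does this configuration satisfy
    the observed throughput and latency needs?"""
    return (vcpus >= workload["min_vcpus"]
            and mem_gib >= workload["min_mem_gib"])

def cheapest_config(workload):
    # Enumerate feasible configurations and take the minimum-cost one.
    feasible = [
        (v * COST_PER_VCPU + m * COST_PER_GIB, v, m)
        for v, m in product(CPU_OPTIONS, MEM_OPTIONS)
        if meets_requirements(v, m, workload)
    ]
    return min(feasible, default=None)

cost, vcpus, mem = cheapest_config({"min_vcpus": 4, "min_mem_gib": 24})
# With these assumed prices: 4 vCPUs and 32 GiB at $0.32/hour.
```

In practice the interesting cases are exactly the ones this toy model ignores: configurations interact, so a requirement expressed in one dimension (latency) can be met by spending in several different dimensions (memory, CPU, or storage performance).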

To get a sense for how resource requirements and performance characteristics are intertwined, consider the ripple effects of increasing memory. Adding memory has a cost, but the memory can be used to create a larger buffer cache, reducing the need for I/O. The most readily evident benefits are less work for the storage service and reduced transaction latency, but other benefits include reduced work for the processor and the network, which are involved in fetching blocks from storage. There are secondary benefits as well: With reduced latency, fewer transactions are active in the system for any given transaction arrival rate. This reduces demand for working memory, making even more memory available for caching. Depending on the workload, it can also reduce contention, increasing system throughput and possibly further lowering latency. Lower concurrency also improves the efficiency of the processor’s various internal caches because fewer transactions need to share them.
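The chain of effects above can be sketched numerically. The model below is purely illustrative (not CrystalDB's actual model): it assumes the buffer-cache hit ratio grows with the fraction of the working set that fits in cache, derives transaction latency from per-block access costs, and applies Little's law to show how lower latency reduces the number of in-flight transactions. Every constant is an assumption.

```python
# Illustrative model of the memory -> cache -> latency -> concurrency chain.
IO_LATENCY_MS = 2.0      # assumed latency of a block read from storage
CACHE_HIT_MS = 0.02      # assumed latency of a buffer-cache hit
BLOCKS_PER_TXN = 50      # assumed block accesses per transaction
WORKING_SET_GIB = 64     # assumed size of the hot data set

def hit_ratio(cache_gib):
    # Crude assumption: hit ratio equals the fraction of the working
    # set that fits in cache, capped at 100%.
    return min(1.0, cache_gib / WORKING_SET_GIB)

def txn_latency_ms(cache_gib):
    h = hit_ratio(cache_gib)
    return BLOCKS_PER_TXN * (h * CACHE_HIT_MS + (1 - h) * IO_LATENCY_MS)

def concurrent_txns(cache_gib, arrivals_per_sec):
    # Little's law: average in-flight transactions = arrival rate x latency.
    return arrivals_per_sec * txn_latency_ms(cache_gib) / 1000.0

for cache_gib in (16, 32, 64):
    print(f"{cache_gib} GiB cache: "
          f"{txn_latency_ms(cache_gib):.2f} ms/txn, "
          f"{concurrent_txns(cache_gib, 1000):.1f} in flight at 1000 txn/s")
```

Under these assumptions, doubling the cache from 32 GiB to 64 GiB cuts latency by roughly 50x, and the in-flight transaction count falls with it, which is the secondary effect the text describes: less working memory in use and less contention.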

How much memory is optimal for your database? AutoDBA monitors incoming transactions, data access patterns, and transaction contention, building a model of how your application responds to changes in memory. This allows it to react instantly when workload patterns shift.
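The monitor-and-react loop described above can be sketched as a simple feedback rule: observe cache effectiveness, then grow or shrink memory accordingly. This rule-based stand-in is a deliberate oversimplification of the learned model the text describes; the function name, metrics, and thresholds are all hypothetical.

```python
# Hypothetical sketch of a memory-sizing feedback rule. Thresholds and
# metric names are assumptions for illustration, not AutoDBA's actual logic.
def recommend_memory_gib(current_gib, metrics):
    """One step of a feedback loop: return the next memory size."""
    if metrics["cache_hit_ratio"] < 0.90 and metrics["memory_pressure"] < 0.8:
        return current_gib * 2   # cache misses dominate: grow the cache
    if metrics["cache_hit_ratio"] > 0.99 and current_gib > 8:
        return current_gib // 2  # cache is oversized: reclaim memory
    return current_gib           # steady state: no change

# One iteration with sample observations: a 16 GiB cache with an 85% hit
# ratio and low memory pressure is grown to 32 GiB.
print(recommend_memory_gib(16, {"cache_hit_ratio": 0.85,
                                "memory_pressure": 0.5}))  # -> 32
```

A real controller would replace the fixed thresholds with the workload model built from the observed access patterns, but the loop structure, observe, decide, resize, is the same.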