Lead Oracle, London, United Kingdom
Serverless databases are transforming cloud data management infrastructure through their operational simplicity, usage-based pricing, and elastic scalability. However, their performance under real-world workloads remains underexplored. This paper presents an in-depth analysis of serverless database systems using simulation-based benchmarks that evaluate Aurora Serverless and FaunaDB against provisioned RDS PostgreSQL. We simulate cold start latencies, dynamic cost settlement, autoscaling behavior, transaction throughput, and cost-per-transaction efficiency. Our findings reveal cost savings of up to 45% in burst-heavy workload scenarios, while exposing the latency penalties incurred by cold starts and storage rehydration during recovery. We also evaluate throughput and stream-level metrics, including IOPS, CPU consumption, and query drop rates, revealing critical elapsed-time benchmarks and operational choke-point windows. This work offers practical guidance for system designers and cloud database users seeking to move away from statically provisioned architectures, and motivates future research on surge anticipation, data processing, and distributed multi-cloud frameworks for real-time replication in data-centric systems.
This is an open access article distributed under the Creative Commons Attribution Non-Commercial (CC BY-NC) License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The statements, opinions, and data contained in the journal are solely those of the individual authors and contributors and not of the publisher and the editor(s). The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.