For the Amazon DynamoDB team, AWS re:Invent 2024 was an incredible opportunity to connect and reconnect with our customers. We interacted with hundreds of customers to learn what is top of mind for their builders and businesses as they head into the new year. We work backward from these conversations and feedback to prioritize our roadmaps.
re:Invent is also a great opportunity for customers and builders who use AWS to share what they have built and why, architectural patterns, and best practices in the form of breakout sessions and workshops. The key themes this year were “better together” integrations, data modeling, and building globally resilient, scalable applications on DynamoDB. In case you missed some of these sessions, or you wanted to get caught up on why customers like Klarna, Krafton, Vanguard, Fidelity, and JPMorgan Chase are building on DynamoDB, you can read this helpful summary of some of the DynamoDB highlights from re:Invent 2024.
Keynotes and leadership sessions
The keynotes and leadership sessions at re:Invent set the tone for the conference, highlighting the AWS vision for the future of cloud databases. In these presentations, AWS executives and product leaders unveiled new features, shared strategic insights, and showcased the central role of DynamoDB in NoSQL database offerings. Let’s recap the key DynamoDB-related announcements and breakouts from these sessions.
Keynote with Matt Garman, AWS CEO
Matt Garman’s keynote discussed what customers want from a hypothetical “feature perfect” database product. High availability, multi-Region support, predictable low latency, strong consistency, and zero operational burden were some of the key callouts. The keynote also covered the origins of the DynamoDB service in the “Dynamo paper,” and announced the new DynamoDB multi-Region strong consistency for global tables feature in preview.
In this image, Matt explains the background of databases at AWS, including purpose-built databases like DynamoDB.
To learn more about how AWS continues to innovate across every aspect of the cloud, see the full CEO keynote with Matt Garman.
AWS databases: The foundation for data-driven and generative AI apps (DAT201)
In this highly anticipated innovation talk, Jeff Carter and G2 discussed database innovation in 2024. G2 described how tens of thousands of customers use DynamoDB global tables for workloads such as publishing social media activity or tracking streaming video usage.
To explain global tables strong consistency, G2 walked viewers through an instructive example of how a bank vends credit against the available credit lines on its customers’ credit cards. In this image, G2 explains the write path of the multi-Region journal, and how a write to one AWS Region is acknowledged and then replicated to the global table’s replicas in the other Regions.
United Airlines was featured in part of the talk, naming DynamoDB global tables as the primary system for its seating assignments due to their exceptional scalability and active-active multi-Region high availability, with single-digit millisecond latency. BMW chose DynamoDB for the service discovery mechanism of its vehicle connectivity platform because of its fully managed, single-digit millisecond performance and serverless design.
Finally, Jeff talked about reducing the cost of our infrastructure, and passing those cost savings to customers. In this image, he explains the price reductions across DynamoDB global tables, Amazon Keyspaces (for Apache Cassandra), Amazon ElastiCache Serverless for Valkey, and node-based Amazon ElastiCache for Valkey.
To learn more about our commitment to innovation in databases, with a focus on making them more scalable, resilient, and user-friendly for customers across various industries, check out the DAT201 innovation talk on the AWS Events channel.
Dive deep into Amazon DynamoDB (DAT406)
Amrith Kumar returns to re:Invent after presenting DAT330 in 2023. This time, Amrith presented a deep dive into DynamoDB, specifically the challenges behind how DynamoDB delivers on three key themes: predictability, low latency, and operation at any scale. He also dove deeper into two new features released this quarter: warm throughput and multi-Region strong consistency for global tables. He touched on how DynamoDB works under the hood, and referenced the recently published DynamoDB paper, which explores what the DynamoDB team has learned after 10 years of operating a predictable, low-latency database service that runs at any scale.
In this image, Amrith explains the inner workings of how partition metadata lookup works, with a helpful QR code that links to the DynamoDB paper published in the 2022 USENIX Annual Technical Conference.
To learn more about the inner workings of DynamoDB, check out the full breakout session Dive deep into Amazon DynamoDB (DAT406). Refer to Amrith’s AWS blog posts Query data with DynamoDB Shell – a command line interface for Amazon DynamoDB as well as Use Amazon DynamoDB global tables in DynamoDB Shell.
To learn more about warm throughput, refer to Understanding DynamoDB warm throughput.
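For a rough sense of what warm throughput looks like in practice, here is a minimal boto3 sketch. The table name is hypothetical, and the `WarmThroughput` parameter assumes an SDK version recent enough to include the 2024 warm throughput launch.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Inspect the warm throughput values currently reported for a table
# (table name is hypothetical).
desc = dynamodb.describe_table(TableName="orders")
print(desc["Table"].get("WarmThroughput"))

# Request higher warm throughput so the table can absorb an anticipated
# traffic spike without a gradual ramp-up.
dynamodb.update_table(
    TableName="orders",
    WarmThroughput={
        "ReadUnitsPerSecond": 50000,
        "WriteUnitsPerSecond": 15000,
    },
)
```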
An insider’s look into architecture choices for Amazon DynamoDB (DAT419)
In this breakout session, Amrith Kumar and Joseph Idziorek provide a thorough understanding of how the core architecture of DynamoDB overcomes the limitations of relational databases to power the most demanding applications on the planet. They also explain how and why over 1 million AWS customers use DynamoDB to build low-latency, performant, scalable, high-throughput applications.
In this image, Amrith walks through the high-level architecture that customers know as DynamoDB.
In this image, Amrith used a real customer quote to highlight that building “boring systems” the right way leads to better outcomes.
In this image, Joseph talks about the architecture choices that align with the design goals of building a database that is predictable, highly available, and low latency at any scale.
Joseph capped off the talk with the recent 2024 launches for DynamoDB, like the 50% price reduction for on-demand throughput, security launches, global tables, serverless, and integrations. There are many more insightful and important moments in this talk, all of which can be found in the full DAT419 talk on the AWS Events channel.
Multi-Region strong consistency with Amazon DynamoDB global tables (DAT425-NEW)
In this talk, Jeff Duffy and Somu Perianayagam from the DynamoDB team explained the newly released preview of global tables multi-Region strong consistency (MRSC). Global tables provide a fully managed solution for deploying a multi-Region, multi-active database, without having to build and maintain your own replication solution.
Jeff explained the new MRSC functionality and the context behind the feature, and provided a quick demo. In this image, Jeff explains the comparison between multi-Region eventual and strong consistency.
Somu dove deeper into how the team originally built DynamoDB global tables and then talked about how the team built multi-Region strong consistency. In this image, Somu talks through the comparison between asynchronous and MRSC global tables.
To learn more, check out the full breakout talk Multi-Region strong consistency with Amazon DynamoDB global tables (DAT425-NEW), and learn more about global tables and multi-Region strong consistency (preview) in the Amazon DynamoDB Developer Guide.
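As a rough sketch of what the feature enables, the example below assumes a global table (hypothetically named `payments`) with MRSC already enabled across its replica Regions. With boto3, a strongly consistent read can then be served from any replica Region.

```python
import boto3

# Write in one Region...
us_east = boto3.resource("dynamodb", region_name="us-east-1").Table("payments")
us_east.put_item(Item={"pk": "ACCOUNT#123", "sk": "BALANCE", "available_credit": 500})

# ...and read with strong consistency from another replica Region.
# With MRSC enabled on the global table, this read reflects the latest
# acknowledged write regardless of which Region accepted it.
us_west = boto3.resource("dynamodb", region_name="us-west-2").Table("payments")
resp = us_west.get_item(
    Key={"pk": "ACCOUNT#123", "sk": "BALANCE"},
    ConsistentRead=True,
)
print(resp.get("Item"))
```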
“Better together” integrations
One major theme at re:Invent 2024 was the power of integrating DynamoDB with other AWS services. These “better together” integrations showcase how DynamoDB can be part of a broader, more comprehensive data strategy. In this section, we cover the zero-ETL cross-service integrations that were announced this year, demonstrating how DynamoDB can work with services like Amazon Redshift, Amazon Simple Storage Service (Amazon S3), and Amazon SageMaker Lakehouse to enhance data analytics and machine learning (ML) workflows.
Deep dive into Amazon DynamoDB zero-ETL integrations (DAT348)
This session provided an overview of what zero-ETL is: a set of fully managed data pipelines that minimizes the need for you to build and maintain your own extract, transform, and load (ETL) pipelines for common use cases like data ingestion and data replication. Sean Ma and David Gardner provided an in-depth look into zero-ETL integrations between DynamoDB and analytical services like Amazon Redshift, SageMaker Lakehouse, and Amazon OpenSearch Service. Specifically, this talk focused on how these integrations can simplify data replication and analytics workflows while reducing operational burden.
David talked through the Command Query Responsibility Segregation (CQRS) microservices architecture. In this image, David explains how DynamoDB Streams provides reliable downstream delivery of data.
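In a CQRS setup like the one David described, the read side is often a stream consumer. Below is a minimal sketch of an AWS Lambda handler subscribed to a table’s DynamoDB stream; the projection functions and their logic are hypothetical placeholders.

```python
# Hypothetical Lambda handler attached to a table's DynamoDB stream.
# Each invocation receives a batch of change records that can be used
# to maintain a separate read-optimized view (the "query" side of CQRS).

def handler(event, context):
    for record in event["Records"]:
        event_name = record["eventName"]          # INSERT, MODIFY, or REMOVE
        keys = record["dynamodb"]["Keys"]
        if event_name in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"].get("NewImage", {})
            update_read_model(keys, new_image)    # hypothetical projection
        elif event_name == "REMOVE":
            delete_from_read_model(keys)          # hypothetical projection


def update_read_model(keys, new_image):
    # Placeholder: upsert the projected view into your query store.
    print("upsert", keys, new_image)


def delete_from_read_model(keys):
    # Placeholder: remove the item from your query store.
    print("delete", keys)
```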
David also talked about the importance of removing the undifferentiated heavy lifting of building and managing ETL pipelines.
Sean walked us through the newly released zero-ETL integration with SageMaker Lakehouse. In this image, Sean talks through simplifying data replication and ingestion needs so customers can make data available, without impacting production workloads.
The talk also provided a live demonstration of the end-to-end setup of DynamoDB zero-ETL to SageMaker Lakehouse, and showed how replicated data becomes immediately queryable in Amazon Athena.
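As a hedged sketch of that “immediately queryable” step, the snippet below runs an Athena query with boto3. The catalog database, table, and results bucket names are hypothetical; in practice they correspond to whatever the zero-ETL integration creates in your account.

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Database, table, and output bucket names are hypothetical.
query = athena.start_query_execution(
    QueryString="SELECT order_id, status FROM orders LIMIT 10",
    QueryExecutionContext={"Database": "dynamodb_zero_etl_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = query["QueryExecutionId"]

# Poll until the query finishes, then print the rows.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```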
To learn more about how zero-ETL integrations can help you by simplifying the process of making data available for analytics without impacting production workloads, check out the full Deep dive into Amazon DynamoDB zero-ETL integrations (DAT348) breakout session.
Additionally, see DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse, DynamoDB zero-ETL integration with Amazon Redshift, and Best practices for working with DynamoDB zero-ETL integration and OpenSearch Service.
Data modeling in DynamoDB
Data modeling is a critical skill for using DynamoDB effectively, and re:Invent 2024 featured several sessions dedicated to this topic. From core concepts to advanced techniques, these talks provided valuable insights for both newcomers and advanced users. Let’s explore how DynamoDB experts shared their knowledge to help attendees optimize their applications built on DynamoDB.
Advanced data modeling with Amazon DynamoDB (DAT404)
In this talk, AWS Data Hero Alex DeBrie covered advanced techniques for data modeling in DynamoDB, focusing on mechanical sympathy, napkin math calculations, and practical use cases for DynamoDB streams.
In this image, Alex talks about what factors can affect your data modeling—covering the importance of setting the right primary key, your API structure, secondary indexes, pagination mechanics, and the consistency model.
To learn more about the importance of using DynamoDB features to build solutions, and receive guidance on when to consider using external systems to complement the capabilities of DynamoDB, check out the full DAT404 talk on AWS Events.
You can also check out Alex’s past talk at re:Invent 2023, Advanced data modeling with Amazon DynamoDB (DAT410).
Data modeling core concepts for Amazon DynamoDB (DAT305)
DynamoDB Specialist Solutions Architects Jason Hunter and Sean Shriver covered core concepts and best practices for data modeling in DynamoDB. The key focus areas included a phone book analogy to represent partition key and sort key structure, designing efficient data models for common access patterns, optimizing for performance and cost at scale, and handling complex requirements like soft deletes, accurate counting, and full-text search.
In this image, Sean walks through defining a partition key (PK) and a sort key (SK), and some advice on how to use them effectively.
One particular area where customers often face problems is optimizing for storage, because DynamoDB charges by the byte for storage. By compressing large attribute values on the client side and decompressing them on read, customers can shrink item sizes and save on storage costs, storing the compressed attribute as a base64-encoded binary value. In this image, Sean walks through how optimizing for item size using compression would work.
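Here is a minimal sketch of that compression pattern with boto3; the table, key, and attribute names are hypothetical. The client compresses a large attribute before writing and decompresses it after reading (the SDK transmits binary values base64-encoded on the wire).

```python
import gzip
import json
import boto3

table = boto3.resource("dynamodb", region_name="us-east-1").Table("reviews")  # hypothetical

large_payload = {"review_text": "lorem ipsum " * 1000}  # some large attribute value

# Write: compress the attribute client-side and store it as a binary value.
table.put_item(
    Item={
        "pk": "PRODUCT#42",
        "sk": "REVIEW#2024-12-01",
        "body_gz": gzip.compress(json.dumps(large_payload).encode("utf-8")),
    }
)

# Read: fetch the item and decompress the attribute client-side.
item = table.get_item(Key={"pk": "PRODUCT#42", "sk": "REVIEW#2024-12-01"})["Item"]
restored = json.loads(gzip.decompress(item["body_gz"].value))
```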
The core data modeling best practices that were called out were:
- Use descriptive partition keys and secondary keys (“USER#123” instead of simply “123”)
- Consider overloading global secondary indexes (GSIs) to support multiple access patterns
- Split static and dynamic data to reduce write costs
- Use write sharding to distribute load across partitions (a minimal sketch follows this list)
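The sketch below illustrates the write sharding item from the list above: writes for a hot key are spread across N shard suffixes, and reads fan out across the shards. The shard count, table, and key names are hypothetical choices.

```python
import random
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb", region_name="us-east-1").Table("events")  # hypothetical
SHARD_COUNT = 10  # tune to the write volume of the hot key

def put_event(customer_id: str, event: dict) -> None:
    # Append a random shard suffix so writes for one busy customer
    # are distributed across multiple partitions.
    shard = random.randrange(SHARD_COUNT)
    table.put_item(
        Item={
            "pk": f"CUSTOMER#{customer_id}#{shard}",
            "sk": event["timestamp"],
            **event,
        }
    )

def query_events(customer_id: str) -> list:
    # Reads query every shard and merge the results.
    items = []
    for shard in range(SHARD_COUNT):
        resp = table.query(
            KeyConditionExpression=Key("pk").eq(f"CUSTOMER#{customer_id}#{shard}")
        )
        items.extend(resp["Items"])
    return items
```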
To learn more about the importance of thoughtful data modeling, and get a comprehensive overview of data modeling in DynamoDB, from basic concepts to advanced optimization techniques, check out the full DAT305 session on AWS Events.
Dive deep into Amazon DynamoDB using design puzzles (DAT302-R)
This chalk talk was not recorded; however, a previous iteration was posted as a video on the AWS Events channel. To see previous presentations on this topic, check out a previous design puzzlers talk as well as How to solve common Amazon DynamoDB design puzzles on The Data Dive OnAir.
Customer sessions
One of the most valuable aspects of re:Invent is hearing directly from customers about how they use AWS services in the real world. Several DynamoDB customers shared their experiences this year, demonstrating the diverse ways organizations are using DynamoDB to solve complex challenges in different industries.
The evolution story of game architecture: PUBG: Battlegrounds, Krafton (GAM311)
In this talk, JungHun Kim of Krafton talked about how PUBG evolved their AWS architecture to serve millions of concurrent game users globally. PUBG: Battlegrounds was initially released on legacy infrastructure that included Windows servers, but the architecture has since been modernized to deliver a stable game experience.
In this image, Krafton explains how they use dedicated resources like DynamoDB local, which is part of a self-service infrastructure setup for their Quality Assurance (QA) team.
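For reference, pointing the SDK at DynamoDB local is just an endpoint override. The sketch below assumes DynamoDB local is already running on its default port (for example, via the `amazon/dynamodb-local` Docker image); the table name is hypothetical.

```python
import boto3

# Point the SDK at DynamoDB local instead of the AWS endpoint.
# Credentials are required by the SDK but are not validated locally.
local = boto3.resource(
    "dynamodb",
    endpoint_url="http://localhost:8000",
    region_name="us-east-1",
    aws_access_key_id="fake",
    aws_secret_access_key="fake",
)

# Create a throwaway table for an automated QA run.
table = local.create_table(
    TableName="qa-sessions",
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()
table.put_item(Item={"pk": "TEST#1", "status": "passed"})
```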
To learn more about the valuable lessons that Krafton learned in their transition journey on a live environment, check out the full GAM311 talk on AWS Events.
Building an educational startup on AWS: A heroic journey (DEV305)
In this talk, two AWS Heroes, Matt Coulter and Kristi Perreault, talked through the challenges of building a startup (teachmeaws.com) with an unpredictable number of users in a cost-effective, serverless way, in order to facilitate rapid development and reduce operational overhead. They chose DynamoDB because it fit their needs: it is inexpensive, scales to zero, works well for infrequent access, and offers predictable interactions. Choosing DynamoDB enabled Kristi and Matt to focus on building what’s important, avoid overly complex queries, and keep the option to grow.
In this image, Kristi talks about cost as a big factor in choosing DynamoDB, which led to other benefits as well.
To learn more about how two AWS Heroes built and deployed an educational platform for AWS, on AWS, check out DEV305 on the AWS Events channel.
Klarna: Accelerating credit decisioning with real-time data processing (FSI319)
Klarna’s Tony Liu presented the challenges of credit underwriting at the moment a credit decision is made on a customer’s purchase transaction. Klarna wanted to reduce friction when adding or updating features, and wanted to decouple its real-time and batch infrastructure. Klarna wanted to migrate to a more ideal solution: one with strict contracts from producers, cost efficiency and scalability, minimal differences between batch and real-time, and the ability to experiment with new features using historical data.
In this image, Tony explains the real-time processing layer of their application, which consists of the update state (write) side and the feature calculation (read) side. The DynamoDB table in the middle stores the intermediate state, with a domain-specific data model.
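As a rough sketch of what the write side of such a layer can look like (all names hypothetical, not Klarna’s actual implementation), an UpdateItem call can atomically fold a new transaction into the stored intermediate state for a feature.

```python
import boto3

table = boto3.resource("dynamodb", region_name="eu-west-1").Table("feature-state")  # hypothetical

def record_purchase(customer_id: str, amount: int) -> None:
    # Atomically update the running aggregates that the read side
    # later turns into features at decision time.
    table.update_item(
        Key={"pk": f"CUSTOMER#{customer_id}", "sk": "PURCHASE_AGGREGATE"},
        UpdateExpression="ADD purchase_count :one, purchase_total :amt",
        ExpressionAttributeValues={":one": 1, ":amt": amount},
    )
```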
To learn more about how Klarna accelerates credit decisioning with real-time data processing, check out FSI319 on the AWS Events channel.
Fidelity Investments: Building for mission-critical resilience (FSI318)
In this talk, Keith Blizard and Joe Cho talked about how Fidelity successfully implemented chaos engineering at scale for their most critical systems, providing a representative model of how to build resilient, highly available applications.
In this image, Keith talks about the fault injection methodology, and a high-level architecture diagram of their chaos execution and reporting system.
To learn more about how Fidelity developed a system that allowed their teams to run chaos experiments across their cloud environments, check out the full FSI318 talk on the AWS Events channel.
How Vanguard rebuilt its mission-critical trading application on AWS (FSI322)
In this talk, learn how Vanguard worked backward from its desired outcomes of increased scalability, improved resiliency, accelerated time to market and agility, improved customer satisfaction, and decreased total cost of ownership when rebuilding its trading application on AWS.
Vanguard identified portions of the data architecture that could be expressed as NoSQL key-value lookups and migrated them to DynamoDB. This decreased operational overhead by removing the patching and maintenance work they previously had to manage.
In this image, Vanguard’s Andrew Clemens describes their new brokerage trading cloud architecture, and specifically how they used DynamoDB as the database for their routing destination service, allowing them to stamp the trade with the right market center.
Another system, the backend execution management system, listens to the message. After translating the message into an industry-standard FIX message, they store the FIX message in a DynamoDB table before sending it outbound to the market centers.
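A minimal, purely hypothetical sketch of that persistence step with boto3: keying the translated message by order identifier and send time so downstream systems can retrieve it after it goes out to a market center.

```python
from datetime import datetime, timezone
import boto3

table = boto3.resource("dynamodb", region_name="us-east-1").Table("outbound-orders")  # hypothetical

def persist_outbound_message(order_id: str, fix_message: str, destination: str) -> None:
    # Store the translated message before it is sent to the market center,
    # keyed so it can be looked up by order and timestamp later.
    table.put_item(
        Item={
            "pk": f"ORDER#{order_id}",
            "sk": datetime.now(timezone.utc).isoformat(),
            "destination": destination,
            "message": fix_message,
        }
    )
```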
To learn more about how Vanguard rebuilt its retail investor trading platform on AWS to achieve higher resiliency, improve performance, and lower costs, check out the full FSI322 talk on the AWS Events channel.
JPMorgan Chase: Real-time fraud screening at massive scale (FSI315)
In this talk, we learned how JPMorgan Chase (JPMC) built a cloud-centered fraud screening engine, capable of handling thousands of transactions per second, while protecting over $10 trillion in daily payments volume. JPMC chose DynamoDB for low-latency data lookups to store batch feature aggregations as well as their decisions.
In this image, Karthik Devarapalli explains feature engineering and how they use DynamoDB to store the materializations of features.
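As a sketch of the read path described here, with hypothetical table and key names: a single low-latency lookup fetches the precomputed feature aggregates for an account at scoring time.

```python
import boto3

table = boto3.resource("dynamodb", region_name="us-east-1").Table("fraud-features")  # hypothetical

def get_features(account_id: str) -> dict:
    # Fetch the materialized batch aggregates for this account so the
    # scoring model can use them within the screening latency budget.
    resp = table.get_item(
        Key={"pk": f"ACCOUNT#{account_id}", "sk": "FEATURES#LATEST"},
        ConsistentRead=False,  # eventually consistent reads are cheaper and faster
    )
    return resp.get("Item", {})
```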
To learn more about how JPMC performs thousands of fraud checks per second within hundreds of milliseconds, check out the full FSI315 talk on the AWS Events channel.
Summary
re:Invent 2024 showcased the evolution of DynamoDB as a cornerstone of serverless, scalable architectures. Key themes emerged around global resilience, zero-ETL integrations, and data modeling techniques. Throughout the conference, customers from industries such as gaming and financial services demonstrated how DynamoDB empowers them to build predictable, low-latency, highly available solutions that operate at massive scale.
As we look towards 2025, we are excited to see how customers use newly announced features like multi-Region strong consistency and the new zero-ETL integration targets.
We encourage you to explore the wealth of DynamoDB content from re:Invent 2024. Visit the Databases session catalog to access all relevant sessions, or check out the full AWS re:Invent 2024 Databases Playlist for curated video content from the Databases re:Invent session track.
Whether you’re an experienced DynamoDB user or just starting your journey, there’s always more to learn. Get started with the Amazon DynamoDB Developer Guide and the DynamoDB getting started guide, and join the millions of customers that push the boundaries of what’s possible with DynamoDB.
About the author
Michael Shao is a Senior Developer Advocate on the Amazon DynamoDB team. Michael has spent over 8 years as a software engineer at Amazon Retail, with a background in designing and building scalable, resilient, and highly performing software systems. His expertise in exploratory, iterative development helps drive his passion for helping AWS customers understand complex technical topics. With a strong technical foundation in system design and building software from the ground up, Michael’s goal is to empower and accelerate the success of the next generation of developers.