Modern Database Management Systems – Amrita Vishwa Vidyapeetham

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Overview of RDBMS – Storage and File Structures, Indexing and Hashing – Indexing Structures – Single and Multi-level Indexes. Query Processing, Optimization and Database Tuning – Algorithms for Query Processing and Optimization, Physical Database Design and Tuning.

Intermediate and Advanced SQL – Embedded SQL, Dynamic SQL, Functions and Procedural Constructs, Recursive Queries, Advanced SQL Features.

Transactions Processing and Concurrency Control – Transaction Concept, Transaction model, Storage Structure, Transaction Atomicity and Durability, Transaction Isolation, Serializability. Object Relational Data Models – Complex Data Types, Inheritance, Nesting and Unnesting. NoSQL Databases – NoSQL Data Models, Comparisons of various NoSQL Databases. CAP Theorem, Storage Layout, Query models. Key-Value Stores. Document-databases – Apache CouchDB, MongoDB. Column Oriented Databases – Google’s Big Table, Cassandra.

Advanced Application Development – Connecting to MongoDB with Python, MongoDB query Language, Updating/Deleting documents in collection, MongoDB query operators. MongoDB and Python patterns – Using Indexes with MongoDB, GeoSpatial Indexing, Upserts in MongoDB. Document database with Web frameworks.
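The MongoDB-with-Python topics listed above can be sketched in a few lines of pymongo. This is an illustrative sketch only: the collection, field names, and connection URL are invented, and running the operations assumes a local mongod instance and the pymongo package.

```python
# Illustrative sketch of the MongoDB-and-Python topics above, using pymongo.
# Collection names, field names, and the connection URL are invented examples.

# MongoDB queries and updates are expressed as plain Python dicts built from
# query/update operators:
price_filter = {"price": {"$gt": 100}}             # $gt: match prices over 100
bump_views = {"$inc": {"views": 1}}                # $inc: increment a counter
upsert_update = {"$set": {"name": "widget", "price": 120}}

def demo(uri: str = "mongodb://localhost:27017") -> None:
    """Run the CRUD/index operations against a running mongod instance."""
    from pymongo import MongoClient, ASCENDING  # imported lazily so the sketch
    client = MongoClient(uri)                   # parses without pymongo installed
    coll = client["shop"]["products"]
    # Upsert: update the matching document, or insert it if absent
    coll.update_one({"sku": "W-1"}, upsert_update, upsert=True)
    coll.update_many(price_filter, bump_views)   # bulk update via query operator
    coll.delete_many({"discontinued": True})     # delete documents in a collection
    coll.create_index([("price", ASCENDING)])    # single-field index for queries
```

Calling `demo()` requires a reachable MongoDB server; the operator documents themselves are ordinary dicts and can be inspected without one.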

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



MongoDB, Inc. (NASDAQ:MDB) Short Interest Update – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) was the recipient of a large increase in short interest during the month of October. As of October 31st, there was short interest totalling 4,290,000 shares, an increase of 25.1% from the October 15th total of 3,430,000 shares. Based on an average daily trading volume of 1,210,000 shares, the days-to-cover ratio is currently 3.5 days.

MongoDB Stock Down 3.1%

Shares of MongoDB stock opened at $291.59 on Friday. MongoDB has a twelve month low of $212.74 and a twelve month high of $509.62. The company has a current ratio of 5.03, a quick ratio of 5.03 and a debt-to-equity ratio of 0.84. The business’s 50 day simple moving average is $278.11 and its 200-day simple moving average is $275.45.

MongoDB (NASDAQ:MDB) last posted its earnings results on Thursday, August 29th. The company reported $0.70 earnings per share (EPS) for the quarter, beating analysts’ consensus estimates of $0.49 by $0.21. The firm had revenue of $478.11 million during the quarter, compared to analyst estimates of $465.03 million. MongoDB had a negative net margin of 12.08% and a negative return on equity of 15.06%. MongoDB’s revenue was up 12.8% on a year-over-year basis. During the same period in the previous year, the business posted ($0.63) EPS. As a group, equities research analysts expect that MongoDB will post -2.39 EPS for the current year.

Analysts Set New Price Targets

MDB has been the topic of several research analyst reports. Piper Sandler raised their target price on shares of MongoDB from $300.00 to $335.00 and gave the stock an “overweight” rating in a research report on Friday, August 30th. Needham & Company LLC lifted their price target on shares of MongoDB from $290.00 to $335.00 and gave the company a “buy” rating in a report on Friday, August 30th. UBS Group upped their price objective on MongoDB from $250.00 to $275.00 and gave the stock a “neutral” rating in a report on Friday, August 30th. DA Davidson boosted their price target on MongoDB from $330.00 to $340.00 and gave the company a “buy” rating in a research report on Friday, October 11th. Finally, Sanford C. Bernstein raised their price objective on MongoDB from $358.00 to $360.00 and gave the stock an “outperform” rating in a report on Friday, August 30th. One equities research analyst has rated the stock with a sell rating, five have assigned a hold rating, nineteen have given a buy rating and one has given a strong buy rating to the company’s stock. According to MarketBeat, the company has a consensus rating of “Moderate Buy” and a consensus price target of $334.25.


Insider Buying and Selling at MongoDB

In other news, CFO Michael Lawrence Gordon sold 5,000 shares of the firm’s stock in a transaction on Monday, October 14th. The stock was sold at an average price of $290.31, for a total transaction of $1,451,550.00. Following the sale, the chief financial officer now owns 80,307 shares in the company, valued at $23,313,925.17. This trade represents a 5.86% decrease in their ownership of the stock. The transaction was disclosed in a document filed with the Securities & Exchange Commission, which is available through this link. Also, Director Dwight A. Merriman sold 1,000 shares of the company’s stock in a transaction on Friday, August 30th. The shares were sold at an average price of $290.40, for a total value of $290,400.00. Following the completion of the transaction, the director now owns 1,138,006 shares of the company’s stock, valued at $330,476,942.40. The trade was a 0.09% decrease in their ownership of the stock. The disclosure for this sale can be found here. In the last 90 days, insiders have sold 24,281 shares of company stock worth $6,657,121. 3.60% of the stock is owned by insiders.

Hedge Funds Weigh In On MongoDB

Several large investors have recently bought and sold shares of the stock. Benjamin Edwards Inc. bought a new position in MongoDB during the third quarter valued at about $1,055,000. Avala Global LP acquired a new stake in shares of MongoDB during the 3rd quarter valued at $47,960,000. Point72 Hong Kong Ltd bought a new stake in shares of MongoDB during the 3rd quarter worth $7,964,000. Firsthand Capital Management Inc. grew its holdings in shares of MongoDB by 150.0% in the 3rd quarter. Firsthand Capital Management Inc. now owns 10,000 shares of the company’s stock worth $2,741,000 after acquiring an additional 6,000 shares during the last quarter. Finally, JAT Capital Mgmt LP lifted its holdings in shares of MongoDB by 53.5% during the third quarter. JAT Capital Mgmt LP now owns 86,320 shares of the company’s stock valued at $23,337,000 after purchasing an additional 30,073 shares during the last quarter. Institutional investors and hedge funds own 89.29% of the company’s stock.

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Further Reading

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.



Article originally posted on mongodb google news. Visit mongodb google news



Curi RMB Capital LLC Raises Holdings in MongoDB, Inc. (NASDAQ:MDB) – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Curi RMB Capital LLC boosted its stake in shares of MongoDB, Inc. (NASDAQ:MDB) by 11.5% in the third quarter, according to the company in its most recent Form 13F filing with the Securities & Exchange Commission. The firm owned 22,872 shares of the company’s stock after buying an additional 2,362 shares during the period. Curi RMB Capital LLC’s holdings in MongoDB were worth $6,183,000 as of its most recent filing with the Securities & Exchange Commission.

Several other hedge funds and other institutional investors have also bought and sold shares of MDB. Swedbank AB raised its stake in shares of MongoDB by 156.3% during the 2nd quarter. Swedbank AB now owns 656,993 shares of the company’s stock worth $164,222,000 after acquiring an additional 400,705 shares in the last quarter. Thrivent Financial for Lutherans grew its holdings in MongoDB by 1,098.1% in the second quarter. Thrivent Financial for Lutherans now owns 424,402 shares of the company’s stock worth $106,084,000 after purchasing an additional 388,979 shares during the last quarter. Clearbridge Investments LLC increased its position in shares of MongoDB by 109.0% during the first quarter. Clearbridge Investments LLC now owns 445,084 shares of the company’s stock worth $159,625,000 after purchasing an additional 232,101 shares in the last quarter. Point72 Asset Management L.P. purchased a new stake in shares of MongoDB during the 2nd quarter valued at $52,131,000. Finally, Renaissance Technologies LLC boosted its position in shares of MongoDB by 828.9% in the 2nd quarter. Renaissance Technologies LLC now owns 183,000 shares of the company’s stock worth $45,743,000 after purchasing an additional 163,300 shares in the last quarter. Institutional investors own 89.29% of the company’s stock.

Insiders Place Their Bets

In other news, Director Dwight A. Merriman sold 3,000 shares of MongoDB stock in a transaction that occurred on Tuesday, September 3rd. The stock was sold at an average price of $290.79, for a total value of $872,370.00. Following the sale, the director now directly owns 1,135,006 shares of the company’s stock, valued at $330,048,394.74. The trade was a 0.26% decrease in their ownership of the stock. The sale was disclosed in a legal filing with the SEC, which is available at this link. Also, CRO Cedric Pech sold 302 shares of the firm’s stock in a transaction on Wednesday, October 2nd. The shares were sold at an average price of $256.25, for a total value of $77,387.50. Following the completion of the transaction, the executive now owns 33,440 shares in the company, valued at $8,569,000. The trade was a 0.90% decrease in their position. The disclosure for this sale can be found here. Over the last ninety days, insiders sold 24,281 shares of company stock worth $6,657,121. Company insiders own 3.60% of the company’s stock.

Wall Street Analysts Weigh In

Several research analysts recently issued reports on the company. Bank of America upped their target price on MongoDB from $300.00 to $350.00 and gave the company a “buy” rating in a research report on Friday, August 30th. Scotiabank upped their price objective on shares of MongoDB from $250.00 to $295.00 and gave the company a “sector perform” rating in a report on Friday, August 30th. Piper Sandler lifted their target price on shares of MongoDB from $300.00 to $335.00 and gave the stock an “overweight” rating in a research note on Friday, August 30th. Truist Financial upped their price target on shares of MongoDB from $300.00 to $320.00 and gave the company a “buy” rating in a research note on Friday, August 30th. Finally, Needham & Company LLC raised their price target on shares of MongoDB from $290.00 to $335.00 and gave the stock a “buy” rating in a report on Friday, August 30th. One investment analyst has rated the stock with a sell rating, five have issued a hold rating, nineteen have given a buy rating and one has given a strong buy rating to the company. According to MarketBeat.com, the company has an average rating of “Moderate Buy” and a consensus target price of $334.25.


MongoDB Stock Down 3.1%

NASDAQ:MDB opened at $291.59 on Friday. The company has a debt-to-equity ratio of 0.84, a quick ratio of 5.03 and a current ratio of 5.03. MongoDB, Inc. has a 52 week low of $212.74 and a 52 week high of $509.62. The stock’s 50-day moving average price is $278.11 and its 200-day moving average price is $275.45.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings results on Thursday, August 29th. The company reported $0.70 earnings per share (EPS) for the quarter, topping analysts’ consensus estimates of $0.49 by $0.21. The company had revenue of $478.11 million for the quarter, compared to the consensus estimate of $465.03 million. MongoDB had a negative return on equity of 15.06% and a negative net margin of 12.08%. MongoDB’s revenue for the quarter was up 12.8% on a year-over-year basis. During the same quarter in the prior year, the business posted ($0.63) earnings per share. Sell-side analysts predict that MongoDB, Inc. will post -2.39 EPS for the current year.

MongoDB Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

See Also

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)




Article originally posted on mongodb google news. Visit mongodb google news



AWS Amplify and Amazon S3 Integration Simplifies Static Website Hosting

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

AWS recently announced a new integration between AWS Amplify Hosting and Amazon Simple Storage Service (S3), enabling users to deploy static websites from S3 quickly. This integration streamlines the hosting process, allowing developers to deploy static sites stored in S3 and deliver content over AWS’s global content delivery network (CDN) with just a few clicks, according to the company.

AWS Amplify Hosting, a fully managed hosting solution for static sites, now offers users an efficient method to publish websites using S3. The integration leverages Amazon CloudFront as the underlying CDN to provide fast, reliable access to website content worldwide. Amplify Hosting handles custom domain setup, SSL configuration, URL redirects, and deployment through a globally available CDN, ensuring optimal performance and security for hosted sites.

Setting up a static website using this new integration begins with an S3 bucket. Users can configure their S3 bucket to store website content, then link it with Amplify Hosting through the S3 console. From there, a new “Create Amplify app” option in the Static Website Hosting section guides users directly to Amplify, where they can configure app details like the application name and branch name. Once saved, Amplify instantly deploys the site, making it accessible on the web in seconds. Subsequent updates to the site content in S3 can be quickly published by selecting the “Deploy updates” button in the Amplify console, keeping the process seamless and efficient.

(Source: AWS News blog post)

This integration benefits developers by simplifying deployments, enabling rapid updates, and eliminating the need for complex configuration. For developers looking for programmatic deployment, the AWS Command Line Interface (CLI) offers an alternative way to deploy updates by specifying parameters like APP_ID and BRANCH_NAME.
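A programmatic update along those lines might look like the following sketch. The exact command shape and flags are assumptions based on the description above rather than verified syntax, and `APP_ID`, `BRANCH_NAME`, and the bucket name are placeholders:

```shell
# Hypothetical sketch: push updated site content from S3 to an existing
# Amplify Hosting app. Substitute APP_ID, BRANCH_NAME, and the bucket name.

# 1. Sync the new build output into the bucket backing the site
aws s3 sync ./dist "s3://my-static-site-bucket" --delete

# 2. Trigger an Amplify deployment that pulls the content from the bucket
aws amplify start-deployment \
  --app-id "APP_ID" \
  --branch-name "BRANCH_NAME" \
  --source-url "s3://my-static-site-bucket" \
  --source-url-type BUCKET_PREFIX
```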

Alternatively, according to a respondent on a Reddit thread, users could opt for Cloudflare:

If your webpage is static, you might consider using Cloudflare – it would probably be cheaper than the AWS solution.

Or using S3 and GitLab CI, according to a tweet by DrInTech:

Hello everyone! I just completed a project to host a static portfolio website, leveraging a highly accessible and secure architecture. And the best part? It costs only about $0.014 per month!

Lastly, the Amplify Hosting integration with Amazon S3 is available in the AWS Regions where Amplify Hosting is offered; pricing details for S3 and hosting can be found on the respective pricing pages.




Podcast: Trends in Engineering Leadership: Observability, Agile Backlash, and Building Autonomous Teams

MMS Founder
MMS Chris Cooney

Article originally posted on InfoQ. Visit InfoQ

Transcript

Shane Hastie: Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture Podcast. Today I’m sitting down across many miles with Chris Cooney. Chris, welcome. Thanks for taking the time to talk to us today.

Introductions [01:03]

Chris Cooney: Thank you very much, Shane. I’m very excited to be here, and indeed across many miles. I think it’s not quite the antipodes, right, but it’s very, very close to the antipodes. Ireland off New Zealand. It’s the antipodes of the UK, but we are about as far away as it gets. The wonders of the internet, I suppose.

Shane Hastie: Pretty much so, and I think the time offset is 13 hours today. My normal starting point is who is Chris?

Chris Cooney: That’s usually the question. So hello, I’m Chris. I am the Head of Developer Relations for a company called Coralogix. Coralogix is a full stack observability platform processing data without indexing in stream. We are based in several different countries. I am based in the UK, as you can probably tell from my accent. I have spent the past 11, almost 12 years now as a software engineer. I started out as a Java engineer straight out of university, and then quickly got into front-end engineering, didn’t like that very much and moved into SRE and DevOps, and that’s really where I started to enjoy myself. And over the past several years, I’ve moved into engineering leadership and got to see organizations grow and change and how certain decisions affect people and teams.

And now more recently, as the Head of Developer Relations for Coralogix, I get to really enjoy going out to conferences, meeting people, but also I get a lot of research time to find out about what happens to companies when they employ observability. And I get to also understand the trends in the market in a way that I never would’ve been able to see before as a software engineer, because I get to go meet hundreds and hundreds of people every month, and they all give me their views and insights. And so, I get to collect all those together, and that’s what makes me very excited to talk on this podcast today about the various different topics going on in the industry.

Shane Hastie: So let’s dig into what are some of those trends? What are some of the things that you are seeing in your conversation with engineering organizations?

The backlash against “Agile” [02:49]

Chris Cooney: Yes. When I started out, admittedly 11, 12 years ago is a while, but it’s not that long ago really. I remember when I started out in the first company I worked in, we had an Agile consultant come in. And they came in and explained to me the principles of agility and so on, and gave me the rundown of how it all works and how it should work and how it shouldn’t work. We were all very skeptical, and over the years I’ve got to see agility become this massive thing. And I’ve sat in boardrooms with very senior executives in very large companies listening to Agile Manifesto ideas and things like that. And it’s been really interesting to see that gel in. And now we’re seeing this reverse trend of people almost emotionally pushing back against not necessarily the core tenets of Agile, but just the word. We’ve heard it so many times, there’s a certain amount of fatigue around it. That’s one trend.

The value of observability [03:40]

The other trend I’m seeing technically is this move around observability. Obviously, I spend most of my time talking about observability now. It used to be this thing that you had to have for when things went wrong, or to stop things from going wrong. And there is this big trend now of organizations moving towards questions that are less to do with what’s going wrong. It’s a broader question, like, “Where are we as a company? How many dev hours did we put into this thing? How does that factor into the mean time to recovery reduction, that kind of thing?” They’re much broader questions now, blending in business measures, technical measures, and lots more people measures.

I’ll give you a great example. Measuring rage clicks on an interface is a thing now, measuring the emotionality with which somebody clicks a button. It’s fascinating. I think it’s a nice microcosm of what’s going on in the industry. Our measurements are getting much more abstract. And what that’s doing to people, what it’s doing to engineering teams, is fascinating. So there’s lots and lots and lots.

And then, obviously there’s the technical trends around AI and ML and things like that, what they’re doing to people, and the uncertainty around that, and also the excitement. It’s a pretty interesting time.

Shane Hastie: So let’s dig into one of those areas in terms of the people measurements. So what can we measure about people through building observability into our software?

The evolution of what can be observed [04:59]

Chris Cooney: That’s a really interesting topic. So I think it’s better to contextualize. To begin with, okay, we started out, it was basically CPU, memory, disk, network, the big four. And then, we started to get a bit clever and looked at things like latency and response sizes, data exchanged over a server and so forth. And then, as we built up, we started to look at things like marketing metrics, so bounce rates, how long somebody stays on a page and that kind of thing.

Now we’re looking at the next sort of tier, so the next level of abstraction up, which is more like, did the user have a good experience on the website, and what does that mean? So you see web vitals are starting to break into this area, things like: when was the meaningful moment that a user saw the content they wanted to see? Not first paint, not first load. The user went to this page, they wanted to see a product page. How long was it before they saw all the meaningful information they needed, not just how long the page took to load? And that’s an amalgamation of lots and lots of different signals and metrics.

I’ve been talking recently about this distinction between a signal and an insight. In my taxonomy, the way I usually slice it, a signal is a very specific technical measurement of something: latency, page load time, bytes exchanged, that kind of thing. An insight is an amalgamation of lots of different signals to produce one useful thing, and my litmus test for an insight is that you can take it to your non-technical boss and they will understand it. They will understand what you’re talking about. I might say to my non-technical boss, “My insight is this user had a really bad experience loading the product page. It took five seconds for the product to appear, and they couldn’t buy the thing. They couldn’t work out where to do it”. That would be a combination of various different measures around where they clicked on the page, how long the HTML ping took, how long the actual network speed was to the machine, and so on.

So that’s what I’m talking about with the people experience metrics. It’s fascinating in that respect, and there’s this new level now, which is directly answering business questions. It’s almost like we’ve built scaffolding up over the years, deeply technical. When someone would say, “Did that person have a good experience?” we’d say, “Well, the page latency was this, and the HTTP response was 200, which is good, but then the page load time was really slow”. But now we just say yes or no, because of X, Y and Z. And so, that’s where we’re going, I think. And this is all about that trend of observability moving into the business space: taking much broader, more encompassing measurements at a much higher level of abstraction. And that’s what I mean when I say more people metrics, as a general term.
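Cooney’s signal-versus-insight distinction can be illustrated with a small sketch. This is not from the podcast: the field names and thresholds are invented for the example.

```python
# Illustrative sketch (not from the podcast): rolling low-level "signals" up
# into a single "insight" a non-technical stakeholder can read.
# Field names and thresholds are invented for this example.

def page_experience_insight(signals: dict) -> str:
    """Combine technical signals into one yes/no style answer with reasons."""
    slow = signals["meaningful_paint_ms"] > 3000   # meaningful content was late
    errored = signals["http_status"] >= 400        # the request itself failed
    raging = signals["rage_clicks"] >= 3           # repeated frustrated clicks
    reasons = []
    if errored:
        reasons.append("the request failed")
    if slow:
        reasons.append("content loaded slowly")
    if raging:
        reasons.append("the user showed frustration")
    if reasons:
        return "bad experience: " + ", ".join(reasons)
    return "good experience"
```

The point of the sketch is that the return value, unlike any one of the input signals, passes the “non-technical boss” litmus test.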

Shane Hastie: So what happens when an organization embraces this? When not just the technical team, but the product teams, when the whole organization is looking at this and using this to perhaps make decisions about what they should be building?

Making sense of observations [07:47]

Chris Cooney: Yes. There are two things here in my opinion. One is there’s a technical barrier, which is making the information literally available in some way. So putting a query engine in front of the data, and, what’s an obvious one? Putting Kibana in front of OpenSearch is the most common example. It’s a way to query your data. Putting a SQL query engine in front of your database is another good example. So just doing that is the technical part. And that is not easy, by the way. At a certain level of scale, it is technically really hard to serve high performance queries for hundreds, potentially thousands of users querying concurrently. That’s not easy.

Let’s assume that’s out of the way and the organization has worked that out. The next challenge is, “Well, how do we make it so that users can get the questions they need answered, answered quickly without specialist knowledge?” And we’re not there yet. Obviously AI makes a lot of very big promises about natural language query. It’s something that we’ve built into the platform in Coralogix ourselves. It works great. It works really, really well. And I think what we have to do now is work out how we make it as easy as possible to get access to that information.

Let’s assume all those barriers are out of the way, and an organization has achieved that. I saw something similar to this when I was a Principal Engineer at Sainsbury’s, when we started to surface (it’s an adjacent example, but still relevant) the introduction of SLOs and SLIs into the teams. Before that, if I went to one team and said, “How has your operational success been this month?” they would say, “Well, we’ve had a million requests and we serviced them all in under 200 milliseconds”. Okay. I don’t know what that means. Is 200 milliseconds good? Is that terrible? What does that mean? We’d go to another team and they’d say, “Well, our error rate is down to 0.5%”. Well, brilliant. But last month it was 1%. The month before that it was 0.1% or something.

When we introduced SLOs and SLIs into teams, we could see across all of them, “Hey, you breached your error budget. You have not breached your error budget”. And suddenly, there was a universal language around operational performance. And the same thing happens when you surface the data. You create a universal language around cross-cutting insights across different people.
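The shared error-budget language described above reduces to a small calculation. As a sketch (the 99.9% SLO target and the request counts are assumed examples, not figures from the podcast):

```python
# Illustrative sketch: the "universal language" of SLO error budgets.
# The SLO target and traffic numbers below are invented examples.

def error_budget_breached(slo_target: float, total_requests: int, errors: int) -> bool:
    """A 99.9% SLO leaves an error budget of 0.1% of requests; the budget
    is breached once observed errors exceed that allowance."""
    allowed_errors = (1.0 - slo_target) * total_requests
    return errors > allowed_errors

# With a 99.9% availability SLO over one million requests, roughly 1,000
# errors fit inside the budget: 2,000 errors breach it, 500 do not.
```

Whatever each team’s raw latency or error-rate numbers look like, “breached” versus “not breached” is the cross-team comparison Cooney describes.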

Now, what does that do to people? Well, one, it shines spotlights in places that some people may not want them shined there, but it does do that. That’s what the universal language does. It’s not enough just to have the data. You have to have effective access to it. You have to have effective ownership of it. And doing that surfaces conversations that would be initially quite painful. There are lots of people, especially in sufficiently large organizations, that have been kind of just getting by by flying under the radar, and it does make that quite challenging.

The other thing that it does: some people, it makes them feel very vulnerable, because they feel like KPIs are being attached to them. They’re not. We’re not measuring their performance on whether they miss their error budget. When I was in that business, no one would get fired. We’d sit down and go, “Hey, you missed your error budget. What can we do here? What’s wrong? What are the barriers?” But it actually made some people feel very nervous and very uncomfortable, and they didn’t like it. Other people thrived and loved it. It became a target. “How much can we beat our budget by this month? How low can we get it?”

Metrics create behaviors [10:53]

So the two things I would say, the big sweeping changes in behavior, it’s that famous phrase, “Build me a metric and I’ll show you a behavior”. So if you measure somebody, human behavior is what they call a type two chaotic system.

By measuring it, you change it. And it’s crazy in the first place. So as soon as you introduce those metrics, you have to be extremely cognizant of what happens to dynamics between teams and within teams. Teams become competitive. Teams begin to look at other teams and wonder, “How the hell are they doing that? How is their error budget so low? What’s going on?” Other teams maybe in an effort to improve their metrics artificially will start to lower their deployment frequency and scrutinize every single thing. So while their operational metrics look amazing, their delivery is actually getting worse, and all these various different things that go on. So that competitiveness driven by uncertainty and vulnerability is a big thing that happens across teams.

The other thing that I found is that the really great leaders, the really brilliant leaders love it. Oh, in fact, all leadership love it. All leadership love higher visibility. The great leaders see that higher visibility and go, “Amazing. Now I can help. Now I can actually get involved in some of these conversations that would’ve been challenging before”.

The slightly more, let’s say worrying leaders will see this as a rod with which to beat the engineers. And that is something that you have to be extremely careful of. Surfacing metrics and being very forthright about the truth and being kind of righteous about it all is great and it’s probably the best way to be. But the consequence is that a lot of people can be treated not very well if you have the wrong type of leadership in place, who see these measurements as a way of forcing different behaviors.

And so, it all has to be done in good faith, based on the premise that everybody is doing their best. And if you don’t start from that premise, it doesn’t matter how good your measurements are, you’re going to be in trouble. Those are the learnings I took from when I rolled it out, and some of the things that I saw across an organization. It was largely very positive, though. There were just some growing pains to get through.

Shane Hastie: So digging into the psychological safety that we’ve heard about and known about for a couple of decades now.

Chris Cooney: Yes. Yes.

Shane Hastie: We’re not getting it right.

Enabling psychological safety is still a challenge [12:59]

Chris Cooney: No, no. My experience when I first got into reading about it was things like Google’s Project Aristotle and so on. And in my first attempt at educating an organization on psychological safety, they had this extremely long, extremely detailed incident management review, where if something went wrong, we’re talking a 200-person, sometimes several-day, on the low end five or six hours, deep review of everything. Everyone bickers and argues and points fingers at each other. And there are enormous documents produced, filed away, and nobody ever looks at them again, because who wants to read those things? It’s just a historical text about bickering between teams.

And what I started to do is I said, “Well, why don’t we trial more of a blameless post-mortem method? Let’s just give that a go and we’ll see what happens”. So the first time I did it, they said the last such meeting had been about six hours; we did it in about 45 minutes. I started the meeting by giving a five-minute briefing on why this post-mortem has to be blameless: the aviation industry and the learnings that came from it, that if you hide mistakes, they only get worse. We have to create an environment where you’re okay to surface mistakes. Just that five-minute primer and then about a 40-ish-minute conversation. And we had a document that was more thorough, more detailed, more fact-based, and more honest than any incident review I had ever read before that.

So rolling that out across organizations was really, really fun. But then, I saw it go the other way, where they’d start saying, “Well, it’s psychologically safe”. And it turned into this almost hippie love-in, where nobody’s done anything wrong and there is no such thing as a mistake. And no, that’s not the point. The point is that we all make mistakes, not that they don’t exist. And we don’t point blame in a malicious way, but we can attribute a mistake to somebody. You just can’t do it by… And the language in some of these post-mortem documents that I was reading was so indirect. “The system, post software change, began to fail, blah, blah, blah, blah, blah”. Because they’re desperately trying not to name anybody or name any teams or say that an action occurred. It was almost as if the system was just running along and the vibrations from the universe knocked it out of whack.

And actually, when you got into it, one of the teams had pushed a code change. It’s like, “No. Team A pushed a code change. Five minutes later there was a memory leak issue that caused this outage”. And that’s not blaming anybody, that’s just stating the facts in a causal way.

So the thing I learned with that is whenever you are teaching about blameless post-mortems and psychological safety, it’s crucial that you don’t lose the relationship between cause and effect. You have to show cause A, effect B, cause B, effect C, and so on. Everything has to be linked in that way, in my opinion. Because that forces them to say, “Well, yes. We did push this code change, and yes, it looks like it did cause this”.

That will be the thing, I think, where most organizations get tripped up: they really go all in on psychological safety. “Cool, we’re going to do everything psychologically safe. Everyone’s going to love it”. And they throw the baby out with the bathwater, as it were. And they miss the point, which is to get to the bottom of an issue quickly, not to avoid hurting anybody’s feelings, which is a mistake that people often make, I think, especially in large organizations.

Shane Hastie: Circling back around to one of the comments you made earlier on. The agile backlash, what’s going on there?

Exploring the agile backlash [16:25]

Chris Cooney: I often try to talk about larger trends rather than my own experience, purely because anecdotal experience is only useful as an anecdote. So this is an anecdote, but I think it’s a good indication of what’s going on more broadly. When I was starting out, I was a mid-level Java engineer, and this was when agility was really starting to get a hold in some of these larger companies and they started to understand the value of it. And what happened was we were all on the Agile principles. We were regularly reading the Agile Manifesto.

We had a coach called Simon Burchill who was and is absolutely fantastic, completely, deeply, deeply understands the methodology and the point of agility without getting lost in the miasma of various different frameworks and planning poker cards and all the rest of it. And he was wonderful at it, and I was very, very fortunate to study under him in that respect because it gave me a really good, almost pure perspective of agile before all of the other stuff started to come in.

So what happened to me was that we were delivering work, and if we went even a week over budget or a week overdue, the organization would say, “Well, isn’t agile supposed to speed things up?” And it’s like, “Well, no, not really. It’s more that we had a working product six weeks ago, eight weeks ago, and you chose not to go live with it”. Which is fine, but that’s what you get with the agile process. You get working software much earlier, which gives you the opportunity to go live if you get creative with how you productionize it or turn it into a product.

So that was the first thing, I think. One of the seeds of the backlash is a fundamental misunderstanding about what Agile is supposed to be doing for you. It’s not to get things done faster, it’s to incrementally deliver working software so you have a feedback loop and a conversation going on constantly. And an empirical learning cycle is occurring, so you’re constantly improving the software, rather than building everything, testing everything, deploying it, and finding out it’s wrong. That’s one.

The other thing I will say is what I see on Twitter a lot now, or X they call it these days, is the Agile Industrial Complex, which is a phrase that I’ve seen batted around a lot, which is essentially organizations just selling Scrum certifications or various different things that don’t hold that much value. That’s not to say all Scrum certifications are useless. I did one, it was years and years ago, I forget the name of the chap now. It was fantastic. He gave a really, really great insight into Scrum, for example, why it’s useful, why it’s great, times when it may be painful, times when some of its practices can be dropped, the freedom you’ve got within the Scrum guide.

One of the things that he said to me that always stuck with me, this is just an example of a good insight that came from an Agile certification was he said, “It’s a Scrum guide, not the Scrum Bible. But it’s a guide. The whole point is to give you an idea. You’re on a journey, and the guide is there to help you along that journey. It is not there to be read like a holy text”. And I loved that insight. It really stuck with me and it definitely informed how I went out and applied those principles later on. So there is a bit of a backlash against those kinds of Agile certifications because as is the case with almost any service, a lot of it’s good, a lot of it’s bad. And the bad ones are pretty bad.

And then, the third thing I will say is that an enormous amount of power was given to Agile coaches early on. They were almost like the high priests and they were sort of put into very, very senior positions in an organization. And like I said, there are some great Agile coaches. I’ve had the absolute privilege of working with some, and there were some really bad ones, as there are great software engineers and bad software engineers, great leaders and poor leaders and so on.

The problem is that those coaches were advising very powerful people in organizations. And if you’re giving bad advice to very powerful people, the impact of that advice is enormous. We know how to deal with a bad software engineering team. We know how to deal with somebody that doesn’t want to write tests. As a software function, we get that. We understand how to work around that and solve that problem. Sometimes it’s interpersonal, sometimes it’s technical, whatever it is, we know how to fix it.

We have not yet figured out this sort of grand vizier problem: there is somebody there giving advice to the king who doesn’t really understand what they’re talking about, and the king’s just taking them at their word. And that’s what happened with Agile. And that, I think, is one of the worst things we could have done: to start taking the word of people as if they were these experts in Agile and blah, blah, blah. It’s ultimately software delivery. That’s what we’re trying to do. We’re trying to deliver working software. And if you’re going to give advice, you’d really better deeply understand delivery of working software before you go on about interpersonal things and that kind of stuff.

So those are the three things I think have driven the backlash. And now there’s just this fatigue around the word Agile. Like I say, I had the benefit of going to conferences, and when I first started speaking, the word Agile was everywhere. You couldn’t find a conference where it wasn’t, and now it is less and less prevalent, and people talk more about things like continuous delivery, just to avoid saying the word Agile. Because the fatigue is more around the word than it is around the principles.

And the last thing I’ll say is there is no backlash against the principles. The principles are here to stay. It’s just software engineering now. What would’ve been called Agile 10 years ago is just how to build working software now. It’s so deeply ingrained in how we think that we think we’re backlashing against Agile. We’re not. We’re backlashing against a few words. The core principles are part of software engineering now, and they’re here to stay for a very long time, I suspect.

Shane Hastie: How do we get teams aligned around a common goal and give them the autonomy that we know is necessary for motivation?

Make it easy to make good decisions [21:53]

Chris Cooney: Yes. I have just submitted a conference talk on this. And I won’t say too much, at the risk of jeopardizing our submission, but the broad idea is this. Let’s say I was in a position where I had 20-something teams, and the wider organization was hundreds of teams. And we had a big problem, which was every single team had been raised on this idea of, “You pick your tools, you run with it. You want to use AWS, you want to use GCP, you want to use Azure? Whatever you want to use”.

And then after a while, obviously the bills started to roll in and we started to see that actually this is a rather expensive way of running an organization. And we started to think, “Well, can we consolidate?” So we said, “Yes, we can consolidate”. And a working group went off, picked a tool, bought it, and then went to the teams and said, “Thou shalt use this”, and nobody listened. And then, we kind of went back to the drawing board and said, “Well, how do we do this?” And I said, “This tool was never picked by them. They don’t understand it, they don’t get it. And they’re stacking up migrating to this tool against all of the deliverables they’re responsible for”. So how do you make it so that teams have the freedom and autonomy to make effective, meaningful decisions about their software, but in a way that there is a golden path in place such that they’re all roughly moving in the same direction?

A project we started to build out within Sainsbury’s was completely re-platforming the entire organization. It’s still going on now, and hundreds and hundreds of developers have been migrating onto this platform. It started with a team I was part of. I’m from Manchester in the UK, and we originally called it the Manchester PaaS, Platform as a Service. I don’t know if you know this, but the bumblebee is one of the symbols of Manchester, so it had a little bumblebee in the UI. It was great. We loved it. And we built it using Kubernetes, with Jenkins for CI/CD, purely because Jenkins was big in the office at the time. It isn’t anymore; now it’s GitHub Actions.

And what we said was, “Every team in Manchester, every single resource has to be tagged so we know who owns what. Every single time there’s a deployment, we need some way of seeing what it was and what went into it”. And some periods of the year are extremely busy and extremely serious, and you have to do additional change notifications in different systems. For a grocer like Sainsbury’s, the Christmas period, let’s say November to January, brings an enormous amount of trade. So during that period, every single team has to raise additional change requests, but they’re doing 30, 40 commits a day, so they can’t be expected to fill out those forms every single time. So we wondered if we could automate that for them.

And what I realized was, “Okay, this platform is going to make the horrible stuff easy and it’s going to make it almost invisible; not completely invisible because they still have to know what’s going on, but it has to make it almost invisible”. And by making the horrible stuff easy, we incentivize them to use the platform in the way that it’s intended. So we did that and we onboarded everybody in a couple of weeks, and it took no push whatsoever.

We had product owners coming to us. One team had just started, and the goal of their very first sprint was to have a working API and a working UI. The team delivered it just by using our platform, because we made a lot of this stuff easy. We had dashboard generation, alert generation, and metric generation; because we were using Kubernetes and Istio, we got a ton of HTTP service metrics off the bat. Tracing was built in there.

So in their sprint review at the end of the two weeks, they’d built this feature. Cool. “Oh, by the way, we’ve done all of this”. And it was an enormous amount of dashboards and things like that. “Oh, by the way, the infrastructure is completely scalable. It’s multi-AZ with failover. There’s no productionizing. It’s already production ready”. The plan was to go live in months. They went live in weeks after that. It changed the conversation, and that was when things really started to take off, and it has ended up in the new project now, which spans the entire organization.

The reason why I told that story is because you have to have a give and take. If you try to do it like a top-down edict, your best people will leave and your worst people will try to work through it. Because the best people want to be able to make decisions and have autonomy. They want a sense of ownership of what they’re building. “Skin in the game” is the phrase that’s often bandied around.

And so, how do you give engineers the autonomy? You build a platform, you make it highly configurable, highly self-service. You automate all the painful bits of the organization, for example compliance, change request notifications, data retention policies and all that. You automate that to the hilt so that all they have to do is declare some config in a repository and it just happens for them. And then, you make it so the golden path, the right path, is the easy path. And that’s it. That’s the end of the conversation. If you can do that, if you can deliver that, you are in a great space.

If you try to do it as a top-down edict, you will feel a lot of pain and your best people will probably leave you. If you do it as a collaborative effort so that everybody’s on the same golden path, every time they make a decision, the easy decision is the right one, it’s hard work to go against the right decision. Then you’ll incentivize the right behavior. And if you make some painful parts of their life easy, you’ve got the carrot, you’ve got the stick, you’re in a good place. That’s how I like to do it. I like to incentivize the behavior and let them choose.

Shane Hastie: Thank you so much. There’s some great stuff there, a lot of really insightful ideas. If people want to continue the conversation, where do they find you?

Chris Cooney: If you open up LinkedIn and type Chris Cooney, I’ve been reliably told that I am the second person in the list. I’m working hard for number one, but we’ll get there. If you look for Chris Cooney, if I don’t come up, Chris Cooney, Coralogix, Chris Cooney Observability, anything like that, and I will come up. And I’m more than happy to answer any questions. On LinkedIn is usually where I’m most active, especially for work-related topics.

Shane Hastie: Cool. Chris, thank you so much.

Chris Cooney: My pleasure. Thank you very much for having me.




DigitalOcean Holdings, Inc. Introduces Scalable Storage for Managed MongoDB Offering


DigitalOcean Holdings, Inc. introduced scalable storage for DigitalOcean’s Managed MongoDB offering, giving users the ability to scale their cloud storage capacity independently of other compute requirements. This new functionality will provide customers with greater flexibility to adapt to growing data demands and fluctuating workloads, without the unnecessary expense of added compute capacity. DigitalOcean’s Managed MongoDB is a fully managed database-as-a-service certified by MongoDB, built in close collaboration with MongoDB, and operated by DigitalOcean.

Prior to this update, customers who needed to add storage capacity to their managed databases were required to upgrade the whole database plan, including processor and memory. By making it possible to scale storage capacity independently, customers benefit from greater flexibility and unlock cost efficiencies as they scale their applications. This new offering provides users with:

  • Independent Scaling: Unique ability to scale Managed MongoDB storage separately from compute resources.
  • Cost Efficiency: Granular billing helps ensure customers only pay for the storage they use, reducing unnecessary expenses.
  • Ease of Use: Automated, minimal-downtime provisioning of added storage, along with a simple, transparent pricing model, enhances the customer experience while optimizing costs.




DigitalOcean Introduces Scalable Storage for Managed MongoDB Offering | News | bakersfield.com


NEW YORK–(BUSINESS WIRE)–Nov 14, 2024–

DigitalOcean Holdings, Inc. (NYSE: DOCN), the simple scalable cloud, today introduced scalable storage for DigitalOcean’s Managed MongoDB offering, giving users the ability to scale their cloud storage capacity independently of other compute requirements. This new functionality will provide customers with greater flexibility to adapt to growing data demands and fluctuating workloads, without the unnecessary expense of added compute capacity.





Robbins Geller Tapped To Lead Software Co. Investor Suit – Law360


By Emilie Ruscoe (November 14, 2024, 5:06 PM EST) — A pair of pension funds represented by Robbins Geller Rudman & Dowd LLP has beaten out individual investors vying to lead a shareholder class action against MongoDB Inc. over the software company’s growth projections….




Amazon DynamoDB reduces prices for on-demand throughput and global tables – AWS


Amazon DynamoDB is a serverless, NoSQL, fully managed database with single-digit millisecond performance at any scale. Starting today, we have made Amazon DynamoDB even more cost-effective by reducing prices for on-demand throughput by 50% and global tables by up to 67%.

DynamoDB on-demand mode offers a truly serverless experience with pay-per-request pricing and automatic scaling without the need for capacity planning. Many customers prefer the simplicity of on-demand mode to build modern, serverless applications that can start small and scale to millions of requests per second. While on-demand was previously cost effective for spiky workloads, with this pricing change, most provisioned capacity workloads on DynamoDB will achieve a lower price with on-demand mode. This pricing change is transformative as it makes on-demand the default and recommended mode for most DynamoDB workloads.

Global tables provide a fully managed, multi-active, multi-Region data replication solution that delivers increased resiliency, improved business continuity, and 99.999% availability for globally distributed applications at any scale. DynamoDB has reduced pricing for multi-Region replicated writes to match the pricing of single-Region writes, simplifying cost modeling for multi-Region applications. For on-demand tables, this price change lowers replicated write pricing by 67%, and for tables using provisioned capacity, replicated write pricing has been reduced by 33%.

These pricing changes are already in effect in all AWS Regions as of November 1, 2024, and will be automatically reflected in your AWS bill. To learn more about the new price reductions, see the AWS Database Blog, or visit the Amazon DynamoDB Pricing page.



New – Amazon DynamoDB lowers pricing for on-demand throughput and global tables


Over 1 million customers choose Amazon DynamoDB as their go-to NoSQL database for building high-performance, low-latency applications at any scale. The DynamoDB serverless architecture eliminates the overhead of operating and scaling databases, reducing costs and simplifying management, allowing you to focus on innovation, not infrastructure. DynamoDB provides seamless scalability as workloads grow from hundreds of users to hundreds of millions of users, or from a single AWS Region to spanning multiple Regions.

Our continued engineering investments on how efficiently we can operate DynamoDB allow us to identify and pass on cost savings to you. Effective November 1, 2024, DynamoDB has reduced prices for on-demand throughput by 50% and global tables by up to 67%, making it more cost-effective than ever to build, scale, and optimize applications.

In this post, we discuss the benefits of these price reductions, on-demand mode, and global tables.

Price reductions

You can now get the same powerful functionality of DynamoDB on-demand throughput and global tables at significantly lower prices. Let’s dive into what this price drop means for you and how DynamoDB can power your applications at a new level of cost-efficiency:

  • On-demand throughput pricing has been reduced by 50% – DynamoDB on-demand mode is now even more attractive, offering you a fully managed, serverless database experience that automatically scales in response to application traffic with no capacity planning required. On-demand mode’s capabilities like pay-per-request pricing, scale-to-zero, and no up-front costs help you save time and money while simplifying operations and improving performance at any scale. On-demand is a game changer for modern, serverless applications because it instantly accommodates workload requirements as they ramp up or down, eliminating the operational complexity of capacity management and database scaling. With this pricing change, most provisioned capacity workloads on DynamoDB today will achieve a lower price with on-demand mode.
  • Global tables pricing has been reduced by up to 67% – Building globally distributed applications is now significantly more affordable. DynamoDB has reduced pricing for multi-Region replicated writes to match the pricing of single-Region writes, simplifying cost modeling and choosing the best architecture for your applications. For on-demand tables, this price change lowers replicated write pricing by 67%, and for tables using provisioned capacity, replicated write pricing has been reduced by 33%.

Whether you’re launching a new application or optimizing an existing one, these savings make DynamoDB an excellent choice for workloads of all sizes. You can now enjoy the power and flexibility of serverless, fully managed databases with global reach at an even lower cost—allowing you to focus more resources on driving innovation and growth.

DynamoDB on-demand

When we launched DynamoDB in 2012, provisioned capacity was the only throughput option available. Provisioned capacity requires you to predict and plan your throughput requirements. For provisioned tables, you must specify how much read and write throughput per second you require for your application, and you’re charged based on the hourly read and write capacity you have provisioned, not how much your application has consumed. In 2017, we added provisioned auto scaling to help improve scaling and utilization. Although it was effective, we learned that customers wanted a serverless experience where they don’t have to think about provisioned capacity utilization and how quickly auto scaling can respond to changes in traffic patterns. In 2018, we launched on-demand mode to provide a truly serverless database experience with pay-per-request billing and automatic scaling that doesn’t require capacity management and scaling configurations.

Both provisioned and on-demand billing modes use the same underlying infrastructure to achieve high availability, scale, reliability, and performance. The key differences are that on-demand is always 100% utilized due to pay-per-request billing and on-demand scales transparently, without needing to specify a scaling policy. As a result, many customers prefer the simplicity of on-demand mode to build modern, serverless applications that can start small and scale to millions of requests per second. Continually working backward from our customer feedback, in early 2024, we launched configurable maximum throughput for on-demand tables, an optional table-level setting that provides an additional layer of cost predictability and fine-grained control by allowing you to specify predefined maximum read or write (or both) throughput for on-demand tables. Recently, we introduced warm throughput to provide greater visibility on the number of read and write operations an on-demand table can instantaneously support, and also made it more straightforward to pre-warm DynamoDB tables for upcoming peak events, like new product launches or database migration, when throughput requirements can increase by 10 times, 100 times, or more.

While on-demand was previously cost-effective for spiky workloads, with this pricing change, most provisioned capacity workloads on DynamoDB today will achieve a lower price with on-demand mode. This pricing change is transformative because it makes on-demand the default and recommended mode for most DynamoDB workloads. Whether you’re running a new application or a well-established one, on-demand mode simplifies the operational experience, while providing seamless scalability and responsiveness to handle changes to your traffic pattern, making it an ideal solution for startups, growing applications, and established businesses looking to streamline costs without sacrificing performance.

The following are the key benefits of on-demand mode:

  • No capacity planning – On-demand mode eliminates the need to predict capacity usage and pre-provision resources. Capacity planning and monitoring can be time-consuming, especially as traffic patterns change over time. With on-demand, there is no need to monitor your utilization, adjust capacity, or worry about over-provisioning or under-provisioning resources. On-demand simplifies operations and allows you to focus on building features for your customers.
  • Automatic scaling – One of the greatest advantages of on-demand mode is its ability to automatically scale to meet your application demand. On-demand mode can instantly accommodate up to double the previous peak traffic on your table. If your workload drives more than double your previous peak on the table, DynamoDB automatically scales, which reduces the risk of throttling, where requests can be delayed or rejected if the table is unable to keep up. Whether traffic is surging for a major launch or fluctuating due to low weekly or seasonal demand, on-demand can quickly adjust based on actual traffic patterns to serve your workload. On-demand mode can serve millions of requests per second without capacity management, and once scaled, you can instantly achieve the same throughput again in the future without throttling.
  • Usage-based pricing – Unlike provisioned capacity mode, where you pay for a fixed amount of throughput regardless of usage, with on-demand mode’s simple, pay-per-request pricing model, you don’t have to worry about idle capacity because you only pay for the capacity you actually use. You are billed per read or write request, so your costs directly reflect your actual usage.
  • Scale to zero throughput cost – With DynamoDB, the throughput a table is capable of serving at any given moment is decoupled from what you are billed. For example, an on-demand table may be capable of serving 20,000 reads and 20,000 writes per second (we call this warm throughput) based on your previous traffic pattern, but your application may only be consuming 1,000 reads and 1,000 writes per second (consumed throughput). In this scenario, you are only charged for the 1,000 reads and 1,000 writes that you actually consume, even though at any time, your application could scale up to the warm throughput of 20,000 reads and 20,000 writes per second without any scaling actions needed by the DynamoDB table. On the other hand, if you are driving zero traffic to your table, then with on-demand, you are not charged for any throughput; however, your application can readily consume the warm throughput that the table can serve. Therefore, your table maintains warm throughput for when your application needs it but can scale to zero throughput cost when you aren’t issuing any requests against the table.
  • Serverless compatibility – DynamoDB on-demand coupled with other AWS services, such as AWS Lambda, Amazon API Gateway, and Amazon CloudWatch, allows you to build a fully serverless application stack that can scale seamlessly and handle variable workloads efficiently without needing to manage infrastructure.
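Enabling on-demand mode requires no capacity planning at table creation; you simply set the billing mode to pay-per-request. The following sketch shows this with boto3 (the table and attribute names are hypothetical examples, not from this post):

```python
# Sketch: creating a DynamoDB table in on-demand mode with boto3.
# The table name "Orders" and key "OrderId" are hypothetical placeholders.
table_params = {
    "TableName": "Orders",
    "KeySchema": [{"AttributeName": "OrderId", "KeyType": "HASH"}],
    "AttributeDefinitions": [{"AttributeName": "OrderId", "AttributeType": "S"}],
    # PAY_PER_REQUEST selects on-demand mode: no provisioned read/write
    # capacity to specify, and you are billed per request.
    "BillingMode": "PAY_PER_REQUEST",
}

# Requires AWS credentials and permissions; uncomment to run in your account:
# import boto3
# dynamodb = boto3.client("dynamodb")
# dynamodb.create_table(**table_params)
```

Note that, unlike provisioned mode, no `ProvisionedThroughput` parameter is needed; DynamoDB scales the table automatically based on observed traffic.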

Global tables: Bringing data closer to your customers

Global tables provide a fully managed, multi-active, multi-Region data replication solution that delivers increased resiliency, improved business continuity, and 99.999% availability for globally distributed applications at any scale. Global tables automatically replicate your data across Regions, making it accessible to users around the world with low latency, high availability, and built-in resilience.

DynamoDB global tables are ideal for applications with globally dispersed users, including financial technology, ecommerce applications, social platforms, gaming, Internet of Things (IoT) solutions, and use cases where users expect the highest levels of availability and resilience.

The following are the key benefits of global tables:

  • High availability – Global tables are designed for 99.999% availability, providing multi-active, multi-Region capability without the need to perform a database failover. If application processing is interrupted in one Region, you can redirect your application to a replica table in another Region, delivering higher business continuity.
  • Flexibility – Global tables eliminate the undifferentiated heavy lifting of replicating data across Regions. With a few clicks on the DynamoDB console or an API call, you can convert any single-Region table to a global table. You also can add or delete replicas to your existing global tables at any time, providing you the flexibility to move or replicate your data as your business requires. Because global tables use the same APIs as single-Region DynamoDB tables, you don’t have to rewrite or make any application changes as you expand globally.
  • Fully managed, multi-Region replication – For businesses with global customers, performance and availability matter more than ever. With global tables, your data is automatically replicated across your chosen Regions, providing low-latency local access and enhanced user experience.
  • Global reach, local performance – Global tables enable you to read and write your data locally, providing single-digit millisecond latency for globally distributed applications at any scale. Updates made to any Region are replicated to all other replicas in the global table, locating your data closer to your users and improving performance for global applications.
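As the flexibility point above notes, converting a single-Region table to a global table is a single API call. A minimal sketch with boto3, assuming a hypothetical table named "Orders" in us-east-1 and a replica added in eu-west-1:

```python
# Sketch: adding a replica Region to an existing table, which converts it
# into a global table. Table name and Regions are hypothetical examples.
replica_request = {
    "TableName": "Orders",
    "ReplicaUpdates": [
        # "Create" adds a replica; "Delete" would remove one.
        {"Create": {"RegionName": "eu-west-1"}},
    ],
}

# Requires AWS credentials and permissions; uncomment to run:
# import boto3
# dynamodb = boto3.client("dynamodb", region_name="us-east-1")
# dynamodb.update_table(**replica_request)
```

Because global tables use the same read/write APIs as single-Region tables, the application code issuing `GetItem` and `PutItem` calls needs no changes after this conversion.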

Conclusion

We have made DynamoDB even more cost-effective by reducing prices for on-demand throughput by 50% and global tables by up to 67%. Whether you are developing a new application, expanding to a global audience, or optimizing your cloud costs, the new DynamoDB pricing offers increased flexibility and substantial savings.
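To make the stated reductions concrete, here is an illustrative bill calculation. The dollar amounts are hypothetical placeholders, not actual AWS prices; only the 50% and 67% reduction factors come from the announcement:

```python
# Illustrative savings under the announced reductions.
# The baseline monthly costs below are hypothetical, not real AWS prices.
on_demand_before = 1000.00       # hypothetical monthly on-demand throughput cost
global_tables_before = 600.00    # hypothetical monthly global tables replication cost

on_demand_after = on_demand_before * (1 - 0.50)        # 50% price reduction
global_tables_after = global_tables_before * (1 - 0.67)  # up to 67% reduction

total_savings = (on_demand_before - on_demand_after) + \
                (global_tables_before - global_tables_after)
print(f"Monthly savings: ${total_savings:.2f}")  # $902.00 in this hypothetical
```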

These pricing changes took effect in all Regions on November 1, 2024, and are automatically reflected in your monthly AWS bill. We’re excited about what these changes mean for customers and the value you can realize from DynamoDB. For more details, see Pricing for Amazon DynamoDB.


About the authors

Mazen Ali is a Principal Product Manager at Amazon Web Services. Mazen has an extensive background in product management and technology roles, an MBA from Kellogg School of Management, and is passionate about engaging with customers, shaping product strategy, and collaborating cross-functionally to build exceptional experiences. In his free time, Mazen enjoys traveling, reading, skiing, and hiking.

Joseph Idziorek is currently a Director of Product Management at Amazon Web Services. Joseph has over a decade of experience working in both relational and nonrelational database services and holds a PhD in Computer Engineering from Iowa State University. At AWS, Joseph leads product management for nonrelational database services including Amazon DocumentDB (with MongoDB compatibility), Amazon DynamoDB, Amazon ElastiCache, Amazon Keyspaces (for Apache Cassandra), Amazon MemoryDB, Amazon Neptune, and Amazon Timestream.
