Podcast: From Code to Strategy: Drive Organizational Impact Through Strategic Conversations and User Focus

MMS Founder
MMS Mark Allen

Article originally posted on InfoQ. Visit InfoQ

Transcript

Shane Hastie: Good day folks. This is Shane Hastie for the InfoQ Engineering Culture podcast. Today, I’m sitting down with Mark Allen. Mark, welcome. Thanks for taking the time to talk to us.

Mark Allen: Thanks for having me.

Shane Hastie: My normal starting point in these conversations is, who’s Mark?

Introductions [00:55]

Mark Allen: So I’m Mark Allen. I currently work as head of engineering at Isometric, which is a large, fast-growing UK climate tech startup. We’re in the carbon removal space, so we issue carbon removal credits, and have a lot of technology that works on auditing and verifying very complex industrial carbon removals. But going right back to the beginning, I actually started out my career as an econometrician. I think the work I was doing you’d call data science today, a lot of regressions, things like that, and large economic models, but the term data science hadn’t been coined when I started my career.

It was only two years into my career that someone came up with the term data science. So it was econometrics definitely back then. So did that for a number of years and transitioned from data science into data engineering, into software engineering, and did a whole range of different things in small companies, large companies, over about 10 years.

I transitioned into management about seven, eight years ago now, when I was working at Skyscanner. I had a great time there working on a number of different products. I moved to a company called Glovo, an on-demand delivery company, like DoorDash but in Southern Europe, where I graduated into being a manager of managers. And by the time I left, I think about eight different teams were in my group. I left to cofound a startup, we got VC backing. We built a popular-ish HR tool called Ourspace, workforce planning, that sort of thing. We didn’t quite get enough product market fit to raise a Series A. And through the journey of doing that, I realized that I really wanted to work on things that had a meaningful positive impact.

So I joined Isometric, and today I lead our engineering teams, coming up to about 20 people now. That’s software engineering, it’s also data, and we’re also just starting on solutions and implementation engineering, although it’s early, and it’ll be interesting to see how that goes, and maybe we’ll touch on that later. I’m a lifelong builder. Before we started, I was just talking about how I was working on a weird Raspberry Pi sprinkler system at home this morning. So I’m really keen on technology, but also really keen on how you organize companies to make them most effective and how you grow the careers of very senior people.

Shane Hastie: Meaningful, positive impact, as an engineer, what does that mean to us?

Meaningful positive impact in engineering [03:19]

Mark Allen: It can mean a whole range of things, and I think, broadly, I’m very technology positive. I think in most cases technology has a positive impact, not in all, and I won’t call out any specific examples, but I think there’s also a spectrum, right? So I think about working at Skyscanner, and I think liberalizing travel, helping people find cheap flights and travel and experience other cultures, is largely positive. There are some negative impacts around overtourism and the environment, but it’s a largely positive thing, I think. So I put that on the moderately positive part of the spectrum, and I think it goes all the way to people doing amazing stuff to reduce loneliness through AI and solve deeply personal, serious problems.

At Isometric, we are really focused on the climate crisis, and so for me that would be the area where I’d start if I’m specifically looking at something where there is that positive impact. There’s a huge need for technological solutions in combating and mitigating climate change. So, yes, that’s how I think about meaningful impact. But I think, as I say, largely, most engineers are having a positive impact on people’s lives because ultimately people wouldn’t use and buy products if they weren’t enjoying them or getting some value out of them, in most cases.

Shane Hastie: We came across each other because you gave a talk at QCon London on strategic conversations. For an engineer, what is a strategic conversation?

Being engaged in strategic conversations [04:56]

Mark Allen: I think there are strategic conversations that all engineers, of every seniority, are involved in probably every single day at work. So if you’re in a team, that could be how we decide to approach a refactor or how we’re going to break down a project into a series of tasks. If you are a principal engineer, a director of engineering, a CTO, they’ll probably have a much larger scope, and there’ll be conversations about what are our major technical investments over the next 12 months? Are we going to migrate back to on-prem, or what are we going to do? How are we going to staff all of our teams? Do we need to do a reorg?

There’ll be much broader scope conversations, but these conversations happen all the time. And my talk was about how to go from the, I guess, smaller scoped conversations that you’re currently involved in and invited to, and get a seat at the table to participate in these larger ones where the impact is likely going to be larger, both on the company and also potentially on the whole world.

Shane Hastie: So as an individual contributor, I might desire to step into those bigger conversations, but I don’t even know where to start. Where do we start? How do we start?

Mark Allen: It’s funny because these aren’t things that we usually get taught as engineers. It’s not why we get into software engineering, usually. We get into it to build stuff and see people use the thing that we’ve built. And so it’s a skills journey that people need to go on, and I think the place to start is even understanding and identifying where and what conversations are actually happening, and what the topics of those conversations are. And I would start by thinking about one level of scope more senior than me.

So if I’m a senior engineer in a team, I’m probably involved in most of the conversations that relate to that team. I want to be involved in the conversations that relate to the group of teams around me, a level above. So I would, and I do, speak to my manager, speak to peers, and understand what’s on their mind. What big, difficult strategic choices are coming up for them? What are they thinking about, and then how can I contribute to them?

We all get invited to these all-hands-type meetings. It might be for your group of teams. It might be for the whole technology organization. It might be for the whole company, but there will be meetings where we all get invited to hear leaders speak.

Now, as a leader, I put so much time, and thought, and intention into what we actually say in those meetings because I want to signal to people the key strategic things we’re working on so that they then take them back to their work and focus their work around them, but also people who are interested can come forward and contribute to those things. So what I would say to people is, pay attention. Pay attention when leaders are speaking. See what they talk about. See what they focus on and use that to calibrate where you put your thoughts and put your time. So that’s where I would start.

Get out of the building and get to know your users [07:58]

Obviously, once you’ve identified a topic, there’s a lot more to do from there. So for me, the key things that one needs to do are to start building relationships with people currently working in areas you want to work in. I recall when I joined Glovo, I was working in the courier space, the rider space, so these people that do deliveries, the people we see out on the streets, on bikes and motorbikes with food in the back. I was working in this space, and I came in through engineering, and I felt like I didn’t really know how the business operated that well.

So I wasn’t really able to input effectively into any strategic decision making. So I spent a couple of days actually doing deliveries myself. I went to the courier center where our operations team worked and where couriers would come every single day with questions about both the application that we were building, but also real life questions, like, can you lend me some money, Glovo, because I’m struggling to pay my bills and support my family this month?

I would just sit there, and speak to people, and just learn about the users and what their issues were. I went to our LiveOps center where we deal with real-time issues, and so you’d have these huge dashboards. I literally went round the company, went to every adjacent function, and spent time learning what they did.

And through doing that, I obviously got a lot of context about what’s strategically important, but I also built relationships, and that meant that when things came up and somebody needed somebody in engineering, I was just the logical person that people would go to. Yes, we know that guy, let’s bring him in. That’s not how I would like organizations to work, but it is. There is some reality that people knew I had context, people knew that I could input, and so that was very useful and it became my internal brand essentially.

Build your own brand [09:47]

I was the guy in engineering that knew all about this, and that’s a really powerful thing that I advise people to think about. Think about your internal brand. How do people who are more senior than you perceive you? How do people in diagonal seniority positions, in other functions that collaborate with yours, think about you and perceive you?

And I actively jot down thoughts on this, specific words, and I ask people as well what words they associate with me, what are the first things that come to mind, and really try and make them the things I would want them to be. I recall when I joined Isometric, in the first six months, we had loads of hiring to do. I was trying to get to know the product, and after six months, if you’d asked anybody in the organization what words they associated with me, I think it would’ve been: hiring, strong, technical manager.

Maybe that would’ve been it. And that’s not super strategic, but that was the first six months, and I think after 12 months it would’ve been very different, because I had very consciously focused on becoming a knowledge domain expert on a couple of really important strategic topics, reforestation being an example. I was one of the authors of our reforestation methodology, and identifying these strategic topics, building these relationships, and then building an internal brand around them was super helpful.

Obviously, there’s another part of this that once you build a brand, you do have to say, “Yes”, to the opportunities that start to come up, and you also have to say, “No”, to the opportunities that are probably not going to be a good use of your time, and you have to say, “No”, to the day-to-day things that you have previously been praised for.

Time is not free, you can’t create more time to do strategic work. You have to sacrifice something, and this is the super hard part. All of us have things that we know we’re good at. You could be fantastic at quickly identifying and fixing bugs in a microservice that your team owns. You could be really good at being a team-level manager, a team-level lead, making sure the team has great strategic direction. But if you want to grow, you have to figure out how to delegate those things.

You have to find other people to do them and you have to cut back on them. Step away from the things you’ll get praised for, go out into the things you’re less comfortable with, and start taking them on, saying “Yes” to opportunities probabilistically as well. So that’s a run-through of some of the topics that I covered in my talk.

Shane Hastie: These are generally not skills that are covered in any engineering training, and often not in our professional development as engineers. How do I build those skills?

Developing strategic skills [12:35]

Mark Allen: I always think of these things as a virtuous circle. Firstly, you start small and reinforce. There’s no quick path. You can’t just go and sit down with the head of sales for half an hour, and make friends with someone from marketing, and then suddenly you’re invited to a board meeting. That’s not going to happen, right? You have to start small and work on reinforcing these things. I’ve had a couple of really good coaches and mentors in my career, but I’ve been very intentional about saying to them, “This is the thing I want to focus on. I’m selecting you because I think you are good at this and you can help me. Here’s the value I’m going to bring you”. And then really deliberate practice and intentional work on building these skills.

They say a great way of learning is teaching others as well. So once you get to six or seven out of 10, and you look around, and you see that you manage or collaborate with engineers who are maybe a little bit earlier on the journey, then you can also start helping them and working with them, and also learning from them because they’ll have ideas and input as well. Those are a couple of ways. Exposure’s also a way, just learning by trying, which can be tough and can be hard.

I also mentor a number of people in my organization on this topic. Some of them report to me, some of them don’t. And I’m happy to give advice to people who reach out to me as well. It happens from time to time that people message me on LinkedIn or send me an email, and I’m happy to help people who want to grow in this part of the journey. I honestly have struggled to find really good books on this topic to recommend, which is unfortunate, but maybe it’s a gap in the market, maybe I should write something and see how it goes.

Shane Hastie: So coaching and mentoring, it is something where, as you said, it’s good to find somebody who’s maybe a little bit behind you on your journey, and help bring them along. But what does it take to coach and/or mentor somebody?

What it takes to coach and mentor others [14:47]

Mark Allen: I think it’s a reciprocal thing, and I almost think that more emphasis is on the mentee, the person being mentored, to be really intentional about what they want to get out of it. Not just go to somebody who is senior and successful and say, “Can you be my mentor and distil your knowledge?” I think you have to be very specific: this is the skill that I want to work on as the mentee. Do you think you could help me with this? And then as the mentor or the coach, you have to be upfront with yourself and figure out whether you can help the person or not.

I try and hold people to quite high levels of accountability. I try and refrain from just pontificating and sharing advice. I would share my thoughts and then say, “Do you think by the next time we meet”, one week, two weeks, four weeks, whatever the cadence is, “how do you think you can have practically applied this?” And try and set up a plan for them to actually put things into action.

I don’t find purely theoretical mentoring or coaching that effective or useful. So I think it takes that as well. It also takes resilience and a big heart. You won’t always get good results, and you’ll commit to something sometimes, and the other person won’t always reciprocate with their commitment. And the easy path then is to pull back from that or set up massive gatekeeping against it. But I try and persist with it, and try and commit at any given time to having five to 10 people that I’m working with. Maybe not on a weekly cadence, but on specific topics. Yes.

Shane Hastie: Shifting focus a little bit, when we were chatting before the conversation, you mentioned the extreme user focus for an engineering team. What does that mean?

Having an extreme user focus for an engineering team [16:49]

Mark Allen: Yes. I mean, I think we live in a world now of product engineering in many companies, where software engineers are not just asked to write code in a vacuum or according to requirements that are written by somebody else in great detail in a ticket, but are given much more high-level user expectations and then are required to break them down themselves, make decisions or provide recommendations on a lot of the implementation details. And they can only do that based on strong knowledge of the user and alignment with the kind of user value the company’s trying to bring.

So one of the things that we really do at Isometric is try and make sure every engineer has a deep understanding of the problem and a deep understanding of the user. And we see from that, the ability to ship things quickly that are actually the thing the user wants radically increases.

You don’t have this back and forth about, oh, I’m stuck. I don’t know what to do. Engineers can make decisions that are right enough of the time to be worth them not having to have this conversation: stop, pause, go back to the designer, go back to the product manager, set up a new user call in seven days’ time to ask the user some questions.

Some of the ways we do this: our engineering team spends a significant amount of time interacting with users. That could be being involved in Slack channels asynchronously, helping them solve problems directly rather than going through a customer support flow. Engineers join calls with our top customers. In the beginning, all of our engineers were given a group of our top customers and they would be on calls with those customers every week, answering technical questions, going through the product, explaining APIs.

We had four people from our team go and run a series of workshops in San Francisco for San Francisco Climate Week. So engineers actually going to people removing carbon dioxide from the atmosphere, in their facilities, in their offices and sitting down for half a day, and understanding their problems, talking through the product, talking through the roadmap. Yes, we have engineers go on site visits and go to facilities in the UK and around Europe as well. We obviously have the product analytics that many people have, and we have engineers setting up dashboards in Metabase using BI tools to understand user journeys. And we obviously organizationally reward all this as well.

We have a culture where, when people show a huge amount of user focus, it’s recognized. And when people are reluctant to engage with users, and this doesn’t really happen, but when people say, “I’m an engineer and it’s just my job to put on my headphones and write code”, we would be very intolerant of that and try and align them with what we’re trying to do as an organization. I think the proof is in the pudding.

As we know, it’s incredibly hard to quantify engineering productivity in a meaningful way. But even looking at the number of impactful production changes as a proxy metric, our top engineers ship 40 to 50 commits, changes to main, every week, which is a number that I’m pretty proud of, particularly because, looking at it qualitatively, these are substantive, meaningful, impactful changes as well. They’re not just fixing a typo in a README or merging a dependency update.
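As a rough, hypothetical sketch (not Isometric’s actual tooling), a proxy metric like this could be approximated by parsing `git log` output and filtering out trivial changes such as typo fixes and dependency bumps; the marker words and function names here are illustrative assumptions:

```python
from collections import Counter

# Subject-line markers that suggest a trivially small change; purely illustrative.
TRIVIAL_MARKERS = ("typo", "bump", "dependency update")

def substantive_counts(commits):
    """Count per-author commits, excluding trivially small changes.

    commits: iterable of (author, subject) pairs, e.g. parsed from
    `git log --since=1.week --pretty='%an|%s'` output.
    """
    counts = Counter()
    for author, subject in commits:
        lowered = subject.lower()
        # Skip commits whose subject suggests a trivial change.
        if not any(marker in lowered for marker in TRIVIAL_MARKERS):
            counts[author] += 1
    return counts
```

Such a filter is crude, keyword matching will miss and mislabel commits, which is why the qualitative review Mark describes still matters.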

Shane Hastie: Some real get out of the building and go and meet people where they’re at. Another thing that I know from our conversation before that you have a passion about is on-call incident management and doing it well. What does that look like?

Excellence in on-call incident management [20:43]

Mark Allen: I’ve always been really fascinated by incident management. I remember working at Skyscanner, there was a fantastic team leading global availability, with a principal engineer there, John Paris, who was fantastic, and the team around him was really, really good. At Glovo, I was lucky enough to lead a lot of the changes we made to our incident management practice and bring it in line with best practices, introducing incident managers on call, that sort of thing.

And at Isometric, we use incident.io, a product I really like, and I’ve been working very hard with our most senior engineers to make sure that we are excellent at on-call. I think it comes back to that user focus. When a user experiences pain, holding yourself to incredibly high standards to both resolve the incident and also be able to speak to them and say, “Look, this isn’t going to happen again. Our standards are so high that we are going to get to the root cause of this and fix it”.

A thing that I’ve been thinking about a lot is that in all organizations, the way that you improve incident management is by narrowing the gap between the best and the worst people at doing it. And best and worst are very reductive, but I think we’ve all worked with people that are complete on-call heroes.

They just revel in it: there’s an emergency situation, they get to the bottom of the problem quickly, fix it, go super deep, think about the root causes, implement the fixes, speak to the customer and say, “We had this issue three minutes ago. You’ve not even noticed yet, but by the way, we’ve fixed it already and here’s our assessment of the impact on you”. I’ve met people in every organization who are just really good at that. But I’ve also worked with a lot of people for whom it’s not their natural thing. It might be the pressure.

It might be that they’re just a bit earlier on in their careers or have lower tenure at the company, and maybe don’t have enough of an understanding of the code base or parts of the platform to tackle on-call issues in a confident and fast way. So we do a lot to really focus on lifting those people up and improving their standards. So there’s a culture element, which is about a huge amount of recognition for people that do well on call, and then having them talk through their methodology and what they did.

So we have company awards at Isometric, as many companies do, and one of them is called Do It Right, which is one of our operating principles: build things to a high standard, be rigorous, be robust. And across the whole company, where only a third of people work in technology, it’s been great to see that award, which is only given out once every six months, given to engineers for doing great jobs in specific on-call incidents.

So there’s that culture element of reinforcement, and then the knowledge sharing, which we do every week. We have an on-call run-through in an extensive doc, summarized by the engineers who were on call, with specific learnings for people and run-throughs of how they solved things.

Sometimes it’s supported by a Loom recording, a replay of here is what I did, so other people can go and watch it. We have the engineers that we think are the best working actively with other engineers on improving their on-call skills. We obviously have runbooks and materials to support people, great tooling in incident.io, and really good observability thanks to Datadog and Sentry. And, yes, it’s just something that we talk about a lot. I’m the head of engineering, I manage managers and very senior engineers, and my boss is the CTO, and both of us get stuck into incidents. We get involved, ask questions.

We both know how to figure out what’s going on, looking at logs, looking at traces and spans, and then help get to the bottom of things. And I think that kind of leadership commitment as well, not in a micromanage-y way, but in a positive, all-hands-on-deck way. This is a super important thing for us to get right as an engineering organization, and it’s also a really important thing to emphasize.

So a combination of those things is what we try and do on call. And of course the final thing I’d say, and maybe I didn’t say it because it’s obvious, is that everyone in our engineering org is on call as well. So we don’t have a specialist SRE team. All the product building teams are also on call. You build it, you run it, that kind of ownership thing, really making people feel like owners of the production system and not just people who write code that somebody else then puts somewhere in the cloud somehow.

Shane Hastie: Mark, a lot of really interesting points and some great advice in here. If people want to continue the conversation, where do they find you?

Mark Allen: I’m on LinkedIn and I’m pretty good at responding. Obviously, a lot of my job is hiring and recruitment, so a lot of people message me on LinkedIn about positions, and I have to be quite on top of that. So you can definitely get in touch with me there. Sending over a short message about what I might be able to help with is the way I would go.

And then if I can, we might start talking by email, set up a call, something like that. I feel very privileged that in my career I’ve had people that have helped me on my journey, that didn’t have to and didn’t get paid for it a lot of the time. And so, yes, I really am committed to paying that back for people as well who want to get in touch.

Shane Hastie: Paying it forward. Thank you so much for taking the time to talk to us today.

Mark Allen: It’s been a pleasure. Thank you for inviting me.




Wealth Enhancement Advisory Services LLC Sells 470 Shares of MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Wealth Enhancement Advisory Services LLC reduced its holdings in shares of MongoDB, Inc. (NASDAQ:MDB — Free Report) by 2.1% in the 1st quarter, according to its most recent filing with the Securities and Exchange Commission. The firm owned 21,411 shares of the company’s stock after selling 470 shares during the quarter. Wealth Enhancement Advisory Services LLC’s holdings in MongoDB were worth $3,755,000 as of its most recent SEC filing.

Several other hedge funds and other institutional investors also recently made changes to their positions in MDB. OneDigital Investment Advisors LLC boosted its stake in MongoDB by 3.9% in the 4th quarter. OneDigital Investment Advisors LLC now owns 1,044 shares of the company’s stock valued at $243,000 after purchasing an additional 39 shares during the period. Aigen Investment Management LP boosted its stake in MongoDB by 1.4% in the 4th quarter. Aigen Investment Management LP now owns 3,921 shares of the company’s stock valued at $913,000 after purchasing an additional 55 shares during the period. Handelsbanken Fonder AB boosted its stake in MongoDB by 0.4% in the 1st quarter. Handelsbanken Fonder AB now owns 14,816 shares of the company’s stock valued at $2,599,000 after purchasing an additional 65 shares during the period. O Shaughnessy Asset Management LLC boosted its stake in MongoDB by 4.8% in the 4th quarter. O Shaughnessy Asset Management LLC now owns 1,647 shares of the company’s stock valued at $383,000 after purchasing an additional 75 shares during the period. Finally, Fifth Third Bancorp boosted its stake in MongoDB by 15.9% in the 1st quarter. Fifth Third Bancorp now owns 569 shares of the company’s stock valued at $100,000 after purchasing an additional 78 shares during the period. 89.29% of the stock is currently owned by institutional investors.

Insider Activity

In related news, CAO Thomas Bull sold 301 shares of the firm’s stock in a transaction that occurred on Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total transaction of $52,148.25. Following the completion of the sale, the chief accounting officer now owns 14,598 shares of the company’s stock, valued at $2,529,103.50. This trade represents a 2.02% decrease in their ownership of the stock. The transaction was disclosed in a document filed with the SEC, which is available at the SEC website. Also, insider Cedric Pech sold 1,690 shares of the firm’s stock in a transaction that occurred on Wednesday, April 2nd. The stock was sold at an average price of $173.26, for a total value of $292,809.40. Following the sale, the insider now directly owns 57,634 shares of the company’s stock, valued at $9,985,666.84. This represents a 2.85% decrease in their position. The disclosure for this sale can be found here. Over the last quarter, insiders sold 50,382 shares of company stock valued at $10,403,807. Company insiders own 3.10% of the company’s stock.

Wall Street Analysts Weigh In


MDB has been the subject of several recent research reports. Needham & Company LLC restated a “buy” rating and issued a $270.00 price target on shares of MongoDB in a research note on Thursday, June 5th. Truist Financial reduced their target price on MongoDB from $300.00 to $275.00 and set a “buy” rating on the stock in a report on Monday, March 31st. DA Davidson reissued a “buy” rating and set a $275.00 target price on shares of MongoDB in a report on Thursday, June 5th. Piper Sandler increased their target price on MongoDB from $200.00 to $275.00 and gave the stock an “overweight” rating in a report on Thursday, June 5th. Finally, Rosenblatt Securities reduced their target price on MongoDB from $305.00 to $290.00 and set a “buy” rating on the stock in a report on Thursday, June 5th. Eight analysts have rated the stock with a hold rating, twenty-five have issued a buy rating and one has given a strong buy rating to the stock. According to MarketBeat.com, the company currently has a consensus rating of “Moderate Buy” and a consensus price target of $282.47.

Check Out Our Latest Analysis on MongoDB

MongoDB Trading Up 1.2%

MongoDB stock opened at $209.20 on Friday. The company has a market capitalization of $17.09 billion, a PE ratio of -183.51 and a beta of 1.39. MongoDB, Inc. has a 12 month low of $140.78 and a 12 month high of $370.00. The stock has a 50-day moving average price of $188.81 and a two-hundred day moving average price of $219.23.

MongoDB (NASDAQ:MDB — Get Free Report) last issued its quarterly earnings results on Wednesday, June 4th. The company reported $1.00 earnings per share for the quarter, beating the consensus estimate of $0.65 by $0.35. The company had revenue of $549.01 million during the quarter, compared to analysts’ expectations of $527.49 million. MongoDB had a negative return on equity of 3.16% and a negative net margin of 4.09%. The business’s quarterly revenue was up 21.8% on a year-over-year basis. During the same quarter in the prior year, the firm posted $0.51 earnings per share. As a group, analysts predict that MongoDB, Inc. will post -1.78 earnings per share for the current fiscal year.

MongoDB Profile


MongoDB, Inc., together with its subsidiaries, provides a general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Further Reading

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)




Article originally posted on mongodb google news. Visit mongodb google news



MongoDB’s (MDB) “Outperform” Rating Reaffirmed at William Blair – Defense World

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

William Blair reiterated their outperform rating on shares of MongoDB (NASDAQ:MDB) in a report released on Thursday morning, RTT News reports.

A number of other equities research analysts have also issued reports on the company. Cantor Fitzgerald increased their price target on MongoDB from $252.00 to $271.00 and gave the company an “overweight” rating in a research report on Thursday, June 5th. Morgan Stanley reduced their target price on shares of MongoDB from $315.00 to $235.00 and set an “overweight” rating on the stock in a report on Wednesday, April 16th. Mizuho decreased their price target on shares of MongoDB from $250.00 to $190.00 and set a “neutral” rating for the company in a research note on Tuesday, April 15th. Robert W. Baird dropped their price objective on shares of MongoDB from $390.00 to $300.00 and set an “outperform” rating on the stock in a research report on Thursday, March 6th. Finally, Macquarie reaffirmed a “neutral” rating and set a $230.00 price objective (up from $215.00) on shares of MongoDB in a research report on Friday, June 6th. Eight equities research analysts have rated the stock with a hold rating, twenty-five have issued a buy rating and one has issued a strong buy rating to the company’s stock. According to data from MarketBeat.com, the stock currently has a consensus rating of “Moderate Buy” and an average price target of $282.47.


MongoDB Price Performance


Shares of MongoDB stock opened at $209.20 on Thursday. The business has a 50-day moving average of $188.81 and a 200 day moving average of $219.23. MongoDB has a 12 month low of $140.78 and a 12 month high of $370.00. The stock has a market capitalization of $17.09 billion, a price-to-earnings ratio of -183.51 and a beta of 1.39.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings results on Wednesday, June 4th. The company reported $1.00 earnings per share (EPS) for the quarter, topping the consensus estimate of $0.65 by $0.35. MongoDB had a negative net margin of 4.09% and a negative return on equity of 3.16%. The business had revenue of $549.01 million for the quarter, compared to analysts’ expectations of $527.49 million. During the same quarter in the prior year, the company earned $0.51 earnings per share. The business’s quarterly revenue was up 21.8% on a year-over-year basis. Equities research analysts anticipate that MongoDB will post -1.78 earnings per share for the current year.

Insider Buying and Selling at MongoDB

In other MongoDB news, CAO Thomas Bull sold 301 shares of MongoDB stock in a transaction that occurred on Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total value of $52,148.25. Following the sale, the chief accounting officer now owns 14,598 shares of the company’s stock, valued at approximately $2,529,103.50. The trade was a 2.02% decrease in their position. The sale was disclosed in a document filed with the Securities & Exchange Commission, which can be accessed through the SEC website. Also, CEO Dev Ittycheria sold 25,005 shares of the business’s stock in a transaction that occurred on Thursday, June 5th. The stock was sold at an average price of $234.00, for a total value of $5,851,170.00. Following the transaction, the chief executive officer now owns 256,974 shares in the company, valued at $60,131,916. The trade was an 8.87% decrease in their position. The disclosure for this sale can be found here. Insiders sold a total of 50,382 shares of company stock valued at $10,403,807 in the last ninety days. Insiders own 3.10% of the company’s stock.

Institutional Trading of MongoDB

Hedge funds have recently modified their holdings of the business. Swedbank AB increased its holdings in shares of MongoDB by 2.2% in the 1st quarter. Swedbank AB now owns 615,593 shares of the company’s stock valued at $107,975,000 after purchasing an additional 13,100 shares in the last quarter. Acadian Asset Management LLC grew its position in MongoDB by 181.8% during the first quarter. Acadian Asset Management LLC now owns 562,190 shares of the company’s stock worth $98,586,000 after buying an additional 362,705 shares during the period. IFM Investors Pty Ltd increased its holdings in MongoDB by 4.3% in the first quarter. IFM Investors Pty Ltd now owns 13,796 shares of the company’s stock valued at $2,420,000 after buying an additional 569 shares in the last quarter. UBS AM A Distinct Business Unit of UBS Asset Management Americas LLC lifted its position in shares of MongoDB by 11.3% during the 1st quarter. UBS AM A Distinct Business Unit of UBS Asset Management Americas LLC now owns 1,271,444 shares of the company’s stock valued at $223,011,000 after acquiring an additional 129,451 shares during the period. Finally, Woodline Partners LP boosted its stake in shares of MongoDB by 30,297.0% during the 1st quarter. Woodline Partners LP now owns 322,208 shares of the company’s stock worth $56,515,000 after acquiring an additional 321,148 shares in the last quarter. 89.29% of the stock is owned by institutional investors and hedge funds.

About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.








The Great Data Reimagination: From Static to Agile in the AI Era – The New Stack

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news




We’re in the middle of a fundamental change in how enterprise software works. In the next decade, your database will become your AI.


Jun 26th, 2025 9:00am


Featured image by Alex Shuper for Unsplash+.

Just five years ago, choosing the right kind of database to support their applications was a complex decision for many developers: relational or NoSQL? Structured or unstructured? Flexible or predictable? They didn’t know exactly what their data would look like in six months, but they did know it was certain to change. This led many to make a rebel’s choice: rejecting the rigid structure of SQL databases for something more fluid and adaptable.

Now, with AI developing faster and more furiously than anticipated, the conversation has radically shifted. It’s not simply about where to store data, but about understanding what’s happening beneath our applications, where the fundamental nature of how we store, process and make decisions with data is transforming in real time.

These changes are driving the great data reimagination, where companies must think about their data not as a static asset, but as an active participant in an intelligent platform that lets them innovate at AI speed.

The Data Architecture Identity Crisis

Organizations are currently facing $1.52 trillion in technical debt, and according to Gartner, by 2026, 80% of that debt will be due to architectural issues. For developers, technical debt consumes up to 42% of their time, hurting morale, contributing to turnover and slowing innovation, all of which hinder competitiveness in areas like AI, personalization and Internet of Things (IoT) usage.

“Today’s developers are building AI agents that need to remember conversations, search through millions of documents semantically and scale across multiple clouds simultaneously,” said Han Heloir, EMEA generative AI solutions architect at MongoDB. “Much of the architectural debt developers are facing stems from mismatches between object and relational systems, which kills agility, speed and performance.”

Developers have long known that rigid infrastructure can slow development. Schema-flexible (not to be confused with schemaless) approaches that allow rapid iteration are best suited for modern applications. Still, the data architecture identity crisis is real among technical leaders torn between two fundamentally different stories about how applications should meet the opportunities of AI.

PostgreSQL, for example, is revered by engineers for its reliability and SQL mastery. Conversely, flexible, AI-integrated platforms use document models that naturally map to application code and don’t require predetermined schemas. They allow for rapid iteration while maintaining governance, making them ideal for dynamic domains that require real-time data.

The two platforms are converging, however, with classical databases like PostgreSQL embracing JSON support and NoSQL-like flexibility, and some NoSQL vendors adding capabilities such as transactions, joins and vector search to power both flexible and structured use cases while simplifying architecture.

What’s key here is understanding your application’s competitive advantage and your customers’ needs. Financial trading platforms, medical records systems and regulatory compliance tools all demand schema-first thinking. In these cases, PostgreSQL’s “structure-first” philosophy is essential. But when your application’s competitive advantage comes from understanding semi-structured, dynamic or rapidly evolving data, a “build-first” philosophy offers a strategic advantage. Developers can quickly start writing applications without first designing a database schema.

The trick is to adopt polyglot persistence, using PostgreSQL for relational workloads and document databases for AI workloads. It’s about understanding which platform handles which responsibilities in your architecture.
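The polyglot split described above can be sketched in a few lines. This is a hedged illustration, not a production pattern: `sqlite3` stands in for the relational store (PostgreSQL in the text), a plain Python list stands in for the document store, and the `store` router with its `kind` field is a hypothetical helper.

```python
import sqlite3

# Relational side: schema-first, suited to invariants like ledger entries.
ledger = sqlite3.connect(":memory:")
ledger.execute(
    "CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER)"
)

# Document side: schema-flexible, suited to rapidly evolving AI payloads.
agent_memory = []  # stands in for a document collection

def store(record):
    """Hypothetical router: structured trade records go to the relational
    store; everything else lands in the document store as-is."""
    if record.get("kind") == "trade":
        ledger.execute(
            "INSERT INTO trades (symbol, qty) VALUES (?, ?)",
            (record["symbol"], record["qty"]),
        )
    else:
        agent_memory.append(record)

store({"kind": "trade", "symbol": "MDB", "qty": 10})
store({"kind": "conversation", "turns": ["hi", "hello"], "embedding": [0.1, 0.2]})

rows = ledger.execute("SELECT symbol, qty FROM trades").fetchall()
print(rows, len(agent_memory))
```

The point of the sketch is the division of responsibility: the relational table enforces its shape at write time, while the document side accepts whatever shape the record arrives in.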

The Adaptive Approach: Prioritizing Developer Speed

Keep in mind that, while the decision between traditional databases and adaptive platforms is not binary, adaptive approaches are best suited for AI development where developer velocity is indispensable for competitiveness.

That’s because an adaptive approach treats data as an active player in an application’s intelligence and agility, improving decision-making and user experience. Adaptive platforms act as intelligent data partners that can store, search, analyze and even reason about information, all while enabling the application to scale from one user to millions without developers having to think about defining infrastructure before they begin to build. These platforms are transformative for developers, allowing them to focus on what makes their applications unique rather than stitching together five different services to handle data, search, analytics and AI.

From a technical standpoint, adaptive platforms converge three capabilities: They remove impedance mismatch, provide a distributed architecture for horizontal scaling and integrate operational and analytical workloads. These platforms evolve with new AI workloads and can serve as both the operational database and the vector store for retrieval-augmented generation (RAG) applications.

For example, MongoDB’s approach to RAG consolidation unifies search, vector search, operational data and event-driven triggers. Instead of keeping customer data in MongoDB, vectors in Pinecone and search indexes in Elasticsearch, everything lives in one platform. It’s an AI-native approach that eliminates the overhead associated with traditional AI platforms by keeping operational data, vector embeddings, search indexes and analytics systems aligned.

When a customer’s profile updates, the vector embeddings automatically synchronize. As new knowledge is added to the system, it’s immediately available for both operational queries and semantic search. In other words, it makes intelligence intrinsic to the data layer itself.
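The unified query pattern can be sketched as a single MongoDB aggregation pipeline. `$vectorSearch` is the real Atlas Vector Search stage, but the index name, field names, tenant filter, and query vector below are illustrative assumptions, and no live cluster is required to build the pipeline document itself:

```python
# Sketch of a RAG retrieval pipeline combining vector similarity with an
# operational metadata filter in one aggregation. Index and field names
# are hypothetical; the vector would come from an embedding model.
query_vector = [0.12, -0.05, 0.33]

pipeline = [
    {
        "$vectorSearch": {
            "index": "docs_vector_index",     # assumed index name
            "path": "embedding",              # assumed vector field
            "queryVector": query_vector,
            "numCandidates": 100,
            "limit": 5,
            "filter": {"tenant_id": "acme"},  # operational filter, same store
        }
    },
    # Project operational fields alongside the similarity score.
    {
        "$project": {
            "title": 1,
            "body": 1,
            "score": {"$meta": "vectorSearchScore"},
        }
    },
]

# Against a live Atlas cluster this would run as, roughly:
#   results = db.documents.aggregate(pipeline)
print(pipeline[0]["$vectorSearch"]["limit"])
```

Because the filter and the vector search live in the same pipeline, there is no separate vector store to keep synchronized with the operational data.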

“The advantages of a unified, AI-native platform are profound,” Heloir said. “We’re seeing companies reduce their AI infrastructure from six or seven components down to MongoDB plus their LLM [large language model] provider. That’s not just cost savings. It’s architectural simplicity that speeds innovation.”

AI-Native Platforms Are the Future

We’re in the middle of a fundamental reimagining of how enterprise software will work in the next decade, where your database becomes your AI. The debate over databases is fading as future-looking organizations begin to adopt adaptive platforms that learn and evolve.

As the distinction between operational databases and adaptive platforms disappears entirely, data will increasingly become a collaborative partner in an infrastructure that acts as an intelligent organism where data, meaning and reasoning coexist seamlessly. This isn’t science fiction. It’s a natural progression toward AI-native design.

In the meantime, technical leaders can navigate uncertainty over the relational / adaptive divide by starting small, for instance, with one use case and one AI-enhanced feature rather than making a platform-wide decision. Then measure the results and let success drive expansion.

MongoDB is evolving beyond a database. It’s now an AI-native data platform that handles not just storage but also vector search, real-time analytics and multicloud scaling, enabling applications to innovate at AI speed. Learn more and give it a try.





MongoDB (NASDAQ:MDB) Given Outperform Rating at William Blair – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB (NASDAQ:MDB)’s stock had its “outperform” rating restated by William Blair in a research note issued to investors on Thursday, RTT News reports.

Several other research firms also recently issued reports on MDB. Truist Financial cut their price objective on MongoDB from $300.00 to $275.00 and set a “buy” rating on the stock in a report on Monday, March 31st. Rosenblatt Securities cut their price objective on MongoDB from $305.00 to $290.00 and set a “buy” rating on the stock in a report on Thursday, June 5th. Monness Crespi & Hardt upgraded MongoDB from a “neutral” rating to a “buy” rating and set a $295.00 price objective on the stock in a report on Thursday, June 5th. Stifel Nicolaus cut their price objective on MongoDB from $340.00 to $275.00 and set a “buy” rating on the stock in a report on Friday, April 11th. Finally, UBS Group upped their price target on MongoDB from $213.00 to $240.00 and gave the stock a “neutral” rating in a report on Thursday, June 5th. Eight equities research analysts have rated the stock with a hold rating, twenty-five have assigned a buy rating and one has given a strong buy rating to the company. Based on data from MarketBeat.com, the company has an average rating of “Moderate Buy” and an average price target of $282.47.


MongoDB Stock Up 1.2%

NASDAQ:MDB traded up $2.44 on Thursday, reaching $209.20. The company’s stock had a trading volume of 1,518,600 shares, compared to its average volume of 1,958,252. MongoDB has a 12-month low of $140.78 and a 12-month high of $370.00. The firm has a market cap of $17.09 billion, a price-to-earnings ratio of -183.51 and a beta of 1.39. The stock has a 50 day moving average price of $188.81 and a 200 day moving average price of $219.23.

MongoDB (NASDAQ:MDB) last posted its earnings results on Wednesday, June 4th. The company reported $1.00 EPS for the quarter, beating analysts’ consensus estimates of $0.65 by $0.35. The company had revenue of $549.01 million for the quarter, compared to the consensus estimate of $527.49 million. MongoDB had a negative return on equity of 3.16% and a negative net margin of 4.09%. The company’s revenue for the quarter was up 21.8% on a year-over-year basis. During the same quarter in the previous year, the business earned $0.51 earnings per share. Equities research analysts expect that MongoDB will post -1.78 EPS for the current fiscal year.

Insider Activity at MongoDB

In related news, Director Hope F. Cochran sold 1,175 shares of the company’s stock in a transaction dated Tuesday, April 1st. The stock was sold at an average price of $174.69, for a total value of $205,260.75. Following the completion of the sale, the director now directly owns 19,333 shares of the company’s stock, valued at $3,377,281.77. This represents a 5.73% decrease in their position. The sale was disclosed in a document filed with the Securities & Exchange Commission, which is available through the SEC website. Also, CAO Thomas Bull sold 301 shares of the company’s stock in a transaction dated Wednesday, April 2nd. The shares were sold at an average price of $173.25, for a total value of $52,148.25. Following the completion of the sale, the chief accounting officer now directly owns 14,598 shares of the company’s stock, valued at approximately $2,529,103.50. This trade represents a 2.02% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders sold a total of 50,382 shares of company stock valued at $10,403,807 in the last three months. 3.10% of the stock is currently owned by company insiders.

Institutional Trading of MongoDB

Several institutional investors and hedge funds have recently added to or reduced their stakes in MDB. First Horizon Advisors Inc. raised its stake in MongoDB by 91.3% during the 4th quarter. First Horizon Advisors Inc. now owns 486 shares of the company’s stock valued at $113,000 after acquiring an additional 232 shares during the last quarter. IFP Advisors Inc raised its stake in MongoDB by 54.3% during the 4th quarter. IFP Advisors Inc now owns 1,632 shares of the company’s stock valued at $380,000 after acquiring an additional 574 shares during the last quarter. Amalgamated Bank raised its stake in MongoDB by 1.9% during the 4th quarter. Amalgamated Bank now owns 4,713 shares of the company’s stock valued at $1,097,000 after acquiring an additional 89 shares during the last quarter. Los Angeles Capital Management LLC purchased a new stake in MongoDB during the 4th quarter valued at approximately $8,763,000. Finally, GenTrust LLC purchased a new stake in MongoDB during the 4th quarter valued at approximately $4,446,000. 89.29% of the stock is owned by hedge funds and other institutional investors.

About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.





Stock Traders Purchase Large Volume of Call Options on MongoDB (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) was the target of unusually large options trading on Wednesday. Stock investors purchased 36,130 call options on the company. This is an increase of approximately 2,077% compared to the typical daily volume of 1,660 call options.

Analysts Set New Price Targets

MDB has been the topic of a number of research analyst reports. Barclays upped their target price on MongoDB from $252.00 to $270.00 and gave the stock an “overweight” rating in a research note on Thursday, June 5th. Needham & Company LLC restated a “buy” rating and issued a $270.00 target price on shares of MongoDB in a research note on Thursday, June 5th. Guggenheim upped their target price on MongoDB from $235.00 to $260.00 and gave the stock a “buy” rating in a research note on Thursday, June 5th. Wedbush restated an “outperform” rating and issued a $300.00 target price on shares of MongoDB in a research note on Thursday, June 5th. Finally, KeyCorp lowered MongoDB from a “strong-buy” rating to a “hold” rating in a research note on Wednesday, March 5th. Eight equities research analysts have rated the stock with a hold rating, twenty-four have assigned a buy rating and one has given a strong buy rating to the company. According to MarketBeat, MongoDB has a consensus rating of “Moderate Buy” and an average target price of $282.47.


MongoDB Price Performance


Shares of MDB stock opened at $206.76 on Thursday. The stock has a market capitalization of $16.89 billion, a PE ratio of -181.37 and a beta of 1.39. MongoDB has a 1-year low of $140.78 and a 1-year high of $370.00. The business has a 50-day moving average of $188.81 and a 200 day moving average of $219.23.

MongoDB (NASDAQ:MDB) last issued its earnings results on Wednesday, June 4th. The company reported $1.00 earnings per share for the quarter, beating the consensus estimate of $0.65 by $0.35. The company had revenue of $549.01 million during the quarter, compared to analysts’ expectations of $527.49 million. MongoDB had a negative return on equity of 3.16% and a negative net margin of 4.09%. MongoDB’s quarterly revenue was up 21.8% compared to the same quarter last year. During the same period in the prior year, the business posted $0.51 EPS. Analysts anticipate that MongoDB will post -1.78 EPS for the current year.

Insider Buying and Selling

In related news, CAO Thomas Bull sold 301 shares of the stock in a transaction dated Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total transaction of $52,148.25. Following the sale, the chief accounting officer now owns 14,598 shares of the company’s stock, valued at approximately $2,529,103.50. This represents a 2.02% decrease in their position. The sale was disclosed in a document filed with the SEC, which is accessible through this hyperlink. Also, Director Hope F. Cochran sold 1,174 shares of the stock in a transaction dated Tuesday, June 17th. The stock was sold at an average price of $201.08, for a total value of $236,067.92. Following the sale, the director now directly owns 21,096 shares in the company, valued at $4,241,983.68. This trade represents a 5.27% decrease in their position. The disclosure for this sale can be found here. In the last quarter, insiders have sold 50,382 shares of company stock valued at $10,403,807. Corporate insiders own 3.10% of the company’s stock.

Institutional Investors Weigh In On MongoDB

Several institutional investors have recently bought and sold shares of MDB. Jericho Capital Asset Management L.P. purchased a new stake in MongoDB during the 1st quarter valued at about $161,543,000. Norges Bank purchased a new stake in MongoDB in the 4th quarter worth approximately $189,584,000. Primecap Management Co. CA lifted its holdings in MongoDB by 863.5% in the 1st quarter. Primecap Management Co. CA now owns 870,550 shares of the company’s stock worth $152,694,000 after buying an additional 780,200 shares during the period. Westfield Capital Management Co. LP purchased a new stake in MongoDB in the 1st quarter worth approximately $128,706,000. Finally, Vanguard Group Inc. lifted its holdings in MongoDB by 6.6% in the 1st quarter. Vanguard Group Inc. now owns 7,809,768 shares of the company’s stock worth $1,369,833,000 after buying an additional 481,023 shares during the period. 89.29% of the stock is currently owned by hedge funds and other institutional investors.

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.





Cloudflare Expands AI Capabilities with Launch of Thirteen New MCP Servers

MMS Founder
MMS Craig Risi

Article originally posted on InfoQ. Visit InfoQ

Cloudflare has unveiled thirteen new Model Context Protocol (MCP) servers, enhancing the integration of AI agents with its platform. These servers allow AI clients to interact with Cloudflare’s services through natural language, streamlining tasks such as debugging, data analysis, and security monitoring.

An MCP server is a server that implements the Model Context Protocol; Cloudflare has introduced this set of them as part of its infrastructure to support AI agents in executing, debugging, and managing tasks securely and efficiently.

The concept of MCP servers is built around the idea of providing AI agents (such as those used in autonomous workflows or natural language interfaces) safe, controlled access to the tools and data they need to operate effectively. These servers don’t just run arbitrary workloads; they are tightly scoped, auditable environments that expose specific capabilities to AI models.
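MCP messages travel as JSON-RPC 2.0, with methods such as `tools/call` for invoking a capability a server has chosen to expose. The sketch below builds one such request; the tool name `query_worker_logs` and its arguments are hypothetical examples, not actual Cloudflare tool identifiers:

```python
import json

# Sketch of an MCP-style tools/call request. MCP frames its messages as
# JSON-RPC 2.0; the tool name and arguments here are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_worker_logs",  # a tool the server chose to expose
        "arguments": {"worker": "api-edge", "since": "2025-06-25T00:00:00Z"},
    },
}

# The server validates the call against the schema of the tools it has
# declared before executing anything -- this is what keeps the
# environment scoped and auditable rather than a general-purpose shell.
wire = json.dumps(request)
decoded = json.loads(wire)
print(decoded["method"], decoded["params"]["name"])
```

The narrow, declared surface is the design point: an agent can only call what the server has explicitly exposed, and every call is a structured, loggable message.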

The new MCP servers from Cloudflare introduce several key features designed to enhance the capabilities of AI agents interfacing with cloud infrastructure. The thirteen new servers and what they do are described below, as detailed in Cloudflare’s blog post.

Cloudflare Documentation Server provides direct access to the most current Cloudflare Developer Documentation. It serves as a reliable reference point for developers working with Cloudflare products, offering guidance on APIs, Workers, networking configurations, Zero Trust solutions, and more. This server helps streamline development and troubleshooting by making technical documentation readily available within the development workflow.

Workers Bindings Server is designed to support the development of Cloudflare Workers applications by enabling integration with core primitives like storage (KV, R2, Durable Objects), secrets, and AI models. It allows developers to interact with these bindings as they would in production, making it easier to test and build serverless applications that depend on Cloudflare’s edge compute environment.

Workers Observability Server offers debugging and performance monitoring tools for Cloudflare Workers applications. It allows developers and operators to access logs, analytics, and error traces, providing essential insights into how Workers behave during execution. This observability helps troubleshoot bugs, optimize performance, and maintain reliable production applications.

Container Server creates sandboxed development environments on demand, giving developers isolated containers to run and test their code. These environments simulate realistic conditions without affecting local setups or production systems. It’s a convenient tool for experimentation, CI/CD workflows, and running ephemeral services safely and cleanly.

Browser Rendering Server enables headless browser functionality for fetching and rendering web pages. It can convert page content to markdown or capture screenshots, making it useful for automation workflows, content extraction, UI validation, and generating previews. This server is especially valuable for developers building tools that need to interact with web UIs programmatically.
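The page-to-markdown idea can be illustrated in miniature with a stdlib sketch. This is not Cloudflare's API, just the kind of transformation the server performs, reduced to headings and text (a real rendering server also executes JavaScript and handles full page layout):

```python
from html.parser import HTMLParser

class MarkdownishExtractor(HTMLParser):
    """Toy converter: <h1>/<h2> become '#'-prefixed lines, other text
    passes through. Illustrative only."""
    def __init__(self):
        super().__init__()
        self.lines = []
        self._prefix = ""

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2"):
            # '#' for h1, '##' for h2, markdown-style.
            self._prefix = "#" * int(tag[1]) + " "

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.lines.append(self._prefix + text)
            self._prefix = ""

parser = MarkdownishExtractor()
parser.feed("<h1>Status</h1><p>All systems normal.</p><h2>Edge</h2>")
print("\n".join(parser.lines))
```

Feeding the snippet above yields a heading line, a plain paragraph line, and a subheading line, which is the shape an agent would consume instead of raw HTML.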

Radar Server connects to Cloudflare Radar, a global intelligence platform that provides insights into internet traffic trends, regional outages, domain scans, and overall web health. This server allows developers and network analysts to query real-time or historical data to assess internet stability, understand global events, or identify suspicious activity across the web.

The Logpush Server provides status summaries and health checks for Logpush jobs, Cloudflare’s mechanism for streaming logs to external storage destinations like AWS S3 or Google BigQuery. It ensures that critical observability and compliance data are flowing correctly, helping teams quickly identify failures or misconfigurations in their logging pipelines.

AI Gateway Server allows teams to search logs and trace prompt-response cycles made through Cloudflare’s AI Gateway. It provides detailed telemetry on AI requests, including token usage, response times, and prompt contents. This is valuable for teams building or monitoring AI-driven applications, especially those concerned with privacy, cost, and quality of AI interactions.

AutoRAG Server is used to manage and query documents stored in AutoRAG, Cloudflare’s retrieval-augmented generation system. It lets developers list ingested documents and search across them to verify availability and correctness. This is crucial for AI applications that rely on external documents to provide accurate, context-rich responses to user queries.

Audit Logs Server grants access to audit logs that capture user actions and configuration changes across Cloudflare services. These logs are essential for security reviews, compliance reporting, and change tracking, offering a transparent record of activity within an organization’s Cloudflare environment. Reports can be generated and filtered for easier review.

DNS Analytics Server offers tools to analyze DNS resolution performance and troubleshoot configuration issues. It provides data such as query volumes, response times, and error rates, helping network administrators identify bottlenecks, optimize routing, and ensure DNS infrastructure is functioning efficiently and securely.

Digital Experience Monitoring (DEM) Server delivers insights into the end-user experience of web and SaaS applications. Using real-user and synthetic monitoring data, it helps organizations track performance metrics like latency, uptime, and error rates across geographic locations. This server supports IT teams in ensuring application reliability and user satisfaction.

Lastly, Cloudflare One CASB Server functions as a Cloud Access Security Broker for Cloudflare’s Zero Trust suite. It scans connected SaaS applications for security misconfigurations and potential compliance issues. This helps organizations secure their data and users by identifying risky configurations in platforms like Google Workspace or Microsoft 365, aligning with modern zero-trust principles.

The MCP server code and documentation can be found in Cloudflare’s mcp-server-cloudflare GitHub repo. These servers are accessible to any MCP client supporting remote connections, including platforms like Claude.ai. This development signifies a step towards more seamless integration between AI agents and cloud services, promoting efficiency and automation in various operational tasks.

About the Author

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



The Rise of Energy and Water Consumption Using AI Models, and How It Can Be Reduced

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

Artificial intelligence’s (AI) energy and water consumption has become a growing concern in the tech industry, particularly for large-scale machine learning models and data centers. Sustainable AI focuses on making AI technology more environmentally friendly and socially responsible.

Zorina Alliata and Hara Gavriliadi gave a talk about sustainable AI at OOP Conference.

The energy and water usage depend on the particular AI system, its size, and deployment method, as Gavriliadi explained:

Estimates from Gartner suggest that AI and data centers account for 2-3% of global electricity use, which can rise dramatically in the coming years.

AI’s water usage for cooling is significant, with a single AI conversation potentially using up to 500ml of water.

This rapid growth in resource consumption highlights the need for more sustainable AI practices, energy-efficient technologies, and improved resource management in the tech sector, Gavriliadi mentioned.

Alliata mentioned that model complexity requires more computational power and energy to train and operate these models. She said the data center expansions and requirements for cooling contribute significantly to their overall energy usage and water consumption. As AI tools and applications become more integrated into everyday online experiences and business operations, the cumulative energy demand increases substantially, she added.

Sustainable AI focuses on the long-term effects of AI, including environmental and societal effects, Gavriliadi said. She mentioned that techniques such as sparse modeling, hardware optimization, and responsible AI practices are crucial in achieving this balance between technological advancement and environmental stewardship.

There are cutting-edge methods for lowering AI’s energy footprint, Alliata explained:

The development of more energy-efficient chips and cooling systems is essential for hardware optimization, as it lowers the power consumption of the actual AI infrastructure components.

We now study quantum and neuromorphic architectures, photonic systems, and high-performance computing clusters of servers that process information differently, to speed up the compute power significantly.

Simplifying the computational procedures and algorithmic advancements, such as developing more effective training and inference algorithms, can lower energy consumption, Alliata said. She mentioned algorithm enhancements such as transfer learning, which uses pre-trained models to reduce training time and, consequently, the overall energy demand, and model distillation, which shrinks the size of AI models without significantly reducing performance. Modularity is also crucial, she said; the use of interchangeable parts makes upgrades and repairs simple, prolonging hardware life and cutting down on waste.
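To make the distillation idea concrete, here is a minimal sketch in plain Python; the logits and temperature are made-up illustration values, not taken from any real model.

```python
import math

# Minimal sketch of the core of model distillation: a small "student"
# model is trained to match the teacher's output distribution, softened
# with a temperature T > 1 so relative confidences across classes are
# visible. All logits below are made up for illustration.

def softmax(logits, temperature=1.0):
    z = [x / temperature for x in logits]
    m = max(z)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in z]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    """Cross-entropy between softened teacher and student distributions."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

teacher = [8.0, 2.0, 1.0]
close_student = [7.5, 2.2, 0.9]
far_student = [1.0, 8.0, 2.0]
# The loss is lower when the student mimics the teacher's distribution.
print(distillation_loss(teacher, close_student) <
      distillation_loss(teacher, far_student))  # → True
```

The energy saving comes from the fact that the much smaller student handles inference afterwards, at a fraction of the teacher's compute cost.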

Alliata mentioned that creating biodegradable AI with organic electronics and environmentally friendly packaging reduces waste, and incorporating green energy by using renewable sources to power AI infrastructure is crucial for sustainability in general.

Gavriliadi said there are several tools for estimating the environmental impact of AI solutions. These include carbon calculators that estimate emissions based on energy consumption and location-specific grid data, energy profilers that monitor and analyze energy consumption patterns during AI model execution, and offset estimators that calculate the number of trees needed to offset AI-related carbon emissions:

AWS offers a way to measure the carbon footprint of your workload, and optimize workloads on their platform which can lower the carbon footprint by up to 99%.
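Such calculators boil down to simple arithmetic. The sketch below shows the idea; the grid-intensity and per-tree figures are assumed example values, not authoritative numbers, while the 0.18 L/kWh water figure is the AWS number cited later in this article.

```python
import math

# Rough sketch of what the carbon calculators and offset estimators
# mentioned above do: multiply energy use by a location-specific grid
# intensity, then translate emissions into an equivalent number of trees.
GRID_KG_CO2_PER_KWH = 0.4      # assumed example grid mix
TREE_KG_CO2_PER_YEAR = 21.0    # rough per-tree sequestration figure
WATER_L_PER_KWH = 0.18         # AWS-reported data center water usage

def carbon_kg(energy_kwh):
    """Emissions estimate from energy use and grid intensity."""
    return energy_kwh * GRID_KG_CO2_PER_KWH

def cooling_water_liters(energy_kwh):
    """Cooling water estimate for the same workload."""
    return energy_kwh * WATER_L_PER_KWH

def trees_to_offset(energy_kwh):
    """Offset estimator: trees needed to absorb a year's emissions."""
    return math.ceil(carbon_kg(energy_kwh) / TREE_KG_CO2_PER_YEAR)

# Example: a 10 MWh training run.
print(trees_to_offset(10_000))  # → 191
```

Real calculators refine the same formula with location-specific grid data and time-of-day carbon intensity.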

Sustainable IT requires a cultural and mindset shift, Gavriliadi said. She advised companies to develop a strategy for AI applications across business, technology, and sustainability; to set, monitor, and assess progress with metrics such as carbon intensity and power usage effectiveness; and to train and educate employees on sustainable IT practices.

InfoQ interviewed Zorina Alliata and Hara Gavriliadi about sustainable AI.

InfoQ: How much energy and water does artificial intelligence consume?

Zorina Alliata: AI training and inference call for significant computational capability with high energy consumption. A 2019 study, for instance, calculated that training a single AI model can emit as much carbon as five cars over their lifetimes. In another study, the International Energy Agency estimated that AI training consumed as much energy as a small country.

The development of the OPT-175B model resulted in an estimated 75 tCO2e, which doubled to 150 tCO2e when including baselines and downtime.

According to the study “Carbon Emissions and Large Neural Network Training”, GPT-3 used an estimated 1,287 MWh of energy in training, emitting 552 tCO2e. This is equivalent to the electricity consumed by 121 U.S. households in an entire year.
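As a quick, purely illustrative arithmetic check, those two figures are consistent with the commonly cited average US household consumption of roughly 10,600 kWh per year:

```python
# Sanity check on the figures above: 1,287 MWh divided across 121 US
# households gives the implied annual consumption per household.
gpt3_training_mwh = 1287
households = 121
per_household_kwh = gpt3_training_mwh * 1000 / households
print(round(per_household_kwh))  # → 10636, i.e. ~10,600 kWh per year
```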

Hara Gavriliadi: Data centers also require a lot of water for cooling systems. According to the 2023 Amazon sustainability report, AWS data centers use 0.18 liters of water per kilowatt-hour. In a blog post about its commitment to climate-conscious data center cooling, Google said that 4.3 billion gallons of water were used worldwide in its data centers for 2021. This number reflects their whole activities, not only artificial intelligence, but also provides an idea of the magnitude of the issue.

InfoQ: What’s your advice to companies that want to work toward sustainable IT?

Gavriliadi: Companies that want to work toward sustainable IT should first align their IT purchasing with their sustainability goals. They should then focus on measuring, predicting, and reducing carbon emissions associated with their IT infrastructure and cloud workloads. Companies should also implement environmental best practices for cloud computing.

At AWS, using the “Sustainability Pillar” of the Well-Architected Framework can help guide these efforts. Companies can also benefit from using digital tools and data analytics to analyze and optimize their energy consumption.

About the Author



Presentation: Beyond Durability: Database Resilience and Entropy Reduction with Write-Ahead Logging at Netflix

MMS Founder
MMS Prudhviraj Karumanchi Vidhya Arvind

Article originally posted on InfoQ. Visit InfoQ

Transcript

Karumanchi: We are going to be talking about write-ahead log, which is a system that we have built over the past several months to enhance durability of our existing databases. I’m Prudhvi. I’m an engineering lead on the caching infrastructure at Netflix. I’ve been with Netflix for about six years. I’m joined by Vidhya, who is an engineering lead on the key-value abstractions team. We both have a common interest in solving really challenging problems, so we got together and we came up with this.

The Day We Got Lucky

How many of you have dealt with data corruption in your production environments? How many of you have dealt with data loss incidents, or thought you had a data loss? It makes me happy to see that you’re also running into some of these issues. What you see here is a developer who has just logged into their laptop, and a pager goes off at 9 a.m. The page says, customer has reported data corruption.

Immediately the engineer starts an incident channel, starts a war room, and a bunch of engineers get together to understand what is happening. We found that one application team had issued an ALTER TABLE command to add a new column to an existing table in a database, and that ended up corrupting some of the rows in their database. The result was data corruption. We got a handle on the situation. We all have backups; it’s not a big deal. We can restore from a backup, reconstruct the database, and life can go on. Is it really that simple? What happens to your reads that are happening during this time? Are you going to still be serving corrupted data? Even if you decide to do a data restore, what happens to the mutations that happened between the time you took the snapshot and the time you restored? We got lucky because we had caches for this particular use case, which had an extended TTL of a few hours.

The caching team was engaged to extend the TTLs while the engineers on the database side were still trying to reset the database. We also got lucky that the application team was doing dual writes to Kafka, so they had the full state of all the mutations landing on this table. We were able to go back in time, replay the data, except for those offending ALTER TABLE commands, and we were back in business.
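The replay described here can be sketched roughly as follows; the event format and the apply() hook are illustrative, not Netflix's actual tooling.

```python
# Illustrative sketch of a selective replay: walk the recorded mutation
# stream in order and re-apply everything except the offending DDL.
OFFENDING_PREFIXES = ("ALTER TABLE",)

def replay(events, apply):
    """Re-apply recorded mutations, skipping known-bad statements."""
    skipped = 0
    for event in events:
        statement = event["statement"].strip()
        if statement.upper().startswith(OFFENDING_PREFIXES):
            skipped += 1
            continue
        apply(event)
    return skipped

# Example with an in-memory "database".
applied = []
events = [
    {"statement": "INSERT INTO plays VALUES (1)"},
    {"statement": "ALTER TABLE plays ADD COLUMN bad int"},
    {"statement": "INSERT INTO plays VALUES (2)"},
]
print(replay(events, applied.append))  # → 1 (one ALTER TABLE skipped)
print(len(applied))                    # → 2
```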

That was a lucky save. This led to a retrospective for the entire team to see how we can handle these issues better if they come up in the future. We wanted to do a reality check. What if we were not really lucky on that particular day? What if this app team did not have caches with a larger TTL? What if they did not have Kafka? We are pretty sure that we would have had a data loss. How can we guarantee protection not just for this app, but for all the critical applications that we have at Netflix? There could be a lot of unknown failure modes that can come up. How can we prepare ourselves for those? I’m pretty sure we won’t be as lucky in the next incident as we were in this one.

Scale Amplifies Every Challenge

At Netflix scale, it basically amplifies every challenge. To give you a high-level perspective, this is a diagram, on the far left, what you’re seeing is our edge or cloud gateway. All the middle circles that you see are a bunch of middle-tier services.

On the far right, what you see is either our caches or databases, basically our persistence layers. There is a humongous amount of interaction happening. Even if a single failure happens in the persistence layer, it could have an outsized impact on the rest of the system. We want to minimize failures in the persistence domain so that the middle-tier services don’t feel the pain. All the data reliability challenges that we have faced at Netflix have either caused a production incident or cost a ton of engineering time and money to get out of. We have also had issues where the same data was getting persisted to two different databases, like Cassandra and Elasticsearch, and we had to come up with a system to ensure the integrity of the data across those two systems. We ended up building bespoke solutions for some of these problems.

Outline

I want to get into the agenda. We’ll be talking about the Netflix architecture at a high level. Then we’ll be talking at length about the data reliability challenges that we have. We will be presenting the WAL (Write-Ahead Log) architecture and how it solves those data reliability challenges. We’ll leave you with some of the failures that the WAL system itself could run into.

Netflix Architecture – 10,000 Foot View

On the left-hand side, what you’re seeing is a bunch of clients, your typical mobile devices, your browsers, your television sets, which would be issuing requests. We are deployed completely in AWS, at least on the streaming side. There is the OpenConnect component, which is our CDN appliance layer. This talk is not going to cover anything to do with CDNs; it is all within the purview of AWS, or the control plane side of Netflix. When we receive a request, we have Zuul, which is our internal cloud gateway, responsible for routing requests to different API layers. Once Zuul routes a request to a particular API, that API takes the request on its own journey.

The most important one here is the playback, which is when you start clicking that play button once you have chosen the title. It will hit a lot of middle-tier services, and eventually all the middle-tier services would hit the stateful layer, which is having a bunch of caches and databases. These are some of the technologies that we have that power the stateful layer, Cassandra, EVCache, with Data Gateway being the abstraction sitting on top of these. This is by no means an exhaustive list of our stateful layer, but just a sample set that we could fit in in the slide. Let’s take a closer look on the two components that are interacting with the stateful layer. We have the middle-tier service, which is either talking to the databases directly or they talk via the Data Gateway.

Netflix has made a significant effort over the past few years to migrate most of the database workloads to go via Data Gateway, which is our key-value abstraction. We did that deliberately because most of the time application developers don’t know how to use the databases correctly. We wanted to take on that problem and solve it behind Data Gateway, and just expose simple key-value semantics to the application. Life is good; there’s nothing more to say here.

Data Reliability Challenges

Vidhya: Data reliability challenges are hard, especially at Netflix scale. We wanted to walk you through some of those challenges, then walk you through the WAL architecture, and then come back to these challenges and see how we solve them. The first challenge we want to talk about is data loss. When you write anything to a database and the database is 100% available, this is a great thing to have. We don’t have any challenges. The data streams through to the database. Life is happy. What the user sees is that data gets durably written to the database, and when you read the data, it is readable and visible to the client. Life is great. But what about when the database is unavailable, either partially or fully?

For example, an operator goes and does a TRUNCATE TABLE; like an rm -rf, a truncate is an easy operation to do on a database. When that happens and you fully lose the data, then no amount of retries is going to help with the situation. You really have a data loss. The same applies to what Prudhvi talked about earlier with the ALTER TABLE statement. When the ALTER TABLE statement corrupted the database, some of the writes that were still coming in were further corrupting the data. What we are really looking for in that case is how to root cause it.

If we root cause it, how do we stop further corruption from happening? If we can stop the corruption, we can get into a better state sooner. If we can’t even identify the problem, that’s a bigger problem to fix. That’s the data corruption problem. Data entropy is also a common problem. It’s not enough to just do point queries. If you only have a primary store that supports primary-ID or partition queries, that’s not enough in all cases. Sometimes you want to add an indexer, like Elasticsearch, and do secondary index queries. When that happens, you want to write to multiple databases and read from one or the other.

When secondary is down, the data is durable because you wrote to primary. It’s not really visible because some of the queries don’t work. Any amount of retries might not get you into a synchronized state because we lost some of the data being synced to the secondary. We can deal with that problem by doing some asynchronous repair. What really you’re doing is copying the data from primary and synchronizing it to the secondary. That’s an easy fix. Visibility to the customer is unclear. Here, they’re sitting and wondering, what really is happening? Where is my data? When will I get the data in sync to the secondary? The other problem you have is when the primary itself is down, it is very close to the data loss problem we talked about earlier. We can’t sync from secondary back to the primary here because we don’t know if secondary is really in a good state for us to sync the data.

The next problem that I want to talk to you about is the multi-partition problem. This is very similar to the entropy problem, except it happens in one database instead of two. It’s one database because you take a mutation, but that mutation is a batch which mutates two different IDs in the system. Those two different IDs can go to two different nodes; one mutation can go through, whereas the other mutation, which has to happen on another node, does not really happen. The database sits there retrying that mutation. All this while the customer is wondering what is happening, where is my data. It got an acknowledgment that the write is durable, it wrote to the commit logs, but beyond that the customer has no visibility. You have to read logs or connect with the operator to find out more. Data replication in our systems is also something that we want to deal with along with the data reliability problems.

Some of the systems that we support, like EVCache which uses Memcached, and RocksDB which we use internally, do not support replication by default. When that happens, we need a systematic approach for replication. Think about where we have in-region replication as well as cross-region replication. Netflix often does region failovers, and when region failovers happen, we want the cache to be up-to-date and warm so that queries can be served through the cache. EVCache does cross-region replication.

Some of the challenges we face are: how do we avoid slowing down the customer? How does the traffic that is coming in not affect the throughput of the consumer itself? Those are some of the data reliability challenges we want to talk about. There is much more we could cover; these are the main ones. Taking these challenges, what are their effects? Accidental data loss caused a production incident. System entropy cost some teams time and money; now you have to sit and write automation to sync from primary to secondary. Multi-ID partition mutations really call data integrity into question. The customer is asking you, where is my data? When will it get synced to the secondary? How do I deal with timeouts? Data replication needs a systematic solution.

WAL Architecture

With those challenges, how do we really solve this problem at scale? Especially at Netflix scale, these challenges are amplified. Write-ahead log is one of the approaches that we took. What does the write-ahead log give us? It ensures that whatever data we modify is traceable, verifiable, restorable, and durable. Those are the challenges we are trying to solve. Now we want to talk about the internals of how we built the write-ahead log.

Karumanchi: Is everyone excited to solve these challenges? Just a quick recap: we have the client applications interacting with databases and caches. I just extended that notion: the client app could be interacting with a queue, or with another upstream application. We took some inspiration from David Wheeler, who said that all problems in computer science can be solved by adding another level of indirection. We just inserted a write-ahead log in between and said, we can solve this. Let’s zoom in a bit on the write-ahead log. We have what we call the message processor, which receives all the incoming messages coming from the client application. You have the message consumer, which processes all the messages. We also maintain a durable queue. We also see the control plane; we’ll talk more about the control plane and how it fits into the write-ahead log in later slides.

The control plane, in the simplest terms, is the configuration which the write-ahead log uses to take on a particular persona. We’ll be talking about the different personas that the write-ahead log itself can take. The request contains something like a namespace, a payload, and a few other fields; we’ll get into the API itself in later slides. The most important thing in this slide is the namespace, which says, playback. The message processor asks the control plane, give me the configuration that is relevant to this namespace called playback. The message processor then knows which queues or other components it needs to work with, and adds the message to the queue. The message consumer consumes from the queue and sends the message to the destination.

This is the same slide that we saw before, but we also wanted to make sure that we maintain separation of concerns, with the processor and consumer deployed on two independent machines, and the queues themselves could be either Kafka or SQS, just to put a name to the queue. Some of you here might be wondering: this pretty much looks like Kafka. I have a producer. I have a consumer. What is the big deal? Why do you need to build a system like a write-ahead log? What is the value add? If I refresh your memory on some of the problems that Vidhya alluded to earlier: this architecture might solve some of them, but not the multi-partition mutations or the system entropy problem that Vidhya discussed, where a single mutation from the client’s application could land in different partitions on the database itself, or where a single mutation could land on two distinct databases. Those problems will not be solved by this, and we’ll see why.

You have the client application, and imagine we put a name to the individual mutations. Imagine you have a transaction with multiple chunks that need to be committed, and then a final marker message, to indicate a typical two-phase commit. In the architecture that we just saw, because we are dealing with just a single queue, we start to see problems happening right from this layer. The chunks themselves can arrive out of order, and the consumer must be responsible for buffering all these chunks before it sends them to the database or whatever the final target could be. If the final marker message arrives before the other chunks, because you’re dealing with multiple transactions, you could be creating a head-of-line blocking situation.

Memory pressure is real, and we use the JVM, so we can easily see our nodes getting into out-of-memory situations very fast. What if the database itself is unavailable, or under stress for some amount of time? Even then, we need to do some retries. Within the architecture that we just saw, you can do it, but it’s very hard. Another thing that I wanted to call out: we are dealing with database mutations, and if you end up putting all the payloads in Kafka, or any queue that you end up picking, you’re adding significantly high data volume on the underlying Kafka clusters. The size of your Kafka clusters can grow pretty fast, you’re dealing with complex ordering logic, there is also the memory pressure that we just spoke about, and it is really hard to handle retries in this architecture.

Let’s refine it a little bit more. What we see here is two new components, a cache and a database. Why do we need a cache and a database? The thought process here was, especially for multi-partition mutations, that we want to just park the chunks that we saw earlier in the cache and the database, and only add the final marker message, which indicates that all the chunks belonging to that transaction have been committed, to the queue. The only piece of message that goes into the queue is the metadata indicating that all the chunks relevant to this particular marker have been committed. The job on the consumer side becomes extremely easy, because all the consumer needs to do at this point is get all the rows or chunks that are stored in the cache or the database, and then send them to the final destination, which is any of the components that we see.

The reason why we use a cache here is that all of this is immutable data, and it significantly boosts read performance. Let’s replay the same architecture that we saw. With the WAL processor, whenever we have these chunks coming in, they all get stored in the durable store first; only the marker message ends up going into the queue, so you’re not putting a ton of pressure on the queues themselves. The WAL consumer receives that final marker message, fetches all the chunks that belong to that transaction, and then sends them to the final destination. We clean up afterwards, because all of the data maintained within WAL is kept just for that duration of time.
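The chunk-and-marker flow described above can be sketched very roughly as follows; the in-memory store, queue, and function names are illustrative stand-ins, not Netflix's actual implementation.

```python
from collections import defaultdict

# Sketch: the processor parks each chunk of a multi-partition mutation
# in a durable store and enqueues only a small marker once every chunk
# has landed; the consumer reacts to the marker, fetches the chunks,
# applies them, and cleans up.
durable_store = defaultdict(dict)   # txn_id -> {chunk_id: payload}
queue = []                          # holds only marker messages
target_db = []                      # final destination

def process(txn_id, chunk_id, payload, total_chunks):
    """WAL processor: park the chunk; enqueue a marker when all arrived."""
    durable_store[txn_id][chunk_id] = payload
    if len(durable_store[txn_id]) == total_chunks:
        queue.append({"txn_id": txn_id})  # metadata only, no payloads

def consume():
    """WAL consumer: on a marker, fetch chunks, apply, then clean up."""
    while queue:
        txn_id = queue.pop(0)["txn_id"]
        chunks = durable_store.pop(txn_id)  # cleanup after fetch
        for chunk_id in sorted(chunks):
            target_db.append(chunks[chunk_id])

# Chunks may arrive out of order; the queue never sees their payloads.
process("t1", 2, "row-b", total_chunks=2)
process("t1", 1, "row-a", total_chunks=2)
consume()
print(target_db)  # → ['row-a', 'row-b']
```

Note how the queue only ever carries tiny markers, which is what keeps the pressure off the Kafka or SQS clusters in the real design.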

One of the things that we kept talking about is that we built this as an abstraction. All we do is expose one simple API, which masks all the implementation details of what is happening underneath the covers. It also allows us to scale these components independently, because we have split the processor, consumer, caches, and databases, and everything can scale up or down independently. We spoke about the control plane, which is responsible for the dynamic configuration and allows WAL to change its behavior.

We’ll also run through some examples of control plane configurations here. We have not built everything from scratch; the control plane and some of these abstraction notions are something that the teams have been working on over several years, making sure that you can come up with any new abstraction very fast, because we have something called the Data Gateway agent, or Data Gateway framework. There is a blog post if you want to go and read about how it is done. We are basically building on top of that. Finally, the target is where the payload that you have issued is supposed to eventually land. We also have backpressure signals baked into the message consumer side; in case any of these targets are not performing fast enough, we have a method to back off and retry.

Let’s look into the API itself. The API looks pretty simple. We have the namespace, which is again an index into the configuration itself. We have the lifecycle, which indicates the time at which the message was written, and if somebody wants delayed queue semantics, where I want to deliver this message after 5 seconds or 10 seconds, you can dictate that in the proto itself. The payload is opaque to WAL; we really don’t want to know what the payload is. Finally, we also have the target information, which indicates: what is my destination for this? Is it a Cassandra database, a Memcached cache, another Kafka queue, or another upstream application? This is what a typical control plane configuration looks like. Here we are looking at a namespace called pds, and it is backed by just a Kafka queue. That’s the cluster and topic where all the messages end up going.
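The request shape described here can be sketched as a plain dataclass; the real API is a protobuf, and any field names beyond those mentioned in the talk (namespace, lifecycle, payload, target) are assumptions for illustration.

```python
from dataclasses import dataclass

# Hedged sketch of the WAL write request: a namespace indexing into the
# control plane configuration, an opaque payload, free-form target info,
# and an optional delay for delayed-queue semantics.
@dataclass
class WriteAheadLogRequest:
    namespace: str          # index into the control plane configuration
    payload: bytes          # opaque to WAL; never inspected
    target: str             # free-form target info (hypothetical format)
    delay_seconds: int = 0  # delayed-queue semantics: deliver after N seconds

req = WriteAheadLogRequest(
    namespace="playback",
    payload=b"mutation-bytes",
    target="cassandra:plays",
    delay_seconds=5,
)
print(req.namespace)  # → playback
```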

Every WAL abstraction that we have is backed by a dead letter queue by default, because as Vidhya mentioned, there can be transient errors, but there can be hard errors as well. None of our application teams need to know anything about this. All they need to do is toggle a flag saying, I want to enable WAL for my application, and they get all of this functionality. On the left-hand side, what I’m showing is a WAL which is backed by an SQS queue. We use SQS to support delayed queue semantics. So far, we haven’t seen situations where SQS couldn’t perform for our delayed queue needs, but if the need ever comes, we would probably build something of our own.

On the right-hand side, what we see is what helps with the multi-partition mutations, where we need the notion of a queue and we also need a database and cache. There, what we put in is DGWKV, which stands for Data Gateway Key-Value, and it abstracts all the cache and database interactions for us. We talked about the target. If you look again at the proto definition, we did not leak the target details into the API itself. We kept it as a string value, so you can put in any arbitrary target information, but obviously you need to work with the WAL team to make sure the target-relevant code is written in the message consumer.
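In the spirit of the configurations just described, here is an illustrative sketch of what the control plane might hand back per namespace; the keys and values are hypothetical, not Netflix's actual schema.

```python
# Hypothetical control-plane configurations: a Kafka-only namespace, an
# SQS-backed one for delayed delivery, and one that also has a DGWKV
# store for parking multi-partition chunks. Every namespace gets a DLQ.
CONFIGS = {
    "pds": {
        "queue": {"kind": "kafka", "cluster": "wal-main", "topic": "pds-wal"},
        "dlq": {"kind": "kafka", "topic": "pds-wal-dlq"},
    },
    "delayed-jobs": {
        "queue": {"kind": "sqs", "name": "delayed-jobs"},
        "dlq": {"kind": "sqs", "name": "delayed-jobs-dlq"},
    },
    "multi-partition": {
        "queue": {"kind": "kafka", "cluster": "wal-main", "topic": "mp-wal"},
        "store": {"kind": "dgwkv"},   # cache + database for parked chunks
        "dlq": {"kind": "kafka", "topic": "mp-wal-dlq"},
    },
}

def lookup(namespace):
    """What the message processor asks the control plane for."""
    return CONFIGS[namespace]

print(lookup("pds")["queue"]["topic"])  # → pds-wal
```

The point of the indirection is that the processor and consumer stay generic; only the configuration decides which persona a given WAL takes on.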

How WAL Addresses Challenges

Vidhya: That’s a very good deep dive into our WAL architecture. The question is: with this, can we solve the challenges that we talked about earlier? Let’s look at data loss. When the database is down, all you need to do is enable writing to WAL. As soon as you start writing to WAL, and the database becomes available again, you can replay those mutations back to the database. This way, we are not losing any writes and we are not corrupting the database; we are in a very safe space. That does come with tradeoffs: you are now saying that data writes are eventually consistent, not immediately consistent. The database has to be available for you to be doing read-your-writes. There are also challenges around transient versus non-transient errors. Transient errors can be retried and the mutation can be applied later, but for non-transient errors, for example if you have a timestamp that is invalid for the system that you’re mutating, that’s a problem. You have to fix the timestamp before you mutate it again.

For those cases we have a DLQ; we write to the DLQ and apply those mutations after fixing or massaging the data. Corruption: we corrupted the data, now what do we do? We add WAL and start writing to it. That way we are protecting the writes that are coming in from some other system or application. We create a new database and restore it using point-in-time backups; that’s what we talked about earlier. We had backups. Now we point our microservice to the new database. That’s great, but we still lost some data, and that data is in WAL now. We replay those mutations. This way we replayed the mutations after fixing the system, and we did not lose the data.
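The recovery flow just described, restore a snapshot and then replay only the WAL entries written after it, can be sketched like this; the data structures are illustrative only.

```python
# Sketch of corruption recovery: rebuild a database as
# snapshot + WAL entries newer than the snapshot timestamp.
def recover(snapshot_rows, snapshot_ts, wal_entries):
    """Restore from the snapshot, then replay post-snapshot mutations."""
    db = list(snapshot_rows)
    for entry in sorted(wal_entries, key=lambda e: e["ts"]):
        if entry["ts"] > snapshot_ts:
            db.append(entry["row"])
    return db

snapshot = ["row-1", "row-2"]        # taken at ts=100
wal = [
    {"ts": 90,  "row": "row-0"},     # already in the snapshot; skip
    {"ts": 110, "row": "row-3"},
    {"ts": 120, "row": "row-4"},
]
print(recover(snapshot, 100, wal))   # → ['row-1', 'row-2', 'row-3', 'row-4']
```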

Again, there is some manual work that needs to be done, and there are tradeoffs. It's eventually consistent. We need point-in-time recovery, which takes time if you don't have automation. That all adds to the time you're parking the data in WAL, and to your eventual-consistency window. Sometimes you don't really require point-in-time recovery. An example would be TTL'd data: if the data has already TTL'd out, you don't need to recover it. You just point to a new database, or clean off the database, and restart replaying the data. The next two problems I talked about, system entropy and multi-partition mutations, I want to combine into one solution, because they are very similar. Either it's one database with two mutations in flight, or two databases; the problem looks the same.

You first write to WAL, no matter what, and then mutate into your databases using the WAL consumer. This way you don't need to do asynchronous repairs; WAL is prepared to do those repairs and retries for you. It will back off when it gets the signal that the database is not available. You need to provide those signals to WAL so that it can back off; that's something you might want to think about. There are tradeoffs here too. You now have an automated way of repairing the data: you don't have to manually intervene and do those repairs, the system itself takes care of that. It also helps in developing features like secondary indexes or multi-partition mutations. Eventual consistency is your consistency guarantee; that's the tradeoff. Data replication. We talked extensively about how some systems do not support data replication by default.
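The WAL-first pattern just described, one logical mutation fanned out to several targets by the consumer, with retry and backoff instead of asynchronous repair jobs, might look roughly like this. A minimal sketch under stated assumptions: the function name, the mutation shape, and `ConnectionError` as the backpressure signal are all hypothetical.

```python
import time


def apply_multi_partition(mutation, targets, max_attempts=5):
    """Sketch: a WAL consumer applies one logical mutation to several
    targets (e.g. a record plus its secondary index, or two
    partitions), retrying each with exponential backoff so the two
    never drift apart permanently."""
    for target in targets:
        for attempt in range(max_attempts):
            try:
                target.put(mutation["key"], mutation["value"])
                break  # this target succeeded, move to the next
            except ConnectionError:
                # Backpressure signal from the target: back off, retry.
                time.sleep(min(2 ** attempt * 0.01, 1.0))
        else:
            # Retries exhausted: leave the mutation parked in WAL
            # rather than dropping it.
            raise RuntimeError("target still failing; mutation stays in WAL")
```

Because every target eventually receives the same mutation from the same WAL entry, the "system entropy" case (two stores drifting apart) and the multi-partition case collapse into the same mechanism, as the talk notes.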

You can write to WAL and let WAL replicate the data to another application. We made WAL so pluggable that the WAL target can be another application, another WAL, or another queue. It is totally agnostic of what the target is; it's all in the configuration. We can manipulate the configuration of namespaces to write to different upstream services.

Here, I'm writing to an upstream app itself, which is mutating the database. You can also choose to write to another WAL, which will mutate the database. When a region failover happens, your data is already synced to the WAL or the database on the other side of the region, so you get uninterrupted service. It also helps with cache invalidations, in cases where the writes land in one region and your cache is stale in the other region. Cross-region replication is expensive, so you really want to think about whether you need it in each case. It's a pluggable architecture, as I told you. The target can be anything you choose: an HTTP call, a database, another WAL. That pluggability gives us more knobs to turn.
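That configuration-driven pluggability, the namespace config names a target, and the consumer resolves it to a handler, can be sketched as a tiny dispatcher. The target names (`"http"`, `"keyvalue"`, `"wal"`) and config shape here are illustrative assumptions, not the actual namespace schema.

```python
def make_dispatcher(handlers):
    """Sketch of WAL's pluggable targets: `handlers` maps a target
    name to a callable, and each namespace's configuration picks the
    target by name. Swapping targets is then purely a config change,
    with no code change in the producer."""
    def dispatch(namespace_config, mutation):
        target = namespace_config["target"]  # e.g. "http", "keyvalue", "wal"
        return handlers[target](namespace_config, mutation)
    return dispatch
```

A new target kind (say, forwarding to another WAL in a different region) is then one more entry in `handlers` plus a namespace config pointing at it.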

WAL is not cheap. It adds cost to your system, so you really have to think about whether WAL is necessary for your use case. If durability is very important, that cost is worth paying. If durability is not important for some non-critical use case and you can take some data loss, that's totally fine. At Netflix, we use this for very critical use cases where data loss is not acceptable. It adds latency, and consistency is loosely guaranteed: if you want strict consistency, WAL is not something you want as your primary mechanism. WAL does not help with reads; that's very obvious. It only helps when you have data loss during writes.

The other part of the problem we saw earlier, when Prudhvi talked about data loss, is that we had caches, and we had to extend the TTL on those caches so they could hold the data through the outage. Think about whether you really need it, and if you do, add it. Adding an extra layer of indirection is great, but it's not needed in every case, and too many levels of indirection is also not good.

Failure Domains

WAL also has failure modes, and we want to talk about that. WAL is not immune to failures; it's not fail-safe. Netflix uses abstractions to deal with these problems, and I'm going to talk about how. The failure scenarios I want to cover are traffic surges, slow consumers, and non-transient errors. When you provision a cluster for 1000 RPS, you only expect 1000 RPS. You don't want to over-provision and cost your org more money, so you provision for the requests you've been asked for. But it does happen that sometimes we get a surge of traffic, and when we do, we want an easily operable datastore. You don't want to scramble in that moment working out how to fix the problem and expand the cluster. Expanding the cluster is one way of mitigating the issue.

For example, if you have a queue, you want to increase the resources in the queue. That takes time: you might have to add more brokers, which might take 5 or 10 minutes. You might also have to move some of the data around so that it is spread equally. All of that costs time.

When that happens, the easiest thing we can do is add a separate processor and queue and split the data 50-50. Once the data is split, each half is easy to expand. That's one of the mitigation strategies we employ for traffic surges. When a traffic surge happens, we can also end up with a slow consumer: one consumer consuming both queues and dealing with a slow database, or something like that. You've parked the data; now you have a slow consumer, how do we deal with that? Either you can add a consumer layer that consumes from both queues, or, if you only have a single layer of queue and proxy, you can add more nodes to it and deal with the problem by consuming more.
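The 50-50 split described above needs a stable rule for routing each mutation to one of the queue/processor pairs. A common way to do that, sketched here as an assumption rather than Netflix's actual routing, is to hash the key and take it modulo the number of queues.

```python
import hashlib


def pick_queue(key, queues):
    """Sketch of the surge mitigation above: spread incoming
    mutations across N queue/processor pairs by hashing the key,
    so each pair can be scaled (or drained) independently. The
    hash keeps routing deterministic: the same key always lands
    on the same queue, preserving per-key ordering."""
    digest = hashlib.sha256(key.encode()).digest()
    return queues[digest[0] % len(queues)]
```

With two queues this gives roughly the 50-50 split mentioned in the talk; adding a third queue under further load only changes `len(queues)`.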

The caveat here is that your database, or whatever the target is, has to be well-provisioned enough to deal with that surge of messages coming in. That's why we watch backpressure signals and carefully tune our knobs; the system automatically tunes to the CPU and disk that are in place. Non-transient errors: we talked a little bit about them before, but this time we want to talk about how we really deal with them. They can cause head-of-line blocking when you retry the data multiple times and sit there waiting for it to get better. One way is, when the database is down, to park the data and not pause consumption.

The second way is to add a DLQ: when a non-transient error happens, the DLQ takes care of the head-of-line blocking for you. It sits there retrying, or massaging the data before retrying. All of that costs time, and handling those mutations in a separate system adds latency compared to mutations that can be applied right away. It also sometimes requires manual intervention: when mutations with non-transient errors cannot be applied automatically, a user can look at them and deal with the problem.

Key Takeaways

With all of these problems, we have shown how WAL helps us, but we also wanted to leave you with the key things we considered while developing the system. We talked about pluggable targets. We have namespaces as well, which help with configuration, and we can expand the scope of a namespace to multiple queues. We use abstractions to deal with a lot of the failure modes; that pluggable architecture was core to our design. We already had a lot of the building blocks in place, like the control plane and the key-value abstraction, and that helped us build this pluggable architecture. Separation of concerns: the incoming throughput should not affect the throughput of the consumer. From the beginning, we thought about how to grow this complex system component by component, without any one component blocking the others. Systems fail; please consider your tradeoffs carefully.

Questions and Answers

Participant 1: When you commit a write to the log, there must be a delay before it finally gets to your database or cache or wherever it's going. How are you handling that? Is your application ok with that kind of delay? How do you make sure reads are not dirty when they go to the primary before your message consumer has actually pushed that data to the database?

Karumanchi: Obviously, this is an asynchronous system, how do you manage delays?

I think, as one of Vidhya's slides showed, this is definitely not going to give applications read-your-writes consistency guarantees. What we are promising is that, with this, your data will eventually be there. The key-value abstraction that we built has essentially two flags that indicate whether the data was durable and visible. If the response says the data is visible, that means it is visible in the read path. If the user gets an explicit response from the write API saying durable is true but visible is false, that data may or may not be visible yet.
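A client's view of those two flags might look like the following. This is a sketch only; the field names and return strings are assumptions, not the actual key-value abstraction's API.

```python
def interpret_write_response(resp):
    """Sketch of how a client might interpret the durable/visible
    flags described above. `resp` is assumed to be a dict with
    boolean "durable" and "visible" fields."""
    if resp["durable"] and resp["visible"]:
        # The write reached the read path: read-your-writes holds.
        return "visible"
    if resp["durable"]:
        # Parked in WAL: the write will not be lost, but a read
        # right now may not see it yet.
        return "durable, eventually visible"
    # Neither flag set: the write is not safe; the client should retry.
    return "not durable"
```

The point of surfacing both flags, as the answer notes, is that the client learns explicitly when it has durability without visibility, rather than assuming read-your-writes.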

Vidhya: Eventual consistency is what we are talking about.

Karumanchi: The write API explicitly states that.

Participant 2: If the client application can be a consumer of the queue and query the metadata itself, would it not solve the problem of the tradeoff of eventual consistency where you’re just reading the metadata, you’re not even going to the cache or the DB within the WAL, just to query the metadata. If it exists, you can pull it. If not, you can go back to the DB.

Karumanchi: I think it's basically a decision between whether we want to abstract the details of the cache and database away from the application, or make it a leaky abstraction. That was the choice. The approach you mention would definitely help with performance, no question about that. The downside is that you'd be leaking the details of which underlying caches and databases the WAL abstraction is supporting. Those are the tradeoffs we had to make. Maybe we would lean toward the approach you mention for some use cases down the line. For now, we are pretty firm that the application should not know anything about what is happening under the hood of WAL.

Vidhya: That is a great idea. The systems we're dealing with are mostly in a failed state; we're adding an extra layer of durability to keep critical applications running. Read-your-writes is important. What you are mentioning is an option: you can wait for all the data to be mutated before reading it. That might add latency, and if that latency is ok, then you can do it. One of our systems, Hollow, does that right now.

Participant 3: From the architecture, it looks like you, or any consumer, only enable WAL when there is a problem. Then, when the problem is resolved, the application keeps updating the database as-is. During the downtime, you have certain transactions pending in WAL to be applied to the database. When the problem is resolved, there could be competing transactions on the same row, for example. Does this architecture just solve for eventual consistency? Or, if you have incremental updates and you want to apply each update to the database, do you care about sequencing or ordering?

Vidhya: I don't want to use the word transaction there. Transaction is not the word; we don't want to support transactions, that's a big word. Sequencing is great. We wanted to use the idempotency token that the client provides as the mechanism for deduplicating the data. If you're looking at, if X is A, then make X into Y, that kind of system cannot be supported without the sequencing you're mentioning and all the bells and whistles that a transactional system supports. WAL is not for that. WAL is for data reliability at scale, with eventual consistency.

Participant 4: Have you just moved the failure domain from your persistent DB to WAL DB? What happens if WAL DB fails?

Karumanchi: Those are some of the failure domains that Vidhya alluded to. Definitely, yes, it can fail, especially if the database or caching layer inside WAL runs into the same issue we started with. The hope, or the idea, is that the probability of both systems failing concurrently is extremely low. We are basically hedging on that.

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.


2,082 Shares in MongoDB, Inc. (NASDAQ:MDB) Bought by Kentucky Retirement Systems …

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Kentucky Retirement Systems Insurance Trust Fund bought a new stake in shares of MongoDB, Inc. (NASDAQ:MDB) during the 1st quarter, according to its most recent 13F filing with the Securities and Exchange Commission. The institutional investor bought 2,082 shares of the company’s stock, valued at approximately $365,000.

A number of other institutional investors and hedge funds have also recently modified their holdings of MDB. Strategic Investment Solutions Inc. IL purchased a new position in shares of MongoDB in the 4th quarter worth about $29,000. NCP Inc. purchased a new stake in shares of MongoDB during the fourth quarter valued at approximately $35,000. Coppell Advisory Solutions LLC grew its holdings in MongoDB by 364.0% in the 4th quarter. Coppell Advisory Solutions LLC now owns 232 shares of the company’s stock worth $54,000 after acquiring an additional 182 shares during the last quarter. Smartleaf Asset Management LLC lifted its holdings in MongoDB by 56.8% in the 4th quarter. Smartleaf Asset Management LLC now owns 370 shares of the company’s stock worth $87,000 after buying an additional 134 shares during the period. Finally, J.Safra Asset Management Corp lifted its stake in MongoDB by 72.0% in the fourth quarter. J.Safra Asset Management Corp now owns 387 shares of the company’s stock worth $91,000 after acquiring an additional 162 shares during the period. 89.29% of the stock is currently owned by institutional investors.

MongoDB Price Performance

Shares of MDB opened at $206.76 on Thursday. The firm has a market capitalization of $16.89 billion, a P/E ratio of -181.37 and a beta of 1.39. MongoDB, Inc. has a one year low of $140.78 and a one year high of $370.00. The business has a fifty day moving average of $188.81 and a 200 day moving average of $219.23.

MongoDB (NASDAQ:MDB) last released its quarterly earnings results on Wednesday, June 4th. The company reported $1.00 earnings per share (EPS) for the quarter, beating analysts’ consensus estimates of $0.65 by $0.35. MongoDB had a negative return on equity of 3.16% and a negative net margin of 4.09%. The business had revenue of $549.01 million during the quarter, compared to the consensus estimate of $527.49 million. During the same period in the previous year, the company posted $0.51 EPS. The business’s quarterly revenue was up 21.8% compared to the same quarter last year. Analysts expect that MongoDB, Inc. will post -1.78 EPS for the current fiscal year.

Insider Activity

In other news, CFO Srdjan Tanjga sold 525 shares of the firm’s stock in a transaction on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total transaction of $90,961.50. Following the sale, the chief financial officer now directly owns 6,406 shares in the company, valued at $1,109,903.56. This trade represents a 7.57% decrease in their position. The sale was disclosed in a document filed with the Securities & Exchange Commission, which is available at this hyperlink. Also, CAO Thomas Bull sold 301 shares of MongoDB stock in a transaction on Wednesday, April 2nd. The shares were sold at an average price of $173.25, for a total transaction of $52,148.25. Following the transaction, the chief accounting officer now owns 14,598 shares in the company, valued at approximately $2,529,103.50. This trade represents a 2.02% decrease in their position. The disclosure for this sale can be found here. Insiders sold a total of 50,382 shares of company stock valued at $10,403,807 in the last 90 days. Company insiders own 3.10% of the company’s stock.

Analyst Ratings Changes

A number of equities analysts recently issued reports on the stock. Rosenblatt Securities decreased their price target on shares of MongoDB from $305.00 to $290.00 and set a “buy” rating for the company in a research note on Thursday, June 5th. Guggenheim upped their target price on MongoDB from $235.00 to $260.00 and gave the stock a “buy” rating in a report on Thursday, June 5th. Barclays raised their target price on MongoDB from $252.00 to $270.00 and gave the company an “overweight” rating in a research note on Thursday, June 5th. Bank of America upped their price target on MongoDB from $215.00 to $275.00 and gave the stock a “buy” rating in a research note on Thursday, June 5th. Finally, Mizuho decreased their price objective on MongoDB from $250.00 to $190.00 and set a “neutral” rating for the company in a report on Tuesday, April 15th. Eight equities research analysts have rated the stock with a hold rating, twenty-four have issued a buy rating and one has issued a strong buy rating to the company. According to data from MarketBeat, the company currently has an average rating of “Moderate Buy” and a consensus price target of $282.47.

Read Our Latest Analysis on MDB

About MongoDB

MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.

Article originally posted on mongodb google news. Visit mongodb google news
