India’s 4 million developers might find their data platform in MongoDB – DQIndia


In an insightful conversation, Sachin Chawla, Vice President for India and ASEAN at MongoDB, offered a comprehensive look into the burgeoning data platform landscape across India and Asia. He detailed the key trends driving adoption, MongoDB’s strategic role in supporting businesses from agile startups to sprawling enterprises, and the company’s unwavering commitment to nurturing developer talent in the region.

Our discussion delved into the evolving demands of modern applications, the rapid surge in AI adoption, and the critical factors of scalability, resilience, and security that define contemporary data platforms.

The Evolving Landscape: Apps, AI, and Developer Focus

The conversation kicked off with an overview of the current data platform scene in India and Asia.

“India, as you can see, is home to more than 4 million developers, and we see both India and Asia as regions for high growth,” Sachin began. He highlighted several driving forces. “Firstly, customers are building a lot of modern applications and simultaneously modernising legacy applications. Secondly, there’s a significant impetus on AI. More and more customers are building AI-infused applications and experimenting with different AI approaches. While it is the start of this revolution, most customers are currently doing basic AI applications, but we also have those building more sophisticated applications for things like customer recommendations.”

He also emphasised the growing pressure on software development. “Thirdly, there’s a strong focus on developer productivity. The time given to developers to develop and release applications is shrinking, so they have to do more with less. These are common trends we observe across both markets.”

MongoDB’s customer base in the region reflects this diversity. “Our customer base is broad,” Sachin explained. “In India, it ranges from small, single developers to early-stage startups like Ubuys and RFPIOs, to large-scale startups like Zepto, which is one of our largest clients and a public case study. Zepto’s order tracking, for instance, is entirely built on MongoDB. We also work with ISVs (Independent Software Vendors) like Intellect AI, a large FSI ISV that has built its multi-agent AI platform, Purple Fabric, on MongoDB to automate and augment operations, risks, and compliance. Then there are large enterprises like banks and Tata Neu, whose application is also built on MongoDB.”

Nurturing the Startup Ecosystem: Beyond Technology

India’s vibrant startup scene is a key focus for MongoDB, which offers more than just its core technology.

“We do a lot to help startups and their ecosystem,” Sachin affirmed. “Beyond the technology itself, we focus on how they can adopt and grow with it. We have consultants and advisors who work day in, day out with them on architecture design. We also drive a lot of ‘developer groundswell.’ For example, we organise developer days across different cities, offering hands-on labs and design reviews.”

MongoDB is also heavily invested in skill development. “We have set a goal to upskill half a million students in universities on MongoDB to build a strong developer base, and currently, over 200,000 students have already taken MongoDB courses,” he shared. “We also host weekly training sessions, two to three-day programs where we invite customers and their developers to our offices for training and certification. Additionally, we have a MongoDB community with champions from various organisations. There’s a significant effort to develop and upgrade skills.”

Common Patterns and the Power of the MongoDB Platform

When asked about the types of applications emerging from Indian startups and which MongoDB platforms resonate most, Sachin highlighted the database’s versatility.

“We are a general-purpose database, so you can use us for various use cases,” he explained. “You can put any kind of data on us – structured and unstructured data, including video, attachments, and audio. Being general-purpose means we can handle any workload type: transactional data, geospatial data (like tracking a food delivery rider), or time-series data for IoT devices.”
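To make the "any workload" point concrete, here is a minimal sketch of the kind of geospatial and time-series operations described above, written with the PyMongo driver; the connection string, collections, and field names are hypothetical.

from datetime import datetime, timezone
from pymongo import MongoClient, GEOSPHERE

client = MongoClient("mongodb://localhost:27017")  # placeholder deployment
db = client["delivery"]

# Geospatial: index rider locations and find riders near a customer.
db.riders.create_index([("location", GEOSPHERE)])
db.riders.insert_one({
    "rider_id": "r42",
    "location": {"type": "Point", "coordinates": [72.8777, 19.0760]},  # [lng, lat]
})
nearby = db.riders.find({
    "location": {
        "$near": {
            "$geometry": {"type": "Point", "coordinates": [72.88, 19.07]},
            "$maxDistance": 2000,  # metres
        }
    }
})

# Time series: a collection optimised for IoT sensor readings (MongoDB 5.0+).
db.create_collection(
    "sensor_readings",
    timeseries={"timeField": "ts", "metaField": "device", "granularity": "seconds"},
)
db.sensor_readings.insert_one(
    {"ts": datetime.now(timezone.utc), "device": "thermostat-7", "temp_c": 24.3}
)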

He gave specific examples: “Customers use us for diverse applications, from IoT to AI use cases. For example, RFPIO uses our vector search and AI capabilities to help customers respond to RFPs more efficiently by automatically categorizing questions related to security or performance. So, our applications range from payments, order management, and e-commerce order tracking to AI solutions and recommendation engines, and even building operational data layers. It’s very broad.”

The evolution of the MongoDB platform itself is a key differentiator. “On the platform side, we have evolved significantly,” Sachin noted. “The database is the core, where all data is stored. For modern applications, you need Google-like search. Instead of plumbing another search engine like Elasticsearch, we offer Atlas Search, which is native to the platform. For AI, you need vector search to vectorize data… With us, Vector Search is embedded natively.”
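As a hedged illustration of how the natively embedded vector search is typically queried, the sketch below runs an Atlas Vector Search aggregation with PyMongo. The index name, collection, and query vector are made up, and it assumes a vector search index has already been defined in Atlas.

from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")  # placeholder URI
products = client["shop"]["products"]

# In practice the query vector comes from an embedding model; a short dummy
# vector is used here purely for illustration.
query_vector = [0.12, -0.03, 0.45, 0.07]

pipeline = [
    {
        "$vectorSearch": {
            "index": "product_vector_index",  # assumed Atlas index name
            "path": "embedding",              # field holding stored vectors
            "queryVector": query_vector,
            "numCandidates": 100,
            "limit": 5,
        }
    },
    {"$project": {"name": 1, "score": {"$meta": "vectorSearchScore"}}},
]

for doc in products.aggregate(pipeline):
    print(doc["name"], doc["score"])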

He further detailed the integration benefits: “Thirdly, customers need embedding models to train their models with their own data. We acquired Voyage AI, which provides these embedding models. The platform has become very important because it integrates all these components natively, eliminating the need for customers to manage multiple disparate systems. Customers struggle with managing one system, let alone five! So, we see them starting with one database, then quickly adopting full-text search, vector search, and embedding models, as well as Atlas 3, which we launched last year.”

Voyage AI: A Strategic Acquisition for Trustworthy AI

The acquisition of Voyage AI has been pivotal in enhancing MongoDB’s offerings for AI-driven applications.

“When using Large Language Models (LLMs), they are probabilistic,” Sachin explained. “If a company, especially in healthcare or financial services, asks a query, they need a more accurate output. This requires training the LLM with their internal data, which is where an embedding model comes in. Voyage AI is that embedding model, now natively integrated into our platform. Voyage AI is arguably one of the best embedding models globally.”

He highlighted its crucial benefits: “It helps in two crucial ways: A) it assists in training the model, and B) it significantly reduces hallucinations, which are a common issue with AI models. It also helps with re-ranking results to ensure the most relevant output. All of this is now native within our platform. Customers tell us this truly solves the ‘plumbing’ problem and provides an embedding model that reduces hallucinations and facilitates re-ranking. Furthermore, Voyage AI comes with pre-packaged embedding models for specific industries like FSI or healthcare, which is another significant benefit.”

Enterprise Demands: Scalability, Resilience, and Security

Indian enterprises prioritize specific requirements when adopting a data platform.

“Of course, cost and scalability are crucial, but I’d add two more factors: resiliency and security,” Sachin asserted. “There’s a lot of discussion around providing a resilient architecture and ensuring security. Another key aspect is performance, specifically performance at scale. The scale we talk about in India is on a different level.”

He provided a compelling example: “Take Zepto as an example. They were previously on a monolithic architecture using SQL. When they moved their application from SQL to MongoDB, the latency dropped by 40%, and they could handle six times more traffic with that reduced latency. So, performance at scale is extremely important.”

Regarding resilience, Sachin elaborated, “We provide a three-node architecture (active-passive-passive). If the active node goes down, a passive node automatically takes over. You can deploy these across three different availability zones within AWS or other cloud providers. We offer the platform across all three major cloud providers (AWS, GCP, Microsoft Azure), and you can even have instances of the same application in two different clouds, providing immense resiliency and flexibility.”
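From the application side, this failover behaviour is mostly a matter of the connection string. The sketch below, with placeholder hosts and an assumed replica set name, lists one node per availability zone and enables retryable writes so the driver fails over to the newly elected primary automatically.

from pymongo import MongoClient

uri = (
    "mongodb://node-az1.example.com:27017,"
    "node-az2.example.com:27017,"
    "node-az3.example.com:27017/"
    "?replicaSet=rs0&retryWrites=true&w=majority"
)

client = MongoClient(uri)
# Writes go to the current primary; if it goes down, the driver retries
# against whichever node takes over.
client["orders"]["tracking"].insert_one({"order_id": "o-123", "status": "out_for_delivery"})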

Upskilling the Next Generation: Partnerships in Academia

MongoDB’s commitment to developer upskilling extends significantly into academia.

“There are two main pieces to this,” Sachin explained. “Firstly, we have MongoDB University, where anyone can go and take a course today. Secondly, we have partnered with AICTE (All India Council for Technical Education) and various universities across India. Our collaboration with universities involves not just training students but also a ‘train the trainer’ approach, where we train professors. We’ve also partnered with GeeksforGeeks. So, there are three or four initiatives working in tandem to reach and train these students.”


Podcast: Mandy Gu on Generative AI (GenAI) Implementation, User Profiles and Adoption of LLMs

Mandy Gu

Transcript

Introductions [00:27]

Srini Penchikala: My name is Srini Penchikala. I am the lead editor for AI, ML, and the data engineering community at the InfoQ website, and I’m also a podcast host. Thank you for tuning in today to this podcast. In today’s episode, I’ll be speaking with Mandy Gu, who is currently a senior software development manager at Wealthsimple. She leads the machine learning and data engineering initiatives at the organization.

We will discuss the topics of generative AI and the large language models, or LLMs, especially how these technologies are used in real-world projects. We’ll also talk about how different user profiles in the organizations influence the adoption of LLMs. Hi, Mandy. Thank you for joining me today. Can you introduce yourself and tell our listeners about your career and what areas you’ve been focusing on recently?

Mandy Gu: Thanks for the introduction. Excited to be here. So a little bit about myself, as Srini mentioned, I’m currently at Wealthsimple, where I’m supporting our data engineering and machine learning engineering teams. One of the things that we’ve been focused on for the past few months was building the infrastructure and internal tools to support gen AI adoption and usage throughout the company.

Implementation of AI Programs [01:36]

Srini Penchikala: Yes, thanks, Mandy. So let’s talk about that. Can you talk about your experience in establishing AI programs in organizations? What are the typical challenges: organizational, technical, and sometimes people-related, such as skill sets? Can you talk about what your experience has been and what you can share with our listeners?

Mandy Gu: Yes, so the past year and a half to two years has been this really interesting time ever since ChatGPT blew up and ever since we pushed the boundaries of accessibility and just democratizing AI access. Things have changed very rapidly in the industry, and our AI strategy has really been centered around three things.

So the first is building up a lot of the LLM foundations and platform to support future work. And then from there we find use cases within the company to apply this technology to boost employee productivity. And in doing so, we are also developing our confidence and guardrails and skill sets to bring this technology to our clients, where we can create a more delightful client experience by optimizing workflows using LLM. Most of my focus has been on the platform side and building up the reusable building blocks and then supporting the teams in accelerating AI adoption for employee productivity and also to find those use cases to optimize the client experience.

I think one fairly meta challenge in rolling out these AI programs just has been how quickly the industry has been moving. We have pretty much a new state-of-the-art foundational model every two weeks or so, and OpenAI and all of these other companies are constantly releasing new things. So it’s been a fairly intense process of just keeping up with these new changes in the industry and finding the best ways to apply it internally while dealing with a lot of different user profiles and toeing the right balance between moving quickly and ensuring that we are making the safe way the default.

Srini Penchikala: Can you talk more about those reusable building blocks? Are they common solutions that other teams across the organization can reuse or?

Mandy Gu: Yes, so on a high level we have two sets of the reusable building blocks: the first are the foundational models or access to the foundational models, and then the second is some of the components we use to facilitate other teams to build multi-stage retrieval systems. So for the foundational models, we’re hosting a mixture of both open-source models and some of the foundational models from fully managed services such as AWS Bedrock.

So the way that we kind of tie this together is we’ve adopted a lightweight serving framework, LightLLM, and we’ve used this as a proxy server in front of all of these models. And then on top of this, we’ve been able to provide programmatic access that’s integrated with the rest of our stack so that any developer or any of our other services can just seamlessly integrate and communicate with these models for whatever they need to do.

We’ve also built a custom front end to recreate the consumer-like ChatGPT and Claude desktop experience so that we’re able to ensure that our employees are using these technologies but in a safe and contained way. So that’s the first set of the reusable building blocks that we offer. And then the second is that we have a lot of the platform pieces for both facilitating orchestration to update and read from our vector database and integrating that with our various knowledge bases so that we can build internal RAG tools on top of that for employee productivity and, in some cases, this has also been leveraged for back-office workflows.
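As a rough sketch of what that programmatic access can look like (not Wealthsimple's actual implementation), the snippet below assumes the proxy server exposes an OpenAI-compatible endpoint at an internal URL, so application code only ever talks to the gateway rather than to the model vendors directly.

# Hypothetical internal gateway exposing an OpenAI-compatible API.
# The base URL, token, and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm-gateway.internal.example.com/v1",
    api_key="internal-service-token",  # issued by the gateway, not by OpenAI
)

response = client.chat.completions.create(
    model="hosted-llama-3-70b",  # whichever model the gateway routes to
    messages=[{"role": "user", "content": "Summarize this support ticket ..."}],
)
print(response.choices[0].message.content)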

Business and Technical Use Cases [05:05]

Srini Penchikala: Sounds good. Thank you, Mandy. So for these solutions, can you talk about what are some of the business and technical use cases where the team leveraged machine learning solutions?

Mandy Gu: Most of the use cases so far have been internal, and a lot of them have been used just by various people within the business for their day-to-day tasks. We actually have fairly impressive adoption statistics; close to two-thirds of the entire company use LLMs in their day-to-day, and most of their interaction is through our LLM gateway. So this is the front end that we maintain that talks to all of these foundational models and just lets our end users pick and choose which models they want to talk to and base their chat experience there. When we look deeper into what some of these use cases are, a lot of the times it is for content generation. So people would write something and then ask the LLM to either proofread it, augment it, or to continue or to generate content based on a prompt.

We also have a lot of use cases for information retrieval, just people asking questions, and we’ve been able to integrate a lot of these models with our internal knowledge bases, and that way people have been able to leverage it to ask questions about our company documentation and various of our internal knowledge bases as well. In terms of some of the workflows that we’ve been able to optimize, the most prominent one is for our client experience agents. So we’ve built a workflow where we’re taking the Whisper model to transcribe a lot of our client calls, and in doing so we’re able to take this transcription and augment a lot of our current machine learning use cases in this space to provide classifications that we then use to triage these client calls to the appropriate teams.
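A minimal sketch of that transcribe-then-classify workflow might look like the following. It uses the open-source whisper package; the audio file, classifier, and team labels are hypothetical stand-ins for the internal models mentioned above.

import whisper

# Load a small Whisper model and transcribe a recorded client call.
model = whisper.load_model("base")
result = model.transcribe("client_call_0142.wav")
transcript = result["text"]

def classify_team(text: str) -> str:
    """Placeholder for the internal classification model used for triage."""
    if "transfer" in text.lower():
        return "funding-team"
    if "tax" in text.lower():
        return "tax-team"
    return "general-support"

print("Route to:", classify_team(transcript))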

Generative AI Program and LLM Gateway [06:48]

Srini Penchikala: And also you mentioned about LLMs and the models. Can you talk about the Generative AI programs implemented internally to improve operational efficiency and streamline modeling tasks? Because gen AI brings more power to the business requirements, right?

Mandy Gu: Yes, I mean the biggest program that we’ve rolled out–this is something we’ve been working on for quite a long time now–is our own LLM gateway. So the motivation of having our own LLM gateway as opposed to leveraging one of the consumer products out there is so that we can make sure that all of this interaction with the LLMs stay within our own cloud tenant and that we’re leveraging the foundational models that’s hosted as a part of our VPC. So what this LLM gateway offers is, it’s an internal web URL that all of our employees can go onto, and we have been refining this front end to resemble as closely as possible the consumer products that OpenAI and Anthropic offers, and there’s a chat interface.

And then we also provide a prompt library and tools and ways of integrating, like, just getting knowledge from several of our integrations, including with our internal knowledge bases. And the way that we’ve positioned this program is we want this to be everyone’s first touchpoint with LLMs. We discourage the use of ChatGPT and these other products internally, or at least for work-related purposes, because of some of the security risks that come with it and the need for a lot of our employees to work with sensitive information. So that’s been one of the programs that we are working on.

Srini Penchikala: That’s good. So this LLM gateway looks like it provides an entry point to different teams who could be using different large language models and maybe different vector databases, right? So is that one of the responsibilities of this LLM gateway?

Mandy Gu: Yes, that’s basically it. The motivation is really getting as much exposure to these technologies as possible, but doing so with the right guardrails in place.

Typical Gen AI Application Development Lifecycle [08:44]

Srini Penchikala: Okay. And also, what are some best practices in terms of the life cycle of a typical gen AI application development? We don’t want teams to go off in their own direction and sometimes either duplicate the efforts or deviate from the standards. So do you have any recommendations on what are the best practices?

Mandy Gu: I think the investment in the platform and in these reusable building blocks will ensure that we can bake in the best practices and how we want different teams to work with these technologies as a default. So for us, internally, the way that we leverage our LLM gateway and the proxy server in front of our APIs, this is how we point people to interact with the models.

And this will avoid, for instance, somebody directly interacting with the OpenAI SDK and potentially exposing our sensitive information that way. We’ve also been able to configure and choose a lot of the optimal configurations and ways of interacting with these models as well as just prompt templates, and we offer them as configurations for the end users. Those are some of the ways that we’ve been able to ensure that we’re operating from a high standard when using these technologies.

Srini Penchikala: Does the gateway also help with any–what are they called? Hallucinations or the accuracy of the responses, or is it more of just a privacy and security type of checkpoint?

Mandy Gu: The main value proposition is more from a privacy standpoint, but we do offer some ways of dealing with the risk. We file these under specific models, but they’re actually just leveraging some of the models that we provide. But for all of the models integrated with our internal knowledge base, one thing that we’ll do is actually return where it gets the information from.

So this provides a way for the end user to kind of fact-check the answer and build confidence that way. And by offering the ability to just ground the answers against context that we’ve curated and verified internally, that does allow us to ensure that the LLM is at least reading this information from the right place. Outside of that, we’ve experimented and evaluated different prompt templates, and we’ve been able to find some effective ways of just instructing the LLM to return more reliable answers, although that’s not a foolproof measure.

Srini Penchikala: So do you run all the components of the AI solutions on-prem? Because this LLM gateway, would it also work against some API in the cloud or?

Mandy Gu: When we first started, everything was models that we served completely in-house. A few months ago, or actually closer to a year ago, we adopted AWS Bedrock. And since then we’ve actually been migrating more of our models from this internal self-hosted stack onto AWS Bedrock. Bedrock does give us the ability to serve these models within our VPC. So in that aspect it’s pretty much the same thing, but we have been migrating things to Bedrock just so that it’s easier to manage.

Srini Penchikala: You mentioned that one of the best practices is to invest in a platform, right? So you mentioned AWS Bedrock. Can you talk about any other specific technology or infrastructure changes that were put in place for this gen AI application stack? What are the technologies and tools that you used?

Mandy Gu: I mean, I’ll say of the foundational models, there was quite a bit that we had to stand up for our retrieval systems. So we adopted a vector database, and then we also built a lot of integrations between that and our orchestration platform so that we could just ensure that the indexes were being kept up to date, that all of our knowledge base was being indexed on a periodic basis.

So from an infrastructure perspective, those were the two main things that we focused on. And then outside of that, we’ve been building just a lot of integrations between the various knowledge bases and different systems that this may have to talk to. One new development that’s really taking shape this year is the MCP servers, having the ability to talk to an API or an SDK through like a language server. And one thing we’ve been working on over the past few months is building up some of the infrastructure to support that.
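To make the indexing piece a little more concrete, here is a hedged sketch of a periodic re-indexing job of the kind described above; the knowledge-base loader, embedding function, and vector-store client are all hypothetical placeholders rather than the actual stack.

from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Chunk:
    doc_id: str
    text: str

def load_knowledge_base() -> Iterable[Chunk]:
    """Placeholder: pull pages from internal wikis, docs, tickets, etc."""
    yield Chunk("runbook-12", "How to rotate the on-call pager ...")

def embed(texts: List[str]) -> List[List[float]]:
    """Placeholder for the embedding model behind the LLM platform."""
    return [[0.0] * 768 for _ in texts]

def upsert(doc_id: str, vector: List[float], text: str) -> None:
    """Placeholder for the vector database client; upsert keeps the index fresh."""
    ...

def reindex() -> None:
    chunks = list(load_knowledge_base())
    vectors = embed([c.text for c in chunks])
    for chunk, vector in zip(chunks, vectors):
        upsert(chunk.doc_id, vector, chunk.text)

# In practice this would be scheduled by the orchestration platform
# (for example, a daily job) so the index tracks the knowledge bases.
if __name__ == "__main__":
    reindex()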

AI Agents [12:41]

Srini Penchikala: Yes, MCP has been getting a lot of attention. Another trend that’s been getting a lot of attention is AI agents. Mandy, do you have any comments on AI agents? What should they be used for, and when should teams or developers not use them? It could be overkill or not the best solution.

Mandy Gu: That’s a good question. I think it’s very hard to say, and this is a space that’s also changing very rapidly. There’s definitely been certain spaces where AI agents have proven to be more effective than others. I think this is definitely an area that’s worth keeping an eye on because with a lot of the advancements in integrating reinforcement learning with the way that we interact with LLMs, we’ve seen some massive leaps in terms of ability and relevant generations.

So there’s been a lot of developments in this space. I think the best thing we could do is to just make sure that we’re evaluating and we have a well-defined success criterion in mind because regardless of the AI technology we’re working with, there’s always going to be use cases where it’s really good and use cases where it’s not so good. And that’s something that’s been quite in flux with gen AI.

User Profiles and Adoption of LLMs [13:48]

Srini Penchikala: Yes, so we have to wait and see how it goes. In terms of the adoption of LLMs and AI programs, different organizations will definitely experience different levels of maturity and commitment from employees and leadership. So can you talk about, I know you mentioned this earlier, the different user profiles and how different user types can influence the usage of the LLMs? And what insights can you share? Listeners who have similar situations in their own projects will be interested in how they need to manage users and user profiles to get the best out of LLM adoption.

Mandy Gu: So I think with all new technologies, there’s about three to four different user profiles. And certainly this has been true for LLMs and generative AI.

From my experience, LLMs have been a very divisive topic. There’s those who love it, there’s those who hate it, and there’s those who just absolutely cannot stand it, especially in a workplace setting. I mean, starting from one end of the spectrum, you have the advocates; you have the people who are really excited about these technologies.

They’re the ones who are proactively experimenting with the new foundational models, the new tools that come out. And they’ll be really proactive in just finding ways of applying these technologies, like, “Hey, maybe I can use this as a part of this work that my team does”. I mean, I think the risks of working with these people is that they may set unrealistic expectations about the benefits of LLMs or downplay the other risks, such as security or privacy, but these are the people who are going to be very engaged, who’s going to be very eager to adopt new tools and try new things.

And then on the other hand, there’s a lot of people who are detractors of generative AI. They’re very distrustful or critical of this technology, and they’re going to be very focused on the negative sides, whether it’s the ethical concerns, the security concerns, or the environmental concerns of training these models. And they’re not going to be as receptive to org-wide mandates to adopt LLMs.

I find with these people, giving them the space to address their anxieties, being very transparent about your expectations of them, I think that’s going to work well in getting them to a place where they’re still skeptical but not as much of a detractor. And I think for most of the organization, they’re really going to fall somewhere in the middle.

There’s going to be people who are very curious but may not be completely sold on this new technology. And then there’s going to be people who are more skeptical, and I think for this group, they’re going to need some faster feedback loops to see more value. And again, that transparency from leadership to address any AI anxieties.

Srini Penchikala: Definitely. Also, can you talk from your experience, any specific metrics or indicators to bring all these different users together? Like you said, some team members may want to see some metrics before they can fully embrace the AI program side, so do you have any recommendations on that?

Mandy Gu: Specifically for these users? I think it’s going to be fairly specific. If you’re a developer, the metrics that you care about are going to be very different than if you’re someone who works in operations. I think if they’re able to see the metrics that are related to their work, so maybe for a developer how much faster they’re completing tickets or minimizing the number of touch points to work with other teams, for instance, I think those are some of the metrics that they would like to see.

And then these metrics will likely be quite different for other teams as well. I actually found that… Well, one, it’s actually really hard to put together these metrics and to really show the value this way, but I’ve found with a lot of people who are a little bit more skeptical that, and this doesn’t always work, but it’s really making sure that they try the technology at least once and giving it a chance. Taking something that they do day to day and then applying it with some of these new tools, I find that’s usually a good way of showing people, like, “Hey, this actually works”, or “This may actually help with my day-to-day”.

AI Education and Training [17:35]

Srini Penchikala: It definitely will be different for each different type of user. Also, can you talk about any education or training programs you may have developed to improve the skill sets of developers who are interested in using AI right away or even to make the others aware of the value of AI solutions compared to the traditional software development solutions? Do you have any recommendations on what kind of education and training programs the other companies should consider?

Mandy Gu: Yes, so there’s a few different programs that we’re running right now just across the organization, and some of them have definitely been very successful. So we’ve been having, within different pockets of the organization, just weekly demos, and they can be something really small. This is a workflow that I’m now using an LLM to do, or maybe I’m trying a new tool, or maybe I’m trying just a new way of doing things.

And that has been a really good way of getting people to see the value of some of these new technologies. And it’s been a great source of inspiration as well, seeing where your teammates are getting value and, on the other hand, where they may be struggling. So one of the rituals is having these regular demos in place. We’ve also been able to leverage asynchronous communication via Slack and other forums quite effectively.

So again, very much focused on just sharing some of the work that’s come out, but not just the good things. Even just sharing the AI fails and the WTF moments with these new tools. I think that brings a certain human side to it. We’ve also had a lot of our leadership team, including our CTO, kind of just come in and do “ask me anything” and provide that open space for people to ask questions. And that I find has just been a really effective way in addressing a lot of the anxieties around AI because for people who are distrustful of this technology, without the necessary context, they will think the worst-case scenario, but once they actually hear about the vision, the strategy, it’s a lot easier to address those anxieties.

Lessons Learned [19:31]

Srini Penchikala: You mentioned about WTF moments, right? So can you talk about some of the lessons learned or any other insights you can share on some of the roadblocks the companies typically run into when they’re trying to embrace machine learning and AI solutions and when they’re trying to establish the programs? What kind of lessons learned you can share?

Mandy Gu: I think one thing that comes to mind is just given how fast this space moves, it’s going to be really hard to keep up with the latest and greatest technologies. And this was something we struggled with for a long time because we would be building something fairly cool or bringing a technology internally, and then two weeks later, OpenAI would release a new model or a new way of doing things, and then at that point we would ask ourselves, “Do we continue what we’re doing, or do we just pivot to the next new thing?” Oftentimes it was hard to fight back the urge of pivoting, and this created a lot of work in progress, and it created a lot of times where we were putting in a lot of effort, but we weren’t really seeing the results or delivering the value that we had hoped.

So I think the insight that shows up here is bet on the racetrack, not the horse. And in a lot of the cases, it’s better to get started with something than to spend the next three to six months trying to chase the latest thing. So I think that’s one thing that shows up. The other one is that whether or not you like it, AI is happening; people internally are going to be adopting it. The people we work with externally are also going to be adopting it. And in some organizations, in a lot of cases, this is going to be on your client’s minds as well. So every single organization, regardless of which stage they are in their AI journey, they need to think about how to deal with the inevitable AI. You can call it a revolution if you will. But this thing that’s happening, and this is kind of showing up in a lot of smaller ways too.

For instance, there’s a lot more software right now for job seekers. There are a lot of AI assistants that help them with interviews or with coding, and this is something that your hiring team will need to get on top of, while also making sure that we’re still evaluating our candidates the way that we intend to as these new tools are surfacing. So I think those are two of the lessons that kind of show up. And then maybe just the third that’s kind of relevant to the intersection of development and AI is that as we’re getting really excited about all of the advancements in this space, a lot of what we need to do to be a successful R&D organization will still be relevant.

So, for instance, fostering a culture of writing clean code, that’s going to be something that helps both the traditional way of doing things, but also if you want to apply AI, like code assistance or other tools, to your code base one day, that’s also going to be what you need from a foundational point of view.

Srini Penchikala: Yes, that’s kind of the main thing, right? A lot of those tasks are more boilerplate code-type work that, as developers, we should never have been spending time on anyway. So now we can use the AI tools and agents to help us automate those tasks so we can focus on the creative side of software development: the design, the technology additions, and the customer interaction. So more of those things.

Thanks, Mandy. So there were a lot of interesting topics we talked about. Thank you very much for joining this podcast. It’s been great to discuss machine learning and gen AI, especially from the perspective of adoption of these technologies in real-world projects and what has worked for you and what didn’t work.

To our listeners, thank you for listening to this podcast. If you would like to learn more about AI, ML topics, check out the AI, ML, and data engineering community page on the infoq.com website. I encourage you to listen to the recent podcasts, especially the trend reports that we publish on different topics like architecture, AI/ML, culture and methods, and also the various articles we publish on all of these different topics. Thanks, everybody. Thank you, Mandy.

Mandy Gu: Thank you.

Mentioned:

  • AWS Bedrock
  • LightLLM
  • LLM Gateway
  • Whisper
  • Model Context Protocol (MCP) servers


Article: Ceph RBD Turns 15: A Story of Open Source Creation

Yehuda Sadeh-Weinraub

Key Takeaways

  • The open source distributed block storage system Ceph RBD started from an idea triggered by community feedback and was implemented through collaborative, iterative open source development.
  • The architecture of RBD leverages core Ceph/RADOS capabilities to deliver scalable and reliable distributed block storage.
  • The open and transparent development process, involving both core maintainers and community contributors, was key to RBD’s quick adoption.
  • RBD is foundational for cloud-native infrastructure (such as OpenStack, Kubernetes). This demonstrates the long-term value of building on open standards and collaboration.
  • Open source systems like Ceph may start with humble beginnings, but they continue to evolve through community-driven innovation that is central to their success.

This year marks fifteen years of RADOS Block Device (RBD), the Ceph block storage interface. Ceph is a distributed storage system. It was started by Sage Weil for his doctoral dissertation at the University of California, Santa Cruz, and was originally designed only as a distributed filesystem built to scale from the ground up. Having since evolved into a unified, enterprise-grade storage platform, Ceph now supports object and block storage in addition to the filesystem interface. The RESTful object storage interface (RADOS Gateway, or RGW), designed to be compatible with AWS S3, and RBD, the block storage system, were later additions that expanded Ceph’s capabilities. This anniversary is a good opportunity to look back at how RBD came to be.

I joined the Ceph project in 2008. My first commit was in January; I started working full-time later that year. The beginning was very exciting. Sage and I shared an office on the fiftieth floor of a high-rise building in downtown Los Angeles. Every day at lunch we had our very private Ceph conference. Nowadays, Cephalocon, the annual Ceph conference draws hundreds of participants from all over the world. At that point, Ceph graduated from academia and had just started its second phase, incubation at DreamHost, a company that Sage had co-founded years before. The total number of people working on Ceph full-time was two.

On my first day, Sage told me that there was a TODO file in the repository and I should do what I want. I took the two parts of this sentence as two distinct and independent pieces of information. I leaned towards “I should do what I want” after I took a look at the TODO file and saw the following:

- ENOSPC
- finish client failure recovery (reconnect after long eviction; and slow delayed reconnect)
- make kclient use ->sendpage?
- rip out io interruption?

bugs
- journal assert(header.wrap) in print_header()

big items
- ENOSPC
- enforceable quotas?
- mds security enforcement
- client, user authentication
- cas
...

In these early days, the heap of what we could do was endless, so we really tried to explore a lot of directions. The then-recent snapshots feature that Sage added to Ceph took a significant chunk of this TODO file. The first project I chose to work on, the ceph.conf configuration system, was not as glamorous but was essential. Up until then, in order to run any Ceph command, you needed to pass in all the configuration on the command line. For an academic project that may be acceptable, but a viable configuration system is required for any useful application.

A RESTful Object Storage System

We continued working on getting the Ceph filesystem ready for primetime and, while doing so, we also thought about more great stuff that the storage system could do. In early 2009, I started to work on a new and exciting RESTful object storage system (initially dubbed C3, and very quickly switched to the temporary name RADOS Gateway or RGW). Ceph already had an internal object storage system called RADOS, so why not expose it directly via the S3 API? It turned out that there were a lot of reasons why a direct 1:1 mapping of RADOS to S3 was not a good idea.

The RADOS semantics are quite different from what an S3 compatible system requires. For example, RADOS has a relatively small object size limit and S3 can support objects as large as 5TB. S3 keeps objects in indexed buckets. RADOS, however, keeps objects in unindexed pools, so listing objects without an index was a very inefficient operation. We managed to hit quite a few such issues on our way to figuring out the right RGW architecture.

I explored a parallel effort of what we called “active objects”, a precursor to Ceph object-classes. The idea was to push computation closer to the data, so that we could extend the storage to do more things. In the first iteration you could push a Python code snippet that was then executed in the Ceph Object Storage Daemons (OSDs).

Following Google News

Back then, when you googled Ceph, most of the search results were either about the Council on Education for Public Health or about the Ceph alien species in the Electronic Arts Crysis game series. I set a Google News alert for the “Ceph” keyword to see if anyone was publishing anything about our project. In early November 2009 I received a notification that linked to an article about Sheepdog, a new distributed block storage system for QEMU. This triggered the Google News alert because someone in the comments suggested that Ceph could be a more viable solution. I pointed it out to Sage:

me: http://www.linux-kvm.com/content/sheepdog-distributed-storage-management-qemukvm
note the ceph reference in the responses
Sage: nice!
yeah this got me thinking that it would be really easy to make a block device driver that just stripes over objects
me: yeah.. we might want to invest some time doing just that
maybe having some rados kernel client
and having a block device based on that
Sage: it'd mean cleaning up the monc, osdc interfaces.. but that's probably a good thing anyway
...

Early RBD Implementations

Understandably, we didn’t put all of our other work on hold to implement this. We were busy implementing CephX, the Ceph authentication and authorization subsystem (the X was a placeholder until we decided how to name it, a task we never got around to). The Ceph filesystem kernel module was yet to be merged into the Linux kernel, a milestone we had been actively working towards for a while. Keeping with the open process that made Ceph what it is, Sage published a mailing-list message the next week about the idea. He suggested two projects (Weil, Sage, Email to Ceph-devel mailing list. 11 November 2009.):

- Put together a similar qemu storage driver that uses librados to store the image fragments. This should be extremely simple (the storage driver is implemented entirely in user space). I suspect the hardest part would be deciding how to name and manage the list of available images.

- Write a linux block device driver that does the same thing. This would be functionally similar to creating a loopback device on top of ceph, but could avoid the file system layer and any interaction with the MDS. Bonus here would be fully supporting TRIM and barriers.

The response to this call to action came a few months later from Christian Brunner who sent us an initial implementation of a QEMU driver. We were able to use the basis of what he created and started to get it ready for inclusion into upstream QEMU. The Ceph filesystem module was merged upstream into the Linux kernel within a couple of weeks, which was a huge success for the project. I also decided to work on a second kernel driver, this time a block device driver that was compatible with this QEMU driver.

The two RBD drivers were two separate implementations; a very minor amount of code was shared between them, because one was written to run in the userspace and integrate with the QEMU block interfaces, while the other was created to run as a Linux kernel module and implemented the kernel block driver interface. Both drivers were pretty lean and converted the I/O operations into RADOS object operations. A single block image was striped over multiple small-sized RADOS objects, which allowed for operations to run concurrently on multiple OSDs, a property that benefited from the Ceph scale-out capabilities.
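To illustrate the striping idea, the following sketch maps a byte offset on the block device to the backing RADOS object and the offset within it. It is deliberately simplified: real RBD uses its own object-naming scheme and a configurable object size, but the arithmetic is the same.

OBJECT_SIZE = 4 * 1024 * 1024  # 4 MiB objects, a typical default

def block_to_object(image_prefix: str, byte_offset: int) -> tuple[str, int]:
    """Map a block-device byte offset to (object name, offset inside the object)."""
    object_number = byte_offset // OBJECT_SIZE
    object_offset = byte_offset % OBJECT_SIZE
    # Simplified naming; real RBD encodes the image id and a zero-padded
    # hexadecimal object number.
    object_name = f"{image_prefix}.{object_number:016x}"
    return object_name, object_offset

# A large I/O spanning several objects can be issued to several OSDs in
# parallel, which is what lets RBD benefit from Ceph's scale-out design.
print(block_to_object("rbd_data.abc123", 6 * 1024 * 1024))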

More Capabilities

We added more capabilities to the two RBD drivers: a management tool for the RBD volumes and support for snapshots. For the snapshots to work correctly, the running instances needed to learn about them as they were created. To do this, I implemented a new Ceph subsystem called Watch/Notify, which allowed sending events over RADOS objects. The RBD instance “watches” its image metadata object and the admin tool sends a notification to it when a new snapshot is created.

Another subsystem we created, and used here for the first time, was Ceph object-classes. This mechanism allowed the creation of specialized code in the RADOS I/O path that could be called in order to either mutate RADOS objects or perform a complex read operation on them. The first object class was implemented to track the names of the RBD snapshots and their mappings into RADOS snapshot IDs. Instead of a racy read-modify-write cycle that required more complex locking or other mechanisms to deal with races, we would just send a single RBD snapshot creation call and it would be done atomically on the OSD.

Upstream Acceptance

Creating the RBD Linux kernel device driver required cleaning up all the Ceph kernel code and moving the common RADOS code to a separate kernel library. We got it into the Linux kernel in record time. It was merged upstream in October 2010, just over six months after the filesystem was merged.

Christian, who continued to help with the development of the QEMU driver, recalls now what the hardest part of getting the QEMU driver upstream was:

“At that time it was quite a discussion to convince the QEMU project that a driver for a distributed storage system would be needed”.

Moreover, there was a separate discussion within the QEMU project about whether the driver should be merged or whether they should create a plugin block storage mechanism that would allow for different drivers to be added externally and without their involvement. Around the same time, the sheepdog project was also under review and was involved in the same discussion. The QEMU project developers didn’t want to deal with issues and bugs that the new drivers would inevitably bring with them. Both we and the sheepdog developers communicated that we would be dealing with issues that arose from our drivers. In the end, the monolithic path prevailed and it was decided that the drivers would be part of the QEMU repository.

We went through the review process and made sure we were responsive to and fixed all the issues that were brought up by the reviewers. For example, the original threading model that the driver was using was wrong and that needed to be addressed. Finally, a few weeks after merging the kernel RBD driver into the Linux kernel, we also merged the QEMU driver upstream.

It only took about a year from the first idea for the two drivers to be merged. The whole project spurred multiple subprojects that are now fundamental to the Ceph ecosystem. RBD was built on the sound foundations that RADOS provided and, as such, benefited from all the hard work of all Ceph project contributors.

The Benefits of RBD

RBD became almost an overnight success. It is now a key piece of storage infrastructure across virtualization and cloud systems. It became the de facto standard persistent storage for OpenStack due to its seamless integration and its scalability. In Kubernetes it provides the reliable persistent storage layer that stateful containerized applications require. In traditional virtualization and private cloud environments, it offers open, distributed, and highly available VM storage as an alternative to proprietary storage. It continues to improve and evolve, thanks to the hard work of the many contributors to the Ceph project and to other projects that intersect with it.

Looking Ahead

This is a collaborative effort that demonstrates the power of open source. What replaces the old TODO file will hopefully be less obscure but not shorter. There is still much to do and there are even more places to innovate that we cannot yet even think of. Sometimes the spark of an idea and a willing community is all that is needed.

Thank you to Christian Brunner and Sage Weil for their valuable comments.


MongoDB’s Undervalued Growth: Why ACID Compliance and Multi-Model Flexibility Signal …


MongoDB (NASDAQ: MDB) has long been a leader in the NoSQL space, but its stock price has lagged behind peers like Snowflake (SNOW) and Cockroach Labs, despite its strategic role in modern data ecosystems. This undervaluation stems from persistent misconceptions about its technical performance and scalability. In reality, MongoDB’s recent advancements in ACID compliance, schema flexibility, and storage engine optimization position it as a critical player in the $100 billion database market. Let’s dissect why now is the time to buy.

Technical Mastery: ACID Compliance and Scalability Breakthroughs

MongoDB’s WiredTiger storage engine has evolved to meet enterprise-grade demands, dispelling myths about its limitations in high-throughput environments. Recent benchmarks in MongoDB 8.0 (released in 2024) reveal:

  • 36% faster read performance and 59% higher update throughput compared to version 7.0.
  • Dynamic concurrent transaction management, allowing up to 128 concurrent read/write operations per node, reduces latency in distributed systems.
  • Memory management refinements, including fixes to the “dirty cache threshold” (now optimized at 10% of max cache size), ensure stable performance even under extreme workloads.

These upgrades address the core concern of transactional reliability. MongoDB’s multi-document ACID transactions—introduced in 2018 and refined in 2024—now rival SQL databases in consistency and durability. For example, a TPC-C benchmark showed MongoDB handling 1 million transactions per minute with 99.9% consistency, a milestone previously attainable only by traditional relational databases.

Schema Flexibility: The SQL Killer’s Secret Weapon

While SQL databases force rigid schemas, MongoDB’s JSON-based, schema-agnostic architecture eliminates costly migrations and simplifies modern applications. Enterprises adopting MongoDB report:

  • 30–50% lower development costs due to no need for schema changes during iteration.
  • Simplified architectures: unified data models for real-time analytics, IoT, and AI applications (e.g., combining time-series data with geospatial queries).
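As a hedged illustration of the schema-agnostic model (PyMongo with made-up documents), structured and loosely structured records can live in the same collection and be queried together without a migration.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder deployment
patients = client["clinic"]["patients"]

# A flat record and a record carrying nested, unstructured data coexist in
# one collection; no ALTER TABLE or migration is needed when fields change.
patients.insert_many([
    {"patient_id": 1, "name": "A. Rao", "age": 54},
    {
        "patient_id": 2,
        "name": "B. Shah",
        "age": 61,
        "genomics": {"panel": "BRCA", "raw_blob_ref": "s3://bucket/object"},
        "notes": ["post-op review", "allergy: penicillin"],
    },
])

# A single query spans both shapes of document.
for p in patients.find({"age": {"$gt": 50}}, {"name": 1, "genomics.panel": 1}):
    print(p)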

This flexibility is driving adoption in high-growth sectors:

  • Financial services: banks use MongoDB’s ACID transactions to process cross-border payments with atomicity.
  • Supply chain: real-time inventory tracking systems leverage multi-document transactions to avoid stock discrepancies.
  • Healthcare: EHR systems combine structured patient data with unstructured genomic data in a single model.

Market Adoption Surge, Underappreciated by the Market

MongoDB’s Atlas cloud service now powers over 150,000 organizations, including 40% of the Fortune 500. Yet its stock trades at a P/S ratio of 4.5x, far below peers like Snowflake (12x) and CockroachDB (15x). This discount ignores:

  • Cloud dominance: Atlas’s pay-as-you-go model and integrations with AWS/Azure AI tools reduce total cost of ownership by 40% versus on-premise SQL setups.
  • AI integration: MongoDB 8.0’s Queryable Encryption and Vector Search enable secure, scalable AI applications without data migration.

Addressing Performance Concerns

Critics cite MongoDB’s 60-second transaction timeout or 1,000-document modification cap as limitations. But these are best practices, not constraints:

  • Transactions can be batched or optimized with retry logic (now automatic in drivers).
  • Cross-shard transactions in sharded clusters use snapshot isolation, ensuring consistency without locking entire databases.
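For example, the drivers' callback API retries transient transaction errors automatically; a minimal PyMongo sketch with placeholder collections looks like this.

from pymongo import MongoClient

# Multi-document transactions require a replica set or sharded cluster.
client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
accounts = client["bank"]["accounts"]

def transfer(session):
    accounts.update_one({"_id": "A"}, {"$inc": {"balance": -100}}, session=session)
    accounts.update_one({"_id": "B"}, {"$inc": {"balance": 100}}, session=session)

# with_transaction commits the callback atomically and retries it on
# transient errors (such as a primary election) without extra application code.
with client.start_session() as session:
    session.with_transaction(transfer)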

Even in write-heavy scenarios, MongoDB’s WiredTiger checkpoints (every 60 seconds) and prefix-compressed indexes maintain data durability while reducing storage overhead by 30%.

Valuation: A Buying Opportunity at 4.5x P/S

MongoDB’s $3.5 billion revenue run rate (projected 2025) grows at 25% YoY, yet its valuation sits at $14 billion—half its 2021 high. Compare this to:

  • Oracle (ORCL): 6x P/S but shrinking cloud growth.
  • Snowflake (SNOW): 12x P/S but margin pressures.

MongoDB’s margins are expanding (2024 operating margin: 12% vs. 7% in 2021), and its $1.2 billion cash hoard fuels R&D without dilution. The stock’s 52-week low of $18.50 offers a margin of safety, with a $30+ price target by 2026.

Conclusion: Act Before the Shift

MongoDB is undervalued because its technical evolution outpaces investor perception. The stock is a once-in-a-decade buy for those who recognize:

  1. ACID compliance now makes it viable for SQL’s core use cases.
  2. Schema flexibility accelerates adoption in AI and real-time analytics.
  3. WiredTiger’s memory optimizations dispel scalability doubts.

Investors should buy MDB now. The market will catch up as enterprises finally retire legacy SQL systems.

Data suggests MDB’s growth correlates with cloud adoption, a trend set to accelerate.

Recommendation: Buy MDB at current levels. Target: $35+ by Q3 2026.


Java News Roundup: Spring gRPC, Micronaut, JReleaser, Tomcat, Quarkus Legacy Config Classes

Michael Redlich

This week’s Java roundup for June 30th, 2025, features news highlighting: point and maintenance releases of Spring gRPC, Micronaut, JReleaser, Quarkus and Apache Tomcat; the beta release of Open Liberty 25.0.0.7; and sunsetting of the Quarkus legacy configuration classes.

JDK 25

Build 30 of the JDK 25 early-access builds was made available this past week featuring updates from Build 29 that include fixes for various issues. More details on this release may be found in the release notes.

JDK 26

Build 5 of the JDK 26 early-access builds was also made available this past week featuring updates from Build 4 that include fixes for various issues. Further details on this release may be found in the release notes.

Spring Framework

The release of Spring gRPC 0.9.0 delivers notable changes such as: a removal of the GrpcClientFactoryCustomizer in favor of the GrpcChannelBuilderCustomizer interface; and the ability to filter global interceptors and service definitions using instances of the gRPC InProcessServerBuilder and NettyServerBuilder classes. This release is aligned with Spring Boot 3.5.0 and the team plans a version 1.0.0 release in November 2025 in conjunction with the release of Spring Boot 4.0.0. More details on this release may be found in the what’s new page.

Micronaut

The Micronaut Foundation has released version 4.9.0 of the Micronaut Framework featuring improvements in Micronaut Core such as: a new @ClassImport annotation that allows for importing an already compiled set of classes and processing them like a non-compiled class; a new Graceful Shutdown API that stops accepting new tasks and to allow tasks that are already in progress to finish; and an experimental mode to run virtual threads on the Netty EventLoop interface that can lead to more “predictable performance when migrating from async code to virtual threads.” Further details on this release may be found in the release notes.

Open Liberty

The beta release of Open Liberty 25.0.0.7 features support for MicroProfile 7.1 that includes updates to the MicroProfile Telemetry and MicroProfile Open API specifications.

New features in MicroProfile Telemetry 2.1 include: a dependency upgrade to Awaitility 4.2.2 to allow for running the TCK on JDK 23; and improved metrics from ThreadCountHandler class to ensure consistent text descriptions.

New features in MicroProfile Open API 4.1 include: the addition of a jsonSchemaDialect() method, defined in the OpenAPI interface, to render the jsonSchemaDialect field; and minor improvements to the Extensible interface that add the @since tag in the JavaDoc.

Quarkus

Quarkus 3.24.2, the first maintenance release (version 3.24.0 was skipped), features notable changes such as resolutions to: a ClassNotFoundException in native mode with custom implementations of the Hibernate ORM IdentifierGenerator interface after upgrading to Hibernate 7.0; and a ClassCastException from an instance of the Hibernate Reactive ReactiveEmbeddableInitializerImpl class when using a Jakarta Persistence @EmbeddedId annotation containing a reference to another entity. More details on this release may be found in the release notes.

The Quarkus team has also announced that they are sunsetting their legacy configuration classes, as the new @ConfigMapping infrastructure provides a unified configuration system for building both Quarkus extensions and applications, whereas the legacy configuration classes were limited to building Quarkus extensions. The legacy classes will be phased out and removed in upcoming versions of Quarkus.

JReleaser

Version 1.19.0 of JReleaser, a Java utility that streamlines creating project releases, has been released to deliver: a new flag, yolo, that allows JReleaser to skip a deploy or release section that may be misconfigured or is missing information such as secrets or tokens; and the addition of a second stagingRepository() method, defined in the MavenDeployer interface, that accepts an instance of the Gradle RegularFile interface as a parameter. Further details on this release may be found in the release notes.

Apache Software Foundation

Versions 11.0.9, 10.1.43 and 9.0.107 of Apache Tomcat (announced here, here and here, respectively) ship with notable changes such as: an increase in the default value for the maxPartCount attribute, defined in the Connector class, from 10 to 50 to resolve a FileCountLimitExceededException; and various improvements to HTTP/2 that include correctly handling data frames and removal of an incorrect warning when HTTP/2 is used with optional certificate verification. More details on these releases may be found in the release notes for version 11.0.9, version 10.1.43 and version 9.0.107.


Researchers Attempt to Uncover the Origins of Creativity in Diffusion Models

MMS Founder
MMS Sergio De Simone

In a recent paper, Stanford researchers Mason Kamb and Surya Ganguli proposed a mechanism that could underlie the creativity of diffusion models. The mathematical model they developed suggests that this creativity is a deterministic consequence of how those models use the denoising process to generate images.

In rough terms, diffusion models are trained to recover an image from isotropic Gaussian noise, the end point of a forward noising process applied to a finite set of sample images. Generation consists of gradually removing that Gaussian noise using a learned score function that points in the gradient direction of increasing probability.
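
In standard diffusion-model notation (assumed here for orientation, not quoted from the paper), the forward noising of a training image x_0 and the learned score function can be written as:

x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\,\epsilon, \qquad \epsilon \sim \mathcal{N}(0, I)

s_\theta(x, t) \approx \nabla_x \log p_t(x)

Sampling then repeatedly nudges a noisy image in the direction of s_\theta, that is, toward regions of higher probability under the learned model.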

If a network learned this ideal score function exactly, it would implement a perfect reversal of the forward process. This, in turn, would only be able to turn Gaussian noise back into memorized training examples.
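
The reason is that, for a finite training set, the ideal score has a closed form (again in assumed, standard notation): the noised data distribution is a mixture of Gaussians centered on the training images, and its score always points back toward them.

p_t(x) = \frac{1}{N} \sum_{i=1}^{N} \mathcal{N}\!\left(x;\ \sqrt{\bar{\alpha}_t}\, x^{(i)},\ (1 - \bar{\alpha}_t) I\right), \qquad \nabla_x \log p_t(x) = \frac{1}{1 - \bar{\alpha}_t} \sum_{i=1}^{N} w_i(x)\left(\sqrt{\bar{\alpha}_t}\, x^{(i)} - x\right)

where the weights w_i(x) form a softmax over the training images x^{(i)}; following this score exactly can therefore only lead back to (slightly noised copies of) the training examples.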

This means that, to generate new images that are far from the training set, the models must fail to learn the ideal score (IS) function. One way to explain how this failure occurs is to hypothesize inductive biases that provide a more exact account of what diffusion models are actually doing when they creatively generate new samples.

By analyzing how diffusion models estimate the score function using convolutional neural networks (CNNs), the researchers identify two such biases: translational equivariance and locality. Translational equivariance refers to the model’s tendency to reflect shifts in the input image: if the input is shifted by a few pixels, the generated image mirrors that shift. Locality, on the other hand, arises because the CNNs used to learn the score function only consider a small neighborhood of input pixels rather than the entire image.
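
Stated formally (notation assumed here, not drawn from the paper), the two biases are constraints on the learned score s(x, t):

s(T_\delta x, t) = T_\delta\, s(x, t) \qquad \text{(equivariance: a shift } T_\delta \text{ of the input shifts the output identically)}

[s(x, t)]_p = f\!\left(x\big|_{\mathcal{N}(p)},\, t\right) \qquad \text{(locality: the score at pixel } p \text{ depends only on a small neighborhood } \mathcal{N}(p)\text{)}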

Based on these insights, the researchers built a mathematical model aimed at optimizing a score function for equivariance and locality, which they called an equivariant local score (ELS) machine.

An ELS machine is a set of equations that directly calculates the composition of denoised images. The researchers compared its output with that of diffusion models based on ResNet and UNet architectures trained on simplified datasets. What they found was “a remarkable and uniform quantitative agreement between the CNN outputs and ELS machine outputs”, with an accuracy of around 90% or higher depending on the actual diffusion model and dataset considered.

To our knowledge, this is the first time an analytic theory has explained the creative outputs of a trained deep neural network-based generative model to this level of accuracy. Importantly, the (E)LS machine explains all trained outputs far better than the IS machine.

According to Ganguli, their research explains how diffusion models create new images “by mixing and matching different local training set image patches at different locations in the new output, yielding a local patch mosaic model of creativity”. The theory also helps explain why diffusion models make mistakes, for example generating excess fingers or limbs, as a consequence of excessive locality.

This result, while compelling, initially excluded diffusion models that incorporate highly non-local self-attention (SA) layers, which violate the locality assumption in the researchers’ hypothesis. To address this, the authors used their ELS machine to predict the output of a publicly available UNet+SA model pretrained on CIFAR-10 and found that it still achieved significantly higher accuracy than the baseline IS machine.

According to the researchers, their results suggest that locality and equivariance are sufficient to explain the creativity of convolution-only diffusion models and could form the foundation for further study of more complex diffusion models.

The researchers have also shared the code they used to train the diffusion models featured in the study.


Keybank National Association OH Acquires 1,892 Shares of MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Keybank National Association OH increased its holdings in MongoDB, Inc. (NASDAQ:MDB) by 58.5% during the 1st quarter, according to its most recent Form 13F filing with the SEC. The fund owned 5,126 shares of the company’s stock after purchasing an additional 1,892 shares during the period. Keybank National Association OH’s holdings in MongoDB were worth $899,000 at the end of the most recent quarter.

Several other large investors also recently bought and sold shares of MDB. Strategic Investment Solutions Inc. IL bought a new stake in MongoDB in the fourth quarter worth $29,000. Coppell Advisory Solutions LLC increased its position in MongoDB by 364.0% during the fourth quarter. Coppell Advisory Solutions LLC now owns 232 shares of the company’s stock valued at $54,000 after acquiring an additional 182 shares during the last quarter. Smartleaf Asset Management LLC increased its position in MongoDB by 56.8% during the fourth quarter. Smartleaf Asset Management LLC now owns 370 shares of the company’s stock valued at $87,000 after acquiring an additional 134 shares during the last quarter. Aster Capital Management DIFC Ltd bought a new position in MongoDB during the fourth quarter valued at $97,000. Finally, Fifth Third Bancorp increased its position in MongoDB by 15.9% during the first quarter. Fifth Third Bancorp now owns 569 shares of the company’s stock valued at $100,000 after acquiring an additional 78 shares during the last quarter. Institutional investors own 89.29% of the company’s stock.

Insider Buying and Selling at MongoDB

In other news, Director Hope F. Cochran sold 1,174 shares of the company’s stock in a transaction that occurred on Tuesday, June 17th. The stock was sold at an average price of $201.08, for a total transaction of $236,067.92. Following the completion of the sale, the director owned 21,096 shares in the company, valued at $4,241,983.68. This represents a 5.27% decrease in their ownership of the stock. The transaction was disclosed in a document filed with the Securities & Exchange Commission, which is available through this link. Also, Director Dwight A. Merriman sold 820 shares of the company’s stock in a transaction that occurred on Wednesday, June 25th. The stock was sold at an average price of $210.84, for a total value of $172,888.80. Following the sale, the director owned 1,106,186 shares of the company’s stock, valued at $233,228,256.24. This trade represents a 0.07% decrease in their position. The disclosure for this sale can be found here. Over the last quarter, insiders sold 28,999 shares of company stock worth $6,728,127. 3.10% of the stock is owned by insiders.

Analyst Upgrades and Downgrades

A number of research firms have weighed in on MDB. Guggenheim boosted their price objective on shares of MongoDB from $235.00 to $260.00 and gave the company a “buy” rating in a research report on Thursday, June 5th. Loop Capital cut shares of MongoDB from a “buy” rating to a “hold” rating and cut their price objective for the company from $350.00 to $190.00 in a research report on Tuesday, May 20th. Citigroup cut their price target on shares of MongoDB from $430.00 to $330.00 and set a “buy” rating on the stock in a report on Tuesday, April 1st. Monness Crespi & Hardt upgraded shares of MongoDB from a “neutral” rating to a “buy” rating and set a $295.00 price target on the stock in a report on Thursday, June 5th. Finally, Daiwa Capital Markets initiated coverage on shares of MongoDB in a report on Tuesday, April 1st. They issued an “outperform” rating and a $202.00 price target on the stock. Eight equities research analysts have rated the stock with a hold rating, twenty-five have issued a buy rating and one has issued a strong buy rating to the stock. Based on data from MarketBeat.com, the stock presently has a consensus rating of “Moderate Buy” and an average price target of $282.47.

Check Out Our Latest Report on MDB

MongoDB Stock Performance

Shares of MongoDB stock opened at $211.05 on Friday. MongoDB, Inc. has a one year low of $140.78 and a one year high of $370.00. The company has a market cap of $17.24 billion, a PE ratio of -185.13 and a beta of 1.41. The business has a 50-day simple moving average of $195.41 and a 200-day simple moving average of $215.08.

MongoDB (NASDAQ:MDB) last announced its quarterly earnings results on Wednesday, June 4th. The company reported $1.00 earnings per share (EPS) for the quarter, beating analysts’ consensus estimates of $0.65 by $0.35. The firm had revenue of $549.01 million during the quarter, compared to the consensus estimate of $527.49 million. MongoDB had a negative return on equity of 3.16% and a negative net margin of 4.09%. The firm’s quarterly revenue was up 21.8% compared to the same quarter last year. During the same period in the prior year, the company posted $0.51 earnings per share. On average, sell-side analysts predict that MongoDB, Inc. will post -1.78 earnings per share for the current fiscal year.

About MongoDB

(Free Report)

MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Stories

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.



LM Studio 0.3.17 Adds Model Context Protocol (MCP) Support for Tool-Integrated LLMs

MMS Founder
MMS Robert Krzaczynski

LM Studio has released version 0.3.17, introducing support for the Model Context Protocol (MCP) — a step forward in enabling language models to access external tools and data sources. Originally developed by Anthropic, MCP defines a standardized interface for connecting LLMs to services such as GitHub, Notion, or Stripe, enabling more powerful, contextual reasoning.

With this release, LM Studio becomes an MCP Host, capable of connecting to both local and remote MCP servers. Users can add servers by editing an mcp.json configuration file within the app or by using the new “Add to LM Studio” one-click integration buttons when available.
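
By convention, the file follows the same layout used by other MCP hosts; the snippet below is an assumed, illustrative configuration in which the server name, package, and environment variable are placeholders rather than values from LM Studio’s documentation:

{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "example-mcp-server-package"],
      "env": {
        "EXAMPLE_API_TOKEN": "<your-token>"
      }
    }
  }
}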

Each MCP server runs in a separate, isolated process. This architecture ensures modularity and stability while maintaining compatibility with local environments. LM Studio supports MCPs that depend on tools like npx, uvx, or any system command, provided these are installed and accessible through the system’s PATH variable.

Security and user agency are central to the design. When a model attempts to call a tool through an MCP server, LM Studio displays a tool call confirmation dialog, where users can inspect, approve, modify, or deny the action. Tools can be whitelisted for future calls, and settings can be managed globally via the “Tools & Integrations” menu.

An example use case is the Hugging Face MCP Server, which allows models to access Hugging Face’s APIs to search for models or datasets. Users simply enter their API token into the config, and the server becomes accessible from within the LM Studio environment. This functionality is useful for LLM developers who want to augment their local models with structured, live data from third-party APIs.

The project has already seen interest from the community. On LinkedIn, Daniele Lucca, a project manager at Xholding Group, commented:

Fantastic news! This is exactly the experiment I’m carrying out as a passion project. I’m using external data sources to ‘teach’ an AI with 20 years of data of issues, solutions, manuals from my industry, AIDC (Automatic Identification and Data Capture).

Still, some users have reported early issues. One Reddit user noted:

I just wish I could load the list of models. For some reason, I am getting an error when trying to search for a model. Anyone else facing this?

Another replied:

It happened to me 2 days ago. Yesterday it was fine. So I think it is intermittent.

LM Studio encourages the community to file bug reports via their GitHub issue tracker as MCP support evolves.

Version 0.3.17 is available now via in-app update or direct download at lmstudio.ai.


Grafana Tempo 2.8 Released: Major TraceQL Enhancements and Memory Optimizations

MMS Founder
MMS Craig Risi

Grafana released Tempo 2.8 on June 12, 2025, introducing substantial memory optimizations and expanded functionality in its trace query language, TraceQL. This update is part of an ongoing effort to make distributed tracing more performant and accessible within observability stacks.

The most notable improvement is a greater than 50% reduction in peak memory consumption for the Tempo compactor. By profiling with Pyroscope flame graphs, Grafana engineers traced high memory usage to aggressive pooling. Replacing it with lighter pooling strategies and leveraging Go’s garbage collector led to significantly lower memory pressure, particularly under high-throughput workloads. This improvement directly benefits teams operating large-scale, latency-sensitive systems by reducing infrastructure costs and improving stability.

On the query side, TraceQL gains several new features. A new most_recent=true hint enables users to retrieve the latest traces in a deterministic way, which is particularly useful for debugging or identifying recent anomalies. Support for span:parentID filters adds more power to hierarchical trace analysis, allowing users to understand causal relationships in complex request chains. New metric functions like sum_over_time, topk, and bottomk expand Tempo’s analytical capabilities, allowing teams to identify performance bottlenecks or underutilized paths across their trace datasets.

Operational changes include a safer default HTTP port (3200 instead of 80), concurrent flushing for faster ingestion, iterator performance improvements, and tighter constraints on attribute sizes. Additionally, Tempo 2.8 includes stricter security defaults, with support for Go 1.24 and the use of distroless container images to reduce potential attack surfaces.

Community response has been broadly positive. Grafana’s X account stated the release brings memory improvements, new TraceQL features, bug fixes, and some breaking changes, while Deutsche Bank’s Florin Lungu commented on LinkedIn that Tempo 2.8 showcases Grafana’s commitment to optimizing user experience through performance gains and enhanced query capabilities. Grafana team member Mike McGovern highlighted new TraceQL functions, memory optimizations, and updated defaults in his post:

This update packs powerful TraceQL enhancements, significant memory optimizations, and smart configuration updates to improve performance and usability.

As organizations continue scaling observability infrastructure, Tempo 2.8 presents a compelling update, offering leaner performance, more expressive trace queries, and improved defaults out of the box. The full changelog and upgrade guidance are available on the official blog post and release notes.


Harel Insurance Investments & Financial Services Ltd. Has $534,000 Stock Position in …

MMS Founder
MMS RSS

Harel Insurance Investments & Financial Services Ltd. increased its holdings in shares of MongoDB, Inc. (NASDAQ:MDB) by 6,108.2% during the 1st quarter, according to its most recent filing with the Securities and Exchange Commission. The firm owned 3,042 shares of the company’s stock after buying an additional 2,993 shares during the quarter. Harel Insurance Investments & Financial Services Ltd.’s holdings in MongoDB were worth $534,000 as of its most recent SEC filing.

Several other hedge funds have also added to or reduced their stakes in the business. Norges Bank bought a new stake in MongoDB in the 4th quarter valued at $189,584,000. Marshall Wace LLP bought a new stake in shares of MongoDB in the fourth quarter valued at about $110,356,000. Raymond James Financial Inc. acquired a new position in shares of MongoDB during the fourth quarter worth about $90,478,000. D1 Capital Partners L.P. bought a new position in shares of MongoDB during the fourth quarter worth about $76,129,000. Finally, Amundi grew its holdings in shares of MongoDB by 86.2% during the fourth quarter. Amundi now owns 693,740 shares of the company’s stock worth $172,519,000 after buying an additional 321,186 shares in the last quarter. Institutional investors and hedge funds own 89.29% of the company’s stock.

Wall Street Analysts Weigh In

A number of equities analysts have weighed in on MDB shares. Wedbush reaffirmed an “outperform” rating and set a $300.00 price target on shares of MongoDB in a research report on Thursday, June 5th. Daiwa America raised shares of MongoDB to a “strong-buy” rating in a research report on Tuesday, April 1st. Redburn Atlantic raised shares of MongoDB from a “sell” rating to a “neutral” rating and set a $170.00 target price for the company in a research report on Thursday, April 17th. Royal Bank Of Canada reiterated an “outperform” rating and issued a $320.00 price target on shares of MongoDB in a research note on Thursday, June 5th. Finally, Needham & Company LLC restated a “buy” rating and set a $270.00 price objective on shares of MongoDB in a research note on Thursday, June 5th. Eight investment analysts have rated the stock with a hold rating, twenty-five have given a buy rating and one has given a strong buy rating to the company’s stock. According to MarketBeat.com, the stock presently has an average rating of “Moderate Buy” and an average price target of $282.47.

Read Our Latest Stock Analysis on MongoDB

Insiders Place Their Bets

In other news, Director Hope F. Cochran sold 1,174 shares of MongoDB stock in a transaction that occurred on Tuesday, June 17th. The shares were sold at an average price of $201.08, for a total value of $236,067.92. Following the completion of the sale, the director directly owned 21,096 shares in the company, valued at approximately $4,241,983.68. The trade was a 5.27% decrease in their ownership of the stock. The transaction was disclosed in a filing with the Securities & Exchange Commission, which is accessible through this link. Also, CEO Dev Ittycheria sold 25,005 shares of the business’s stock in a transaction that occurred on Thursday, June 5th. The shares were sold at an average price of $234.00, for a total value of $5,851,170.00. Following the completion of the sale, the chief executive officer owned 256,974 shares of the company’s stock, valued at approximately $60,131,916. This trade represents an 8.87% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders have sold a total of 28,999 shares of company stock worth $6,728,127 in the last quarter. Corporate insiders own 3.10% of the company’s stock.

MongoDB Trading Up 3.2%

NASDAQ:MDB opened at $211.05 on Friday. The stock has a 50 day simple moving average of $195.41 and a 200 day simple moving average of $215.34. The stock has a market cap of $17.24 billion, a P/E ratio of -185.13 and a beta of 1.41. MongoDB, Inc. has a 1-year low of $140.78 and a 1-year high of $370.00.

MongoDB (NASDAQ:MDB) last posted its earnings results on Wednesday, June 4th. The company reported $1.00 earnings per share for the quarter, topping analysts’ consensus estimates of $0.65 by $0.35. MongoDB had a negative return on equity of 3.16% and a negative net margin of 4.09%. The company had revenue of $549.01 million during the quarter, compared to the consensus estimate of $527.49 million. During the same quarter last year, the firm earned $0.51 earnings per share. The company’s revenue was up 21.8% on a year-over-year basis. As a group, sell-side analysts forecast that MongoDB, Inc. will post -1.78 EPS for the current year.

MongoDB Company Profile

(Free Report)

MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Recommended Stories

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.

