Google’s Agent2Agent Protocol Enters the Linux Foundation

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

Recently open-sourced by Google, the Agent2Agent protocol is now part of the Linux Foundation, along with its accompanying SDKs and developer tools.

The Agent2Agent protocol will be the cornerstone of a wider Agent2Agent project formed by Google, AWS, Cisco, Microsoft, and others. The project aims to foster interoperability for AI agents and break down the silos that limit collaboration between them, says Google.

By providing a common language for AI agents to discover each other’s capabilities, securely exchange information, and coordinate complex tasks, the A2A protocol is paving the way for a new era of more powerful, collaborative, and innovative AI applications.

Using the Agent2Agent protocol, agents can discover each other’s capabilities, negotiate how to interact, and collaborate securely on long-running tasks. The protocol places particular emphasis on keeping each agent’s internal state private, including its prompts.

The protocol is based on JSON-RPC 2.0 over HTTP and uses server-sent events for real-time streaming between agents. Agents learn about each other through “agent cards” that describe an agent’s capabilities and provide connection information. In the future, agent cards will also include authorization schemes and optional credentials. Other areas of future development include client-initiated interactions and dynamic UX negotiation within tasks, such as adding audio/video formats after the initial negotiation phase, i.e., after the agents have started their conversation.
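As a rough illustration of what such an agent card might contain, here is a minimal Python sketch. The field names follow Google’s published A2A examples as commonly shown, but the endpoint URL, skill definition, and well-known path are assumptions for illustration rather than the normative schema.

```python
import json

# Illustrative A2A agent card: treat the fields as a sketch, not the normative schema.
agent_card = {
    "name": "trend-research-agent",
    "description": "Searches the web for currently trending topics.",
    "url": "https://agents.example.com/trend-research",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {
        "streaming": True,           # supports server-sent events
        "pushNotifications": False,
    },
    "defaultInputModes": ["text"],
    "defaultOutputModes": ["text"],
    "skills": [
        {
            "id": "find-trending-topics",
            "name": "Find trending topics",
            "description": "Returns topics trending over the last 24 hours.",
        }
    ],
}

# A client would typically fetch this card (e.g. from /.well-known/agent.json)
# to discover capabilities before opening a JSON-RPC 2.0 conversation over HTTP.
print(json.dumps(agent_card, indent=2))
```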

According to Google, the Agent2Agent protocol has seen significant adoption, with over 100 companies supporting it. Since its original announcement, the protocol has raised some controversy due to its overlap with Anthropic’s Model Context Protocol (MCP).

Reddit commenter Impressive-Owl3830 expressed concern that this overlap might prevent the two protocols from coexisting, with MCP already having “taken off”. Another redditor, Specialist_Apricot74, noted that the announcement “puts to rest” the triple E threat (Embrace, Extend, Extinguish) and could help Agent2Agent differentiate itself from MCP by reducing their overlap and specializing in at least one task that MCP cannot do.

Google says Agent2Agent is ideal when agents are developed and deployed independently, come from different teams, require dynamic discovery and composition, and need to support third-party integration or frequent changes, such as adding or removing agents at any time.

If you are interested in Agent2Agent, a great starting point is Google’s unofficial Python Notebook, which illustrates how you can set up a system with three agents, one searching the web for current trending topics, another performing deep analysis, and the last orchestrating the first two to provide insights.



MongoDB Trades at a P/S of 7.08X: Should You Still Buy MDB Stock? – July 1, 2025 – Zacks

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


Article originally posted on mongodb google news. Visit mongodb google news



Apple’s Illusion of Thinking Paper Explores Limits of Large Reasoning Models

MMS Founder
MMS Anthony Alford

Article originally posted on InfoQ. Visit InfoQ

Apple Machine Learning Research published a paper titled The Illusion of Thinking, which investigates the abilities of Large Reasoning Models (LRMs) on a set of puzzles. As the complexity of the puzzles increases, the researchers found that LRMs encounter a “collapse” threshold where the models reduce their reasoning effort, indicating a limit to the models’ scalability.

For their experiments, Apple researchers chose four puzzle problems, including Tower of Hanoi, and a variety of LRMs and standard LLMs, including o3-mini and DeepSeek-R1. Each puzzle’s complexity could be varied; for example, the Tower of Hanoi puzzle can have a variable number of disks. They found that as complexity increased, model behavior went through three regimes. In the first, with simple problems, both reasoning and non-reasoning models performed similarly well. In the second, medium-complexity regime, the reasoning models with their Chain-of-Thought (CoT) inference performed better than LLMs. But in the high-complexity regime, both groups’ performance “collapsed to zero.” According to Apple,

In this study, we probe the reasoning mechanisms of frontier LRMs through the lens of problem complexity….Our findings reveal fundamental limitations in current models: despite sophisticated self-reflection mechanisms, these models fail to develop generalizable reasoning capabilities beyond certain complexity thresholds….These insights challenge prevailing assumptions about LRM capabilities and suggest that current approaches may be encountering fundamental barriers to generalizable reasoning.
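The complexity knob the researchers turn is mechanical: for Tower of Hanoi, the shortest solution grows exponentially with the number of disks, so each added disk roughly doubles the number of moves a model must produce without error. The snippet below is an illustrative sketch of that scaling, not code from the paper.

```python
def hanoi_moves(n: int, source: str = "A", target: str = "C", spare: str = "B") -> list[tuple[str, str]]:
    """Return the optimal move sequence for an n-disk Tower of Hanoi."""
    if n == 0:
        return []
    return (
        hanoi_moves(n - 1, source, spare, target)
        + [(source, target)]
        + hanoi_moves(n - 1, spare, target, source)
    )

# The optimal solution length is 2**n - 1, so difficulty grows exponentially with disks.
for disks in (3, 7, 10, 15):
    print(disks, len(hanoi_moves(disks)), 2**disks - 1)
```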

LRMs such as o3 and DeepSeek-R1 are LLMs that have been fine-tuned to generate step-by-step instructions for themselves before producing a response to users; in essence, the models “think out loud” to produce better answers. This allows them to outperform their “standard” LLM counterparts on many tasks, especially coding, mathematics, and science benchmarks.

As part of their experiments, the Apple team analyzed the reasoning traces generated by the models. They noted that for simpler problems, the models would often “overthink”: the correct solution would appear early in the trace, but the models would continue to explore incorrect ideas. In medium-complexity problems, however, the models would explore incorrect solutions before finding the correct one.

Apple’s paper sparked a wide debate in the AI community. Gary Marcus, a cognitive scientist and critic of the current state of AI, wrote about the research, saying:

What the Apple paper shows, most fundamentally, regardless of how you define [Artificial General Intelligence (AGI)], is that LLMs are no substitute for good well-specified conventional algorithms. (They also can’t play chess as well as conventional algorithms, can’t fold proteins like special-purpose neurosymbolic hybrids, can’t run databases as well as conventional databases, etc.)

Open source developer and AI commentator Simon Willison pointed out:

I’m not interested in whether or not LLMs are the “road to AGI”. I continue to care only about whether they have useful applications today, once you’ve understood their limitations. Reasoning LLMs are a relatively new and interesting twist on the genre. They are demonstrably able to solve a whole bunch of problems that previous LLMs were unable to handle, hence why we’ve seen a rush of new models from OpenAI and Anthropic and Gemini and DeepSeek and Qwen and Mistral….They’re already useful to me today, whether or not they can reliably solve the Tower of Hanoi….

Apple acknowledges several limitations of their research, noting in particular that their experiments mostly relied on “black box” API calls, leaving them unable to examine the inner state of the models. They also agree that the use of puzzles means that their conclusions may not generalize to all reasoning domains.



Presentation: A Framework for Building Micro Metrics for LLM System Evaluation

MMS Founder
MMS Denys Linkov

Article originally posted on InfoQ. Visit InfoQ

Transcript

Linkov: Who here has changed a system prompt and had it lead to issues in production? You’ll have this experience soon. You run all these tests, hopefully you have evaluations before you change your models, and they all pass. Then things are going well until somebody pings you in the Discord server that everything is broken. One scenario that led to this whole concept of a micro metric was this: we released a change in our system prompts for how we interact with models. Somebody was prompting a model in a non-English language. They were having a conversation with their end user in a non-English language.

By conversation turn number five, the model responds in English, and this customer is very mad about why their chatbot is responding in English when it’s been talking in German the whole time. They’re very confused, and so are we. Building LLM platforms, or any kind of platform, is challenging. Who here has worked on a platform? The company I work at, Voiceflow, is an AI agent building platform. We’ve been around for six years. I’ve been there for three and a half. It’s been interesting to allow people to build different kinds of AI applications, starting with more traditional NLU intent and entity applications, and moving to things that are now focused on LLMs.

For scale, we support around 250,000 users. We have a lot of projects, a lot of different variety on those projects. We have lots of different languages. We have lots of different enterprise customers. When we’re doing rollouts, it’s not quite at the scale of some of the companies you’ve seen here, but it’s at a pretty decent scale.

What Makes a Good LLM Response?

We get into this question of, when you’re building an LLM application, what actually makes a good LLM response? It’s a pretty philosophical question, because it’s actually hard to get people to agree what good means.

The first one is that what makes LLMs very attractive, but also very misleading, is that, generally, they sound pretty good. They sound pretty convincing. The second part that I mentioned already is that people often do not agree on what is good, so you have this constant tension: LLMs make up plausible things, and people do not know what’s good. Sometimes people don’t read the responses. You have many different options for scoring. You likely have used some of these approaches before. We have things like doing regex matches or exact matches on outputs. We have things like doing cosine similarity between different phrases, outputs, and golden datasets. You might use an LLM as a judge, or you might use more traditional data science metrics.
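To make two of those options concrete, here is a small illustrative sketch: a regex/exact match check and a cosine similarity between embedding vectors. The embed function is a placeholder for whichever embedding model you actually use; nothing here is Voiceflow’s implementation.

```python
import re
import numpy as np

def exact_or_regex_match(output: str, pattern: str) -> bool:
    """Score 1/0 on whether the output matches an expected regex or exact string."""
    return re.search(pattern, output) is not None

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed(text: str) -> np.ndarray:
    # Placeholder: plug in the embedding model you actually use.
    raise NotImplementedError("call your embedding model here")

# Usage sketch:
# score = cosine_similarity(embed(model_output), embed(golden_answer))
# passed = exact_or_regex_match(model_output, r"\b42\b")
```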

Lesson 1: The Flaws of One Metric

Let’s start it off with some lessons that we’ve learned. The first one is the flaw of a single metric. We’ll start off with semantic similarity. It’s the thing that generally powers RAG, the way you search for similar phrases. Here’s a list of three phrases that I’m going to compare to the phrase, I like to eat potatoes, and three different models that are being used. The first one is OpenAI’s most recent model. Then we have two open-source models that rank quite highly on different embedding-based metrics. Who here can guess what the top matching phrase is to, I like to eat potatoes? Who thinks it’s, I am a potato. Who thinks it is, I am a human? Who thinks it is, I am hungry? All three models thought it was the first one.

Apparently, when you say I like to eat potatoes and I am a potato, you get into some weird dynamic there. When you train these models that do comparisons of cosine similarity or any kind of semantic similarity, there are flaws to it. As we talked about, I am hungry is probably the closest one, and I am human, because humans eat potatoes. Metrics don’t work all the time. Then we go on to LLM as a judge. This is quite popular, especially with GPT-4. A lot of synthetic data is generated with LLMs, and it’s a common technique to verify when we’re too lazy to actually read the responses ourselves.

The problem is that these models have significant bias associated with them on what they score results with. This is a paper that was released in 2023 talking about how GPT-4 and human agreement is misaligned whenever prompts are shorter versus longer. GPT-4 really likes long prompts, and then GPT-4 does not like short prompts. We’ve seen this bias through a number of different studies. This is an interesting concept, because these models are trained in a certain way to mimic certain human tendencies or certain preferences that might emerge after the training.

Now we go into the question, what about humans? Are humans reliable judges? This is a good way to control for this type of topic. I want to take you through standardized exams. Who here has written a standardized exam before? There was some research done almost 20 years ago on the SAT Essay, where the researcher found that if you simply looked at the length of the essay, it correlated very well with how examiners scored an essay.

The essay that helped determine whether high school students in the U.S. go on to university was scored almost purely based on its length, ignoring facts or other pieces of information. Humans are great judges. What does it mean to be better? What does it mean to be good? Who would rather watch a YouTube video about cats or LLMs? If we look at two highly performing results, we see that these baby cat videos have 36 million views, versus this very good lecture by Karpathy, which has 4 million views. We say, “Cats are better than LLMs. Obviously, we should serve only cat content to people”. That’s generally what social media is like. Now we have this concept that views, accuracy, or all these metrics by themselves are not enough. They have flaws. You could probably get to that within your own reasoning.

If we talk about how we give instructions to people, we generally give pretty specific instructions for some tasks, but vaguer instructions for others. Who here has worked in fast food before? I used to work at McDonald’s. It was a character-building experience. When we get different instructions, some are specific and some are vague. That’s just human nature. Sometimes they contain the right amount of information, sometimes they don’t. For example, when I worked at McDonald’s, the amount of time you should cook the chicken nuggets was very specific. It was in the instruction manual. There were beepers that went off everywhere if you did not lift the chicken nuggets.

At the same time, you’d come into work and your manager would be like, mop the floor. If you hadn’t mopped the floor before, you ask a follow-up question or you make a mess. There are things in the middle. All these different things talk about the ambiguity of what instructions are actually like for humans. Who here is a manager? When doing performance reviews, it’s important to give specific feedback. We probably heard this in a lot of the engineering track talks about how you manage a good team. These are some of the questions that you might be asked in a McDonald’s performance review, or any fast food. I always got in trouble for giving too many swirls on the ice cream cone, and that’s probably why the machine was broken. It is what it is. We have specific things that you got feedback for, and then you’d get some review.

Metrics for LLMs, you can think of them as being fairly similar, not because LLMs are human or they’re becoming some human entity, but because it’s a good framework to think about how vague or specific you should be. Who’s got this kind of feedback on a performance review? It’s awful. What am I supposed to do with this? “You’re doing great”. Off to your next meeting.

One of the things that we do at Voiceflow, completely separate from LLMs, is that, for engineering, our performance reviews are quite specific, maybe too specific. I’ll go through all 13 categories and rate people based on our five different levels, and provide three to five examples based on the work. It might be a lot, but that’s the specific feedback that I think is appropriate to give as a manager to people within your team. Similar for large language models, if somebody just says there was a hallucination, I’ll be like, “Great. What am I supposed to do with this information?”

Lesson 2: Models as Systems

Let’s go on to the next lesson, models as systems. Who here has done general observability or platform work before, writing metrics, traces, or logs? Do the same things for large language model systems. You do this because you need to observe results and see how your system is actually behaving. You don’t just put something in production, close your eyes and run away. You could do that, but you’re not going to have a good day when you get paged. There are different types of observability events. You have logs, so typically like what happened. You have metrics, how much of that thing happened. A little less verbose. Then you have traces, trying to figure out why something happened. It goes through this level of granularity. You typically have metrics that are not as granular or not as verbose, going all the way down to traces.

Focusing on metrics, there’s different dimensions of defining metrics. You’re going to see a lot of these 2D graphs in the presentation. Let’s talk about two types of metrics for LLMs: we can have model degradation and we can have content moderation. For model degradation, you might have this chart on latency, saying, it takes a relatively low amount of time to figure out if a provider is failing, or your inference point is failing. If you want to do model response scoring, that might take a little bit longer, so still on the order of magnitude of maybe seconds.

Then you go to something that’s offline, like choosing the best model. This might be a weeks-long decision, or if you work at an enterprise, months. On the content moderation side, likewise, if you’re facing a spam attack, you probably want to do this in some online fashion. You don’t want to run a batch job next week to be like, there is somebody doing some weird stuff on our platform. You need to figure out what the purpose of your metric is, how much latency it’ll take, and how you define an action going forward.

Going into some more details on this. Here we have four types of applications. I’ve defined them as real-time metrics versus async. For something that’s real-time, you might want to know if there is a model degradation event happening. For example, events are timing out, or the model is just returning garbage; that happens. Whereas, if you’re doing model selection, you can do that on an async basis, either running evaluations or having philosophical debates. Same thing with guardrails. You can have guardrails run online or offline, in this case, real-time or async, and so forth. You can make a million metrics, you can define all of them, but at the end of the day, your metrics should help drive business decisions or technical decisions for the next three months. We talk about this in a mathematical sense, as an analogy: metrics should give you magnitude and direction. They shouldn’t just tell you, you need to do this thing. They should give you a sense of importance.
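One way to make the purpose/latency/action framing concrete is to attach those dimensions to every metric definition. The structure below is an illustrative sketch of that idea in Python, not Voiceflow’s internal schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Mode(Enum):
    REAL_TIME = "real-time"   # evaluated inline; can block, retry, or flag a response
    ASYNC = "async"           # evaluated in batch; feeds dashboards and longer-term decisions

@dataclass
class MicroMetric:
    name: str
    purpose: str                          # the business or technical decision it drives
    mode: Mode
    latency_budget_ms: int                # how long you can afford to spend computing it
    action: str                           # what happens when it fires (retry, flag, page, report)
    compute: Callable[[str, str], float]  # (prompt, response) -> score

# Example definition (scorer is a placeholder):
language_match = MicroMetric(
    name="response_language_match",
    purpose="Catch the model switching languages mid-conversation",
    mode=Mode.REAL_TIME,
    latency_budget_ms=50,
    action="retry the generation once, then flag for review",
    compute=lambda prompt, response: 1.0,
)
```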

Lesson 3: Build Metrics That Alert on User Issues

The lesson here is, let’s build metrics that alert on user issues, whether immediate or things that will hurt your product in the long term. If you’re building a successful product or a business, whether internal or external, and your product doesn’t work, users are going to leave. Going back to my example of, my LLM isn’t responding in the right language: we had this message from a few panicking users in our community, and we’re like, let’s verify this before the rollout spreads it to our enterprise customers. We could not reproduce it. We tried. We got one instance where it produced the wrong language response. We’re like, “We see it in the logs. Something weird is happening. We can’t get it to work, because when you look at your experiment, it’s going to break”.

Then we ended up putting in a monitor and a guardrail for this, saying: double check what language the model responded in and make sure it’s the intended language. We had a small model that would detect this within a few milliseconds, and then it would send a retry if it detected a difference. This generally worked. We chose the online method of doing a retry, rather than storing it for later and then trying to do something else. This goes back to the question of, are you going to be doing this online or offline, and is this going to be, in a programming sense, a synchronous or asynchronous call?
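A minimal sketch of that kind of online guardrail looks like this: detect the language of the response, compare it to the expected language, and retry once on mismatch. The langdetect call stands in for whatever lightweight detector you run, and the generation function is a placeholder; this is not Voiceflow’s actual implementation.

```python
from langdetect import detect  # lightweight language detector; any fast classifier would do

def generate(prompt: str) -> str:
    # Placeholder: call your LLM here.
    raise NotImplementedError

def generate_with_language_guardrail(prompt: str, expected_lang: str, max_retries: int = 1) -> str:
    """Retry generation when the detected response language differs from the expected one."""
    response = generate(prompt)
    attempts = 0
    while detect(response) != expected_lang and attempts < max_retries:
        # Mismatch detected: retry, reminding the model of the target language.
        response = generate(f"{prompt}\n\nRespond only in '{expected_lang}'.")
        attempts += 1
    return response  # if it still mismatches after retries, log/flag it upstream
```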

If you’re doing content moderation and somebody says something silly into your platform, you might want to not respond, or you might want to flag. It really depends on your business. When you’re trying to make this decision of how is the metric going to affect your business, just go through the scenario, backpropagate through, calculate the gradients of what’s going to happen.

Generally, when you’re building a product, whether internal or external, you want to get your customers’ trust. First, you have to build a product that works, sometimes. Next, you want to do nice things for your customer, so sales 101. You live on this island of customer trust if people are buying your product, and if things break, for example, models responding in the wrong language, you lose trust. Your product is no longer working in your customers’ eyes, and your customers are mad that their customers are complaining. You can do things. You can refund what’s happening. You’re like, “Sorry. I’m going to give you your money back for the pain we caused you”. You could fix the issue, for example, adding an auto retry.

Then you can write an RCA, a root cause analysis to say, “This is why this broke. We’re communicating to you that we fixed it and it shouldn’t happen again”. Whether or not this actually gets you back to the island of customer trust depends on your customer, but you should use this metric and your process of engineering to actually go through and get back the customer’s trust and make sure that your product works as it’s supposed to work. The more complex the systems you’re building, the more complex the observability is. This is something you have to be very aware of. Just because you build a very complex LLM pipeline and use all the most modern forms of RAG doesn’t mean you can keep track of all of it. It’s going to be harder to debug and figure out what’s going wrong. Going through some simple RAG metrics, you can break RAG down into two components: the retrieval portion and then the generation portion.

For the retrieval part, you want to make sure you have the right context. You have the correct information. It’s relevant. You don’t have too much superfluous information that’s going to damage your generation, with some kind of optimization of precision and recall, if you know what the ranking should be. Then, on the generation side, there are different ways to measure that. Some sample ones are correct formatting, the right answer, and no additional information. These are just a few, but recognize that RAG, because of its multiple components, will have different metrics for different parts. If we get more specific, in order you can have accuracy; faithfulness, which is a retrieval metric; correct length; correct persona; or something super specific, like your LLM never saying delve, even though that seems to be very hard.
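As an illustration, a couple of these checks can be computed in a few lines. The labeled relevant chunks, word limit, and banned-word list below are assumptions for the sketch, not metrics from the talk.

```python
def retrieval_precision_recall(retrieved_ids: list[str], relevant_ids: set[str]) -> tuple[float, float]:
    """Precision/recall of retrieved chunk IDs against a labeled set of relevant chunks."""
    retrieved = set(retrieved_ids)
    hits = len(retrieved & relevant_ids)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant_ids) if relevant_ids else 0.0
    return precision, recall

def correct_format(answer: str, max_words: int = 120, banned_words: tuple[str, ...] = ("delve",)) -> bool:
    """Generation-side check: acceptable length and no banned filler words."""
    words = answer.lower().split()
    return len(words) <= max_words and not any(b in words for b in banned_words)

# Usage sketch:
# p, r = retrieval_precision_recall(["doc1", "doc7"], {"doc1", "doc3"})
# ok = correct_format(llm_answer)
```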

Lesson 4: Focus on Business Metrics

We’re on lesson 4 now. We’ve come up with some metrics. Hopefully, in your head, you’re thinking about your use case of what you’re building, but at the end of the day, it needs to bring business value. For example, what is the cost of a not safe for work response from your LLM? Everybody’s business is going to be different. Depending on who you’re selling to, depending on the context, it’s all going to be different. Your business team should figure this out. Likewise, if you’re providing bad legal advice, you’re building a legal LLM, and it says, go see your neighbor. It’s not good. You need to calculate this as a dollar cost and figure out, how much are you going to invest into metrics, how much are you going to pay extra for this? How much latency might you incur to do online checks? What is the cost of a bad translation, from our earlier example? The reason why we build these metrics and we use LLMs at the end of the day is we want to save some human time.

All the automation that’s being built, all these fancy applications, there’s some way to save human time. Unless you’re building a social media app, then you want people to be stuck on your app for as long as they can. You’re like, “I don’t fully know the business. This is not my job. I’m a developer. I write code”. First of all, no, understand your business. Understand what you’re building. Understand the problem you’re solving. Second of all, it’s fair to ask your business team to do most of the work, otherwise, why are they around? There’s a lot of things that business teams should be doing. They should be defining use cases, talking about how things integrate with their product, measuring ROI, choosing the right model. It goes on for a long time where you need somebody from the product and business side to tell you how to relatively prioritize things. You should be part of that conversation, but metrics are not just a technical thing that should be built.

Especially in an LLM world where LLMs are being put into all sorts of products, make sure that the business team stops and thinks about, how are we defining these things for our product? If you’re finding that you’re doing these things as a technical person, just be aware that the job is quite expensive. Metrics should be retired when they’re no longer useful, or you find a different way to solve it. As models become better, that language problem that I indicated might no longer occur. Or we do the calculation and say, these few users who are non-English users are no longer our target customer, and if they have a bad time, we’ll just absorb the cost of that, whatever that is. Make sure that your metrics align with your current goals, what you’ve learned. Because if you’re launching an LLM application into production, you will learn many things. Make sure you have a cleanup process that handles this.

Lesson 5: Crawl, Walk, Run

Finally, to give some more actionable tips. Don’t jump into the deep end right away. Follow this crawl, walk, run methodology. Again, going back to the metrics approach, you want to make sure you understand use cases, and you want to make sure you have the technical teams. That’s generally how I think about measuring any kind of LLM maturity, and likewise LLM metric maturity. Talking about crawl: the prerequisites before you implement these metrics. You want to know what you’re building and why you’re building it. You want to have datasets for evaluations. If you don’t, please go back and make a few. You want to have some basic scoring criteria, and you want to have some basic logging. You want to be able to track your system and know what’s going on, and be able to know generally what’s right and wrong. Some sample metrics here are things like moderation or maybe some kind of accuracy metric based on your evaluation datasets. Again, they’re not perfect, but they’re a great place to start. Next, to walk.

At this point, you should know the challenges of your system, where the skeletons are, what’s working, what’s not. You should have a clear hypothesis on how to fix them, or at least how to dig deeper into these problems. You should have some feedback loop for addressing these kinds of questions. How can I test my hypothesis, gather feedback data, whether through logs or through users, and address some of these concerns? You should have done some previous metrics attempts before. These metrics get a lot more specific. Some of them might be format following. You could do some recall metric, in this case, normalized discounted cumulative gain (NDCG), more of a retrieval metric; a small sketch follows below. You can do things like answer consistency to figure out what’s the right temperature to set and what the tradeoffs are, or you can do language detection. You can see, these are getting more specific. You need a little bit more infrastructure to actually best leverage these. We get into run.
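For reference, normalized discounted cumulative gain can be computed in a few lines of Python; the relevance labels in this sketch are illustrative and would come from your own evaluation judgments.

```python
import math

def dcg(relevances: list[float]) -> float:
    """Discounted cumulative gain for a ranked list of relevance labels."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances: list[float]) -> float:
    """NDCG: DCG of the actual ranking divided by the DCG of the ideal ranking."""
    ideal_dcg = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal_dcg if ideal_dcg > 0 else 0.0

# Example: the most relevant document (label 3.0) was ranked third.
print(round(ndcg([1.0, 0.0, 3.0, 2.0]), 3))
```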

At this point, you should be up on the stage and talking about the cool things that you’re doing. You have a lot of good automation of what you’ve built in-house. You’re doing auto prompt tuning. You have specific goals mapped to your metrics. You have a lot of good data, probably to fine-tune. Again, that’s another business decision. Then your metrics are whatever you want them to be. You understand your system. You understand your product. Figure out what those micro metrics are.

Summary

The five lessons that we covered, we’ve noticed that single metrics can be flawed. Hopefully, from my potato examples, it becomes clear. We know that models are not just LLMs, they’re broader systems, especially as you introduce complexity in various ways, whether things like RAG, tool use, or whatever it might be. You want to build metrics that actually alert you on user issues and things that affect the business, and align them with future business direction. How am I improving my product using LLMs? Don’t overcomplicate it: go through the crawl, walk, run methodology. The worst thing that you can do is make a giant dashboard with 20 metrics, and it’s not helping you do anything. Start off with one metric, build the infrastructure, build the confidence, and deploy to production.

Questions and Answers

Participant 1: Your example of switching from German to English resonated with me. I feel like I’ve seen things like this in production, where it’s obvious in retrospect that we should have had a test for that, and any human would have seen it as a problem; clearly, the customer did. What I’m not clear on is how to write robust tests for unforeseen behavior. Any hints on not just a specific test for, don’t switch languages, but like, what’s the broad type of issue, and how do you write a test for that?

Linkov: I think it goes back to any foreshadowing of issues. You try to plan as much as you can, but at the end of the day, production data is the best data. This is where good software practices make sense. Run beta tests, onboard more forgiving users first, feature flag things out. Figure out what parts of your system are going to be most affected, what are weird behaviors. At the end of the day, it’s hard to see, especially if you’re running multiple models, especially if you’re a platform where people are doing so many different things. If your use case is more clearly defined, then it’s a little bit easier to ideate and think through it. I wouldn’t beat yourself up too much, but have those good release patterns and just see how your users break your product.

Luu: What about getting product managers to help you write tests?

Participant 2: I think the major difference here is that because AI models are not deterministic, you cannot treat LLM measurements as we do with other software systems. What are the major methods or metrics that you have used to make this better, even though it won’t be 100%, for sure?

Linkov: There’s a lot of techniques to do so. One recent one is constrained decoding, where you can provide a list of possible options to the model to actually produce a result. It’s still an active research area, constrained decoding and many other ones, to actually try to make it a little bit more deterministic. I think there’s also the question of, should you be using an LLM? If an LLM hallucinates or makes an error, is non-deterministic 1% of the time, and you’re used to having 99.99% accuracy or consistency, an LLM is probably not the right model for that. There’s been a lot of advancements in other models, but they’ve been overshadowed by LLMs. I think this is the question of determining, am I going to use an LLM? Am I going to use a more standard ML model, some kind of encoder model to do a task? I’m going to define a manual workflow, and the LLM just helps guide me through that workflow. These are all decisions that you can be making. Hopefully, we’re past the just throw everything into an LLM and pray, in production.

Participant 3: You mentioned synthetic data; I wonder what Voiceflow’s take is on using it. If you do use it, how does it relate to the micro metrics that you mentioned?

Linkov: I think synthetic data is part of that general process of evaluating and generating examples. This is something where, when we’re writing test cases, it’s a really good way to expand beyond like, I’ll write some by hand or write some augmented ones, and then figure out where the extra edge conditions are. Then you can verify how often this metric is being triggered. I think there are different ways to use it. We primarily use it in the testing stage, just to give us more variety, because it takes a lot of time to write good tests.

Participant 4: In your particular use case, how do you handle the balance between testing for too many things and just bearing the cost of being wrong or making a mistake? Is there a number of tests? Do you say, I’m going to test but it’s not going to be more than one second or something, or is there like a dollar amount? How did you handle it in your particular use cases?

Linkov: I think we have a few tests that are run online, things like content moderation, things like this language test. A few other ones as well. I think generally, you’re never going to get it right 100% of the time. We prefer to launch with the techniques I mentioned earlier. Get into production. Get into users’ hands. This is part of digital transformation in general. In large companies the whole agile process and feature flags are still making their way through. It’s still really important to have good staging environments, paid environments, going through all of this together. Recommendation is, test more. Don’t ignore testing. Have good evidence where somebody says, how do you know this will work? Don’t just say, I made it up, or, I think it’s going to work. Have some evidence to showcase that. Come up with good test cases to get most of the coverage.

At the end of the day, your product is useless unless it’s in a customer’s hands. Products that die in prototyping, you don’t get a thumbs up for that. You need to make it into production. Every organization has a different process for doing risk assessment and everything else, so it really depends on that. Write some baseline tests so you’re confident, as somebody who owns this service or this product, to say: I did enough, given the tradeoffs of shipping quickly versus making sure it’s definitely going to work.

See more presentations with transcripts



MongoDB, Inc. (NASDAQ:MDB) Shares Bought by Robeco Institutional Asset Management B.V.

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Robeco Institutional Asset Management B.V. lifted its position in shares of MongoDB, Inc. (NASDAQ:MDB - Free Report) by 13.2% in the first quarter, according to the company in its most recent 13F filing with the Securities & Exchange Commission. The fund owned 19,449 shares of the company’s stock after buying an additional 2,270 shares during the period. Robeco Institutional Asset Management B.V.’s holdings in MongoDB were worth $3,411,000 as of its most recent SEC filing.

A number of other institutional investors and hedge funds have also made changes to their positions in MDB. Strategic Investment Solutions Inc. IL purchased a new position in MongoDB in the fourth quarter valued at approximately $29,000. Coppell Advisory Solutions LLC increased its position in shares of MongoDB by 364.0% in the fourth quarter. Coppell Advisory Solutions LLC now owns 232 shares of the company’s stock worth $54,000 after acquiring an additional 182 shares in the last quarter. Smartleaf Asset Management LLC increased its position in shares of MongoDB by 56.8% in the fourth quarter. Smartleaf Asset Management LLC now owns 370 shares of the company’s stock worth $87,000 after acquiring an additional 134 shares in the last quarter. J.Safra Asset Management Corp raised its stake in shares of MongoDB by 72.0% in the 4th quarter. J.Safra Asset Management Corp now owns 387 shares of the company’s stock valued at $91,000 after acquiring an additional 162 shares during the period. Finally, Aster Capital Management DIFC Ltd acquired a new position in shares of MongoDB during the 4th quarter valued at $97,000. 89.29% of the stock is currently owned by institutional investors and hedge funds.

MongoDB Price Performance

MongoDB stock opened at $209.99 on Tuesday. MongoDB, Inc. has a 12 month low of $140.78 and a 12 month high of $370.00. The company has a fifty day moving average price of $191.99 and a two-hundred day moving average price of $216.25. The company has a market cap of $17.16 billion, a PE ratio of -184.20 and a beta of 1.40.

MongoDB (NASDAQ:MDB - Get Free Report) last posted its quarterly earnings results on Wednesday, June 4th. The company reported $1.00 EPS for the quarter, beating the consensus estimate of $0.65 by $0.35. MongoDB had a negative net margin of 4.09% and a negative return on equity of 3.16%. The business had revenue of $549.01 million during the quarter, compared to analyst estimates of $527.49 million. During the same period in the previous year, the business posted $0.51 EPS. The company’s quarterly revenue was up 21.8% on a year-over-year basis. Equities research analysts anticipate that MongoDB, Inc. will post -1.78 earnings per share for the current year.

Analyst Upgrades and Downgrades

MDB has been the subject of several recent research reports. Canaccord Genuity Group lowered their price target on MongoDB from $385.00 to $320.00 and set a “buy” rating for the company in a report on Thursday, March 6th. Oppenheimer lowered their target price on MongoDB from $400.00 to $330.00 and set an “outperform” rating for the company in a research note on Thursday, March 6th. Wedbush reissued an “outperform” rating and set a $300.00 price target on shares of MongoDB in a research report on Thursday, June 5th. Stifel Nicolaus decreased their price target on MongoDB from $340.00 to $275.00 and set a “buy” rating for the company in a report on Friday, April 11th. Finally, Morgan Stanley dropped their price objective on shares of MongoDB from $315.00 to $235.00 and set an “overweight” rating on the stock in a research note on Wednesday, April 16th. Eight equities research analysts have rated the stock with a hold rating, twenty-five have issued a buy rating and one has assigned a strong buy rating to the stock. According to MarketBeat.com, the stock has a consensus rating of “Moderate Buy” and an average target price of $282.47.

Check Out Our Latest Stock Analysis on MongoDB

Insider Buying and Selling

In other news, Director Dwight A. Merriman sold 2,000 shares of the stock in a transaction that occurred on Thursday, June 5th. The stock was sold at an average price of $234.00, for a total transaction of $468,000.00. Following the sale, the director directly owned 1,107,006 shares in the company, valued at approximately $259,039,404. This represents a 0.18% decrease in their position. The sale was disclosed in a document filed with the Securities & Exchange Commission, which can be accessed through the SEC website. Also, insider Cedric Pech sold 1,690 shares of the firm’s stock in a transaction that occurred on Wednesday, April 2nd. The stock was sold at an average price of $173.26, for a total value of $292,809.40. Following the completion of the sale, the insider directly owned 57,634 shares in the company, valued at approximately $9,985,666.84. This trade represents a 2.85% decrease in their position. The disclosure for this sale can be found here. Insiders sold a total of 50,027 shares of company stock valued at $10,371,435 over the last 90 days. 3.10% of the stock is owned by corporate insiders.

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

See Also

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.


Article originally posted on mongodb google news. Visit mongodb google news



MongoDB, Inc. (NASDAQ:MDB) Shares Sold by Oppenheimer & Co. Inc. – Defense World

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Oppenheimer & Co. Inc. reduced its position in MongoDB, Inc. (NASDAQ:MDB - Free Report) by 49.8% during the 1st quarter, according to its most recent filing with the Securities and Exchange Commission (SEC). The institutional investor owned 4,614 shares of the company’s stock after selling 4,579 shares during the period. Oppenheimer & Co. Inc.’s holdings in MongoDB were worth $809,000 as of its most recent SEC filing.

Other institutional investors have also modified their holdings of the company. Vanguard Group Inc. lifted its stake in shares of MongoDB by 0.3% during the fourth quarter. Vanguard Group Inc. now owns 7,328,745 shares of the company’s stock worth $1,706,205,000 after purchasing an additional 23,942 shares in the last quarter. Franklin Resources Inc. boosted its holdings in MongoDB by 9.7% in the 4th quarter. Franklin Resources Inc. now owns 2,054,888 shares of the company’s stock worth $478,398,000 after acquiring an additional 181,962 shares during the period. Geode Capital Management LLC boosted its holdings in MongoDB by 1.8% in the 4th quarter. Geode Capital Management LLC now owns 1,252,142 shares of the company’s stock worth $290,987,000 after acquiring an additional 22,106 shares during the period. First Trust Advisors LP raised its stake in shares of MongoDB by 12.6% during the fourth quarter. First Trust Advisors LP now owns 854,906 shares of the company’s stock valued at $199,031,000 after purchasing an additional 95,893 shares during the period. Finally, Norges Bank acquired a new position in shares of MongoDB during the fourth quarter worth $189,584,000. Hedge funds and other institutional investors own 89.29% of the company’s stock.

MongoDB Stock Performance

MongoDB stock opened at $209.92 on Monday. MongoDB, Inc. has a one year low of $140.78 and a one year high of $370.00. The company has a 50-day simple moving average of $190.83 and a 200-day simple moving average of $216.65. The firm has a market capitalization of $17.15 billion, a P/E ratio of -184.14 and a beta of 1.40.


MongoDB (NASDAQ:MDB - Get Free Report) last posted its earnings results on Wednesday, June 4th. The company reported $1.00 earnings per share (EPS) for the quarter, beating analysts’ consensus estimates of $0.65 by $0.35. MongoDB had a negative net margin of 4.09% and a negative return on equity of 3.16%. The firm had revenue of $549.01 million during the quarter, compared to analyst estimates of $527.49 million. During the same period in the previous year, the business posted $0.51 EPS. MongoDB’s revenue was up 21.8% on a year-over-year basis. As a group, sell-side analysts forecast that MongoDB, Inc. will post -1.78 earnings per share for the current year.

Insider Activity at MongoDB

In other news, insider Cedric Pech sold 1,690 shares of the stock in a transaction on Wednesday, April 2nd. The stock was sold at an average price of $173.26, for a total transaction of $292,809.40. Following the completion of the sale, the insider now owns 57,634 shares in the company, valued at approximately $9,985,666.84. This represents a 2.85% decrease in their position. The sale was disclosed in a legal filing with the Securities & Exchange Commission, which is accessible through this hyperlink. Also, Director Dwight A. Merriman sold 820 shares of the business’s stock in a transaction on Wednesday, June 25th. The stock was sold at an average price of $210.84, for a total value of $172,888.80. Following the sale, the director now directly owns 1,106,186 shares in the company, valued at $233,228,256.24. This trade represents a 0.07% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders have sold 51,202 shares of company stock worth $10,576,696 over the last quarter. 3.10% of the stock is currently owned by company insiders.

Analyst Upgrades and Downgrades

Several equities research analysts recently issued reports on MDB shares. Stifel Nicolaus dropped their price target on shares of MongoDB from $340.00 to $275.00 and set a “buy” rating on the stock in a research note on Friday, April 11th. Monness Crespi & Hardt raised MongoDB from a “neutral” rating to a “buy” rating and set a $295.00 target price on the stock in a research report on Thursday, June 5th. Truist Financial cut their price target on MongoDB from $300.00 to $275.00 and set a “buy” rating on the stock in a research note on Monday, March 31st. Scotiabank increased their price target on MongoDB from $160.00 to $230.00 and gave the company a “sector perform” rating in a report on Thursday, June 5th. Finally, Redburn Atlantic upgraded MongoDB from a “sell” rating to a “neutral” rating and set a $170.00 price objective on the stock in a report on Thursday, April 17th. Eight investment analysts have rated the stock with a hold rating, twenty-five have assigned a buy rating and one has issued a strong buy rating to the stock. According to MarketBeat.com, the stock presently has an average rating of “Moderate Buy” and a consensus price target of $282.47.

Check Out Our Latest Stock Report on MDB

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

See Also

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB - Free Report).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)




Article originally posted on mongodb google news. Visit mongodb google news



Why AI needs a new approach to unstructured, fast-changing data: MongoDB Exec

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


As Indian enterprises push forward with digital transformation, data modernization has become a central focus. Companies are moving to cloud-native systems, using real-time data, and beginning to build AI-driven applications. 

MongoDB is working across the ecosystem, from startups and software vendors to large enterprises, to support this shift.  

In this conversation, Sachin Chawla, VP for India & ASEAN at MongoDB, talks about how Indian enterprises are approaching data transformation, the challenges they face, and how MongoDB is adapting its strategy to meet these changing needs. Edited Excerpts:  

What’s fundamentally changing in how Indian enterprises approach data and digital transformation, and how is that shaping your strategy in the region? 

India is going through a significant phase in technology. While we’ll get to the challenges shortly, it’s worth highlighting the strong momentum we’re seeing in modern application development across the ecosystem. 

This includes the startup space, where innovation is active, from early-stage companies like RFPIO to large-scale players like Zepto and Zomato. There’s also a robust ISV ecosystem here, which operates quite differently compared to markets like the US or EMEA. For instance, many Indian banks rely heavily on ISVs, often referred to as “bank tech”, for their software needs. Companies like CRMnext, Intellect AI, and Lentra are key examples, and many of them use MongoDB. 

Beyond startups and ISVs, significant digital transformation is happening in large enterprises as well. Take Tata, for example, the Tata Neu app runs on MongoDB. So overall, there’s a lot of progress and activity in the ecosystem. 

Now, on to the challenges. Broadly, they’re similar to those faced globally and can be grouped into three areas: first, improving developer productivity. Every organization is focused on how to help developers move faster and work more efficiently. Second, building modern AI applications. There’s growing pressure, both from the ecosystem and from leadership, to deliver AI-driven solutions. 

Third, modernizing legacy applications. Many existing systems were built over years and aren’t designed to meet today’s demands. Users expect immediate, responsive digital experiences, and older systems can’t keep up. These are the key priorities: boosting developer productivity, adopting AI, and modernizing legacy systems. 

What are the biggest misconceptions Indian enterprises still have about modern databases, and how do these hold back their digital transformation? 

First, some organizations treat modernization as simply moving their existing systems from on-premises to the cloud. But lifting and shifting the stack without changing the application, the underlying infrastructure, or the database doesn’t actually modernize anything. It’s still the same application, just running in a different environment, and it won’t deliver new value just by being in the cloud. 

Second, there’s the idea of using the best purpose-built database for each use case. In practice, this often leads to an overload of different databases within one organization. While each one might be optimal for its specific function, managing a large variety becomes a challenge. You end up spending more time and resources maintaining the system than actually innovating with it. 

Third, when it comes to AI, many organizations lack a clear strategy. They often start building without clearly defined use cases or objectives, just reacting to pressure to “do something with AI.” Without a focused plan, it’s hard to deliver meaningful outcomes. 

Which industries in India are making the boldest or most unexpected moves in digital transformation or data modernization, and why? 

Every sector is adopting AI in its own way. Tech startups and digital-native companies tend to make faster, more visible moves, but even large enterprises are accelerating their efforts. 

For example, we recently partnered with Intellect AI, an independent software vendor serving the global banking sector. They aimed to lead the way in building a multi-agent platform for banking clients to automate and augment operations in areas like compliance and risk, critical functions for many institutions. 

We helped them develop this platform using MongoDB and MongoDB’s AI vector search. The result is called Purple Fabric, and it’s publicly available. This platform is now driving automation and augmentation in compliance and risk management. 

One of their major clients is a sovereign fund managing $1.5 trillion in assets across around 9,000 companies. Using Purple Fabric, they automated ASC compliance processes with up to 90% accuracy. 

This example shows that while enterprises may seem slower, companies like Intellect AI are enabling them to move quickly by building powerful tools tailored for complex environments. 

What recurring data architecture issues do you see in enterprise AI projects, and how does your company help address them? 

When you look at AI applications, it’s important to understand that the data used to build them is mostly unstructured. This data is often messy, comes from various sources, and appears in different formats such as video, audio, and text. Much of it is interconnected, and the overall volume is massive. Additionally, the data changes rapidly. 

These are the three core characteristics of AI data: it’s unstructured, high in volume, and constantly evolving. As a result, if you look at an AI application a year after it’s built without any updates, it’s likely already outdated. Continuous updates are essential. 

MongoDB stores data in a document format, unlike traditional systems that use rows and columns. Trying to store large volumes of unstructured and fast-changing data in a tabular format becomes unmanageable. You’d end up with thousands of tables, all linked in complex ways. Any change in one table could affect hundreds or thousands of others, making maintenance difficult. 
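As a rough illustration of the document model described above, the pymongo sketch below stores one nested, evolving product record in a single document. The connection string, collection, and field names are placeholders for illustration, not MongoDB's or any customer's actual schema.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
products = client["catalog"]["products"]

# One self-contained document: nested attributes, arrays, and media references
# that would otherwise be spread across many joined tables.
products.insert_one({
    "sku": "SKU-12345",
    "title": "Noise-cancelling headphones",
    "attributes": {"color": "black", "battery_hours": 30},
    "reviews": [{"user": "a11", "rating": 5, "text": "Great sound"}],
    "media": [{"type": "video", "url": "https://example.com/demo.mp4"}],
})

# Adding a new field later requires no schema migration across related tables.
products.update_one({"sku": "SKU-12345"}, {"$set": {"embedding_version": "v2"}})
```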

This is why many modern applications are built on MongoDB rather than on legacy systems. For example, Intellect AI uses MongoDB, as does DarwinBox, which uses AI to power talent search queries like finding the percentage of top-rated employees. Previously, this kind of semantic search would take much longer. 

Another example is Ubuy, an e-commerce platform with around 300 million products. They switched from a SQL database to MongoDB with vector search. Search times went from several seconds to milliseconds, enabling efficient semantic search. 

RFP.io is another case. It uses vector search to process and understand RFP documents, identifying which sections relate to topics like security or disaster recovery. This simplifies the process of responding to RFPs. 

As enterprises shift to unstructured data, vector search, and real-time AI, how is MongoDB adapting, and what key industry gaps still remain? 

The first step is collecting and using data in real time. For that, you need the right database. A document model is a natural fit for the scale and structure of this data. 

Once you have the data, the next step is using it effectively. That starts with full-text search, similar to how you search on Google. Most applications today rely on this kind of search functionality. 

But if you’re building AI applications, you also need to vectorize the data. Without vectorization, you can’t perform semantic searches or build meaningful AI features. 
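As a rough illustration of that step, the hedged sketch below embeds a text field at write time and stores the resulting vector on the same document. The embedding model is an arbitrary open-source choice made for the example, not the embedding capability the speaker refers to.

```python
# Hedged sketch: embed a text field at write time and store the vector on the
# same document so it can later be served by a vector index. The model below
# is an arbitrary open-source choice for illustration only.
from pymongo import MongoClient
from sentence_transformers import SentenceTransformer

client = MongoClient("mongodb://localhost:27017")         # hypothetical deployment
products = client["demo"]["products"]
model = SentenceTransformer("all-MiniLM-L6-v2")            # produces 384-dim vectors

doc = {"sku": "A-1002", "description": "Noise-cancelling wireless headphones"}
doc["description_vector"] = model.encode(doc["description"]).tolist()
products.insert_one(doc)                                   # vector stored with the data
```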

At this point, companies usually face a choice. They often have data spread across multiple databases. To enable full-text search, they might add a solution like Elasticsearch. For semantic search, they bring in a vector database such as Pinecone. If they want to train or fine-tune models using internal data, they also need an embedding model. So now they’re managing a database, a full-text search engine, a vector search system, and an embedding model, each a separate component. 

The integration work required to get all these systems to operate together can consume a large amount of development and management time, pulling focus away from innovation. 

In contrast, our platform simplifies this. It uses a single document database to store all types of data. It includes Atlas Search for full-text search, built-in vector search, and now, with our acquisition of Voyage AI, integrated embedding capabilities. You don’t need separate systems for each function.

With everything in one place, there’s no need for complex integration. You can run full-text and semantic search, or combine them as hybrid search, all on the same platform. This reduces cost, simplifies management, and frees up time for innovation. Customers tell us this is the biggest advantage: they don’t need to stitch multiple tools together, which can be very hard to manage.
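A hedged sketch of what this looks like in practice follows: lexical search through Atlas Search and semantic search through Atlas Vector Search, both run against the same collection. The connection string and index names are placeholders, and both stages assume the corresponding Atlas Search and Vector Search indexes already exist.

```python
# Hedged sketch: lexical and semantic queries against the same collection.
# "default" and "vector_index" are assumed Atlas Search / Vector Search index
# names; the connection string is a placeholder.
from pymongo import MongoClient
from sentence_transformers import SentenceTransformer

products = MongoClient("mongodb+srv://cluster0.example.mongodb.net")["demo"]["products"]
model = SentenceTransformer("all-MiniLM-L6-v2")

# Full-text (lexical) search via Atlas Search.
lexical = list(products.aggregate([
    {"$search": {"index": "default",
                 "text": {"query": "wireless headphones", "path": "description"}}},
    {"$limit": 10},
]))

# Semantic search via Atlas Vector Search over stored embeddings.
query_vector = model.encode("headphones for noisy commutes").tolist()
semantic = list(products.aggregate([
    {"$vectorSearch": {"index": "vector_index", "path": "description_vector",
                       "queryVector": query_vector,
                       "numCandidates": 100, "limit": 10}},
]))

# Hybrid ranking can be done in the application, e.g. by merging the two
# result lists with reciprocal rank fusion over document _ids.
```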

What’s next for MongoDB in India? Are you focusing on AI, edge deployments, cloud-native capabilities, or something else?

Our focus will remain on three main areas. First, we’ll continue working with developers to help them improve their productivity. Second, we’ll collaborate across the ecosystem and with enterprises to build modern applications. Third, and most significantly, we’ll support large enterprises as they modernize their applications, whether by creating new ones or upgrading legacy systems. This includes helping them reduce technical debt, move away from outdated applications and databases like Oracle and SQL, and transition to more modern architectures that align with their goals. These are our three key priorities. 

Where do you see AI heading in the data modernization space over the next three to five years? 

In my view, we’re at a stage similar to the 1960s when computers and operating systems were just emerging. I see large language models (LLMs) as the new operating systems. We’re in the early phase, and what comes next is the development of applications built on top of them. As this evolves, more advanced and diverse applications will emerge. 

Building applications is becoming much easier. For example, there’s a concept called vibe coding, where even young children, eight or nine years old, can create apps. If a computer can guide you step by step, almost anyone can build one. That’s where we’re heading: a world where millions of applications can be developed quickly and easily.

We see ourselves as a natural platform for these applications because we make it simple to store data. So, over the next few years, we expect a surge in development activity. A lot is going to change, and I think we’ll all be surprised by just how much. 


Article originally posted on mongodb google news. Visit mongodb google news



Google DeepMind Unveils AlphaGenome: A Unified AI Model for High-Resolution Genome Interpretation

MMS Founder
MMS Robert Krzaczynski

Article originally posted on InfoQ. Visit InfoQ

Google DeepMind has announced the release of AlphaGenome, a new AI model designed to predict how genetic variants affect gene regulation across the entire genome. It represents a significant advancement in computational genomics by integrating long-range sequence context with base-pair resolution in a single, general-purpose architecture.

AlphaGenome processes up to 1 million base-pairs of DNA at once and outputs high-resolution predictions across thousands of molecular modalities, including gene expression, chromatin accessibility, transcription start sites, RNA splicing, and protein binding. It allows researchers to evaluate the effects of both common and rare variants, not just in protein-coding regions, but in the far more complex non-coding regulatory regions that constitute 98% of the human genome.

Technically, AlphaGenome combines convolutional neural networks (CNNs) to detect local sequence motifs and transformers to model long-range interactions, all trained on rich multi-omic datasets from ENCODE, GTEx, 4D Nucleome, and FANTOM5. The architecture achieves state-of-the-art performance across a broad range of genomic benchmarks, outperforming task-specific models in 24 out of 26 evaluations of variant effect prediction.
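AlphaGenome’s actual architecture is far larger than anything that fits here, but the toy sketch below, built from standard PyTorch components with invented dimensions, illustrates the general pattern the article describes: convolutions detect local motifs, a transformer encoder captures longer-range interactions, and per-position heads emit several output tracks. It is a sketch of the pattern only, not DeepMind’s implementation.

```python
# Toy sketch (assumptions throughout, not DeepMind's code) of the CNN-plus-
# transformer pattern: convolutions for local sequence motifs, self-attention
# for longer-range context, a linear head for multiple molecular tracks.
import torch
import torch.nn as nn

class TinyGenomeModel(nn.Module):
    def __init__(self, n_tracks=8, d_model=128):
        super().__init__()
        self.conv = nn.Sequential(                # local motif detectors
            nn.Conv1d(4, d_model, kernel_size=15, padding=7),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=8),          # coarsen the sequence axis
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, batch_first=True)
        self.attn = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_tracks)  # e.g. expression, accessibility

    def forward(self, one_hot_dna):               # shape (batch, 4, length)
        x = self.conv(one_hot_dna)                # (batch, d_model, length / 8)
        x = self.attn(x.transpose(1, 2))          # (batch, length / 8, d_model)
        return self.head(x)                       # per-position track predictions

seq = torch.randn(1, 4, 4096)                     # stand-in for one-hot DNA
print(TinyGenomeModel()(seq).shape)               # torch.Size([1, 512, 8])
```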

A notable innovation is AlphaGenome’s ability to directly model RNA splice junctions, a feature crucial for understanding many genetic diseases caused by splicing errors. The model can also contrast mutated and reference sequences to quantify the regulatory impact of variants across tissues and cell types — a key capability for studying disease-associated loci and interpreting genome-wide association studies (GWAS).

Training efficiency was also improved: a full AlphaGenome model was trained in just four hours on TPUs, using half the compute budget of DeepMind’s earlier Enformer model, thanks to optimized architecture and data pipelines.

The model is now available via the AlphaGenome API for non-commercial research use, enabling scientists to generate functional hypotheses at scale without needing to combine disparate tools or models. DeepMind has indicated plans for further extension to new species, tasks, and fine-tuned clinical applications.

This release also aligns with a broader conversation around the interpretability and emotional context of AI in medicine. As Graevka Suvorov, an AI alignment researcher, commented:

The true frontier for MedGemma isn’t just diagnostic accuracy, but the informational and psychological state it creates in the patient. A diagnosis without context is a data point that can create fear. A diagnosis delivered with clarity is the first step to healing. An AI with a true ‘informational bedside manner’—one that understands it’s not just treating an image, but a person’s entire reality—is the next real leap in AGI.

AlphaGenome pushes the field closer to that vision, enabling deeper, more accurate interpretations of the genome and offering a unified model for understanding biology at the sequence level.

About the Author



MongoDB’s Strategic Pivot: Securing Its Future in High-Security Cloud Databases – AInvest

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ: MDB) has long been a leader in NoSQL databases, but its recent strategic moves—securing inclusion in the Russell Midcap Value Index and pursuing FedRAMP High authorization—signal a bold pivot toward positioning itself as a high-security cloud database provider for government and regulated industries. This repositioning could unlock significant growth opportunities while attracting new investors.

The Russell Midcap Value Inclusion: A Strategic Rebranding

MongoDB’s addition to the Russell Midcap Value Index in early 2024 marks a pivotal shift in its valuation narrative. While MDB has historically been classified as a growth stock (its price-to-sales ratio remains elevated at ~10x), its inclusion in a value-oriented index reflects a maturing business model and improving profitability.

This reclassification is no accident. MongoDB has prioritized margin expansion and recurring revenue streams, with its flagship Atlas cloud service now contributing 70% of total revenue (up from 66% in 2024). The Russell Midcap Value Index inclusion could attract investors seeking stable, cash-flow positive companies in the tech sector—a demographic MDB has historically struggled to engage.

FedRAMP High: Unlocking the $100B Government Cloud Market

MongoDB’s pursuit of FedRAMP High and Impact Level 5 (IL5) certifications by 2025 is its most critical strategic move. These certifications will enable MongoDB Atlas for Government to handle highly sensitive data, including national security and health records, which are currently off-limits to its cloud platform.

The stakes are enormous: U.S. federal cloud spending is projected to hit $100 billion by 2027, with security-conscious agencies favoring providers that meet the strictest compliance standards. While MongoDB currently holds FedRAMP Moderate authorization, the FedRAMP High upgrade—subject to 421 stringent security controls—will open access to lucrative contracts with defense, intelligence, and healthcare agencies.

A Case Study in Success: The Utah Migration

MongoDB’s partnership with the State of Utah offers a blueprint for its government strategy. By migrating its benefits eligibility system to Atlas, Utah reduced disaster recovery time from 58 hours to 5 minutes, while cutting costs and improving speed. This win highlights Atlas’s ability to modernize legacy systems securely, a key selling point for agencies wary of cloud adoption.

Financials Support the Shift to Security

MongoDB’s financials back its strategic pivot:
  • Q1 2025 revenue grew 22% YoY to $450.6 million, driven by 32% growth in Atlas revenue.
  • Customer count rose to 49,200, with 73% of $1M+ customers increasing spend.
  • Margin expansion: gross margins improved to 68% in Q1 2025, up from 65% in 2024.

These metrics suggest MongoDB is executing its shift toward high-margin, subscription-based cloud services while scaling its salesforce to target regulated sectors.

Risks and Considerations

  • Competition: AWS, Microsoft, and Snowflake are aggressively targeting the government cloud market.
  • Certification Delays: FedRAMP High and IL5 approvals are pending, and delays could push revenue growth below expectations.
  • Valuation: MDB’s stock trades at a premium relative to peers (e.g., Snowflake’s P/S of ~3x).

Investment Thesis: A Buy with Long-Term Upside

MongoDB’s strategic moves—Russell Midcap Value inclusion and FedRAMP High pursuit—position it to capitalize on a $100B+ addressable market in secure cloud databases. While short-term risks exist, the long-term opportunity for MDB to dominate regulated sectors justifies its valuation.

Buy recommendation: With a $430 price target from Citigroup (108% upside from current levels) and strong hedge fund support, MDB is a speculative but high-reward play for investors willing to bet on its security-driven growth.

Final Take

MongoDB’s pivot to high-security cloud databases is more than a rebrand—it’s a calculated move to tap into one of the fastest-growing segments of the tech industry. If it secures FedRAMP High by 2025, MDB could emerge as a must-have partner for governments and enterprises, justifying its premium valuation. For investors, this is a story worth watching closely.

Article originally posted on mongodb google news. Visit mongodb google news



MongoDB Announces Commitment to Achieve FedRAMP High and Impact Level 5 Authorizations

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. announced its commitment to pursuing Federal Risk and Authorization Management Program (FedRAMP) High and Impact Level 5 (IL5) authorizations for MongoDB Atlas for Government workloads, which will expand its eligibility to manage unclassified, yet highly sensitive, U.S. public sector data.

With FedRAMP High authorization, even the most critical government agencies looking to adopt cloud and AI technologies—and to modernize aging, inefficient legacy databases—can rely on MongoDB Atlas for Government for secure, fully managed workloads.

MongoDB Atlas for Government already provides a flexible way for the U.S. public sector to deploy, run, and scale modern applications in the cloud within a dedicated environment built for FedRAMP Moderate workloads. 

Achieving FedRAMP High and IL5 will allow MongoDB Atlas for Government’s secure, reliable, and high-performance modern database solutions to be used to manage high-impact data, such as in emergency services, law enforcement systems, financial systems, health systems, and any other system where loss of confidentiality, integrity, or availability could have a severe or catastrophic adverse effect on organizational operations, organizational assets, or individuals.

“The federal agencies that manage highly sensitive data involving the protection of life and financial ruin should be using the latest, fastest, and best database technology available,” said Benjamin Cefalo, Senior Vice President of Product Management at MongoDB. 

“With FedRAMP High and IL5 authorizations for MongoDB Atlas for Government workloads, they will be able to take advantage of MongoDB’s industry-leading and proprietary Queryable Encryption, multi-cloud flexibility and resilience, high availability with automated backup, data recovery options, and on-demand scaling, and native vector search to facilitate building AI applications.”

MongoDB Atlas for Government already helps hundreds of public sector agencies nationwide develop secure, modern, and scalable solutions. An integral feature of MongoDB Atlas for Government is MongoDB Queryable Encryption. 

This industry-first, in-use encryption technology enables organizations to encrypt sensitive data and keep it protected while it is queried and in use on Atlas for Government.

With Queryable Encryption, sensitive data remains protected throughout its lifecycle, whether it is in transit, at rest, in use, or sitting in logs and backups. It is only ever decrypted on the client side.
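As a rough, application-level illustration, the hedged PyMongo sketch below creates a collection with one equality-queryable encrypted field. It uses a throwaway local master key for brevity, while a real deployment would use a cloud KMS; the driver also needs MongoDB’s Automatic Encryption Shared Library or mongocryptd available, and exact option names can vary by driver version.

```python
# Hedged sketch of Queryable Encryption with PyMongo: the "ssn" field is
# encrypted client-side yet remains queryable by equality. Local test key and
# connection string are placeholders; requires MongoDB 7.0+ (Atlas/Enterprise)
# plus the Automatic Encryption Shared Library or mongocryptd.
import os
from pymongo import MongoClient
from pymongo.encryption import ClientEncryption
from pymongo.encryption_options import AutoEncryptionOpts
from bson.codec_options import CodecOptions
from bson.binary import STANDARD

kms_providers = {"local": {"key": os.urandom(96)}}        # test-only master key
key_vault_namespace = "encryption.__keyVault"

client = MongoClient(
    "mongodb://localhost:27017",                           # hypothetical cluster
    auto_encryption_opts=AutoEncryptionOpts(kms_providers, key_vault_namespace),
)
client_encryption = ClientEncryption(
    kms_providers, key_vault_namespace, client,
    CodecOptions(uuid_representation=STANDARD),
)

encrypted_fields = {"fields": [
    {"path": "ssn", "bsonType": "string",
     "queries": [{"queryType": "equality"}]},              # queryable while encrypted
]}

client_encryption.create_encrypted_collection(
    client["records"], "citizens", encrypted_fields, kms_provider="local")

client["records"]["citizens"].insert_one({"name": "A. Citizen", "ssn": "123-45-6789"})
# Equality queries on the encrypted field work transparently; the server only
# ever sees ciphertext for "ssn".
print(client["records"]["citizens"].find_one({"ssn": "123-45-6789"}))
```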

Article originally posted on mongodb google news. Visit mongodb google news
