MMS • Lexy Kassan

Transcript
Kassan: Internally at Databricks, I call myself the governess of data. The topics I end up covering are things like responsibility and capability maturity, and stuff like that. It sounds like I’m everyone’s nanny, hopefully more of the like Mary Poppins style than the ones that get taken away on umbrellas. I’ll talk a little bit about responsible AI, some of the things that we’re seeing now. How organizations are approaching that. A little bit of a regulation update. Obviously, there’s quite a lot going on in the space, and in particular, for FinTech, there are some specific things in there as well. Then, the response that we’re seeing in the industry and how that’s being approached.
Responsible AI
I was going to start with responsible AI. I like to lay the groundwork of what we’re talking about. One of the things to think about, just to level set: 80% of companies are planning to increase their investment in responsible AI. Over the last few years, especially, we’ve seen a tremendous increase, of course, in the desire to use AI, and a parallel increase in the need to think about what that means from a risk perspective. How do you get the value of AI and the capabilities that it will drive for your organization and unlock that value, without creating massive reputational, business, and financial risks for your company? When we think about responsible AI, we think of it in multiple levels. A lot of people talk about the ethics. That’s where I started, actually. I was in data science ethics. I had a podcast on it for several years.
As a data scientist who came into the industry when things weren’t quite such a buzz, it was so important to see where it could go. There are multiple levels. At the very bottom is where we see regulatory compliance. We have to follow the guidelines that are set in place to be compliant with regulation. Otherwise, we’re going to be out of business. That’s how it works. In the ethics space, this is like, what should we do? If we could make AI act the way we would act and have the same values that we have, what should we do? Getting there, as we may have seen in the last year and a half, is very challenging. Really, where we end up is, what can we do? What are the things that we can put in place to be responsible with AI, guardrails that we actually can enforce, knowing that the aspiration is to get to ethics, but the realistic intention and goal is to be responsible? That’s where we’re at. That 80% of companies are looking to say, “Yes, I have to be compliant. That I’ve already invested in. What else can I do?” This is that next step.
I break this down to four levels, thinking about it from the top down. At the top, you have the program. That’s really the, if we could do anything, if we could be the most ethical people that we intend to be, what does that look like for our company? It’s setting the stage on the ethical principles and the vision of what our organization wants to achieve and intends to be. At that top level, you typically see somewhere between four and six ethical principles in a given company. Some group of people, probably in the C-suite, maybe the board, are setting out some high-level principles, like, we want to be fair and unbiased. We want to be transparent. We want to be human centric. These kinds of very big, vague, floaty ideas.
Then, at some point, you have to translate that down. You say, that’s great and all. What does that mean to those of us who are actually in the organization? What is it we have to do to be whatever vague concept you’ve just thrown out there? That’s where you see policies. This is that translation down one level to say, what are the frameworks, what are the rules of the road? Are there specific things that that means we can or can’t do as an organization? Do we have to say, every time we’re going to create an AI, we need to evaluate it against this policy and say, is this allowed? Is this ok for us in our organization? Every organization is going to handle this a little differently.
Beneath that, the next level of translation is then, how do we create processes to enforce the policy? What does it look like to start enacting that on a daily basis? Do we have an AI review board that looks at this and says, does a given application of AI conform with the policy, conform with therefore the principles? Do we have auditing over time? How does this change other processes in our organization, like procurement, or like cybersecurity, for example? This is where you have all the different processes that the organization is going to operate under.
Then, at the bottom, you’ve got the practice, the actual hands to keyboard building these things. What is the way that we’re going to actually implement responsible AI? What are the tools and the templates and the techniques that we’re going to use to evaluate AI as we start to build it into our organization? What you find is that this middle section is what ends up being called AI governance. It’s governing the practice. It’s governing to make sure that these tools, templates, and techniques, abide by the processes and follow that process that is then aligned to the policy that then hopefully gets you closer to your principles. This is the stack of responsible AI.
In terms of principles, I’ve summarized from a lot of different frameworks these that end up being roughly what you find. It’s usually some subset of these. Basically, this is the picklist that ends up happening at the highest levels of, which of these do we think is most important? We’ve seen a lot more emphasis, for example, on security lately. Because there’s a lot of concern about, how do we make sure that IP is not infringed, that our information is not getting put out there, that our customers’ information is not going to be exposed? This safety and security ends up being a big part of that. Of course, we’ve also seen that in privacy as well. Now, with additional regulation, compliance is taking on more meaning, has more aspects to it. Other things that we find, for example, is efficiency.
Typically, building AI takes a lot of power, takes a lot of processing, takes a lot of money. Not the kind of money that most organizations, especially FinTechs, want to spend to do this. If they can get the capability less expensively, they want to. Efficiency is something that’s been talked about more because the energy costs of creating and maintaining and doing inference with AI, especially large language models, are becoming a bigger concern. How do we power these things? If efficiency becomes one of your principles, then you look at how you minimize that to still get the same outputs while not impacting the environment and also not impacting your budget.
Regulation Update
Quick update on regulation. Again, 77% of companies see regulatory compliance and regulation as a priority for AI. Of course, with the EU AI Act, that’s increasing. Anyone who does business with, or near, or on EU citizens, this is going to be a concern with the EU AI Act coming in. We’ve also seen new regulation elsewhere, and I’ll do a little bit of a mapping on that. Of course, in that ladder, the first step is, can we be compliant? With regulation changing, what it means to be compliant is changing. Of course, companies are saying, yes, we should probably still also be compliant with whatever happens. This is probably not up to date. It changes literally daily. We’re seeing regulation all over the place. The patterns it takes are different.
In the States, what we’ve seen up to this point is more an indication that existing laws need to be abided, even if you’re using AI to do the thing. In Europe, they’re taking a different approach and saying, we’re going to have this risk based hierarchical view of how AI can and should be applied. In China, they’re saying, you can do AI as long as it is in accord with the party aspirations and how they think about the way they want their society to run. In fairness, that’s how responsible AI and regulation works. Everybody has their view of what does it mean to be responsible? What values should we be enforcing? Not universal, which makes it, again, very difficult if you’re operating in a multinational environment. We’re also starting to see additional regulations coming in in Japan, in India, in Brazil, lots of different approaches. In Australia, of course. Tons of different ways that this actually ends up implemented, and the things that you need to comply with will be different by jurisdiction.
I’ll go through a couple. Please understand that for a lot of them, preexisting laws do still apply. This is what I was talking about. In the States, if you look at the way they’re handling it, they basically say, if something was illegal before when a human was doing it, it’s still illegal if an AI does it. You’re not allowed to discriminate based on protected classes, if you’re a human. You’re not allowed to discriminate algorithmically if you’re using AI. That’s how that goes. Data and consumer protection still apply. You don’t get to just blast people’s information out there, because the AI accidentally did it, which we’ve seen. Intellectual property still applies. All the copyright disputes and IP infringements that have been spinning up in litigation still apply to AI. Anti-discrimination. Anything criminal.
For example, there’s a lot of worry about, could AI tell someone how to do something terrible that would constitute criminal behavior? There’s a lot of concern around, how do you guard against your AI saying something it shouldn’t, that it probably knows about, but you don’t want anyone to know it knows about, just based on how it was trained. There’s a lot of filtering that goes on for that. Then antitrust and unfair competition. Basically, with antitrust, it’s saying, you can’t use AI to create a non-competitive environment, despite the fact that the AI itself might be doing that.
EU AI Act: four categories of risk. You’ve probably heard about this. How many people have looked up the EU AI Act? It’s new to some people. Four categorizations. In the top left over there, unacceptable risk, is stuff that’s just flat out disallowed. It’s things like behavioral profiling and all kinds of stuff with biometrics. The idea is that if you’re materially distorting behavior, or you’re doing things that are really privacy invading, those are prohibited. Just can’t do. Unless you’re the government or the military or a bunch of other things, just asterisks all over this slide. Imagine asterisks everywhere, like it’s snowing. Snowing caveats. In that category, hopefully in your organization, you’re not dealing with any of those things. If you are, probably think about ways to take them out now, because this is going to happen. This is going to come into effect in a year or so, year and a half. High risk.
In high risk, what it’s basically saying is, if there’s the potential for this to influence someone’s means of earning a living, means of living, health, safety, access to critical infrastructure, those kinds of things, you have to go through a very rigorous documentation process called the conformity assessment. This is where a fair bit of FinTech organizations are going to end up having something to do, and probably in the limited risk, which I’ll get to. The conformity assessment is a lot.
That said, finance has been doing impact assessments and compliance documentation for decades. I’ve been there. It sucks, but you do it. You put out reams of documentation about how you got to the conclusion you got to. Why this model was the best one of all the things you tested. Why these features were selected. What their relative importance is. How you’ve tested for disparate impact, all that good stuff. It’s that plus a bunch of other things that go into the conformity assessment. The documentation that you’ve done so far, drop in the bucket. My hope, personally, is that we can then train an LLM to help generate the documentation, and then we can figure out which bucket that falls into. I’m thinking minimal. That’s the high-risk group. There are some caveats on that as well.
Limited risk, basically, if it’s not one of the other two, but it interacts with a person, you have to tell the person, “You’re interacting with an AI”. The fun one I think about here is, a few years ago, Google did a demonstration of Duplex, or something like that, where they had an AI application that called and made a booking at a restaurant. If that were the case under the EU AI Act, the phone call would sound like, “Hi, this is Google Duplex calling. This is an AI. I would like to book a reservation for this particular person on this date and time”. Because you have to tell them. This is true then for all the chatbots that are being created and all these use cases that are coming in now where there’s an interaction with a person. Similarly, it would work for internal. Even if you’re not necessarily displacing customer service, for example, if you’re augmenting your customer service staff, you have to tell your customer service staff, this is an AI chatbot that’s giving you a script.
This is not a preprogrammed defined rules engine, this is an AI. Just so they’re aware. Then everything else is minimal risk. For example, things like fraud detection fall into minimal risk, according to the way that the current structure has been laid out. These are very vague. There are examples in the act, if you want to read through the 200 and some odd pages of it. I don’t recommend it unless you have insomnia, in which case, have at. The majority of what this will do, I think, again, not a lawyer, my own interpretation is a lot of how this is going to come together, will happen in litigation. As new applications come online once this is enforced, we will probably see a lot of the rules get a little bit more clear as to what they consider high risk, unacceptable risk, lower risk, and so forth. Because right now, there’s some vague understandings of what we think people are going to try and do with AI, but it’s not specific yet.
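The four-tier triage described above can be sketched in code. This is a minimal, hypothetical sketch, not legal logic: the tier names come from the act, but the keyword lists and the `classify_use_case` helper are my own illustrative assumptions about where example use cases land.

```python
# Hypothetical triage of AI use cases into the EU AI Act's four risk tiers.
# The categories are from the act; these example lists are assumptions for
# illustration only, not a legal determination.

UNACCEPTABLE = {"behavioral manipulation", "social scoring", "realtime biometric id"}
HIGH = {"credit scoring", "hiring", "critical infrastructure access"}
LIMITED = {"chatbot", "customer-facing assistant", "content generation"}

def classify_use_case(use_case: str) -> str:
    """Return the (assumed) risk tier and obligation for a described use case."""
    if use_case in UNACCEPTABLE:
        return "unacceptable: prohibited outright"
    if use_case in HIGH:
        return "high: conformity assessment required"
    if use_case in LIMITED:
        return "limited: disclose to the user that it is an AI"
    # Everything else, like fraud detection per the talk, is minimal risk.
    return "minimal: no additional obligations"
```

Note that, matching the talk, fraud detection falls through to minimal risk here, while a chatbot triggers the limited-risk disclosure duty.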
In the act, there are also specific rules around general-purpose AI and foundation models. If you’re training a general-purpose or a foundation model on your own, first of all, I would like your budget. Secondly, there are specific rules around how you have to be transparent, and how many flops you can have, and all kinds of crazy stuff. That’s for the few that want to actually build a model, which at Databricks we did. We had to actually look at that stuff.
The other one I wanted to talk about, specific to FinTechs, is consumer duty. This came in, I think, last year. It has some interesting implications for responsible AI. First of all, it says, design for good customer outcomes. How do you define a good customer outcome? How do you know that that customer outcome avoids foreseeable harm? This is going back to that conformity assessment, disparate impact assessment, all these things that you have to prove are doing the right thing for your customers. The next part is demonstrating your supply chain.
With AI, the supply chain gets a little nebulous, so you have to think about, what is your procurement process? How do you track what data is coming in, being used in your AI, how it’s being labeled, how it’s being featurized, how it’s being vectorized or chunked, or whatever you’re using, to be able to actually put it into an AI application. For FinTech, there’s this extra bit. Again, I think it’s probably mostly there for most financial services companies, including FinTech, because, again, we’ve been subject to this stuff for a long time. Things to think about with our new technologies is, how is this actually going to play out over time?
FinTech Response
It does bring us into the response, though, from FinTech. More stats, because I’m a numbers nerd. According to a survey of FinTechs, most leaders in FinTech expect a 10% to 30% revenue boost in the next 3 years, based on the use of generative AI specifically. This is not uncommon. I’ve heard this 30% thing bandied about. I think McKinsey had another one that was like, 30% efficiency from using generative AI. You’re going to save 30% of whatever costs, and all this stuff. Maybe. The reputation of FinTech is two things. There’s the disruption angle. From a technology perspective or digital native perspective, there’s also open source. When you think about how you’re going to achieve this 10% to 30% in a way that others aren’t, you want to be disruptive. You don’t want to be the next J.P. Morgan, who’s saying, yes, we can incrementally improve our efficiency by 10%. FinTech is saying, no, we want to do something massively different, completely different from what the big guys are doing, disrupt it.
Often, especially in early stage, you want to go for something that you can build and control. It’s actually an advantage now, when you talk about knowing and ensuring your supply chain for AI, being able to have transparency, and driving towards responsibility. The more you use open source and can see the code and can see the data and can see the weights and can show all of that, the better you are able to take that, use it to your advantage, prove the supply chain, go through the conformity assessments, and disrupt, so that you’re not the one sitting there going, yes, I will incrementally improve my efficiency, and I might get a 5% decrease in some sort of cost. We’re seeing, in FinTech, a lot of interest in the open-source models, a lot of interest in being able to build and fine-tune, especially for LLMs. Of course, that’s always been there for machine learning, and data science, and so forth. I’ve not yet met a FinTech using SaaS, which I’m very grateful for, but just taking advantage of what’s available.
That said, as a disruptor, there’s nothing holding you back from saying, we think there’s a 20% reduction in workforce that we could actually achieve. What’s interesting here is that it’s the unspoken bit in other industries, but in FinTech, it’s actually moving in that direction. There’s a lot of noise about it. Going from processes that had been manual through to augmenting people, and then eventually automating those capabilities. A great example of that was Klarna. A couple months ago, Klarna’s CEO, it was actually on their own website, on their blog, said, we have a chatbot that’s been handling two-thirds of our customer service requests over the last couple of months. It’s gotten better quality. There have been fewer issues where people had to come back and talk to somebody again.
We’re looking at that, and we’re seeing that it could replace 700 workers. They were public about this. Why? Because FinTech, most digital natives, are known for disrupting. They rely on technology. It’s that techno solutionism of being able to say, yes, we want to do this. Thinking back to those principles, does that make you human centric or not? These are the decisions that end up having ramifications for what you do with AI. If you’re saying we’re customer centric, we want to make sure that the humans that we’re serving are our customers, but that means potentially not serving our employees in the same way, not ensuring their continued work in this company.
Although, frankly, what he said was those were all outsourced people, so they don’t count as our employees, which, ethically vague. There’s a reason I always wear gray when I give talks about responsible AI, everything’s gray. This is something that’s very much happening now. I can tell you that they’re not the only company saying we think there’s a workforce reduction. I spoke with another very large organization not long ago that said we have 2000 analysts, but we think if we put in place the right tooling, we put in place the right AI, we could drop that to about 200. They’re not saying it publicly. They’re not telling their analysts, your job is on the line, but it’s still there. It’s happening now.
How do we move through this pattern together? From a responsibility perspective, first thing, establish your principles. That includes, how much transparency are you going to give, to whom, in what? These are the policies internally you need to set up. For organizations that have risk management, which FinTech should, extend your risk management framework to include AI. That’s happening now. They’re evaluating, what are the risks that we’re taking on when we put AI into place. Identify what I call no-fly zones. There are some organizations that are saying, we’re probably not in the high-risk camp most of the time, and we don’t really want to go through this conformity assessment stuff. If we could just never use AI in anything to do with HR, that’d be great. Because the moment it touches something like employment status, or pay, or performance, conformity assessment is required. Sometimes it’s just not worth it. Cross-functionally implementing responsible AI. There’s a lot in that one bullet.
This is something that, again, at the start, we’re seeing a lot more security getting involved. We’re seeing CISOs, legal teams, compliance, AI, governance, all coming together to figure out, what do we do? How do we safeguard the organization? How do we look at risk management differently? Do we bring in the risk officers in conjunction with all these other groups? Build your AI review board, including all these cross-functional folks, so that you can establish a holistic approach.
Then, set up practical processes. Saying, we’re just going to do a conformity assessment for every single AI that we ever have, just in case, probably not practical. May or may not need to do it, again, unless the magic LLM that happens someday is able to do all that documentation for you, which, here’s hoping. Try to think about what are all the teams that actually need to come together to help you solve for responsible AI. Especially in FinTech, a lot of this already exists. You’ve got risk management frameworks. You’ve got a model risk management capability. You’ve probably done some amount of documentation of this stuff before. You’ve got a lot of the components, all the people in different teams that could be part of this.
Questions and Answers
Ellis: You said 80% of companies are investing in ethical or responsible AI, does that mean 20% are investing in irresponsible AI?
Kassan: Twenty percent probably have the hubris to think that they’ve already invested enough.
Ellis: When you talked about limited risk informing, what are you seeing with AIs dealing with AI? Because you talk about AIs dealing with humans a lot. Will you see a stage where AIs have to inform each AI that they’re dealing with an AI? How does that all work? Or what are you seeing with regulations around AIs dealing with AIs?
Kassan: I haven’t seen a tremendous amount of regulation around that area. What I have seen is more that AI will govern AI. It’s that adversarial approach of having an AI that says, explain to me why you said this, as a second buffer. Because a human can’t be there all the time to indicate, yes, this is a good response, or, no, it’s a bad response, in a generative nature. When you’re doing things like governing and checking and evaluating the responses after somebody’s prompted and saying, is this a response that we would want? It’s actually more scalable and effective to have another AI in place as the governor of that. That’s something I am seeing that’s come up quite a bit. It’s things like algorithmic red teaming. Yes, you could have someone try and sit there and type, but humans are constrained as to how quickly we can type and think up the next use case and the next thing we want to try and trick it to do. AIs can do that a lot faster. If we say to one AI, trick that one into saying this, it will find ways, and it’ll do it quick.
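The "AI governing AI" pattern described here can be sketched as a second model reviewing each generated response before it is released. This is a minimal sketch under assumptions: `generate` and `judge` are placeholders standing in for real model calls, and the toy stand-ins below are invented for illustration.

```python
# Sketch of the adversarial/governor pattern: a second model (the judge)
# reviews each draft response before release. Real systems would call LLMs
# here; these callables are placeholders.

def governed_reply(prompt, generate, judge, fallback="Escalating to a human agent."):
    """Generate a reply, then let a judge model accept or block it."""
    draft = generate(prompt)
    verdict = judge(prompt, draft)  # assumed to return "allow" or "block"
    return draft if verdict == "allow" else fallback

# Toy stand-ins for model calls:
def toy_generate(prompt):
    return "Sure, here is our refund policy..."

def toy_judge(prompt, draft):
    # Block drafts containing phrases we never want released.
    banned = ["guaranteed returns", "internal only"]
    return "block" if any(b in draft.lower() for b in banned) else "allow"
```

Algorithmic red teaming is the same idea run in reverse: another model generates adversarial prompts against `governed_reply` far faster than a human could type them.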
Participant 1: We started thinking about enforcing the policy of adding more use cases using generative AI. We came to a conclusion that we have to create a committee to approve every use case. What’s your takeaway about creating that type of committee?
Kassan: That’s the AI review board idea. A couple of things that that body would do. One is set up the policies, so making sure that you have that framework at the start. Also, looking at the processes. At what point do different business areas need to come to the board and say, we have this idea for a new AI application. Maybe they’ve already chatted with somebody who’s from the AI team, data science, machine learning, whatever it might be, to say, we think we want to solve it this way, so that the AI review board can then say, does this realistically conform with the policies, and processes, and so forth. It becomes that touchpoint. There are two issues with that, though. One is, if you don’t have a solid set of information on how you’re going to manage and mitigate risks, that can cause a lot of looping.
The other is, if they don’t meet often, that can cause a bit of a time suck. The two of those compound. You want to make sure that this is something where there’s enough of a framework there, and this comes back to the risk management framework. Understanding, categorizing, and quantifying the risks that you’re taking and understanding what can be done to mitigate them. Having enough information when you first go to that AI review board to say, here’s the stuff that you’re going to need as inputs to risk management. Here are the mitigations we’re planning. Here’s what we’re looking at, so that they can say, yes, go ahead. Then also just making sure they meet pretty frequently.
Participant 2: Do you think we will see a lot of chatbots giving financial advice in the future? What are the regulations in that area?
Kassan: There’s a lot of discussion around AI giving financial advice. There are regulations already in place about robo-advising. It’s been out there. Robo-advising has been out for some time, even without generative AI. I think the regulation will still apply. That said, there’s enough information for us to see that, yes, it’s something that, if the regulation allows in the jurisdiction, companies will absolutely be approaching it. I think the question there becomes one of, how many permutations of advice do you really want to offer? How customized does that become? What other information do you give access to that chatbot? For robo-advising, really, any of that, you’d want to have sufficient, up-to-date information so that it’s not giving advice on, for example, stock performances from six months ago. It’s looking at what’s happening now.
You want to make sure it’s constantly getting information. How many different places and how much information are you going to feed it on an ongoing basis? Because there’s a cost to that. It’s thinking about, what is that going to get you? How much do you want to do? Or, do you set it up almost like a rules-based engine, the way that a lot of robo-advisors do anyway? Instead of saying, these specific things would constitute a good portfolio for you, you say, you’re in this risk appetite category, so we recommend this fund, or whatever, and it’s a picklist. I think it’ll depend on the regulation. Like I said earlier, there are different approaches in every jurisdiction. It’s certainly a use case that comes up quite a bit.
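The rules-based "picklist" approach mentioned above can be sketched very simply: map an assessed risk-appetite category to a preapproved product, rather than generating bespoke advice. The category names and fund names here are invented for illustration.

```python
# Hedged sketch of picklist-style robo-advising: a fixed mapping from an
# assessed risk-appetite category to a preapproved fund. All names are
# hypothetical examples, not real products.

FUND_PICKLIST = {
    "cautious": "Short-Duration Bond Fund",
    "balanced": "60/40 Multi-Asset Fund",
    "adventurous": "Global Equity Fund",
}

def recommend_fund(risk_appetite: str) -> str:
    """Return the preapproved fund for a risk-appetite category."""
    try:
        return FUND_PICKLIST[risk_appetite]
    except KeyError:
        # Unknown categories are rejected rather than improvised, which is
        # the point of the picklist design: no out-of-policy advice.
        raise ValueError(f"unknown risk appetite: {risk_appetite!r}")
```

The design choice is that everything offerable is enumerated up front and reviewable by compliance, which is much easier to defend under advice regulation than free-form generated recommendations.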
Participant 3: There seems to be two schools of thought around bias, you either eliminate from the dataset beforehand, or you work through and let it be eliminated afterwards. What’s your thought on that?
Kassan: Neither is possible. Bias is something that we try to mitigate, but it’s very challenging, because the biases that you’re trying to take out mean that you’re putting in other biases. Good luck. It’s a vicious cycle. Bias is something that everyone has, various cognitive biases, contextual biases, and so forth. The more we try and do something about it, the more it’s just changing what bias is represented. It’s not to say don’t do it or don’t try to mitigate it, but just be aware that there will be biases. Certain of them are illegal in various countries, and those are the ones you definitely want to mitigate. That’s really what it comes down to.
Participant 4: When we get past the AIs coming for the job of the person operating the phone to the person who’s doing financial trading and moving up the stack. How do we avoid the financial war game scenario where the bots just trade away all the money?
Kassan: Generative AI is not necessarily going to be doing the trading immediately. That’s something that is a little bit different. You’re talking about different styles of agents at that point. It is helpful to think about what limits you put on these systems, so those no-fly zones. Do you allow AI to trade at certain levels? For example, how much do you automate? How much do you augment a human? Or, do you have a human in the loop where the AI says, I think it’d be prudent to do X, Y, and Z, but somebody has to approve it.
Participant 4: Do the regulations prevent someone from making the bad decision that says, I’m going to try and go fully automated on this, when the responsible thing is to keep a human in the loop?
Kassan: Some of it does. To the point earlier around robo-advising or financial advice, certain jurisdictions say you’re not allowed to give financial advice in a chatbot, for example. At that point, great. You’re not automating, you might be augmenting. Instead of a chatbot saying it directly to the person. You’ve got a financial advisor who types something in, and the chatbot says to them, here’s what I’d recommend for this client. Then they get to say, yes or no. Sometimes, depending on the jurisdiction, that’s what you’d end up seeing.
Participant 5: With LLMs being adopted quite widely, do you see the onus of being compliant running against those companies that are building these LLMs, so they have ISO or some sort of certification that embeds that confidence that they’re compliant to a certain level?
Kassan: Compliance is an interesting one on this, because compliance is on a per application level at this point. You don’t get certified as a company that you are compliant with the AI regulation. You have to have every single AI certified, if you’re part of the conformity assessment. Every one of them has to go through that. You don’t get a blanket statement saying, stamp, you’re good. I think with respect to LLMs, the probability of things going more awry than in classical machine learning and data science is higher. I think there’s a higher burden of proof to say that we’ve done what we can to try and limit and to try to be responsible with it. It’s certainly acknowledged in all the regulations, but it’s going to be harder. Frankly, when you look at who’s on the hook, the actual liability of what happens, it’s still people. A lot of the regulation has actually been quite clear that it’s still a person, or people who are responsible for the actions of the AI.
Good example of that was the whole debacle with Air Canada and the bereavement policy that the AI said, yes, just go ahead and claim that it’s bereavement. You can say, within 90 days, and we’ll refund your money. They’re like, your AI said it. Go on, you got to honor it. I think there’s more of that. The regulation has been clearer, especially with some of the newer regulation coming out, saying it has to be a natural person that’s responsible. When it comes to financial services here, at least, as I understand it, it’s the MDs who carry that, which is a lot of risk when you look at it at scale.
Antitrust is an interesting one that I think about, not from the perspective of those implementing AI and trying to create anti-competitive environments for their potential competition. More so for the accessibility of AI as a competitive advantage that a lot of the organizations that we’re seeing that are startup or FinTech or digital native and whatnot, they’re going towards open source, partly because you have the ability to use it without having to spend millions of dollars to do it. It creates actually a more competitive environment to have open-source models and to be able to leverage some of these capabilities, so that if somebody does have a new, disruptive idea that would require the use of AI and require the use of LLMs, there’s a means of entry into that market. I personally think that it’s a helpful tool to have these kinds of open-source models to get new entrants into the market, so that it’s not reliant on this token-based economy of having to pay for a proprietary application for AI.

MMS • RSS
Amitell Capital Pte Ltd increased its position in MongoDB, Inc. (NASDAQ:MDB – Free Report) by 36.7% during the 4th quarter, according to its most recent filing with the Securities and Exchange Commission. The firm owned 18,625 shares of the company’s stock after buying an additional 5,000 shares during the quarter. MongoDB comprises about 3.5% of Amitell Capital Pte Ltd’s investment portfolio, making the stock its 11th biggest position. Amitell Capital Pte Ltd’s holdings in MongoDB were worth $4,336,000 at the end of the most recent reporting period.
Other hedge funds and other institutional investors have also recently added to or reduced their stakes in the company. Vanguard Group Inc. grew its stake in MongoDB by 0.3% during the fourth quarter. Vanguard Group Inc. now owns 7,328,745 shares of the company’s stock worth $1,706,205,000 after buying an additional 23,942 shares during the period. Franklin Resources Inc. boosted its position in MongoDB by 9.7% during the 4th quarter. Franklin Resources Inc. now owns 2,054,888 shares of the company’s stock worth $478,398,000 after purchasing an additional 181,962 shares during the period. Geode Capital Management LLC increased its holdings in MongoDB by 1.8% in the 4th quarter. Geode Capital Management LLC now owns 1,252,142 shares of the company’s stock valued at $290,987,000 after purchasing an additional 22,106 shares in the last quarter. Norges Bank acquired a new stake in MongoDB in the 4th quarter valued at $189,584,000. Finally, Amundi lifted its stake in shares of MongoDB by 86.2% during the fourth quarter. Amundi now owns 693,740 shares of the company’s stock worth $172,519,000 after purchasing an additional 321,186 shares in the last quarter. Institutional investors and hedge funds own 89.29% of the company’s stock.
Insider Transactions at MongoDB
In other news, insider Cedric Pech sold 1,690 shares of MongoDB stock in a transaction on Wednesday, April 2nd. The stock was sold at an average price of $173.26, for a total transaction of $292,809.40. Following the transaction, the insider now directly owns 57,634 shares in the company, valued at approximately $9,985,666.84. This trade represents a 2.85% decrease in their position. The sale was disclosed in a document filed with the Securities & Exchange Commission, which can be accessed through this link. Also, CEO Dev Ittycheria sold 18,512 shares of the company’s stock in a transaction on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total value of $3,207,389.12. Following the completion of the sale, the chief executive officer now owns 268,948 shares of the company’s stock, valued at approximately $46,597,930.48. This trade represents a 6.44% decrease in their position. The disclosure for this sale can be found here. Over the last quarter, insiders sold 58,060 shares of company stock valued at $13,461,875. 3.60% of the stock is currently owned by corporate insiders.
MongoDB Price Performance
NASDAQ:MDB opened at $147.38 on Tuesday. The firm has a market capitalization of $11.97 billion, a P/E ratio of -53.79 and a beta of 1.49. The business has a 50 day simple moving average of $234.34 and a 200 day simple moving average of $260.82. MongoDB, Inc. has a 12-month low of $140.78 and a 12-month high of $387.19.
MongoDB (NASDAQ:MDB – Get Free Report) last posted its earnings results on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). The firm had revenue of $548.40 million for the quarter, compared to analyst estimates of $519.65 million. MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. During the same period in the previous year, the firm earned $0.86 earnings per share. Research analysts anticipate that MongoDB, Inc. will post -1.78 EPS for the current fiscal year.
Wall Street Analysts Forecast Growth
MDB has been the subject of a number of recent research reports. Royal Bank of Canada lowered their target price on MongoDB from $400.00 to $320.00 and set an “outperform” rating for the company in a research note on Thursday, March 6th. Citigroup lowered their price objective on shares of MongoDB from $430.00 to $330.00 and set a “buy” rating for the company in a research report on Tuesday, April 1st. Morgan Stanley reduced their target price on shares of MongoDB from $350.00 to $315.00 and set an “overweight” rating on the stock in a research report on Thursday, March 6th. Cantor Fitzgerald initiated coverage on shares of MongoDB in a report on Wednesday, March 5th. They set an “overweight” rating and a $344.00 price target for the company. Finally, Monness Crespi & Hardt upgraded shares of MongoDB from a “sell” rating to a “neutral” rating in a report on Monday, March 3rd. Seven analysts have rated the stock with a hold rating, twenty-four have assigned a buy rating and one has assigned a strong buy rating to the company’s stock. According to data from MarketBeat.com, the stock presently has a consensus rating of “Moderate Buy” and a consensus target price of $312.84.
Get Our Latest Research Report on MongoDB
About MongoDB
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
See Also
Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB – Free Report).
This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.
Before you consider MongoDB, you’ll want to hear this.
MarketBeat keeps track of Wall Street’s top-rated and best performing research analysts and the stocks they recommend to their clients on a daily basis. MarketBeat has identified the five stocks that top analysts are quietly whispering to their clients to buy now before the broader market catches on… and MongoDB wasn’t on the list.
While MongoDB currently has a Moderate Buy rating among analysts, top-rated analysts believe these five stocks are better buys.


MMS • Ben Linders

Quality Assurance Engineers can evolve into artificial intelligence (AI) strategists, guiding AI-driven test execution while focusing on strategic decisions. According to Victor Ionascu, rather than replacing testing roles, AI can enhance them by predicting defects, automating test maintenance, and refining risk-based testing. This human-AI collaboration is crucial for maintaining quality in increasingly complex software systems.
Victor Ionascu gave a talk about the role of artificial intelligence in quality assurance and software testing at QA Challenge Accepted.
QA professionals are increasingly turning to AI to address the growing complexities of software testing, Ionascu said. AI-driven automation can improve test coverage, reduce test cycle times, and enhance the accuracy of results, leading to faster software releases with higher quality, as he explained in the InfoQ article Exploring AI’s Role in Automating Software Testing.
Ionascu mentioned that he’s using AI tools like GitHub Copilot, Amazon CodeWhisperer, and ChatGPT. One of the key benefits, once you understand how to use AI effectively, is a noticeable improvement in efficiency, as he explained:
For example, with Copilot, instead of manually searching for whether a particular class or function exists, the AI automatically suggests relevant code snippets in real-time. This accelerates the development process and helps me focus more on refining and improving the logic behind the tests.
Tools like ChatGPT have proven to be invaluable for general research and guidance, Ionascu said. Instead of spending time searching through multiple sources, he uses it as a powerful assistant that provides quick insights and suggestions during the automation process. It helps reduce the time needed for researching complex testing scenarios or frameworks, which ultimately speeds up the development of robust test scripts, he mentioned.
While AI offers tremendous potential, Ionascu stressed that AI is not without limitations. It lacks the contextual understanding and human intuition required for tasks like exploratory testing and non-functional testing (e.g., performance and security), he mentioned.
The future of testing with AI will see QA professionals evolving into AI strategists, where AI tools will handle much of the execution and maintenance of automated tests, Ionascu said. AI will enable adaptive, self-healing tests that evolve with the application, reducing the overhead for QA teams, he added.
Ionascu expects AI to also improve in areas like predictive defect detection:
AI can analyze historical data to identify high-risk areas before they become critical issues.
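As a toy illustration of that idea (not from the talk), a predictive score might weight a file’s historical defect count against its recent churn to rank where testing effort should go first; the file names and weights below are invented for the sketch:

```python
# Toy sketch of predictive defect detection: rank files by a naive risk
# score combining historical defects with recent churn. The data and the
# 0.7/0.3 weights are illustrative assumptions, not a real model.
history = {
    # file: (defects_last_year, commits_last_quarter)
    "payment/gateway.py": (9, 30),
    "ui/theme.py": (1, 2),
    "auth/session.py": (5, 12),
}

def risk_score(defects: int, churn: int) -> float:
    # More past defects and more recent change -> higher predicted risk.
    return defects * 0.7 + churn * 0.3

ranked = sorted(history, key=lambda f: risk_score(*history[f]), reverse=True)
print(ranked[0])  # the highest-risk area to test first
```

A production system would learn the weights from labeled defect history rather than hard-coding them, but the ranking principle is the same.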
In the long term, AI will not replace QA roles but will augment human capabilities, allowing teams to focus on strategic, high-value tasks like quality strategy, exploratory testing, and risk-based testing, Ionascu said. The key will be the partnership between AI and human oversight, where AI handles execution, and humans drive creativity and strategy, he concluded.
InfoQ interviewed Victor Ionascu about applying AI for software testing.
InfoQ: What are the limitations of AI in testing?
Victor Ionascu: While it excels at automating repetitive tasks, AI still struggles with contextual understanding of complex, domain-specific workflows. AI-generated tests may require manual refinement to ensure completeness and accuracy, especially for non-functional requirements like performance and security testing. And AI lacks human intuition, which is crucial for exploratory testing and discovering edge cases that are difficult to automate.
InfoQ: Can you give an example of a test case where human intuition made the difference?
Ionascu: An example of an edge case would be testing invisible or zero-width characters in passwords.
Scenario: A user enters a password that appears valid but contains zero-width spaces or non-printable Unicode characters (e.g., U+200B Zero Width Space, U+200C Zero Width Non-Joiner).
The example password input (User Perspective): P@ssw0rd (Looks normal)
The actual password (Hidden Characters): P@ssw0rd (Contains a zero-width space between P and @)
AI-driven automation will miss this, because:
- Automated tests typically check for length, required characters, and structure, but may not detect hidden characters.
- Most test automation frameworks treat these as valid input since they don’t visually alter the string.
- Traditional regex-based validation rules fail unless they explicitly check for invisible Unicode characters.
Humans using AI can discover this in two ways:
- Human Tester Insight: Manually pasting a password copied from an external document (e.g., Google Docs, emails) can reveal login failures due to hidden characters.
- AI-Assisted Detection: AI-powered anomaly detection can compare expected login behavior with failed attempts where passwords “look correct” but fail.
Testing this has a significant impact. Users may struggle with login failures without understanding why. It can also be exploited for phishing attacks (e.g., registering a password that looks identical to Password123 but contains hidden characters, tricking users into thinking it’s the same string).
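The hidden-character check that this edge case calls for can be sketched in a few lines of Python. This is an illustrative example, not code from the interview; the helper name and the exact set of flagged code points are my own choices, built on the standard library’s unicodedata module:

```python
import unicodedata

# Common zero-width code points; Unicode category "Cf" (format) catches
# most other invisible characters.
INVISIBLE = {
    "\u200b",  # ZERO WIDTH SPACE
    "\u200c",  # ZERO WIDTH NON-JOINER
    "\u200d",  # ZERO WIDTH JOINER
    "\ufeff",  # ZERO WIDTH NO-BREAK SPACE / BOM
}

def find_hidden_characters(password: str) -> list[tuple[int, str]]:
    """Return (index, character-name) pairs for invisible characters."""
    hits = []
    for i, ch in enumerate(password):
        if ch in INVISIBLE or unicodedata.category(ch) == "Cf":
            hits.append((i, unicodedata.name(ch, f"U+{ord(ch):04X}")))
    return hits

visible = "P@ssw0rd"
hidden = "P\u200b@ssw0rd"  # zero-width space between P and @

print(find_hidden_characters(visible))  # []
print(find_hidden_characters(hidden))   # [(1, 'ZERO WIDTH SPACE')]
```

A validation rule like this complements, rather than replaces, the human insight of pasting passwords from external documents.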

MMS • RSS
We wouldn’t blame MongoDB, Inc. (NASDAQ:MDB) shareholders if they were a little worried about the fact that Dev Ittycheria, the President, recently netted about US$3.2m selling shares at an average price of US$173. However, that sale only accounted for 8.8% of their holding, so arguably it doesn’t say much about their conviction.
MongoDB Insider Transactions Over The Last Year
Notably, that recent sale by Dev Ittycheria is the biggest insider sale of MongoDB shares that we’ve seen in the last year. So what is clear is that an insider saw fit to sell at around the current price of US$171. While we don’t usually like to see insider selling, it’s more concerning if the sales take place at a lower price. In this case, the big sale took place at around the current price, so it’s not too bad (but it’s still not a positive).
In the last year, MongoDB insiders didn’t buy any company stock. A visual depiction of insider transactions (by companies and individuals) over the last 12 months accompanies the original article, showing exactly who sold, for how much, and when.
Check out our latest analysis for MongoDB
I’d like MongoDB better if I saw some big insider buys. While we wait, check out this free list of undervalued and small cap stocks with considerable recent insider buying.
Insider Ownership
For a common shareholder, it is worth checking how many shares are held by company insiders. I reckon it’s a good sign if insiders own a significant number of shares in the company. MongoDB insiders own 2.9% of the company, currently worth about US$344m based on the recent share price. Most shareholders would be happy to see this sort of insider ownership, since it suggests that management incentives are well aligned with other shareholders.
So What Does This Data Suggest About MongoDB Insiders?
Insiders sold MongoDB shares recently, but they didn’t buy any. And even if we look at the last year, we didn’t see any purchases. The company boasts high insider ownership, but we’re a little hesitant, given the history of share sales. While it’s good to be aware of what’s going on with the insider’s ownership and transactions, we make sure to also consider what risks are facing a stock before making any investment decision. For example – MongoDB has 3 warning signs we think you should be aware of.
If you would prefer to check out another company — one with potentially superior financials — then do not miss this free list of interesting companies, that have HIGH return on equity and low debt.
For the purposes of this article, insiders are those individuals who report their transactions to the relevant regulatory body. We currently account for open market transactions and private dispositions of direct interests only, but not derivative transactions or indirect interests.
This article by Simply Wall St is general in nature. We provide commentary based on historical data and analyst forecasts only using an unbiased methodology and our articles are not intended to be financial advice. It does not constitute a recommendation to buy or sell any stock, and does not take account of your objectives, or your financial situation. We aim to bring you long-term focused analysis driven by fundamental data. Note that our analysis may not factor in the latest price-sensitive company announcements or qualitative material. Simply Wall St has no position in any stocks mentioned.

MMS • RSS
Fmr LLC lessened its stake in shares of MongoDB, Inc. (NASDAQ:MDB – Free Report) by 84.3% during the 4th quarter, according to its most recent filing with the Securities and Exchange Commission. The fund owned 3,480,245 shares of the company’s stock after selling 18,690,392 shares during the period. Fmr LLC owned approximately 4.67% of MongoDB worth $810,236,000 at the end of the most recent reporting period.
A number of other hedge funds have also bought and sold shares of MDB. Norges Bank purchased a new position in shares of MongoDB in the fourth quarter worth about $189,584,000. Raymond James Financial Inc. purchased a new position in MongoDB during the 4th quarter worth approximately $90,478,000. Amundi raised its position in MongoDB by 86.2% during the 4th quarter. Amundi now owns 693,740 shares of the company’s stock worth $172,519,000 after buying an additional 321,186 shares during the last quarter. Assenagon Asset Management S.A. lifted its stake in MongoDB by 11,057.0% during the 4th quarter. Assenagon Asset Management S.A. now owns 296,889 shares of the company’s stock valued at $69,119,000 after acquiring an additional 294,228 shares during the period. Finally, Franklin Resources Inc. boosted its holdings in shares of MongoDB by 9.7% in the 4th quarter. Franklin Resources Inc. now owns 2,054,888 shares of the company’s stock valued at $478,398,000 after acquiring an additional 181,962 shares during the last quarter. Institutional investors and hedge funds own 89.29% of the company’s stock.
Insider Buying and Selling at MongoDB
In other news, Director Dwight A. Merriman sold 885 shares of MongoDB stock in a transaction on Tuesday, February 18th. The stock was sold at an average price of $292.05, for a total value of $258,464.25. Following the completion of the sale, the director now directly owns 83,845 shares of the company’s stock, valued at approximately $24,486,932.25. The trade was a 1.04% decrease in their ownership of the stock. The sale was disclosed in a filing with the SEC, which can be accessed through this hyperlink. Also, CFO Srdjan Tanjga sold 525 shares of the business’s stock in a transaction dated Wednesday, April 2nd. The stock was sold at an average price of $173.26, for a total transaction of $90,961.50. Following the sale, the chief financial officer now owns 6,406 shares in the company, valued at $1,109,903.56. This trade represents a 7.57% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders have sold 58,060 shares of company stock worth $13,461,875 over the last quarter. 3.60% of the stock is currently owned by company insiders.
Analyst Upgrades and Downgrades
MDB has been the subject of several recent research reports. Barclays reduced their target price on shares of MongoDB from $330.00 to $280.00 and set an “overweight” rating for the company in a report on Thursday, March 6th. Robert W. Baird decreased their price target on MongoDB from $390.00 to $300.00 and set an “outperform” rating for the company in a report on Thursday, March 6th. DA Davidson boosted their target price on shares of MongoDB from $340.00 to $405.00 and gave the company a “buy” rating in a research note on Tuesday, December 10th. Loop Capital dropped their price target on shares of MongoDB from $400.00 to $350.00 and set a “buy” rating on the stock in a research report on Monday, March 3rd. Finally, Monness Crespi & Hardt upgraded shares of MongoDB from a “sell” rating to a “neutral” rating in a research report on Monday, March 3rd. Seven analysts have rated the stock with a hold rating, twenty-four have given a buy rating and one has given a strong buy rating to the company’s stock. According to MarketBeat, MongoDB has an average rating of “Moderate Buy” and an average target price of $312.84.
MongoDB Trading Down 4.5%
Shares of MDB stock opened at $147.38 on Tuesday. MongoDB, Inc. has a twelve month low of $140.78 and a twelve month high of $387.19. The company’s 50-day moving average price is $234.34 and its 200-day moving average price is $260.82. The stock has a market cap of $11.97 billion, a price-to-earnings ratio of -53.79 and a beta of 1.49.
MongoDB (NASDAQ:MDB – Get Free Report) last posted its earnings results on Wednesday, March 5th. The company reported $0.19 earnings per share for the quarter, missing the consensus estimate of $0.64 by ($0.45). The firm had revenue of $548.40 million during the quarter, compared to analyst estimates of $519.65 million. MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. During the same quarter in the previous year, the firm earned $0.86 earnings per share. As a group, equities research analysts predict that MongoDB, Inc. will post -1.78 EPS for the current year.


MMS • RSS
Prudential Financial Inc. reduced its position in MongoDB, Inc. (NASDAQ:MDB – Free Report) by 4.7% in the 4th quarter, according to its most recent 13F filing with the Securities and Exchange Commission (SEC). The firm owned 2,152 shares of the company’s stock after selling 105 shares during the period. Prudential Financial Inc.’s holdings in MongoDB were worth $501,000 as of its most recent filing.
Several other hedge funds and other institutional investors also recently added to or reduced their stakes in the company. Raymond James Financial Inc. acquired a new stake in shares of MongoDB during the 4th quarter valued at about $90,478,000. Amundi grew its holdings in MongoDB by 86.2% in the fourth quarter. Amundi now owns 693,740 shares of the company’s stock valued at $172,519,000 after purchasing an additional 321,186 shares during the period. Assenagon Asset Management S.A. increased its position in shares of MongoDB by 11,057.0% in the fourth quarter. Assenagon Asset Management S.A. now owns 296,889 shares of the company’s stock valued at $69,119,000 after buying an additional 294,228 shares in the last quarter. LBP AM SA raised its stake in shares of MongoDB by 81.9% during the 4th quarter. LBP AM SA now owns 246,091 shares of the company’s stock worth $57,292,000 after buying an additional 110,768 shares during the period. Finally, Nicholas Company Inc. grew its stake in MongoDB by 94.5% in the 4th quarter. Nicholas Company Inc. now owns 202,509 shares of the company’s stock valued at $47,146,000 after acquiring an additional 98,394 shares during the period. Institutional investors and hedge funds own 89.29% of the company’s stock.
Insider Buying and Selling at MongoDB
In other news, CAO Thomas Bull sold 301 shares of MongoDB stock in a transaction dated Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total value of $52,148.25. Following the completion of the transaction, the chief accounting officer now owns 14,598 shares in the company, valued at approximately $2,529,103.50. This trade represents a 2.02% decrease in their position. The sale was disclosed in a filing with the SEC, which can be accessed through the SEC website. Also, Director Dwight A. Merriman sold 1,045 shares of the company’s stock in a transaction that occurred on Monday, January 13th. The stock was sold at an average price of $242.67, for a total transaction of $253,590.15. Following the sale, the director now owns 85,652 shares in the company, valued at $20,785,170.84. The trade was a 1.21% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders sold 58,060 shares of company stock valued at $13,461,875 in the last 90 days. Company insiders own 3.60% of the company’s stock.
Wall Street Analysts Forecast Growth
MDB has been the subject of a number of research analyst reports. Robert W. Baird reduced their price objective on shares of MongoDB from $390.00 to $300.00 and set an “outperform” rating for the company in a research report on Thursday, March 6th. Monness Crespi & Hardt raised shares of MongoDB from a “sell” rating to a “neutral” rating in a research note on Monday, March 3rd. KeyCorp lowered MongoDB from a “strong-buy” rating to a “hold” rating in a research note on Wednesday, March 5th. Stifel Nicolaus decreased their target price on MongoDB from $425.00 to $340.00 and set a “buy” rating on the stock in a research report on Thursday, March 6th. Finally, Daiwa Capital Markets began coverage on MongoDB in a research report on Tuesday, April 1st. They set an “outperform” rating and a $202.00 price target for the company. Seven analysts have rated the stock with a hold rating, twenty-four have assigned a buy rating and one has issued a strong buy rating to the stock. Based on data from MarketBeat, MongoDB presently has an average rating of “Moderate Buy” and a consensus target price of $312.84.
Get Our Latest Research Report on MDB
MongoDB Stock Up 17.5%
MDB opened at $171.34 on Thursday. The firm has a market cap of $13.91 billion, a P/E ratio of -62.53 and a beta of 1.49. MongoDB, Inc. has a 52 week low of $140.78 and a 52 week high of $387.19. The business has a fifty day simple moving average of $224.89 and a 200-day simple moving average of $257.73.
MongoDB (NASDAQ:MDB – Get Free Report) last issued its quarterly earnings results on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The business had revenue of $548.40 million for the quarter, compared to analysts’ expectations of $519.65 million. During the same quarter last year, the company earned $0.86 earnings per share. Sell-side analysts expect that MongoDB, Inc. will post -1.78 earnings per share for the current year.

MMS • RSS
MongoDB, Inc. (NASDAQ:MDB – Get Free Report) reached a new 52-week low during mid-day trading on Monday after an insider sold shares in the company. The stock traded as low as $140.78 and last traded at $141.13, with a volume of 442,325 shares. The stock had previously closed at $154.39.
Specifically, CFO Srdjan Tanjga sold 525 shares of the business’s stock in a transaction that occurred on Wednesday, April 2nd. The stock was sold at an average price of $173.26, for a total transaction of $90,961.50. Following the completion of the sale, the chief financial officer now owns 6,406 shares in the company, valued at $1,109,903.56. The trade was a 7.57% decrease in their ownership of the stock. The transaction was disclosed in a legal filing with the SEC, which is available through this hyperlink. Also, insider Cedric Pech sold 1,690 shares of the stock in a transaction on Wednesday, April 2nd. The stock was sold at an average price of $173.26, for a total value of $292,809.40. Following the completion of the transaction, the insider now owns 57,634 shares in the company, valued at approximately $9,985,666.84. This represents a 2.85% decrease in their ownership of the stock. The disclosure for this sale can be found here. In other MongoDB news, CAO Thomas Bull sold 301 shares of the company’s stock in a transaction on Wednesday, April 2nd. The shares were sold at an average price of $173.25, for a total transaction of $52,148.25. Following the transaction, the chief accounting officer now owns 14,598 shares in the company, valued at approximately $2,529,103.50. This represents a 2.02% decrease in their ownership of the stock. The transaction was disclosed in a filing with the SEC, which is accessible through this hyperlink.
Analyst Upgrades and Downgrades
Several equities research analysts have recently commented on MDB shares. Monness Crespi & Hardt raised shares of MongoDB from a “sell” rating to a “neutral” rating in a research report on Monday, March 3rd. Mizuho increased their price objective on MongoDB from $275.00 to $320.00 and gave the stock a “neutral” rating in a research note on Tuesday, December 10th. The Goldman Sachs Group dropped their target price on MongoDB from $390.00 to $335.00 and set a “buy” rating on the stock in a report on Thursday, March 6th. Loop Capital decreased their price target on MongoDB from $400.00 to $350.00 and set a “buy” rating for the company in a report on Monday, March 3rd. Finally, Stifel Nicolaus dropped their price objective on MongoDB from $425.00 to $340.00 and set a “buy” rating on the stock in a research note on Thursday, March 6th. Seven analysts have rated the stock with a hold rating, twenty-four have issued a buy rating and one has given a strong buy rating to the company. According to MarketBeat, the stock currently has a consensus rating of “Moderate Buy” and an average price target of $312.84.
Get Our Latest Report on MongoDB
MongoDB Trading Up 17.5%
The company has a market capitalization of $13.91 billion, a P/E ratio of -62.53 and a beta of 1.49. The company’s fifty day moving average is $224.89 and its 200 day moving average is $257.73.
MongoDB (NASDAQ:MDB – Get Free Report) last posted its quarterly earnings results on Wednesday, March 5th. The company reported $0.19 EPS for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The company had revenue of $548.40 million during the quarter, compared to the consensus estimate of $519.65 million. During the same period in the previous year, the business earned $0.86 earnings per share. Equities research analysts predict that MongoDB, Inc. will post -1.78 EPS for the current year.
Institutional Inflows and Outflows
A number of hedge funds have recently made changes to their positions in MDB. Strategic Investment Solutions Inc. IL bought a new stake in MongoDB during the fourth quarter valued at $29,000. Hilltop National Bank boosted its position in shares of MongoDB by 47.2% during the 4th quarter. Hilltop National Bank now owns 131 shares of the company’s stock valued at $30,000 after acquiring an additional 42 shares during the last quarter. Continuum Advisory LLC boosted its position in shares of MongoDB by 621.1% during the 3rd quarter. Continuum Advisory LLC now owns 137 shares of the company’s stock valued at $40,000 after acquiring an additional 118 shares during the last quarter. NCP Inc. purchased a new position in shares of MongoDB during the fourth quarter worth about $35,000. Finally, Wilmington Savings Fund Society FSB bought a new position in shares of MongoDB in the third quarter worth approximately $44,000. 89.29% of the stock is currently owned by hedge funds and other institutional investors.
MongoDB Company Profile
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
See Also
Before you consider MongoDB, you’ll want to hear this.
MarketBeat keeps track of Wall Street’s top-rated and best performing research analysts and the stocks they recommend to their clients on a daily basis. MarketBeat has identified the five stocks that top analysts are quietly whispering to their clients to buy now before the broader market catches on… and MongoDB wasn’t on the list.
While MongoDB currently has a Moderate Buy rating among analysts, top-rated analysts believe these five stocks are better buys.


MMS • RSS

- MCP was introduced to open source in November 2024
- The protocol helps AI agents access the right data and speak to each other
- Adoption is starting to ramp up among major AI players like OpenAI, Anthropic and Google
GOOGLE CLOUD NEXT, LAS VEGAS – You may have heard it talked about at Google Cloud Next. Perhaps you saw it in recent AI-related news reports. But in a sea of acronyms, it’s just another one you glossed over without figuring out what the heck MCP (model context protocol) really is. That was a mistake. MCP matters a LOT for the future of AI.
“MCP in 2025 is kind of like HTTP in the early 1990s — it has the potential to change how people interact with businesses and services, and create new types of businesses altogether,” Cloudflare VP of Product Rita Kozlov told Fierce.
Introduced to open source by AI trailblazer Anthropic in November 2024, MCP is a standard that allows enterprises and developers to sidestep issues that previously prevented them from accessing data scattered across different repositories. Basically, it removes the headache of having to design and deploy multiple integrations by offering a single way in which to do so across data sources.
“Think of MCP like a USB-C port for AI applications,” the MCP website explains. “Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.”
Nifty, right?
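The "USB-C port" analogy can be made concrete with a short sketch. The following is a deliberately simplified, hypothetical illustration of the idea behind MCP — one uniform, JSON-RPC-style calling convention in front of many back-ends — and not the official protocol or SDK; the tool names and request shapes are invented for illustration:

```python
# Simplified sketch of the MCP idea: one uniform interface in front of many
# data sources. Tool names and payload shapes here are hypothetical; the real
# protocol adds sessions, capability negotiation, and transports.
import json

# Each tool wraps one back-end behind a uniform description + callable.
TOOLS = {
    "query_orders": {
        "description": "Look up orders in the orders database",
        "handler": lambda args: {"orders": [], "customer": args["customer"]},
    },
    "search_docs": {
        "description": "Full-text search over a document store",
        "handler": lambda args: {"hits": [], "query": args["query"]},
    },
}

def handle(request: str) -> str:
    """Dispatch a JSON-RPC-shaped request to the matching tool."""
    req = json.loads(request)
    if req["method"] == "tools/list":
        result = [{"name": name, "description": tool["description"]}
                  for name, tool in TOOLS.items()]
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = tool["handler"](req["params"]["arguments"])
    else:
        return json.dumps({"id": req["id"], "error": "unknown method"})
    return json.dumps({"id": req["id"], "result": result})

# An AI agent only needs this one calling convention for every data source.
listing = handle(json.dumps({"id": 1, "method": "tools/list"}))
```

The point of the sketch is the removal of per-source integrations: adding a new data source means registering one more tool, not writing one more bespoke connector for each AI application.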
MCP as an AI enabler
But more than just being cool, it turns out MCP will actually be a key tool in enabling the agentic AI future. Why? As Kozlov put it, MCP will effectively enable “agents to operate more autonomously and complete tasks on behalf of users.”
MCP has the potential to change how people interact with businesses and services, and create new types of businesses altogether.
Rita Kozlov, VP of Product, Cloudflare
Agentic AI is all about training and deploying specialized AI that can work through more complex problems. To do that, the AI agents need to be able to access “the right data at the right time” across a variety of back-ends, Amin Vahdat, Google Cloud’s VP and GM for ML, Systems and Cloud, said in response to questions from Fierce.
Back-ends, of course, means databases and data storage systems like AlloyDB, Cloud SQL and Google Cloud Spanner. Beyond that, MCP can also expose data from REST APIs, or “really any service that can expose a programmatic interface,” Ben Flast, Director of Product Management at MongoDB and the company’s resident AI guru, told Fierce.
Flast said the company sees two primary ways in which MCP will play a role in AI’s advancement. First is agent development, where MCP will be used to help access the necessary data to boost code generation and automation. Second, he said MCP can also aid agents and LLMs as they function, providing necessary context for the AI to interact with various systems.
The trick now, Flast added, is figuring out what exactly agents are going to need from application databases – i.e. what kinds of storage or memory functionality they’ll need to meet performance needs.
Connecting AI to AI with MCP
But AI agents won’t just need to be fed a constant diet of data. They’ll also need to socialize.
Flast said MCP can be used to allow agents to talk to one another. And indeed, Kozlov said “we’re actually already starting to see developers build Agents that ‘speak’ MCP to other Agents.”
But Google Cloud just came up with its own standard to make that happen: the Agent2Agent protocol.
“MCP and A2A are complementary in that MCP allows you to access data in an open standard way, where A2A allows for interoperability between different agents,” Vahdat explained. “So, think of MCP as model-to-data and A2A as agent-to-agent.” Put the two together and you have a very “easy and productive” way to build more powerful agents, he added.
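Vahdat's model-to-data versus agent-to-agent split can be sketched in a few lines. Everything below is a hypothetical stand-in written for illustration — the function names, tool names, and message shapes are invented, not the real MCP or A2A APIs:

```python
# Hypothetical sketch of the split Vahdat describes: an orchestrating agent
# fetches data through an MCP-style tool call (model-to-data), then delegates
# follow-up work to a peer agent through an A2A-style message (agent-to-agent).
# All names and payloads are illustrative, not real protocol APIs.

def mcp_call(tool: str, arguments: dict) -> dict:
    """Stand-in for an MCP tool invocation (model-to-data)."""
    if tool == "get_account_balance":
        return {"account": arguments["account"], "balance": 1250.0}
    raise ValueError(f"unknown tool: {tool}")

def a2a_send(agent: str, task: dict) -> dict:
    """Stand-in for an A2A message to a peer agent (agent-to-agent)."""
    if agent == "reporting_agent":
        return {"status": "done",
                "summary": f"Balance for {task['account']}: {task['balance']}"}
    raise ValueError(f"unknown agent: {agent}")

# Compose the two: pull data over MCP, then hand it to another agent over A2A.
data = mcp_call("get_account_balance", {"account": "acme-001"})
report = a2a_send("reporting_agent", data)
```

The design point is the separation of concerns: the data-access layer and the agent-coordination layer each get their own open standard, so either side can be swapped out independently.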
MCP adoption curve
While the protocol is still very new, Kozlov and Flast said MCP has – like everything else AI-related – been rapidly gaining steam.
“Even Anthropic’s largest competitor, OpenAI, has decided to add support for it,” Flast said. “Thousands of MCP Servers have already been built and the protocol was only announced in November 2024.”
Just this week, in fact, Cloudflare joined the MCP server game, adding a remote MCP server capability to its Developer Platform.
“We’re doing this to give developers and organizations a head start building for where MCP is headed because we anticipate that this will be a major new mode of interaction, just like how mobile was,” Kozlov concluded.
Keep your eyes peeled. It sounds like much more MCP news is on the horizon.

MMS • Ben Linders

As their organization grew, Thiago Ghisi’s work as director of engineering shifted from being hands-on in emergencies to designing frameworks and delegating decisions. He suggested treating changes as experiments, documenting reorganizations, and using a wave-based communication approach to gather feedback, ensuring people feel heard and invested. This iterative process helps create sustainable growth and fosters buy-in from the team.
Thiago Ghisi presented lessons learned from growing an engineering organization at QCon London.
Ghisi explained how the growth of his organization impacted his work as director of engineering:
When we were around 30 engineers, I could still be in all the crucial standups, help new managers fill gaps, and solve emergencies directly in Slack. But once we passed 50, that just didn’t scale. My role switched from “heroic firefighting” to shaping frameworks and delegating crucial decisions to develop the leadership team.
Ghisi mentioned that he had to stop being the go-to person for everything and start being the designer of their broader system, so teams could operate autonomously without waiting for him to approve every move. That shift was challenging but ultimately unlocked more sustainable growth, he added.
Approaching 100 engineers, success is all about designing an environment where others can operate effectively without his constant involvement, Ghisi stated. It is all about building organizational resilience.
Ghisi mentioned that organizations evolve like living organisms. Even if nothing’s “on fire,” a small structural adjustment can be the difference between merely functioning (treading water) and truly flourishing (innovating), he said.
A big part of getting changes to stick is treating them as experiments first in a subtle way, not final mandates, as Ghisi explained:
For instance, I’ll often spin up a “temporary” or “interim” task force before making it official, exactly like when a leader appoints someone as interim manager to see how it goes.
In parallel, once the most senior leaders in our organization agree on a rough plan, we bring in waves of staff engineers and engineering managers to stress-test it, Ghisi said. They surface hidden corner cases or improvements that the core leadership group might have missed, and they get to feel like true co-creators of the new setup rather than mere recipients of a top-down organization chart.
This wave-based approach helps everyone feel heard, which makes them more invested, Ghisi said. He suggested letting people know that reorganizations aren’t set in stone:
If something sparks more trouble than it solves, we iterate again. Linking every change back to our short- and long-term priorities helps them see the “why,” not just the “what.”
When leaders demonstrate they’re actively listening and adjusting, people are far more willing to adopt the new structure or process and give feedback, Ghisi concluded.
InfoQ interviewed Thiago Ghisi about what he learned from scaling up.
InfoQ: What is your approach for reorganizing and scaling up?
Thiago Ghisi: I always start with a simple one-pager that spells out motivations and goals: maybe we’re addressing overlapping ownership, or maybe a historically underfunded team is now mission-critical, or maybe staffing a new team for a new scope.
From there, I use an iterative approach:
- Create a Draft (in writing): Outline reasons, high-level roadmap, and potential outcomes.
- Whiteboard new Organization Structure: Share the draft with a small leadership circle (ideally your senior leadership team) for initial feedback.
- People Managers’ Feedback: They’re closest to day-to-day pain points—factor in their corner cases.
- Staff-Plus Review: Let senior ICs stress-test the plan. They’ll spot hidden risks. Iterate and incorporate their suggestions.
- Leadership Sync: Bring senior leadership team + managers + staff engineers together for one final pass, refining and locking the structure.
- Comms Plan: Announce changes in waves—people directly impacted first, next indirectly impacted, then the broader org, finally a town hall for Q&A and reiterate the same message that was shared in writing.
- Roll Out & Monitor: If the new structure truly reduces friction or speeds up a key OKR, we keep it. If issues arise, we iterate fast instead of waiting for a “next-year meltdown.”
By treating reorganizations as iterative design—rather than a once-a-year monolith—we keep them from becoming dreaded events. It’s less “big bang” and more continuous improvement, validated by how smoothly teams deliver or how much friction we eliminate along the way.
InfoQ: What have you learned?
Ghisi: Some of the things that I have learned are:
- Managerial cost is real: You can’t just form a new squad on paper; you need a dedicated manager or lead who can truly own it.
- Structured communication plan: Rolling changes out in at least two or three waves is critical to avoid chaos.
- Your own leadership must evolve: Doing everything yourself at 30 engineers might work, but by 60 or 100, it will collapse. You need to empower a leadership bench, focus on system design, and let go of old “hero” behaviors.
In short, scaling to 100+ has less to do with adding headcount and more to do with systematically building leadership, designing topologies, and iterating on my own role. Every doubling of team size demands a doubling of leadership maturity.

MMS • RSS
Polymer Capital Management HK LTD bought a new stake in MongoDB, Inc. (NASDAQ:MDB – Free Report) in the fourth quarter, according to the company in its most recent Form 13F filing with the Securities & Exchange Commission. The firm bought 3,937 shares of the company’s stock, valued at approximately $917,000.
Several other institutional investors also recently made changes to their positions in the stock. Aster Capital Management DIFC Ltd acquired a new position in MongoDB during the fourth quarter worth $97,000. Ilmarinen Mutual Pension Insurance Co boosted its position in shares of MongoDB by 75.0% in the 4th quarter. Ilmarinen Mutual Pension Insurance Co now owns 35,000 shares of the company’s stock worth $8,148,000 after buying an additional 15,000 shares in the last quarter. Russell Investments Group Ltd. grew its stake in MongoDB by 7.5% in the 4th quarter. Russell Investments Group Ltd. now owns 52,804 shares of the company’s stock valued at $12,303,000 after acquiring an additional 3,688 shares during the last quarter. Wedbush Securities Inc. increased its holdings in MongoDB by 4.4% during the 4th quarter. Wedbush Securities Inc. now owns 2,945 shares of the company’s stock valued at $686,000 after acquiring an additional 125 shares in the last quarter. Finally, Aviva PLC raised its position in MongoDB by 68.1% during the fourth quarter. Aviva PLC now owns 44,405 shares of the company’s stock worth $10,338,000 after acquiring an additional 17,992 shares during the last quarter. 89.29% of the stock is currently owned by hedge funds and other institutional investors.
Analysts Set New Price Targets
A number of brokerages have recently weighed in on MDB. Needham & Company LLC dropped their price objective on shares of MongoDB from $415.00 to $270.00 and set a “buy” rating for the company in a research note on Thursday, March 6th. Morgan Stanley lowered their target price on shares of MongoDB from $350.00 to $315.00 and set an “overweight” rating for the company in a research note on Thursday, March 6th. Oppenheimer reduced their price target on MongoDB from $400.00 to $330.00 and set an “outperform” rating on the stock in a research note on Thursday, March 6th. The Goldman Sachs Group lowered their price objective on MongoDB from $390.00 to $335.00 and set a “buy” rating for the company in a research report on Thursday, March 6th. Finally, Royal Bank of Canada reduced their target price on MongoDB from $400.00 to $320.00 and set an “outperform” rating on the stock in a research report on Thursday, March 6th. Seven investment analysts have rated the stock with a hold rating, twenty-four have issued a buy rating and one has issued a strong buy rating to the company. Based on data from MarketBeat, MongoDB has a consensus rating of “Moderate Buy” and an average price target of $312.84.
View Our Latest Stock Report on MDB
Insider Transactions at MongoDB
In other news, Director Dwight A. Merriman sold 1,045 shares of the stock in a transaction that occurred on Monday, January 13th. The shares were sold at an average price of $242.67, for a total value of $253,590.15. Following the completion of the sale, the director now directly owns 85,652 shares of the company’s stock, valued at $20,785,170.84. This represents a 1.21% decrease in their ownership of the stock. The transaction was disclosed in a document filed with the Securities & Exchange Commission. Also, CAO Thomas Bull sold 301 shares of the stock in a transaction that occurred on Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total transaction of $52,148.25. Following the completion of the sale, the chief accounting officer now owns 14,598 shares of the company’s stock, valued at approximately $2,529,103.50. The trade was a 2.02% decrease in their position. Insiders have sold 58,060 shares of company stock valued at $13,461,875 over the last three months. Company insiders own 3.60% of the company’s stock.
MongoDB Price Performance
Shares of NASDAQ:MDB traded up $25.49 during trading on Wednesday, reaching $171.34. 3,611,865 shares of the company’s stock traded hands, compared to its average volume of 1,790,746. The stock has a market capitalization of $13.91 billion, a P/E ratio of -62.53 and a beta of 1.49. The company has a 50 day moving average of $228.92 and a 200-day moving average of $259.09. MongoDB, Inc. has a 12 month low of $140.78 and a 12 month high of $387.19.
MongoDB (NASDAQ:MDB – Get Free Report) last issued its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The business had revenue of $548.40 million during the quarter, compared to the consensus estimate of $519.65 million. During the same period last year, the company earned $0.86 earnings per share. Equities research analysts predict that MongoDB, Inc. will post -1.78 earnings per share for the current fiscal year.
MongoDB Company Profile
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
