MongoDB, Inc. (MDB) Presents at 26th Annual Needham Growth Virtual Conference (Transcript)
MongoDB, Inc. (NASDAQ:MDB) 26th Annual Needham Growth Virtual Conference January 16, 2024 10:15 AM ET
Company Participants
Michael Gordon – Chief Operating Officer and Chief Financial Officer
Serge Tanjga – Senior Vice President of Finance
Conference Call Participants
Mike Cikos – Needham & Company
Mike Cikos
Great. Thank you to everyone for joining us today. My name is Mike Cikos. I am the Lead Analyst here at Needham covering infrastructure software. With me, I'm pleased to say, we have the management team from MongoDB: COO and CFO, Michael Gordon, as well as the SVP of Finance, Serge Tanjga. Thank you to both of you for joining us today. We really do appreciate it as part of the Needham Growth Conference.
Michael Gordon
Thanks for having us, Mike.
Serge Tanjga
Thank you, Mike.
Mike Cikos
And I know we're going to be tight on time here, so we're just going to tackle it right up front. One of the things I received a decent amount of inbounds on, obviously, has been the December security incident. Just to kick it off, can you put any parameters out there as far as how extensive the unauthorized access was with respect to the customer base, as well as just set the table? I'm sure not everyone was following the blogs like I was, but anything else you could put around it would be great.
Serge Tanjga
Sure. Yes. Thanks again for having us. Happy to dive right into it, so let's start with that. As we shared, we were the subject of a phishing attack that gained access to certain of our corporate applications. The unauthorized person got access primarily to customer contact info and other account-related information. We found no evidence of unauthorized access to Atlas clusters or the Atlas authentication system. Those are two different systems, so that's important to note.
At this point, our investigation is complete and closed. So that's the quick headline. I know these are somewhat common, and unfortunately increasingly common, events across the landscape. We tried to take a very transparent approach with customers and everything else. Within a matter of days of determining that there had been that unauthorized access, we alerted customers and kept the public abreast with various updates on our alerts web page and everything else, and we've gotten a number of kudos from customers for doing that. So that's the quick summary.
Question-and-Answer Session
Q – Mike Cikos
And maybe just from a logistics standpoint, but is it fair to assume then that there might be incremental costs related to forensic partners on the security side of the house, or cyber insurance? Do you non-GAAP that out? Like, how do we think about that?
Serge Tanjga
Yes. Nothing material. I mean, obviously, when you're responding to one of these, you have some incremental costs that you hadn't intended. We did retain third-party help, etcetera, etcetera. But as of right now, nothing that we would intend to non-GAAP out or anything like that.
I think from a materiality standpoint, obviously, you've got to run through all the assessments and everything, but while we disclosed it publicly, that was more because we were disclosing to our customers rather than because it was material financially.
Mike Cikos
Understood. And thank you for the qualifier on that. Want to shift to consumption for a second, but just so the audience knows, and I know I’m going to be bobbing around a little bit, but there should be a chat box on your interface. So, if you want to send in a question, feel free to do so. I’m going to try and get to as much as I can while we have Michael and Serge here as well. But please send those questions in.
On the consumption, so again, just taking a step back, can you help us think about the consumption trends that we’ve seen through calendar ’23? And really, I guess, what occurred in Q3 versus your expectations and how did that influence your assumptions when thinking about the Q4 guide?
Serge Tanjga
Yeah, maybe I'll take a first stab at that, Mike. I'd actually start a little bit earlier than fiscal year '24 or calendar '23. Starting late in the first quarter of fiscal year '23 for us, so calendar '22 effectively, we started to see a macro-induced slowdown in consumption. And we were among the first to call it out and incorporate it in our guide, and so forth. We sort of did it before it really started, but we thought we saw enough evidence at the end of Q1 to make it a part of our Q2 guide, and then in Q2 of last year, so at this point five or six quarters ago, we did see that slowdown materialize.
And since then, we've had periods of seasonal strength and seasonal weakness, and frankly, some just natural variability in the numbers. And we've tried to be transparent along the way and give investors our latest thinking. But now, with the benefit of hindsight, if we look at that period, effectively starting in Q2 of fiscal year '23 through Q3 of fiscal year '24, and also at how we thought about the guide for Q4, we've seen a relatively stable environment; it's just that growth was at a lower level.
So, there will be periods of strength, like the back half of Q3, which tends to be seasonally strong. There will be periods of seasonal weakness, like around the holiday season. We try to take that into account when we provide guidance, and certainly when we provide commentary in terms of how we've done vis-a-vis our expectations. But generally speaking, there's a range of consumption outcomes. You might recall we showed that chart at our Investor Day that shows consumption at a certain level before Q2 and then at a lower level on average since then. Yes, there's variability around that average, but we've been in that slower, stable world since then. So that's the general construct.
For Q3, and now we've been in this world of lower macro for more than a year, what we're trying to do is give you color versus our expectations, but also versus last year, and you should expect that we'll continue doing that going forward. In Q3, consumption trends were in line with our expectations. We did see a seasonal recovery in Q3, meaning Q3 was seasonally stronger in consumption than Q2.
However, the seasonal recovery was not as strong as it was in Q3 of last year. Now, you might ask yourself, if it was not as strong as last year, why did you expect that? The reason we expected that is because, generally speaking, we are seeing less variability in consumption in fiscal year '24 than we saw in the back three quarters of fiscal year '23. We have some reasons and hypotheses for why that might be the case, but that's what we are observing. As a result, we thought that we would have less of a seasonality benefit in Q3, and that's what transpired.
When it comes to Q4, there are two things to keep in mind. One is that we told you in December that we expected to see a seasonal holiday slowdown that plays out through December and January. That's usage-based, like everything else in our model: underlying application usage is what drives our consumption growth. We just see application usage take a breather during the holiday season, and that drives our consumption growth as well. And then another thing that we called out when it comes to the Q4 guide, which isn't a consumption phenomenon but is important as you think about revenue, relates to unused commitments.
In Q4 of last year, we talked about several million dollars more than normal of unused commitments. That happened because, frankly, that was the last big batch of customers that signed committed deals before the macro slowdown, and as a result, on average, there were more than normal unused commitments. That was important to call out, particularly when it comes to the sequential guide and how people thought about the beginning of this year. But the thing to keep in mind is that we have unused commitments every quarter. It's a normal course of business. It's a minor portion of our business, but it's always there.
In Q4 of last year, we had more of it, and that's why we called it out in the context of Q4 performance last year. Now, as you think about Atlas' year-over-year growth rate, or sequential growth rate for that matter, in Q4 revenue, you've got to keep in mind that last Q4 benefited from several million dollars of incremental unused commitments, and we do not expect that to recur this year.
Mike Cikos
Got it. You are already answering my next question. Just to tease that out again: as far as quantification, we haven't gotten anything more specific, but $7 million? And then it is something that impacts you each quarter, but the reason for the call-out specifically in Q4 last year was because it was larger than what you guys typically experience.
Serge Tanjga
Yes. And just to be clear, we didn’t say seven. We said several.
Mike Cikos
Several. Okay.
Serge Tanjga
And it turns out that the bid-ask spread on several is larger than we thought it was, but that's as specific as we got, so we are going to stick with that.
Michael Gordon
And I think you got this, Mike, but just for the full audience: the several million is the incremental over and above what we kind of normally would have expected or normally see, given the dynamics that Serge talked about, where you've got people who had commitments before macro slowed down, and therefore we wound up with kind of disproportionately more.
Mike Cikos
Got it. And maybe to tease out one more item on consumption: we actually hosted another consumption, rev rec model-based company last week who was actually talking about a potentially steeper-than-expected drop-off in relation to Christmas, because Christmas this year ends up on a weekday instead of a weekend. Is there anything that you guys have seen? Again, I'm trying to think, because we are all in this new consumption environment trying to figure out how these holidays flow through these consumption models.
Serge Tanjga
We haven't specified the size of the seasonality we are expecting to see in Q4. The thing that I would say, though, is our underlying source of variability might be different than other people's. And this is where you've heard Michael and me say in the past that consumption is not a business model; consumption is a revenue recognition sort of requirement. And so the driver for us is the underlying application usage.
So literally, just think of yourself as a person who interacts with apps as a consumer and then interacts with apps in your place of business. You just do less of that during the holidays. You also do less of that during the summer, and then when you're back in the fall, you do more of it, which is why we see [indiscernible] positive seasonality in the back half of Q3. And that's what we see around the holidays.
So the day of the week doesn't really matter, because people don't work on Christmas, or at least we wouldn't posit that it would matter for us. And so as long as people are taking vacations, and as long as they're using those vacations to detach from the applications in their life, we see a slowdown in usage and therefore a slowdown in consumption growth for us.
Michael Gordon
I do think it's an interesting point, though, Mike, to your comment. I know a lot of people are trying to familiarize themselves with the trends or understand them at a deeper level, and without standing up on our soapbox, to Serge's point, just because it's a common rev rec model doesn't mean that the business model is the same. And so, I think for any company, it's a good idea to ask that question and go further and say, what is driving that underlying usage and consumption, right? And so, for us, to Serge's point, I don't think it particularly matters where Christmas falls in terms of days of the week. And for others, it may, right?
If the consumption is driven by analysts deciding to run queries, and those analysts are on a holiday or not on a holiday or this, that and the other thing, it'll all affect things. But ours is at a more fundamental level: read and write transactions in the database. And the beauty of having as diversified a portfolio as we do is you can't just pinpoint and say, oh, it's this one thing, we're all about e-commerce and so we get this bump here or this lull there. We've got a portfolio of applications, not just given the size of our customer base, but given the breadth of use cases. But it does have some of these underlying seasonal trends that Serge was talking about.
Mike Cikos
And two points I'd like to highlight with respect to, I guess, the population of workloads, as well as Atlas versus EA. First, on workload population: while the growth of underlying workloads was impacted by this macro downturn over the last 12 to 18 months, I think management's been consistent in saying the number of net new workloads being brought to MongoDB has been relatively consistent, or persisted. And so, where I'm going with this: is it fair to think that there's this large volume of orphaned applications, if you will?
And so, if the macro were to improve, or the consumption environment were to improve, this cohort of applications should torque higher at the same time, like you get a springboard effect naturally. Is that a fair characterization, or is that maybe overstepping or mischaracterizing?
Michael Gordon
So, I don't think of us as having orphaned applications, or certainly not having, if I followed the logic train, more orphaned applications as a result of macro. Here's the way that I would describe it, and the way that we've talked about it: we have done a good job, despite the more challenging and uncertain macro environment, of continuing to win new workloads. The value prop resonates, and we continue to bring those new workloads onboard and have been satisfied with that progress.
The growth of existing applications has been slower, and that's this underlying read and write usage that we're talking about that macro has affected. And in the shorter term, when you think about a large installed base, the results will be much more dictated by the growth of existing applications than by the impact from new workloads that you win.
Over time, right, if you take a five-year time horizon, it's going to be much more affected by the new workloads that we're winning now over that time period than by the growth of existing ones, but in the short term, the growth of existing workloads outweighs that.
We haven't seen a dynamic, like I said, that would lead to some sort of orphaned application or whatever. And if you want to explore that topic, we can definitely talk about it more, but that's not something that we've seen. Certainly, new applications, new workloads, are macro affected too. While they ramp more rapidly than your average application, meaning they grow more quickly in those first couple of quarters than an existing application, they are growing at a slower rate in quarter X than a new application would have pre-macro. And so that's a dynamic.
And certainly, the last thing, and maybe this ties to your kind of torque-up question or part of the question or whatever: we've sometimes been asked about inflection points and reacceleration and things like that. I think what we've seen is that we've been in a pretty consistent and stable macro. And so, to point to an acceleration from here, I think you would have to believe in an improvement in the macro. There are other scenarios as well that can lead to that, but for this conversation, the one that's most relevant would be an improvement in the macro.
We haven't seen the macro get better. We haven't seen the macro get worse. So, we have no reason to call that, and when we think about our guidance, that hasn't been a call embedded in our guidance. Obviously, we'll continue to monitor things, and in March, when we give our full-year outlook, we'll update it with the latest that we have then. But that's how we think about it.
Mike Cikos
And again, on Atlas versus EA, which I wanted to address as well, and I just want to make sure I'm not mischaracterizing this. When I look at customers that are bringing workloads onto EA, I think management has been consistent in saying, hey, we're going to meet you wherever you are in your cloud journey. But my view is that if you're bringing a workload to EA, it's essentially just an on-ramp for workloads that are probably eventually going to end up on Atlas over the long term. Is that fair? Or do you say, hey, there are regulatory constraints, or maybe it's just a modernization effort, and that's not necessarily the case?
Michael Gordon
I think that's directionally right. There are shades of gray, and the customer specifics matter, right? But that narrative you're describing, of EA increasingly being viewed as an on-ramp to the cloud, I do think is correct. That doesn't mean that 100% of workloads will move to the cloud; you can talk to different customers and there are plenty that are still on-prem or mostly on-prem. We shared at our Investor Day the ARR from our top 100 customers, and we broke it down into who is mostly EA, meaning 80% plus of their ARR was EA, or mostly Atlas, meaning 80% plus of their ARR was Atlas, and 85% of the dollars are one or the other.
And so, while there is an increasingly hybrid world and all those kinds of things, the people who are running EA really are not in the cloud; they have concerns about it. To your point, it could be for regulatory reasons. It could be data privacy, governance, sovereignty. There's a whole long laundry list of reasons.
And in those places, can you find developers who are frustrated that they can’t modernize or do things? Absolutely. One of the ways that they can start to modernize is by picking MongoDB. Even if they are still running it in an on-prem fashion, they are doing a bit more future proofing and they’re getting closer to their ultimate end goal for whenever they can kind of adopt public cloud.
I think we have also seen that evolution go the other way, where maybe a few years ago, moving to the cloud and modernizing were viewed as a little more synonymous than they really are. I think people realize and understand that adopting the cloud is just one aspect of that, and that there are many other aspects to modernizing. And so, we see people pursuing it in all different flavors.
Mike Cikos
Thank you. And shifting over to product now, I think Vector Search is the first one I wanted to tackle. MongoDB announced that Vector Search was in preview in June. In our view, one of the things that strikes us about MongoDB is that you guys have done a great job, let's say, featuring or integrating newer capabilities as they become available, and that's kind of established you guys as this general-purpose database.
Atlas Vector Search became generally available in early December. Just to start off there, how has customer feedback been? And then secondly, have you guys thought about maybe the percentage of customers that Vector Search is applicable to? Anything you could do to frame that out?
Serge Tanjga
Maybe I will take the first stab at that. What we have said, really even before GA, which was early December, so a few weeks ago, is that we have been very pleased with the feedback that we are getting from customers. And incrementally, what I would say is, there has been sort of third-party validation of that feedback. We mentioned the Retool survey, which gave us the highest NPS score of all the Vector Search products, and that was before GA, but there are also other things we have heard indirectly from people.
Now, you might say that's an unusual state of affairs, right? The most-liked product isn't actually even generally available yet, compared to all the other alternatives that are out there. I think that fundamentally speaks to the value proposition that we are bringing to the table.
The value proposition is the simplicity of working with our Vector Search offering, because it's seamlessly integrated with the operational database. That speaks to the value of the product, but it also indirectly speaks to the pain of having to stitch multiple individual services together to create this offering, which is what you would need to do if you were going with a standalone Vector Search solution. I think the validation that we are getting is for the work that we have done, but also for really understanding what the pain point is. The pain point is operationalizing this and making it easier for the developer to actually deploy it and get some value from it.
I will just maybe sidebar for a second before I get back to Vector Search. What I particularly like about the Retool survey, and the fact that it demonstrates the value of our product-market fit and the specific value of our value proposition, is that it actually applies more broadly, related to your point, to other things that we have added to our portfolio: whether it is traditional search, whether it is some of the stuff we've done at the edge, whether it is analytics. Ultimately, customers don't want to have these multiple point solutions. They don't want to deal with the pain of integrating them in the background.
And ultimately, the developer data platform vision is all about making developers' jobs easier by solving more and more of the developer problems that relate to data, behind a single pane of glass and with an elegant developer experience. And it's great to see that that value proposition resonates in a space that is particularly in focus for customers and investors right now, and where we're so early on, i.e., this was done before we even had the GA. So that's it on the traction.
When it comes to how broadly applicable it will be, I guess I would say two things. One is it's very, very early days. Most of what we're seeing in AI really sits in the proof-of-concept category as opposed to deployment of actual production applications, and that should not be a surprise. And not only is it not a surprise because AI, even though it might not feel that way to the investor community, is still a relatively new priority to your enterprise IT executives.
Secondly, they are increasingly realizing that their data is not ready to really help them build AI-enabled applications. That's an opportunity for us in the long term, but it's also the first impediment if you just think about a typical enterprise and their data estate. So, it'll take some time. The other thing that we have said, and again this is exceptionally important but really long term, is that we think most apps will in one way or another be AI-enabled, whether new ones or rebuilt existing ones. But that's over a really, really long time horizon, though obviously an incremental benefit to us.
Michael Gordon
I think it will become, and this requires looking out into the future and prognosticating a little bit, but just think about where applications are likely to go. It won't be the core part of every application, to Serge's point, but I think this will be pretty common feature functionality. We've used the example that there was a point in time when indexes, from a database standpoint, were innovative, and then they just became used as part of everything, and this has an aspect that's like a reverse index. It won't be used in every single application. It won't be the core of every single application.
But I think it'll be pretty foundational over time to just how applications are used, much like text search or other things. And so, I think having it integrated in your operational, transactional database is an advantage for us.
Mike Cikos
And probably more of a tech-based question here, a two-parter though. So, when a customer brings a workload to MongoDB for Vector Search, when is the data being vectorized? Is it upon ingestion? And then the second part is, when is the embedding model being employed? Is that when the query is being called? Again, sorry, I'm just trying to figure out how this process flows.
Serge Tanjga
Yes, maybe I'll take a first stab at that. So, actually, both the data and the query need to be vectorized, and in both cases the embedding model is used. The way it works is, if the application is ingesting data, whether it's actually creating it or getting it from an S3 bucket or somewhere, at that moment it's vectorized: you use a third-party API to access an embedding model, you create the vectors, and then you store them in MongoDB.
So that's step number one. And step number two, now there's a query, there's an actual question, there's a RAG architecture use case being applied. You need to vectorize that question using the same embedding model, so that the nearest neighbor search can actually find the data in the multidimensional space that is most similar and most relevant. So, it happens at both points.
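To make the flow Serge describes concrete, here is a minimal sketch in Python using PyMongo and the $vectorSearch aggregation stage of Atlas Vector Search. The embed() helper, the connection string, and the index name vector_index are illustrative placeholders, not details from the conversation; any third-party embedding API could sit behind embed(), as long as the same model is used at ingestion and at query time.

```python
from pymongo import MongoClient

def embed(text: str) -> list[float]:
    # Placeholder: call your embedding provider's API here and return
    # the vector. The same model must serve both documents and queries.
    raise NotImplementedError

client = MongoClient("mongodb+srv://<cluster-uri>")
coll = client["demo"]["articles"]

# Step one: at ingestion, vectorize the data and store the vector
# alongside the document itself.
doc = {"text": "Atlas Vector Search became generally available in December."}
doc["embedding"] = embed(doc["text"])
coll.insert_one(doc)

# Step two: at query time, vectorize the question with the same
# embedding model, then run an approximate nearest neighbor search.
question = "When did Atlas Vector Search go GA?"
results = coll.aggregate([
    {"$vectorSearch": {
        "index": "vector_index",   # assumes an Atlas Vector Search index exists
        "path": "embedding",
        "queryVector": embed(question),
        "numCandidates": 100,
        "limit": 5,
    }}
])
for r in results:
    print(r["text"])
```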
Mike Cikos
Thank you for that. We do have some questions that are coming in from folks. One of them, just at a high level, but how should we think about how much more consumption intensive AI or vector search workloads are versus, I guess, the workloads we’ve seen more traditionally?
Serge Tanjga
So, we've heard that question before, and I would say two things. One is it's early, so it's difficult to draw any conclusions based on where we are in the development process. But secondly, what I would say is, I'm not sure that will be the major driver. And what I mean by that is, what we've seen over and over again with other applications is that it is the popularity of the application that really drives the intensity of usage, as opposed to the type of usage itself.
So, if you have two video games on our platform, of which we have many, and one is a worldwide phenomenon hit and the other one is just another video game, the order-of-magnitude difference between the usage of those two use cases is what drives the consumption, much more so than, like, taking all the video games on average versus all the other workloads on average and trying to figure out what's the incremental uplift from the usage intensity of a video game.
So really, the question is going to be, how popular are these use cases going to be? How many of the really popular ones are going to end up on our platform versus somewhere else? That's going to be the bigger driver than some sort of average uplift per use case.
Mike Cikos
Thank you for that. And if we just shift over to Atlas Search Nodes, which also went GA alongside Vector Search in December. Just for background for the folks, it allows customers to run search workloads, which can have more demanding memory requirements, independently of transactional workloads. And so, I wanted to get a sense here: is Search Nodes inherently tied to adoption of Vector Search? Do the two go hand in hand, or not necessarily?
Serge Tanjga
So primarily, the independent Search Nodes are driven by actual traditional text search, if you want to call it that, because of where we've come in the evolution of that product. If you go back to our search announcement, it was in calendar 2019. And obviously, we're disrupting a space where there are legacy players and it's a very well-established market, one that's existed for a long time. And like a traditional disruptor, we started with relatively small use cases because, again, our value proposition is the integration.
And then, as our product has improved, we're moving upmarket slowly in the search market. We've now gotten to the point where we see enough of the large search use cases, not Vector Search but traditional search use cases, where we heard the feedback from customers that independently tuning their search deployment, separate from the operational deployment, would be beneficial to them. And so that's why we announced separate Search Nodes, because it allows them to scale search separately from operational, which is sometimes helpful and certainly allows them to better optimize their use cases and their deployments.
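For readers who haven't used the integrated text search Serge mentions, here is a minimal sketch of a traditional Atlas Search query via the $search aggregation stage in Python; the index name "default", the sample_mflix.movies collection (Atlas's sample dataset), and the "plot" field are illustrative assumptions. The point of Search Nodes is that a query like this can be served by separately sized search resources instead of contending with the operational cluster.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<cluster-uri>")
movies = client["sample_mflix"]["movies"]

# $search runs against an Atlas Search (text) index, which dedicated
# Search Nodes can host on hardware scaled independently of the
# operational (read/write) nodes.
results = movies.aggregate([
    {"$search": {
        "index": "default",
        "text": {"query": "space adventure", "path": "plot"},
    }},
    {"$limit": 5},
    {"$project": {"title": 1, "score": {"$meta": "searchScore"}}},
])
for doc in results:
    print(doc["title"], doc["score"])
```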
Mike Cikos
Is that why, like, I know the company has discussed being able to run 60% faster query times with Search Nodes? Is it because of that independent scaling, or not necessarily?
Serge Tanjga
So, I honestly don't recall where the 60% number comes from, but either way, you're right. We do expect that queries will be more efficient and, generally, that the search deployment will be more efficient. It comes down to two things. One is you now have separate resources, so they're not in contention, right? It's not one single cluster or node that's trying to do both operational read and write operations as well as search. So it stands to reason that with more resources, there'll be more efficiency and better performance, right? That's number one.
And then number two is back to what I was saying, which is, if you have a separate search node, which is only used for that, now you can optimize how it's being used specifically for search, and that ought to do better than if you're pinging a shared resource. So, it is going to be better performance, and it's going to be more cost-effective for the customer.
Michael Gordon
Yes. I think of them as both beneficial, related but slightly different, right? The performance is the performance. And the cost, which some people include as part of performance, I break out as different in this case. Maybe the simplest way to think about it is: by having separate Search Nodes, to Serge's point, you can scale the search nodes up or down separately from your operational nodes, which, depending on whether you are less search intensive or more search intensive, allows you to better optimize price performance along that continuum in a way that is advantageous to you and matches the contour of your workload.
Mike Cikos
Okay. I'm sorry, the last product here. You guys have a pretty robust roadmap coming over the next year, but stream processing: has the company given a sense for when that's expected to become GA? And then secondly, just to flesh it out while we have you, can you discuss why tying stream processing to the database is the way to go in your view? Others would have you believe that stream processing should be its own independent engine.
Serge Tanjga
I will take that to start, and Michael, feel free to add, obviously. We haven't specified when we expect the GA to come out. If you look at our history, it tends on average to be about 12 months after a product has been in preview. Not to say that that's a forecast or anything like that, but that gives you sort of the mean outcome compared to history, and hopefully that's a little bit helpful. And then on this question of tying, what do you tie stream processing to? That's not necessarily how we think about it. We saw a problem space in stream processing where we thought we had a unique right to play.
And what I mean by that is, streaming data tends to come with rigid schemas, not terribly flexible. We hear time and time again how developers are struggling to work with it. It's a massive market. It's a nascent market in the sense that, like, there's no TAM that you can go and ask Gartner or IDC about. It's new; even the sort of established players are relatively small in the grand scheme of things. But we think part of the reason why it's nascent is because it's hard to work with streaming data with existing solutions.
But outside of that, we saw a lot of similarities with sort of the state of the database market, when we decided to start the company and try to disrupt that market, which is difficulty working with data, rigid and inflexible schemas, an opportunity to both provide better performance as well as make the developers more productive.
Like, those are the pieces. And on that level, we find great similarities between stream processing today and where the database market was 15 or 17 years ago when we started, and the problem that we tried to solve. We are very, very excited about our individual right to play in this market, given our history and the success that we have had with the document model. That's point number one.
Point number two goes back to what we were talking about even with Vector Search. If we are going to build this product, and we think we can build a better mousetrap individually in and of itself, we are going to, in addition, make it easier to work with our other products, have a unified developer experience, and make it so that developers don't have to tie together too many individual technologies.
We think that works for all our products. And because they're, what's the word, seamlessly integrated with our other products, it ought to help the adoption of all of them at the same time. That's ultimately the business rationale, if you will, behind the developer data platform vision.
Mike Cikos
And Serge, maybe it's worth just two seconds talking about the difference between stream processing and streaming, kind of the plumbing?
Serge Tanjga
Sorry. That wasn’t obvious. Like stream processing is actually…
Mike Cikos
That’s going to be obvious to everyone.
Serge Tanjga
So, streaming is a far more established market, which we sort of refer to as the infrastructure, the plumbing, of moving data around in real time. Stream processing is advanced operations on, and building with, that data inside the applications, and that is still a relatively new market. Streaming has obviously been around, with many companies participating in it, and the Kafka ecosystem is reasonably well defined. Stream processing is different and very, very new.
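To illustrate the distinction, here is a hedged sketch of what a stream processing pipeline can look like, written as Python dicts in MongoDB's aggregation style. The connection names ("kafka_prod", "atlas_cluster"), the topic, and the fields are invented for illustration, and since Atlas Stream Processing was still in preview at the time of this conversation, stage names and options may have changed; pipelines like this were registered through the shell rather than a Python driver. The streaming layer (the plumbing) is the Kafka topic delivering events; stream processing is the continuous computation over them.

```python
# Illustrative only: the shape of a stream processing pipeline
# (Atlas Stream Processing was in preview at the time of this talk).
# All connection, topic, and field names below are made up.
pipeline = [
    # Streaming side ("the plumbing"): continuously read events
    # from a Kafka topic via a pre-configured connection.
    {"$source": {"connectionName": "kafka_prod", "topic": "orders"}},

    # Stream processing side: continuous computation over the events,
    # here aggregating each 1-minute tumbling window by region.
    {"$tumblingWindow": {
        "interval": {"size": 1, "unit": "minute"},
        "pipeline": [
            {"$group": {
                "_id": "$region",
                "orders": {"$sum": 1},
                "revenue": {"$sum": "$total"},
            }},
        ],
    }},

    # Persist each window's results into an Atlas collection.
    {"$merge": {"into": {
        "connectionName": "atlas_cluster",
        "db": "analytics",
        "coll": "orders_per_minute",
    }}},
]
```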
Mike Cikos
Thank you for that. And just to be true to my earlier comments, I am going to start introducing some more of these client questions we've received. First: is there any rule of thumb, on a multi-quarter basis, for the degree of Atlas acceleration we should expect relative to average AWS or Azure acceleration? It sounds like people are looking at you guys versus the hyperscalers and thinking, is there a ratio, like there is for maybe some other consumption names out there?
Serge Tanjga
We get that question every once in a while. We understand that some of the broadly defined consumption peers find this a helpful rule of thumb for their business model. We don't think it's very helpful for us. Ultimately, at the end of the day, we are driving the sale of our product; we keep our fate in our own hands. We partner very closely with the cloud providers, but it's not that AWS sends us buckets and buckets of workloads to sign up on our platform. They're also a competitor, don't forget.
And so, we think that there's no fixed multiplier of, if AWS growth is this, then Mongo is going to be in this type of range. Of course, generally speaking, if cloud adoption as a general rule goes faster or slower, there will be some correlation between the business models, but it's nowhere near as precise as putting a ratio, or even a relatively narrow range, on it. That's because, ultimately, we drive our own sales.
Mike Cikos
And then the other question that we received: as you guys shift away from a focus on big upfront commitments, and I know we haven't even had time to touch on the go-to-market here, you've obviously seen headwinds on deferred revenue because you're prioritizing consumption now. How long is that headwind on deferred expected to last? Is this a multiyear item, or is it more just a couple of quarters? How close are we to getting through to the other side of that?
Serge Tanjga
Let me just provide a little bit of background and then address the deferred revenue question. As Mike and most investors know, we've been on a multiyear journey to reduce the importance of upfront commitments in our Atlas business. This began in calendar 2020, actually before COVID, because frankly we saw in the data that the commitment introduces unnecessary friction into our sales process, slows us down, and probably provides a suboptimal customer experience, and therefore was not additive for the purposes of maximizing long-term value.
There were a number of steps on that journey since 2020, but in fiscal year '24, the year that we're about to finish here, we did arguably the biggest one, which is that we no longer pay on a one-year commitment. Up until this point, we had made it more attractive to get paid on consumption but still paid on commitment. Whereas this year, we said: on a one-year commitment, you don't get paid anymore. You sign a $5 million, one-year deal, doesn't matter, you will still only get paid on consumption related to that deal.
And so why did we do that? Because, again, with large commitments, we saw that there was still a source of friction misaligning us with our largest customers. And now that we've built multiple years of success in showing our reps that there's a reason why we're doing this, and that they will still do well and the company will do well, we took this big step. So that started at the beginning of the current fiscal year, and it went the same way we see other changes: the fiscal year begins, you roll out the change in your comp plans, you enable the sales team.
But it's usually not until a few months later that you actually start to see changes in behavior, because they need to internalize it, they need to see their commission check, and then you start to see the impact. And the impact is, frankly, we see far fewer Atlas commitments. That's not surprising, because our reps are not pushing them. To the extent that it's still happening, it's because customers are pushing for it, and they're pushing for it, particularly on a one-year basis, because they hope they can get some incremental discounting benefits, but those are really de minimis.
So, as a result, with neither party particularly interested, you see a huge decline in commitments. And you've seen that, and really the first time we saw it was in the second quarter of this year, in the gap between the op income and the cash flow line. Like I said, this started this year, but it didn't really pick up pace until later in the year. So, we would expect there to continue to be an impact from this change going into fiscal year '25.
Mike Cikos
Understood. And just a last point here, and then we'll leave it. But is it fair to think, given, and I know you've been on a multiyear journey with this, but last year was a big change, right, for the sales organization. So, are there any learnings or anything we should keep in mind when thinking about comparisons, whether on a revenue or consumption basis, and then workloads or sales payouts that occurred last year which may not be recurring this year?
Serge Tanjga
No, not really. I would say that, because it wasn't our first change, we learned a decent amount about change management and how to implement and roll this out in fiscal year '24. And that's frankly the reason why you do it over full years. You do it to build conviction that you're doing the right thing, but you're also doing it to demonstrate to the sales force why you're doing it and that there's a rationale for this. And it wasn't actually terribly disruptive, given the size of the change, not in terms of sales attrition or anything like that.
So, it was a big change. You see it flow through the financials. We also see it in the volume of new workloads, which is where we started the conversation and how good we feel about that. But as you think about what that means in terms of compares or mechanics of the financial model for next year, there's nothing particular to call out.
Mike Cikos
Okay. And we'll leave it there. I do want to be honest about that last question being the last. So, thank you very much, guys. I really do appreciate the time today.
Serge Tanjga
Thanks for having us.
Michael Gordon
Pleasure as always, Mike. Thanks for hosting us.
Mike Cikos
Take care. Bye, bye.