Month: July 2025

MMS • Zach Lloyd

Transcript
Shane Hastie: Good day folks. This is Shane Hastie for the InfoQ Engineering Culture podcast. Today I get to sit down with Zach Lloyd. Zach, welcome. Thanks for taking the time to talk to us.
Introductions [00:48]
Zach Lloyd: I’m really excited to be here, Shane. Thanks for having me.
Shane Hastie: I’d like to start these conversations with who’s Zach?
Zach Lloyd: Let’s see how I describe myself. So I’m an engineer I’d say first and foremost. I’ve been a software engineer for, oh God, 25-ish years now. I’ve had a career where I’ve gotten the chance to build some really cool stuff. I was a principal engineer at Google. I used to lead engineering for the Google Docs Suite. I helped build really a lot of Google’s spreadsheet product, which was very, very cool experience. I was also an engineering manager and managed a significant team of engineers there.
I have been out of Google for a while now. I'm a two-time startup founder, and my current company is a company called Warp, which is an AI-powered developer tool. It's really a reimagination of the command line terminal, such that as a developer, instead of typing commands, you can just tell your terminal what you want it to do and it will agentically do it for you.
I had a brief stint as the CTO at Time Magazine, which was interesting. That was my only real stint outside of the technology industry.
I’d say what really motivates me is I love building software products. My goal is to just build stuff that is useful for people, whether it’s useful for knowledge workers or, most recently, for developers. I love solving interesting technical problems, but really I’m passionate about building something that other people find useful. That’s a quick summary.
Shane Hastie: Cool.
One of the things that got us in touch was that you published Warp’s “How We Work” document.
I’d call it a page, a site, a … There are a whole lot of principles and ideas in there. What made you want to make that so visible?
The “How we Work” document [02:49]
Zach Lloyd: Yes. So what it is it’s basically my accumulated, I guess, knowledge from being an engineer and an engineering manager. I had a period in between the two companies that I founded where I was like, “You know what? I really want to get this all down. I want to get this down from my head, how I think about hiring, how I think about building an engineering culture, how I think about values, all the way into the minutia of how people should structure pull requests and use feature flags, really down into this is my playbook, or if I were to build an engineering team, build an engineering organization or product team, what are the key things that matter to me?”
I put it down initially for myself and then I shared it with friends. I used it a little bit as a basis for doing some consulting for other startups and advising for other startups.
And then when I founded Warp, I was like, this is an awesome thing to publish because from an internal perspective, it’s like our operating system as a team, and then from an external perspective it lets people get a very good sense of the culture we’re building and how we work. And so it lets people who are interested in working with us self-select in. It lets people who are like, “No, I don’t like that culture”, be like, “Okay, that’s not the team for me”. And that’s also, it’s a very useful tool from a hiring standpoint in that regard.
Shane Hastie: So what would you say is the core of culture?
Core engineering culture values [04:33]
Zach Lloyd: I think it starts with the values that we care about. I think different leaders, different cultures are going to value different things, but for me, for the type of place that I want to work and the type of people I want to work with, there’s a common set of values that I care about. And for me, those happen to be, first, honesty. Are we able to be very honest and transparent at all times, kind of a no-BS type culture? Is it a culture where people are concerned with hierarchy? ‘Cause I really don’t like that. I think that the best ideas can come from any place in the company. Is it a place where the best ideas win out? That really matters to me.
A second value, and again this might be different for other people, but for me it was being pragmatic. To me, that’s where I’ve seen things go wrong in my career: when there’s dogmatism or very rigid thinking in how you do something. I want to work someplace where we realize that solving real-world problems is messy, that perfection can actually get in the way, and that we’re trying to make a reasonable set of trade-offs. Having like-minded people who aren’t so anchored to particular ideas that they won’t adjust them in the face of new information, that really matters to me.
Product-first vs code-first engineers [06:03]
A third value for us is being user-focused or product-focused. The reason I bring this up, so I wrote this essay which I think is maybe somewhat controversial where I distinguish between engineers who are product-first and engineers who are code-first. And as a product-first engineer, what I’m looking for is are you always thinking about the why that you’re building something? What problem is it solving for a user?
And if you can’t name the problem, I think you’re probably going off track. Whereas what I think of more as a code-first approach is there’s a class of engineer who’s really into building with the latest technology. Are my APIs right? Are my abstractions right? And it’s like looking at the code for the sake of the code. And I don’t care about that. I care about good code in the service of a great user experience. I don’t care about … Users don’t use code, let me put it like that. They use the thing that you’ve built. And so I really emphasize that. And that actually is a great filter. Some engineers do not agree with what I’m saying at all here, but to me it’s a very, very important value.
So I would start with if you’re someone who’s building a team or you’re hiring or you’re managing, I would always start with what are those core values that you really care about, that you believe in, that you can embody? And then try to build a team of people who subscribe and believe in those same values I think is a good place to start.
Shane Hastie: One of the things that I see in there is “just fix small issues”. Doesn’t that almost contradict the product-first versus code-first engineer distinction?
Zach Lloyd: No, ’cause the idea behind that is that it encourages a culture of ownership. So it’s like the anti-pattern to me would be working someplace where when someone sees an issue, they throw it into Slack or the bug tracker or they’re like, “Hey, I noticed this other engineer on the team broke this thing. They have to fix it”. And so it creates communication overhead and it creates a lack of ownership. So what I’m trying to accomplish with that particular rule or guideline is like we’re all owners of this thing. We also feel responsibility. If you see something, just fix it. And I’m assuming that they’re fixing something that matters to a user. Let me put it that way. If it doesn’t matter to a user, then don’t fix it. But if it’s something that is impacting a user, I would love, I’d love it when engineers just fix stuff.
Shane Hastie: You said that this feeds into your hiring process. How do you hire and how do you hire well?
How do you hire well? [08:59]
Zach Lloyd: Yes, this is really, really hard. I think it’s hard to hire perfectly. So we are looking for generalists, product-focused engineers who have a strong fundamental background in computer science and programming, and also who subscribe to these values, put it that way. So finding those people is like, those are great people. Everyone wants those people. It’s hard to find those people.
When we hire, it basically comes down to three sources of people. I don’t know how into the weeds to go here, but you have people who apply, so you have a sort of inbound. You have people who are referred by people who currently work there. And then you have people who aren’t looking for a job, who we have identified as like, “Hey, this person looks like an awesome potential fit”, who we reach out to. And those are the three things.
It’s essentially a sales process, whether you’re starting your own company or you’re trying to attract people to your team inside of a big organization. I remember doing this at Google. I was constantly trying to sell great engineers to come work with me. So I think you have to get good at communicating why someone should do that. And then, once you’ve done that, you got to know what you’re looking for. And so for us it’s like product-focused, generalists, really, really strong CS foundations.
The reason I focus on generalists by the way, is it’s partly the domain that I’m in. We’re not doing super specialized stuff. You don’t need PhD-level expertise, although we’re now opening up some roles on the AI side that are a little bit more specialized. But in general, I like people who just have great problem-solving skills, agnostic to the technology ’cause the technologies, we tend to choose the right technology to solve the problem. And so anchoring on a technology ahead of time is like an anti-pattern to me.
I like people who can work fully across the stack. So I never hire back end versus front end. And again, there are totally valid differing schools of thought, but to me the best way to get a product feature that works well for a user is if the engineer builds the entire thing. There’s some efficiency cost to that, but I think it aligns the incentives of the engineer and the user the best. So those are some of the principles.
Shane Hastie: What does great technical leadership look like?
Models of technical leadership [11:29]
Zach Lloyd: Yes, great question. Actually let me ask you this. Are you asking about management, or leadership? How do you think of it? Or should I just give you my whole lay of the land?
Shane Hastie: Let’s go top to bottom.
Zach Lloyd: Yes. Okay. So there’s this traditional distinction where you can have people who are tech leads, maybe not managers, but who are like your architects. They’re super knowledgeable about the system. And then, on the flip side, they have a counterpart who is an engineering manager whose specialty is maybe more like people management. So it’s: how do you help an engineer develop their career? How do you help them get promoted? And at most big tech companies, those roles bifurcate. So you have two separate things.
My personal view on this is that I actually like a combo. So when I was at Google, I was always on what was the individual contributor track, but I managed people and I always found that my authority as a manager came at least in part because I understood the technology very, very deeply and still contributed as an engineer.
That’s my personal preference. However, I’ve seen people be very successful just as pure ICs. If they’re going to do that, I think they really need to excel in terms of teaching other people on their team the technical skills in terms of doing things like great code review, great design documents, really modeling what technical excellence looks like so that the people who they’re trying to get to build things the right way, do that, see what technical excellence is.
On the flip side, if you’re an engineering manager, I think the thing that makes the most successful engineering managers I know successful is really deeply felt empathy, really aligning with their team’s interests, and being very good at understanding the motivation of each engineer on their team. Are they trying to up-level their technical skills? Are they trying to get exposed to more leadership opportunities? Are they trying to run bigger projects?
I’m not quite as great at that. I mean I’m not bad at that, but there are some people who I’ve worked with who just excel and really take joy out of helping other engineers on their team succeed and understanding what their goals are. So that’s how I would think about that role.
Shane Hastie: How do we help engineers grow their careers?
Growing engineering careers [14:08]
Zach Lloyd: There’s a bunch of ways to answer this. So depending on where you’re at in your career, there’s probably some next-up skillset that you want to improve upon. So when you’re really early in your career, and we hire a bunch of people who are right out of college at Warp, the first thing I emphasize is just become an excellent software engineer. And what that means is: can you write code that is production quality? Is it well-tested? When there’s a bug in it, are you proactive in fixing it? Do you take code review comments well and adjust? Do you really learn the language?
So at the beginning of the career, my advice is hone your technical skills, just become an excellent IC engineer. Usually when engineers are a couple years into doing that, I think that the focus shifts a bit to taking on more responsibility, shipping things that have more impact, maybe leading smaller teams, building some other skills that you’re going to need if eventually you’re going to be either the technical lead archetype or you’re going to be an engineering manager.
And so I think that’s about finding the right opportunities, when people have demonstrated that they have the technical skills, to take on projects that have higher scope or projects that require a leadership aspect. As an engineering manager, the trickiness there tends to be: are there enough opportunities for that type of thing? At Google, I saw some crazy anti-patterns around optimizing for people’s promotions, which was really interesting. I don’t know how far to digress into this, but at Google, everyone is at a level. So you start as a SWE-2 or SWE-3, then become a senior engineer, staff engineer, senior staff engineer, or principal engineer. And a lot of what seems to drive career progression is: can you get to that next level on the ladder? And each level on the ladder has a very well-defined rubric, and that rubric is sliced up into, I forget what it is, like four things. It’s like impact, scope, leadership, whatever.
And so there’s a lot of managing to the rubric, which I do think can create these perverse incentives where you’re not necessarily managing to what the person truly needs in order to develop themselves. You’re managing so that you can put together a promotion packet that a committee of other managers will look at and be like, “Yes, this person deserves to go up one rung on the ladder”. And there are huge amounts of money at stake in this, right? It’s a very high-stakes type thing. And so that I try to stay away from. We don’t organize Warp’s career progression around that.
And the other problem with that is that it creates adverse business outcomes, because as a manager you end up having to create opportunities for people to demonstrate these checkbox skills on the ladder. The canonical example is you make a project that doesn’t need to exist in order to give someone a leadership opportunity to ship a project, and this happens all the time. It’s like a crazy system. I would try to avoid that and just focus on genuinely what it is that the person wants to grow at and how you can help them succeed.
Shane Hastie: I know you have some thoughts on, we’re in the world of AI today, on engineers using the generative AI tools.
AI tools for engineers [17:47]
Zach Lloyd: So I just posted something on LinkedIn on this and I got in a lot of trouble, but my thought is it’s not a question of if, it’s a question of when: developers really need to learn to use these tools as best they can to ship more software. There is fear around them. There is, I think, rightful frustration around using them. I don’t know if you’ve used them, but if you used them six months ago, nine months ago and you asked AI to build you a feature, you’d probably get something that didn’t work very well, and you might just decide, “Hey, this isn’t worth my time. This isn’t worthwhile. The technology’s not good enough”.
The technology is changing extremely quickly, to the point where I think it is useful, though not for every task. I also think the correct way of thinking about it right now as an engineer is as another tool in your toolbox. It’s raising the abstraction level; it’s not that dissimilar to the shift from assembly language to formal programming languages. And then even within formal programming languages, there’s a huge difference between working in C and working in Python or JavaScript.
And this is a step change to where the way that you can work is by simply directing an AI to do some amount of the work for you. You should consider it like a draft, where you then review it and iterate and get to a point where it’s at the quality bar it would be at if you had written it by hand. But I basically think as an engineer, if you want to continue in this field, you need to start thinking of yourself not as a coder, but as a producer of software, and to produce software you have to use the best possible method for that. And that might not be writing code by hand. That might be guiding AI and hopping in and iterating with it. And so that’s my take on it.
It is a struggle as an engineering manager and leader to change the habits of people on my team to approach programming problems in this way. And this is where I got in trouble on LinkedIn, because I was like, “How do you get senior engineers to want to use these tools?” And then a lot of people were like, “The engineers know best about what tools they should be using. Why are you telling them to use these tools? They’ll use them when the tools are good enough”. And there’s truth to that. But I actually think people need a nudge and people need to learn a new skill set, ’cause there’s actually skill in how you prompt the AI, how you work with it. So that’s my take on it.
Shane Hastie: One of the things we have seen with the take-up of these tools is a couple of things happening. One, pull requests seem to have got bigger. One study done in Australia found 300% more code being produced when using generative AI tools, and about 400% more bugs.
AI tool challenges [20:54]
Zach Lloyd: Yes. That doesn’t surprise me. More code equals more bugs, almost axiomatically. One of the best engineers I ever worked with at Google, I asked him, “What’s your main principle for writing great code?” And he said, “Write less code”. The more you can delete, the better. So that’s not a good sign.
I would say that using these tools, you cannot abdicate responsibility for the quality of the code, any more than if you were using IntelliSense in your IDE and said, “Oh well, the tab complete put in this function, so I just used it”. That’s totally unacceptable. And so you as an engineer using the tool have to maintain responsibility for the thing that you’re submitting.
I also think one of the cardinal rules of good software engineering is small PRs, small discrete changes. And so one of the anti-patterns to me in using AI is trying to one-shot or zero-shot a huge thing through one prompt. And this is where I think there’s actual skill in using this stuff. You should still decompose the problem exactly as though you were going to write the components one by one, but you can just save a ton of time. And that’s actually the interesting part of the engineering a lot of the time: what’s the right decomposition? And then you should use the AI to help you write the components. But what you shouldn’t do is say, “Write me this whole app that does this or this whole system that does that”, because the more code there is, the less you’re going to comprehend it, and the more bugs there are going to be. So I would diagnose that as a misuse, not an intrinsic issue with AI. That’s my take. But I don’t know, people probably disagree.
Shane Hastie: For the junior engineer who’s coming into the profession today and these tools are their norm, one of the concerns that I’ve certainly heard and seen is how do we help that person get the underlying skills to do that decomposition well?
The impact on junior engineers? [23:13]
Zach Lloyd: Yes, it’s a great question. I have some friends who run coding bootcamps and I know professors of CS. It’s like when I was in college: I learned C, and I’ve never written a professional program in C in my life, but I am glad that I learned C because it lets you understand how memory works, how the function call stack works, basically how computers work. And so what I think will be problematic for junior engineers is if you don’t learn the basics and you are just trying to go straight to the AI that is making your apps for you. I think that’s a recipe for stuff that doesn’t work in a production environment.
I think that’s fine for prototyping. I think that’s fine for low stakes applications. Like you’re making, I don’t know, a landing page or something like that, but I don’t think anyone who’s working at Google or at a bank or at SpaceX or whatever should ever be generating code without an understanding of how code works.
So I would still teach people the fundamentals. And then, to your question of how you teach people to do production software engineering, decomposing things right, the main way that I learned, which I still think works, is that I got code reviews and design doc reviews from engineers who knew what they were doing when I didn’t. And this is one of the reasons why I think it’s important that the more senior engineers learn how to use these AI tools: they’ll learn how to review and improve the experience of junior developers using these AI tools.
Changes needed in the ways we train engineers [25:11]
I think a bad outcome would be a world where the more experienced generation of developers shuns these tools, the junior developers use these tools, the more experienced developers just think that the junior developers are misusing them, and it’s just a mess. So I think there needs to be almost a redesign of the engineering curriculum, of how you teach an engineer, in light of the fact that these tools now exist. That’s what I think.
Shane Hastie: Yes. I have a grandson who’s studying computer science at the moment, and they’re making them write their test programs on paper by hand.
Zach Lloyd: Oh my god. What? Is that to prevent cheating with AI or what is that?
Shane Hastie: That is what I believe it’s about.
Zach Lloyd: That’s not good. Wait, that’s not … Yes. I think that the curriculum has to be re-imagined so that you learn the basics, but then you learn the tools you’re going to use as a pro engineer, and AI is definitely going to be one of them, and you learn how to use it correctly. The risk is this stuff is changing so so quickly that I think it’s very hard to know what the heck to do ’cause the technology has advanced tremendously in the last six to nine months, so it’s a hard thing to figure out exactly what to do.
Shane Hastie: Well, Zach, we’ve meandered a lot, a lot of really interesting stuff in there. If people want to continue the conversation, where do they find you?
Zach Lloyd: Yes, I think the easiest thing is to just reach out to me on LinkedIn. I’ll respond to people DMing me there. I’m also on Twitter, X. I don’t use that as much. I think for this group of people, LinkedIn is probably the best place.
Shane Hastie: Well, thank you so much for taking the time to talk to us.
Zach Lloyd: Thank you for having me, Shane. This was awesome. I hope people enjoyed it.

MMS • RSS
Nisa Investment Advisors LLC cut its stake in shares of MongoDB, Inc. (NASDAQ:MDB – Free Report) by 86.1% in the 1st quarter, according to its most recent Form 13F filing with the Securities and Exchange Commission. The fund owned 800 shares of the company’s stock after selling 4,955 shares during the quarter. Nisa Investment Advisors LLC’s holdings in MongoDB were worth $140,000 as of its most recent SEC filing.
Other institutional investors also recently made changes to their positions in the company. Cloud Capital Management LLC bought a new position in MongoDB in the 1st quarter worth approximately $25,000. Strategic Investment Solutions Inc. IL bought a new position in MongoDB in the 4th quarter worth approximately $29,000. Coppell Advisory Solutions LLC grew its holdings in MongoDB by 364.0% in the 4th quarter. Coppell Advisory Solutions LLC now owns 232 shares of the company’s stock worth $54,000 after buying an additional 182 shares in the last quarter. Aster Capital Management DIFC Ltd bought a new position in MongoDB in the 4th quarter worth approximately $97,000. Finally, Fifth Third Bancorp grew its holdings in MongoDB by 15.9% in the 1st quarter. Fifth Third Bancorp now owns 569 shares of the company’s stock worth $100,000 after buying an additional 78 shares in the last quarter. 89.29% of the stock is owned by institutional investors.
Wall Street Analysts Weigh In
A number of equities analysts recently issued reports on the company. Bank of America increased their price objective on MongoDB from $215.00 to $275.00 and gave the stock a “buy” rating in a report on Thursday, June 5th. DA Davidson reaffirmed a “buy” rating and set a $275.00 price target on shares of MongoDB in a research note on Thursday, June 5th. Cantor Fitzgerald increased their price target on MongoDB from $252.00 to $271.00 and gave the company an “overweight” rating in a research note on Thursday, June 5th. Macquarie reaffirmed a “neutral” rating and set a $230.00 price target (up from $215.00) on shares of MongoDB in a research note on Friday, June 6th. Finally, Daiwa Capital Markets initiated coverage on MongoDB in a research note on Tuesday, April 1st. They set an “outperform” rating and a $202.00 price target for the company. Nine analysts have rated the stock with a hold rating, twenty-six have assigned a buy rating and one has issued a strong buy rating to the company. According to MarketBeat.com, the stock currently has a consensus rating of “Moderate Buy” and an average price target of $281.35.
MongoDB Stock Performance
Shares of NASDAQ:MDB traded up $2.68 during midday trading on Friday, hitting $221.21. The company’s stock had a trading volume of 1,822,164 shares, compared to its average volume of 1,982,480. The stock has a market capitalization of $18.08 billion, a price-to-earnings ratio of -194.04 and a beta of 1.41. The stock’s 50 day simple moving average is $202.71 and its 200 day simple moving average is $213.18. MongoDB, Inc. has a fifty-two week low of $140.78 and a fifty-two week high of $370.00.
MongoDB (NASDAQ:MDB – Get Free Report) last issued its quarterly earnings results on Wednesday, June 4th. The company reported $1.00 earnings per share (EPS) for the quarter, beating the consensus estimate of $0.65 by $0.35. The company had revenue of $549.01 million for the quarter, compared to the consensus estimate of $527.49 million. MongoDB had a negative return on equity of 3.16% and a negative net margin of 4.09%. The firm’s revenue was up 21.8% compared to the same quarter last year. During the same period last year, the business posted $0.51 earnings per share. As a group, analysts anticipate that MongoDB, Inc. will post -1.78 EPS for the current year.
Insider Buying and Selling
In other MongoDB news, Director Hope F. Cochran sold 1,174 shares of the firm’s stock in a transaction dated Tuesday, June 17th. The shares were sold at an average price of $201.08, for a total transaction of $236,067.92. Following the completion of the transaction, the director owned 21,096 shares in the company, valued at $4,241,983.68. This trade represents a 5.27% decrease in their ownership of the stock. The sale was disclosed in a document filed with the Securities & Exchange Commission, which is available at this link. Also, CEO Dev Ittycheria sold 3,747 shares of the firm’s stock in a transaction dated Wednesday, July 2nd. The stock was sold at an average price of $206.05, for a total transaction of $772,069.35. Following the transaction, the chief executive officer owned 253,227 shares of the company’s stock, valued at approximately $52,177,423.35. This represents a 1.46% decrease in their position. The disclosure for this sale can be found here. Insiders have sold a total of 32,746 shares of company stock valued at $7,500,196 over the last three months. 3.10% of the stock is owned by corporate insiders.
About MongoDB
MongoDB, Inc., together with its subsidiaries, provides a general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
Investor Dan Ives Says Microsoft and Nvidia To Hit $5 Trillion Market Cap … – The Daily Hodl

Dan Ives, the global head of technology research at Wedbush Securities, is predicting that two big-named tech stocks will hit astronomical market caps.
In a new CNBC Television interview, the investor says Microsoft (MSFT) and Nvidia (NVDA) will likely each hit $5 trillion market caps by the end of next year as artificial intelligence (AI) advances.
“We’re seeing the use cases when it comes to AI exploding and that’s bullish for software and the hyperscalers led by, of course, Redmond and [CEO Satya] Nadella in terms of everything that Microsoft [is doing] – not just $4 trillion, we think that could be a $5 trillion market cap along with Nvidia in the next 18 months.”
Microsoft has a market cap of $3.8 trillion and is trading for $511 per share at time of writing. Meanwhile, Nvidia is trading for $173 per share at time of writing and has a market cap of $4.2 trillion.
Ives also believes that several software companies may have explosive breakouts in the coming months.
“Software has underperformed. But now it’s not just Palantir, which obviously is our top one in terms of AI revolution. MongoDB, Snowflake, I think IBM is seeing a massive renaissance of growth when it comes to what we’re seeing on AI monetization…
I think software, and even cybersecurity, is what I believe could be a significant outperformer across all of the tech sector in the second half of the year.”
Palantir (PLTR) is trading for $153 per share at time of writing, while MongoDB (MDB) is trading for $218 per share.
Meanwhile, Snowflake (SNOW) is trading for $211 per share at time of writing, and International Business Machines Corporation (IBM) is trading for $282 per share.
Disclaimer: Opinions expressed at The Daily Hodl are not investment advice. Investors should do their due diligence before making any high-risk investments in Bitcoin, cryptocurrency or digital assets. Please be advised that your transfers and trades are at your own risk, and any losses you may incur are your responsibility. The Daily Hodl does not recommend the buying or selling of any cryptocurrencies or digital assets, nor is The Daily Hodl an investment advisor. Please note that The Daily Hodl participates in affiliate marketing.
Generated Image: Midjourney

MMS • RSS

On July 17, 2025, MongoDB (MDB) experienced a significant surge in trading volume, with a total of $484 million in shares exchanged, marking a 45.43% increase from the previous day. This surge placed MongoDB at the 234th position in terms of trading volume for the day. Additionally, MongoDB’s stock price rose by 4.24%, extending its winning streak to three consecutive days, with a cumulative increase of 8.44% over this period.
MongoDB’s recent performance can be attributed to several factors, including its strong financial results and strategic initiatives. The company has been focusing on expanding its cloud services, which has attracted a growing number of enterprise customers. This shift towards cloud-based solutions has been a key driver of MongoDB’s revenue growth, as more businesses seek to leverage the scalability and flexibility of cloud infrastructure.
Furthermore, MongoDB’s commitment to innovation and continuous improvement of its database technology has positioned it as a leader in the NoSQL database market. The company’s Atlas platform, which provides a fully managed cloud database service, has been particularly well-received by developers and enterprises alike. This platform offers a range of features, including automated backups, high availability, and advanced security measures, making it an attractive option for businesses looking to modernize their data management strategies.
In addition to its technological advancements, MongoDB has also been proactive in building strategic partnerships and alliances. These collaborations have helped the company expand its market reach and integrate its solutions with other leading technologies. By leveraging these partnerships, MongoDB has been able to offer more comprehensive and integrated solutions to its customers, further enhancing its competitive position in the market.

MMS • RSS
MongoDB isn’t exactly a household name. Unless, of course, you’re living in the land of databases or cutting through the dense jungles of unstructured stuff that needs to be stored.
In the realm of data, MongoDB is kind of a big deal. One might say it has even achieved “celebrity status” as the go-to document database for building scalable, high-availability internet apps.
To get a little nerdy (and because I’ve used it for years), Mongo’s NoSQL posture delivers a flexible schema that lends itself to agile development. It’s super versatile, enabling users to start building applications without fretting over complicated database configurations. It’s been a fixture in my cloud deployments for years, and it keeps on keepin’ on.
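To make that flexibility concrete, here’s a minimal sketch (field names are hypothetical, and plain Python dicts stand in for a real collection): documents with different shapes can live side by side, with no migration step required. With the actual pymongo driver, `collection.insert_one(doc)` accepts the same heterogeneous dicts.

```python
# Flexible schema: documents in one "collection" need not share fields.
# Plain Python dicts stand in for BSON documents here; this is an
# illustrative sketch, not MongoDB's own implementation.

products = []  # stand-in for a MongoDB collection

# Two documents with different shapes coexist without any schema migration.
products.append({"name": "keyboard", "price": 49.99, "layout": "ANSI"})
products.append({"name": "ebook", "price": 9.99, "formats": ["epub", "pdf"]})

def find(collection, **filters):
    """Mimic a simple equality query, like collection.find(filters)."""
    return [doc for doc in collection
            if all(doc.get(k) == v for k, v in filters.items())]

print(find(products, name="ebook"))  # matches despite the differing fields
```

The point isn’t the toy query function; it’s that nothing about the first document constrains the second, which is what makes iterating on an application’s data model so painless.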
So… about that name: According to lore, it’s derived from the word “humongous,” as in managing large amounts of data. Which makes sense, given its purpose. And since 2007, Mongo’s purpose has been data, forging its place as a well-established and persistent service within countless product architectures.
MongoDB now has a worldwide footprint of over 57,000 customers, making it the most used modern database on the planet. This is due in no small part to its document modeling, scalability, and deployment options—which include AWS, Azure, and GCP.
It also has a website. Or websites, to be more precise. That includes properties for documentation, client libraries, and more. Like most enterprise software companies, it has developed an expansive ecosystem of digital content used for everything from marketing to technical briefs to thought leadership.
When I was covering ContentCon25 in Chicago last month—Contentstack’s fourth annual customer event and its largest to date—I had a chance to connect and chat with a lot of smart people at agencies and end customers, as well as Contentstack’s team, including CEO Neha Sampat (you can read our interview and the event coverage here).
When I saw MongoDB on the event’s customer list, I was curious to hear how an enterprise software company approached its CMS and DXP decision-making, and how Contentstack was fulfilling its needs.
I caught up with Mongo’s Director of Marketing Operations, Bill Mitchell, who has been involved with the Contentstack relationship from day one. As a veteran marketing technologist with hard-earned experience at global brands like Pure Storage and HP, Bill brings deep insight to B2B web strategies. We chatted about the experience of moving to a modern composable solution, the role of data and content, and even where AI is presenting opportunities and challenges.
Feel free to watch the highlight video below on CMS Critic TV, or keep scrolling for more of the good parts:
Finding the right tool for the job
MongoDB has been working with Contentstack for about two and a half years, and according to Bill, it’s been a solid ride. But their previous content management system—which was custom built—wasn’t cutting it for a myriad of reasons.
“We needed to look at a new CMS stack for our company,” he said. “We were on an open source, proprietary tool. It was free, but a ‘get what you pay for’ kind of thing. We threw a lot of manpower at it, and it was limited.”
Part of the challenge was overcoming the internal culture of “build or buy.” As Bill explained, MongoDB is an engineering culture with smart people everywhere. This made the idea of a homegrown solution a possible pathway—and a big reason why the previous CMS was a custom job.
The upside of having a technical culture? They weren’t afraid of technical problems. The downside? As Bill expressed, those technical resources focused more on maintaining an ailing stack versus investing in new features and capabilities.
Driving many of the challenges was a lack of basic control for his content team. They knew they needed a forward-thinking platform, something that could adapt to market changes and was future-proofed for capabilities like personalization. As a data leader, he was focused on leveraging a broad range of integrations across a marketplace of options. The previous system lacked all of those things.
“Everything was hand-orchestrated APIs, and it was just messy and wouldn’t scale,” Bill explained. “We had to find a way to get a better platform underneath us, so we could move forward.”
Of course, finding a new solution required arduous discovery, something enterprises with Mongo’s size and scale are accustomed to when considering a big software transformation. They conducted an expansive search, looking at different technology vendors, and arrived at Contentstack—a choice that he and his team have been very happy with.
When approaching the decision, I asked Bill how that team consideration played into the calculus. The solution needed to fit into Mongo’s stack, but it had to work for its people, regardless of what roles they might have across the company’s marketing and web teams.
“I came at it as a technologist,” he said. “I had a development team and the skills and expertise they brought to the table. I wanted to make sure we had a solution that one, worked for them, two, worked for us company-wise, and three, worked for others. We have multiple teams that have now adopted Contentstack, and if we had gone in other directions, that friction would be much more pronounced.”
While Bill was confident that his team could have rolled with the punches that other solutions might have presented, Contentstack helped his team overcome a number of technical limitations by unifying behind a single technology. With lots of teams working on multiple web projects, having confidence was going to be a key factor. Bill also said that Contentstack set him up with one of their certified partners to help make the transition successful.
Now that they’re operating at full speed, Contentstack is also providing continued innovation. As Bill explained, the partnership empowers MongoDB with a clear path for what’s ahead, and how customers like MongoDB can harness new features in the best ways.
“It’s setting a trajectory where, as they build new capabilities, we get to adopt them,” he said. “It’s putting us in a much better position to be forward-thinking about how we build digital web experiences.”
Realizing the roadmap for personalization
ContentCon25 was the splashdown for Contentstack’s new Data & Insights capabilities, which leverage the relationship between content and context – the latter term popping up across the CMS and DXP space as a shorthand for data. As Contentstack’s CEO, Neha Sampat, explained, the two need to interoperate to bring AI-powered personalization to life.
Data & Insights, which was made possible via the acquisition of the Lytics CDP earlier this year, brings a stunning scope of features to the marketer’s toolkit. This was all demoed live at ContentCon, where features like Audience Insights, Opportunity Explorer, Real-Time Data Activation, and Flows were test-driven on stage to the delight of many in the audience.
The emphasis on context came over the last 12 months as the company rolled out its new Contentstack Personalize solution, an A/B/n multivariate testing and segmentation engine. The suite also included the platform’s Brand Kit, which aimed to align AI-generated content at scale with a brand’s voice, as well as a bevy of new extensions for Contentstack Automate.
Personalization continues to be the “Holy Grail” for marketers, and the promise of generative AI has made it more attainable than ever before. Of course, there are still challenges that persist, but as Bill relayed, Contentstack’s foray into personalization is showing real promise. It’s all still relatively new, but it’s clearly becoming a game changer—and appealing to both the technical and marketing sides of Bill’s brain.
“It’s interesting, because when we first went down this path, personalization was not part of their feature set,” he said. “And now, in that split personality world where the technology piece of me says we could deliver personalization multiple different ways, the marketer in me says we need it to be easy, efficient, and scalable, and get the power into more hands, rather than having a few people with technical capabilities trying to orchestrate content and manage different experiences.”
Bill went on to explain how MongoDB is leveraging Contentstack to distribute these skill sets to more people on his team, so personalization can be harnessed in a more holistic way. Composability is a critical part of this, as MongoDB is using Segment as its CDP, and they have a roadmap that aligns with a number of Contentstack’s features. He said they’re already personalizing to a few audiences on specific pages of their site, with ambitions to expand.
“I’m envisioning a world where we’re personalizing to multiple segments on multiple pages,” he said, “completely changing how we approach that particular problem or opportunity.”
Back in February, as Contentstack was introducing its Contentstack EDGE concept—effectively repositioning its platform as an “Adaptive DXP”—Bill was quick to address the question of how data and content coexist.
“Content is critical, but data is the foundation that makes content work,” he posted on LinkedIn. “You need both. Without data, content lacks personalization. Without content, data has nothing to fuel.”
This might be one of the most significant motivating factors behind his continued enthusiasm for Contentstack. As an ecosystem, Bill sees it bridging the gap between content and data. Traditional DXPs haven’t had the composable posture to meet this urgent need, one that’s essential to realizing the value of AI.
On that note… what about AI?
As I reported from the ground at ContentCon, the conversation around AI was at a fever pitch. There was barely a session that didn’t spark some discussion of an AI-powered feature, or of how AI was affecting the entire trajectory of the DX industry.
For her part, Neha Sampat focused on the positive opportunities being activated by AI. During her opening keynote, she painted a picture of what’s ahead for those daring enough to make the journey. As she said, there’s no bridge from the “safe” to the promise of the future. Crossing that chasm is an act of courage.
But even the best-built bridges can make people anxious, especially when you’re taking the first steps across. I asked Bill what he thought about the outlook with AI, how the market is changing, and where MongoDB is headed with its own AI trajectory.
Is it scary? Sure, he admitted. But the momentum is undeniable.
“You’re taking steps without really knowing that it’s the right step,” he said. “You know you need to move forward. There’s no doubt AI is going to drive change from top to bottom in every organization. I think in our world, we’re trying to do the basics, get chatbots going, and help people find content and answers to problems. But on the internal side, we’re still struggling to really crack the code, if you will, on the best way [for AI] to bring scale and opportunities to how we do business more effectively. And that’s beyond just the web.”
In terms of the early AI gains, Bill’s documentation team launched an AI chatbot for its dedicated docs site a year and a half ago. It was initially focused on documentation, allowing users to ask a question, be served an answer, and link to the correct doc for a more complete story. Given its success, they’ve pulled it into Mongo’s dotcom site, where it’s evolving the experience in new ways.
“We’re building out this custom LLM that has all our content housed in it,” he said. “Now we’re trying to figure out the right recipe for what it is, because it’s not just a web experience anymore.”
As Bill mentioned, a lot of the AI-powered topics being discussed on stage at ContentCon—things like automating campaign generation or setting up audience segmentation—are still being classically orchestrated. Although everyone sees and understands the potential for AI, tapping into the full potential is still an ongoing process. This is where Contentstack’s culture of support and guidance is proving decisive for customers as they try to predict what’s next.
For Bill, finding solid ground to land AI is the goal. “Leading into AI, there’s still trepidation around driving content without human oversight, which will limit the variations for personalization,” he said. But the potential to scale up to the right number of variants is a game-changer.
At ContentCon, the Magic 8-Ball predictions were focused on 2030 and anticipating how things will change in just five years. Neither Bill nor I had any idea what the software and marketing world might look like, but he had one response that rang true:
“I know it’ll be a hell of a lot different than it is today.”
Leveraging the power of human support
As the composable, MACH-driven approach has caught fire, enterprises have struggled with the role of accountability in the equation. In many cases, agency partners have assumed the risk associated with any recommended technologies in a stack. But for some organizations, internal teams have been saddled with hosting overages or other issues related to a point of failure. In those cases, who’s responsible?
When I spoke to Contentstack about this in Amsterdam at the MACH TWO Conference back in 2023, they were already evolving their “Care Without Compromise” program to answer this conundrum. Since then, it has become a foundation for its composable ambitions, providing a deeper relationship promise for customers to help ensure success.
Does it work? According to Contentstack, the program boasts a 98% customer satisfaction rating and a 97% customer retention rate. While the company furnished those numbers, they’re pretty compelling metrics.
I asked Bill about his experience with Contentstack’s support, and what the idea of “Care Without Compromise” really means to a customer like MongoDB.
“Having worked with lots of technology vendors, I tend not to put much faith in those statements, but I was really surprised by how well they’ve catered to us,” he said glowingly. “We had to get off the ground with content modeling, new tools, and integrations. We were working with a vendor on some of these things, but Contentstack was in there, making sure we had the right advice.”
As Bill explained, Contentstack was involved in multiple dimensions of the relationship, supporting Mongo’s decision-making around strategic pathways. And now, as they transition to personalization, AI, and other advanced features, Contentstack’s support and technical services teams are engaged, conducting periodic check-ins and helping them think through what’s next.
“They’re rolling stuff out all the time, which is great from a product side,” he said. “But there are people to help us ensure we can build a plan to utilize it, which is important. Otherwise, it’s just shelfware, and we’re not able to take advantage of it.”

MMS • RSS
Stephens has begun coverage of MongoDB (MDB, Financial) with an “Equal Weight” rating and has set a price target of $247. Despite competitive pressures, Stephens views MongoDB’s Atlas product as a “rare, high-quality asset.” The firm notes that MongoDB’s peers are developing similar database functionalities, which adds a layer of competitive tension in the market.
Wall Street Analysts Forecast
Based on the one-year price targets offered by 36 analysts, the average target price for MongoDB Inc (MDB, Financial) is $279.66, with a high estimate of $520.00 and a low estimate of $170.00. The average target implies an upside of 27.97% from the current price of $218.53. More detailed estimate data can be found on the MongoDB Inc (MDB) Forecast page.
Based on the consensus recommendation from 38 brokerage firms, MongoDB Inc’s (MDB, Financial) average brokerage recommendation is currently 1.9, indicating “Outperform” status. The rating scale ranges from 1 to 5, where 1 signifies Strong Buy, and 5 denotes Sell.
Based on GuruFocus estimates, the estimated GF Value for MongoDB Inc (MDB, Financial) in one year is $379.42, suggesting an upside of 73.62% from the current price of $218.53. GF Value is GuruFocus’ estimate of the fair value that the stock should be traded at. It is calculated based on the historical multiples the stock has traded at previously, as well as past business growth and future estimates of the business’ performance. More detailed data can be found on the MongoDB Inc (MDB) Summary page.
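For readers who want to check the math, both quoted upside figures follow directly from the targets and the current price (values taken from the paragraphs above):

```python
# Upside = (target - current) / current, expressed as a percentage.
current_price = 218.53

analyst_target = 279.66   # average of 36 one-year analyst targets
gf_value = 379.42         # GuruFocus one-year GF Value estimate

def upside_pct(target, current):
    return (target - current) / current * 100

print(round(upside_pct(analyst_target, current_price), 2))  # 27.97
print(round(upside_pct(gf_value, current_price), 2))        # 73.62
```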
MDB Key Business Developments
Release Date: June 04, 2025
- Revenue: $549 million, a 22% year-over-year increase.
- Atlas Revenue: Grew 26% year over year, representing 72% of total revenue.
- Non-GAAP Operating Income: $87 million, with a 16% non-GAAP operating margin.
- Customer Count: Over 57,100 customers, with approximately 2,600 added sequentially.
- Net ARR Expansion Rate: Approximately 119%.
- Gross Margin: 74%, down from 75% in the year-ago period.
- Net Income: $86 million or $1 per share.
- Operating Cash Flow: $110 million.
- Free Cash Flow: $106 million.
- Cash and Equivalents: $2.5 billion.
- Share Repurchase Program: Increased by $800 million, totaling $1 billion.
- Q2 Revenue Guidance: $548 million to $553 million.
- Fiscal Year ’26 Revenue Guidance: $2.25 billion to $2.29 billion.
- Fiscal Year ’26 Non-GAAP Income from Operations Guidance: $267 million to $287 million.
For the complete transcript of the earnings call, please refer to the full earnings call transcript.
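A quick sanity check on the reported figures (note that the Atlas dollar amount below is derived from the stated 72% share, not disclosed directly):

```python
# Cross-check the headline Q1 metrics reported above.
total_revenue = 549.0        # $M, as reported
non_gaap_op_income = 87.0    # $M, as reported
atlas_share = 0.72           # Atlas as a fraction of total revenue

# Non-GAAP operating margin rounds to the reported 16%.
op_margin_pct = non_gaap_op_income / total_revenue * 100
print(round(op_margin_pct))  # 16

# Implied Atlas revenue, derived (approximate) from the 72% share.
atlas_revenue = total_revenue * atlas_share
print(round(atlas_revenue, 1))  # 395.3
```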
Positive Points
- MongoDB Inc (MDB, Financial) reported a 22% year-over-year increase in revenue, reaching $549 million, surpassing the high end of their guidance.
- Atlas revenue grew 26% year over year, now representing 72% of total revenue, indicating strong adoption of their cloud-based platform.
- The company achieved a non-GAAP operating income of $87 million, resulting in a 16% non-GAAP operating margin, which is an improvement from the previous year.
- MongoDB Inc (MDB) added approximately 2,600 new customers in the quarter, bringing the total customer count to over 57,100, the highest net additions in over six years.
- The company announced a significant expansion of their share repurchase program, authorizing up to an additional $800 million, reflecting confidence in their long-term potential.
Negative Points
- Despite strong results, MongoDB Inc (MDB) noted some softness in Atlas consumption in April due to macroeconomic volatility, although it rebounded in May.
- The non-Atlas business is expected to decline in the high single digits for the year, with a $50 million headwind from multiyear license revenue anticipated in the second half.
- Gross margin slightly declined to 74% from 75% in the previous year, primarily due to Atlas growing as a percentage of the overall business and the impact of the Voyage acquisition.
- The company experienced slower than planned headcount additions, which could impact future growth and operational capacity.
- MongoDB Inc (MDB) remains cautious about the uncertain macroeconomic environment, which could affect future consumption trends and overall business performance.

MMS • RSS
A number of stocks jumped in the afternoon session after the second quarter (2025) earnings season got off to a strong start.
Quarterly earnings reports released during the week exceeded Wall Street’s expectations, fueling investor confidence. Around 50 S&P 500 components reported, with 88% of those exceeding analysts’ expectations, FactSet data revealed. Investors were also encouraged by several positive reports that painted a picture of a resilient consumer. One key report revealed that shoppers increased their spending at U.S. retailers more than economists had anticipated. Specifically, retail sales increased 0.6% from May, surpassing the 0.2% estimate. This robust consumer spending is a crucial pillar supporting the economy.
Adding to the positive sentiment, the latest data on unemployment claims showed a decrease in the number of workers applying for benefits, signaling that layoffs remain limited and the job market is steady. This combination of strong earnings reports, retail sales, and a solid labor market suggests the economy is navigating challenges successfully.
The stock market overreacts to news, and big price drops can present good opportunities to buy high-quality stocks.
Among others, the following stocks were impacted:
Citizens Financial Group’s shares are not very volatile and have only had 6 moves greater than 5% over the last year. In that context, today’s move indicates the market considers this news meaningful, although it might not be something that would fundamentally change its perception of the business.
Citizens Financial Group is up 12.4% since the beginning of the year, and at $49.03 per share, has set a new 52-week high. Investors who bought $1,000 worth of Citizens Financial Group’s shares 5 years ago would now be looking at an investment worth $1,949.
Here at StockStory, we certainly understand the potential of thematic investing. Diverse winners from Microsoft (MSFT) to Alphabet (GOOG), Coca-Cola (KO) to Monster Beverage (MNST) could all have been identified as promising growth stories with a megatrend driving the growth. So, in that spirit, we’ve identified a relatively under-the-radar profitable growth stock benefiting from the rise of AI, available to you FREE via this link.

MMS • RSS
Let’s talk about the popular MongoDB, Inc. (NASDAQ:MDB). The company’s shares led the NASDAQGM gainers with a relatively large price hike over the past couple of weeks. Shareholders may appreciate the recent jump, but the stock still has a way to go before reaching its yearly highs again. With many analysts covering the large-cap stock, we can expect that any price-sensitive announcements have already been factored into the share price. But what if the stock is still a bargain? Let’s examine MongoDB’s valuation and outlook in more detail to determine whether a bargain opportunity remains.
We’ve found 21 US stocks that are forecast to pay a dividend yield of over 6% next year. See the full list for free.
Is MongoDB Still Cheap?
Great news for investors – MongoDB is still trading at a fairly cheap price. Our valuation model puts the stock’s intrinsic value at $281.90, above what the market is currently valuing the company at, indicating a potential opportunity to buy low. However, MongoDB’s shares are fairly volatile (its price movements are magnified relative to the rest of the market, as reflected in its high beta), which means the price could sink lower still, giving us another chance to buy in the future.
What kind of growth will MongoDB generate?
Investors looking for growth in their portfolio may want to consider a company’s prospects before buying its shares. Although value investors would argue that it’s the intrinsic value relative to the price that matters most, a more compelling investment thesis would be high growth potential at a cheap price. However, with an extremely negative double-digit change in profit expected over the next couple of years, near-term growth is certainly not a driver of a buy decision. It seems high uncertainty is on the cards for MongoDB, at least in the near future.
What This Means For You
Are you a shareholder? Although MDB is currently undervalued, the negative outlook does bring on some uncertainty, which equates to higher risk. Consider whether you want to increase your portfolio exposure to MDB, or whether diversifying into another stock may be a better move for your total risk and return.
Are you a potential investor? If you’ve been keeping tabs on MDB for some time but have been hesitant to make the leap, we recommend researching the stock further. Given its current undervaluation, now is a great time to make a decision. But keep in mind the risks that come with its negative growth prospects.
If you want to dive deeper into MongoDB, you’d also look into what risks it is currently facing. For example, we’ve discovered 2 warning signs that you should run your eye over to get a better picture of MongoDB.
If you are no longer interested in MongoDB, you can use our free platform to see our list of over 50 other stocks with a high growth potential.
Have feedback on this article? Concerned about the content? Get in touch with us directly. Alternatively, email editorial-team (at) simplywallst.com.
This article by Simply Wall St is general in nature. We provide commentary based on historical data and analyst forecasts only using an unbiased methodology and our articles are not intended to be financial advice. It does not constitute a recommendation to buy or sell any stock, and does not take account of your objectives, or your financial situation. We aim to bring you long-term focused analysis driven by fundamental data. Note that our analysis may not factor in the latest price-sensitive company announcements or qualitative material. Simply Wall St has no position in any stocks mentioned.

MMS • RSS
M&T Bank Corp reduced its holdings in shares of MongoDB, Inc. (NASDAQ:MDB – Free Report) by 17.1% during the first quarter, according to its most recent disclosure with the Securities and Exchange Commission (SEC). The fund owned 2,518 shares of the company’s stock after selling 519 shares during the period. M&T Bank Corp’s holdings in MongoDB were worth $442,000 at the end of the most recent reporting period.
A number of other institutional investors and hedge funds have also bought and sold shares of MDB. OneDigital Investment Advisors LLC grew its holdings in shares of MongoDB by 3.9% in the fourth quarter. OneDigital Investment Advisors LLC now owns 1,044 shares of the company’s stock worth $243,000 after purchasing an additional 39 shares during the last quarter. Handelsbanken Fonder AB grew its holdings in shares of MongoDB by 0.4% in the first quarter. Handelsbanken Fonder AB now owns 14,816 shares of the company’s stock worth $2,599,000 after purchasing an additional 65 shares during the last quarter. O Shaughnessy Asset Management LLC grew its holdings in shares of MongoDB by 4.8% in the fourth quarter. O Shaughnessy Asset Management LLC now owns 1,647 shares of the company’s stock worth $383,000 after purchasing an additional 75 shares during the last quarter. Fifth Third Bancorp grew its holdings in shares of MongoDB by 15.9% in the first quarter. Fifth Third Bancorp now owns 569 shares of the company’s stock worth $100,000 after purchasing an additional 78 shares during the last quarter. Finally, Moody National Bank Trust Division grew its holdings in shares of MongoDB by 5.6% in the first quarter. Moody National Bank Trust Division now owns 1,751 shares of the company’s stock worth $307,000 after purchasing an additional 93 shares during the last quarter. 89.29% of the stock is currently owned by institutional investors.
Insider Transactions at MongoDB
In other news, Director Hope F. Cochran sold 1,174 shares of MongoDB stock in a transaction dated Tuesday, June 17th. The stock was sold at an average price of $201.08, for a total value of $236,067.92. Following the completion of the transaction, the director owned 21,096 shares of the company’s stock, valued at $4,241,983.68. The trade was a 5.27% decrease in their position. The transaction was disclosed in a document filed with the Securities & Exchange Commission, which is available through the SEC website. Also, Director Dwight A. Merriman sold 2,000 shares of MongoDB stock in a transaction dated Thursday, June 5th. The stock was sold at an average price of $234.00, for a total value of $468,000.00. Following the completion of the transaction, the director directly owned 1,107,006 shares of the company’s stock, valued at approximately $259,039,404. This represents a 0.18% decrease in their position. The disclosure for this sale can be found here. Insiders sold a total of 32,746 shares of company stock valued at $7,500,196 in the last quarter. 3.10% of the stock is owned by insiders.
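The figures in the Cochran transaction reported above are internally consistent, which a quick calculation confirms:

```python
# Verify the reported insider-sale figures for the Cochran transaction.
shares_sold = 1_174
avg_price = 201.08      # average sale price per share
shares_after = 21_096   # shares still held after the sale

sale_value = shares_sold * avg_price
position_value = shares_after * avg_price
pct_decrease = shares_sold / (shares_after + shares_sold) * 100

print(round(sale_value, 2))      # 236067.92
print(round(position_value, 2))  # 4241983.68
print(round(pct_decrease, 2))    # 5.27
```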
Analyst Upgrades and Downgrades
MDB has been the topic of several recent analyst reports. Citigroup cut their target price on shares of MongoDB from $430.00 to $330.00 and set a “buy” rating on the stock in a report on Tuesday, April 1st. Loop Capital cut shares of MongoDB from a “buy” rating to a “hold” rating and cut their target price for the stock from $350.00 to $190.00 in a report on Tuesday, May 20th. Morgan Stanley cut their target price on shares of MongoDB from $315.00 to $235.00 and set an “overweight” rating on the stock in a report on Wednesday, April 16th. Macquarie restated a “neutral” rating and issued a $230.00 target price (up previously from $215.00) on shares of MongoDB in a report on Friday, June 6th. Finally, DA Davidson restated a “buy” rating and issued a $275.00 target price on shares of MongoDB in a report on Thursday, June 5th. Eight analysts have rated the stock with a hold rating, twenty-six have assigned a buy rating and one has given a strong buy rating to the stock. According to data from MarketBeat, the company presently has a consensus rating of “Moderate Buy” and a consensus price target of $282.39.
MongoDB Trading Up 0.5%
Shares of MongoDB stock opened at $209.64 on Thursday. The firm has a market capitalization of $17.13 billion, a P/E ratio of -183.89 and a beta of 1.41. The business has a fifty day simple moving average of $201.07 and a two-hundred day simple moving average of $213.37. MongoDB, Inc. has a 12 month low of $140.78 and a 12 month high of $370.00.
MongoDB (NASDAQ:MDB – Get Free Report) last announced its quarterly earnings data on Wednesday, June 4th. The company reported $1.00 EPS for the quarter, beating the consensus estimate of $0.65 by $0.35. The business had revenue of $549.01 million during the quarter, compared to analyst estimates of $527.49 million. MongoDB had a negative return on equity of 3.16% and a negative net margin of 4.09%. The company’s revenue for the quarter was up 21.8% on a year-over-year basis. During the same quarter last year, the firm posted $0.51 earnings per share. Analysts predict that MongoDB, Inc. will post -1.78 earnings per share for the current year.
About MongoDB
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

MMS • Ohan Oda

Transcript
Oda: I wonder if any of you could guess what this number, 1 out of 4, may represent? You might be surprised to hear that this number represents the odds of today’s 20-year-olds becoming disabled before they retire. This could be caused by accidents, disease, or lifestyle, or anything disastrous that happens in their lifetime. It’s been estimated that about 1.3 billion people worldwide have a significant disability today, which is roughly 16% of the world population.
As humans live longer, the chance of having a disability increases. Starting in the mid-1800s, human longevity has increased a lot, and the life expectancy is increasing by an average of six hours a day. The point is that we all hope to live healthy and without any disabilities, but that’s not always the case. Anyone can have a certain type of disability during their lifetime. There are actually many types of disabilities that exist, and I know these are not legible and that’s on purpose. For my topic, I’ll be specifically focusing on visual impairment-related disabilities, such as blindness and low vision.
My name is Ohan Oda. I work on Google Maps as a software engineer. I’ll be talking about how we made our augmented reality feature, called Lens in Maps, accessible to visually impaired users, and how our learnings could apply to your situation. I wonder how many of you here have used the feature called Lens in Maps in Google Maps? Just a hint, it’s not a lens. It’s not street view. It’s not immersive view. It’s not one of the AR Walking Navigation that we provide that has a big arrow overlaid in the street. It’s a different feature. Most of you don’t know.
What is Lens in Maps?
First, let me introduce what Lens in Maps is. It’s a camera-based experience in Google Maps that helps on-the-go users understand their surroundings and make decisions confidently by showing information from a first-person perspective. Here’s a GIF that shows how Lens in Maps works in Google Maps. The user enters the experience by tapping the camera icon at the top of the search bar, then holds their phone up and can see places around them. They can also search for specific types of places, such as restaurants. Here’s a video showing how this feature works with a screen reader, an assistive technology often used by visually impaired users.
Allen: First, we released new screen reader capabilities that pair with Lens in Maps. Lens in Maps uses AI and augmented reality to help people discover new places and orient themselves in an unfamiliar neighborhood. If your screen reader is enabled, you can tap the camera icon in the search bar, lift your phone, and you’ll receive auditory feedback about the places around you, like ATMs, restaurants, or transit stations: “Restaurant Canet, fine dining, 190 feet”. That includes helpful information like the name and type of the place you’re seeing, and how far away it is.
Oda: Here you saw an illustration of how this feature works with screen reader.
Motivation
AR is a visual-centric experience, so why did we try to make our AR experience accessible to visually impaired users? Of course, there are apps like Be My Eyes that are targeted specifically at visually impaired users. Our feature, Lens in Maps, was not designed for such a case. Indeed, not many AR applications exist today that are usable by visually impaired users. Lens in Maps is useful during travel, when the place or the language is not familiar to the user. Our feature can show the places and streets around the user in a language the user is familiar with.
However, this feature is not used very often in everyday situations, because people know the places and understand the language on the street. There’s also friction to this feature. Like any other AR app you have probably used before, you have to take out your phone, hold it up, and face the direction where the AR elements can be overlaid. This can sometimes be awkward, especially in public areas where people are standing in front of you. They might think you’re taking a video of them. In addition to this general AR friction, our feature also requires a certain level of location and heading accuracy so that we can correctly overlay the information on the real world.
This process is very important so that we don’t mistakenly, for example, label the restaurant in front of you with the name of the restaurant next to it. This localization process only takes a few seconds, but people are sometimes too impatient to wait even that long, and they exit the experience before we can show them any useful information. These restrictions make Lens in Maps used less often than we would like. We have spent a lot of time designing and developing this feature, so we would love to have more users using it, and loving it.
Ideation
While thinking about ideas, how we can achieve that, I found that our other AR feature that we provide in Google Maps, called AR Walking Navigation, has a very good DAU and has a very good user retention rate as well. This is a feature that is targeted to navigate users from point A to point B with instructions overlaid in the real world with big arrows, big red destination pins as you can see from the slides. Why so? This feature has the exact same friction as Lens in Maps, where people have to hold their phone up and they have to wait for a few seconds before they can start the experience.
After digging through our team’s past documents and presentations, I found that our past UX studies had shown that AR Walking Navigation can really help certain users: those who have difficulty reading and understanding maps. Basically, the directions displayed on the 2D map didn’t make much sense to those users, and showing those directions overlaid directly on the real world really helped them understand which direction to take and where exactly the destination is. That made me think about what kind of user would benefit from Lens in Maps so much that it becomes a must-have feature for them, where even though the feature has some restrictions to start the experience, the benefit of using it would outweigh the friction.
Research
After thinking it over and over, an idea struck me: maybe Lens in Maps could help visually impaired users, because our feature can basically show the places and streets in front of them. Not show, but tell, in this case. I thought it was a good idea, but I had to do some research to make sure this feature could really help those users. Luckily, Google provides many learning opportunities throughout the year, and they had a few sessions about ADI, which stands for Accessibility and Disability Inclusion. After attending those sessions, I learned that last-mile problems can be very challenging for visually impaired users. The navigation app you have today may tell you exactly how to get to the destination, but once you are there, it’s really up to the user to figure out where exactly that destination is.
The app may say the destination is on your left side or right side, but often you realize that the destination actually can be many feet away from you, and it could be in any direction on your left or right side. Also, blind and low-vision users tend to visit places that they have been before and are familiar with, because it’s a lot harder for them to explore new places, because it’s hard to know what places are there, first of all, and it’s hard to get more information about those new places without a lot of pre-planning. Once I learned that Lens in Maps could really help those users, I started to build a prototype and demoed it to my colleagues and also other internal users who have visual impairment.
Challenge
However, as I built my prototype, I realized that there are many challenges, because we are basically trying to do the reverse of the famous saying, a picture is worth a thousand words. It’s actually even worse here, because we are trying to describe a live video, which may require a million words. Also, I myself am not an accessibility expert. Indeed, I was more on the side of avoiding any type of accessibility-related feature, because it’s really hard to make them work right. I know there are many great tools that can help you debug and create accessibility features, but a lot of us engineers are probably not that familiar with those tools, so it takes a lot longer to make these features work right compared to non-accessibility-related features.
For first-party apps at Google, there is an accessibility guideline called GAR, which stands for Google Accessibility Rating. These guidelines were not very applicable to a lot of the AR cases we encountered during development. For example, one of the guidelines recommends that we describe what’s being displayed on the screen. Unlike 2D UIs, where the user has more control over which element to focus on and what gets described, the objects in an AR scene can move around a lot. An object can even disappear and reappear based on how your camera moves, which makes it really hard for the user to decide which things to focus on.
Also, we detect places in the world that have a lot of information to present, like the name of the place, its rating, how many reviews it has, what type of place it is, what time it opens, and so on. If the user wants to hear all this information, they have to hold their phone in a very specific position until all of it is described to them. There are also many other cases that I won’t go through, but these pre-existing general guidelines were mostly designed for non-AR cases. They basically didn’t apply much to what we were doing.
Once I had the prototype ready, it was hard for me to tell whether it worked or not, because I myself am not a target user. Even if I think it works well, it may not work well for the actual target user. None of my colleagues near me were target users either, so it wasn’t easy for me to test. I basically had to go out and find somebody else on our team who has a visual impairment to test it. Last but not least, I’m sure my company doesn’t want to hear this, but the reality is that it’s really hard to get leadership buy-in for this type of project, because often the leadership themselves are not the target user. It’s really hard for them to see the real value of this type of feature. These days companies are also under-resourced, so this type of project tends to get lower priority than others. We indeed had several proposals in the past to make our AR features accessible to visually impaired users, but they always got deprioritized in favor of other, more important projects, and they just never got implemented.
Coping with Challenges
How did I cope with all these challenges? As I said, I’m not an expert in the accessibility field. The first thing I did was to reach out to teams who work on technology for visually impaired users, such as the team working on Lookout, which is an Android app that can describe what’s in an image. I explained to those teams how Lens in Maps could revolutionize the way visually impaired users interact with maps, and demoed my prototype to them. Because they are the specialists in the field, they gave me a lot of good feedback, and I iterated on my prototype based on that feedback. Now I had my prototype ready to test.
As I said before, I cannot test it myself, so I basically tried to find volunteers internally to first check if it was working OK. Luckily, there are several visually impaired users within Google who are very passionate about pushing the boundaries of assistive technology and willing to be early adopters. It’s usually hard to find those users within any company, because there are very few of them, and they are usually overwhelmed with requests to test every accessibility feature being developed in that company. I got a lot of good feedback from those users, which I was again able to incorporate into my prototype to improve it further.
Once the prototype was polished to a satisfying level from the internal testing, I also wanted to test with external users to get a wider range of opinions. I had great support from our internal UXR group, who specialize in accessibility testing. They organized everything, from recruiting to running the tests, with external blind and low-vision users. The study went really well, and the response was very positive. From those responses, I was more confident that this feature was getting ready to go public. The study went well, but in that external testing I didn’t get to interact directly with the users. I also wanted to demo my prototype and get direct feedback from external target users, and I was looking for somewhere I could do that. Luckily, I found a great conference called XR Access, which is directed by Dylan.
At the conference, I proactively approached two target users and asked if they could try out my prototype. That went well, and I again got a lot of good feedback from real users, which I was able to incorporate. Last but not least, developing this feature took several months, so I needed to make sure that my project didn’t suddenly come to an end because leadership said priorities had changed and we should work on something else. What I did was demo my prototype at various internal accessibility events to get the project more attention and get people excited. I don’t know if that effort really made the difference, but at least I was able to release my feature to the public on both Android and iOS.
What Worked Well?
What worked well for us? It worked well that we used technology that blind and low-vision users are already familiar with. We decided to use screen reader technology to describe the places and streets around the user: on iOS, this is VoiceOver, and on Android, this is TalkBack. We also considered using text-to-speech libraries, but those don’t make it easy to adjust settings like volume and speech rate, which blind and low-vision users tend to tune to suit their needs.
Also, if we required additional configuration, users would have to take extra steps just for Lens in Maps, so it made a lot of sense for us to use screen reader technology. There can be multiple places and streets visible from where the user stands. As you see here, there are many things there. We can only describe them one at a time, because our brain does not process multiple channels of audio very well: you may hear the sounds, but it’s hard to understand all of them at once. Besides places and streets, we also detect situations. The user might be near an intersection, so we need to tell them to be careful. Or maybe they’re facing a direction with nothing to see, but if they turn left or right, they could see more. In those cases, we also want to notify the user. We iterated multiple times and carefully prioritized what to announce in which situation.
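The prioritization described above can be sketched as a small ranking over announcement categories. This is a hypothetical illustration: the category names, their ordering, and the "speak only one thing" policy are assumptions based on the talk, not Google's actual implementation.

```java
import java.util.Comparator;
import java.util.Optional;
import java.util.PriorityQueue;

// Hypothetical sketch of priority-based announcement selection.
// Categories and ordering are illustrative assumptions only.
public class AnnouncementQueue {

    // Lower ordinal = higher priority: safety warnings first, then the
    // hovered place, then "turn left/right to see more" hints.
    public enum Category { SAFETY_WARNING, HOVERED_PLACE, DIRECTION_HINT }

    public record Announcement(Category category, String text) {}

    private final PriorityQueue<Announcement> pending =
            new PriorityQueue<>(Comparator.comparingInt(
                    (Announcement a) -> a.category().ordinal()));

    public void offer(Announcement a) {
        pending.offer(a);
    }

    // Speak only the single most important announcement for this moment;
    // the rest are dropped, since overlapping speech is hard to follow.
    public Optional<Announcement> nextToSpeak() {
        Announcement top = pending.poll();
        pending.clear();
        return Optional.ofNullable(top);
    }
}
```

In a real app, the chosen text would then be handed to TalkBack or VoiceOver as an accessibility announcement rather than printed.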
When we describe places and streets, Lens in Maps already had something called a hover state, which detects what’s around the center of the image and highlights those places or streets, as you can see on this slide. We made the feature announce whatever is hovered in our experience. Initially we described everything that appears on screen for the hovered item, because that’s what we show visually, like the label here that carries all the information about the hovered place, and that’s also what the accessibility guideline recommends.
This prevented the user from quickly browsing through different places, because they had to wait a long time to hear all of that information, especially in a busy area like downtown. We got great feedback from the Lookout team that we might be over-describing, and that it was probably better to shorten the description, even if it doesn’t exactly match what’s on the screen. We decided to describe only what’s most important to blind and low-vision users at that moment: the name of the place, the type of the place, and the distance to the place. For example, as you see on this slide, instead of announcing “T.J.Maxx, 4.3 stars, department store, open, closes at 11 p.m”., which is what you would usually hear on a 2D UI with screen reader technology, we only announced “T.J.Maxx, department store, 275 feet”.
If we only provide this succinct description, the user won’t know whether it’s really a place they want to visit. So we provide an easy way for the user to get detailed information when they want it, like the one seen on the right side of this slide. We added a double-tap interaction on the screen to bring up this information. This interaction may not be obvious to the user, so we appended a hint to the succinct description telling them they can get more information by double-tapping. Using the example before, we would announce “T.J.Maxx, department store, 275 feet, double-tap for details”.
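The succinct-plus-details pattern might look roughly like this. The field names and formatting are assumptions for illustration, chosen to match the T.J.Maxx example from the talk.

```java
import java.util.Locale;

// Rough sketch of the succinct vs. detailed announcement pattern.
// Field names and phrasing are illustrative assumptions.
public class PlaceAnnouncer {

    public record Place(String name, String type, int distanceFeet,
                        double rating, String hours) {}

    // Default hover announcement: just name, type, and distance, plus a
    // hint that more detail is one double-tap away.
    public static String succinct(Place p) {
        return String.format(Locale.US, "%s, %s, %d feet, double-tap for details",
                p.name(), p.type(), p.distanceFeet());
    }

    // Full description, announced only on explicit request (double-tap).
    public static String detailed(Place p) {
        return String.format(Locale.US, "%s, %.1f stars, %s, %s",
                p.name(), p.rating(), p.type(), p.hours());
    }
}
```

The key design choice is that the long form is opt-in: browsing stays fast by default, and detail costs the user one extra gesture.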
We only changed existing Lens in Maps behavior where absolutely needed, such as disabling the action that goes into the 2D basemap view, which didn’t help visually impaired users much, because they can’t get any information out of the 2D map and it’s hard to know the distance to anything. We also hide places that are too far away to walk to within five minutes. We made small adjustments here and there, but we tried to minimize those changes. This is important, because otherwise it would be really hard to keep the application in sync between the screen reader and non-screen reader experiences. Whenever you modify or add a feature to one experience, you have to make sure it doesn’t break the other, and the more the experiences differ, the more chances there are to break one of them.
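The five-minute walking cutoff can be sketched as a simple distance filter. The talk only says that places beyond a five-minute walk are hidden; the walking speed used here (about 1.4 m/s, giving a cutoff around 420 m) is an assumed typical value, not a published figure.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of hiding places beyond a five-minute walk. The walking speed
// is an assumed average pace; the talk does not state the exact cutoff.
public class WalkingRangeFilter {
    static final double WALK_SPEED_M_PER_S = 1.4;   // assumed average pace
    static final double MAX_WALK_SECONDS = 5 * 60;  // five minutes
    static final double MAX_DISTANCE_M =
            WALK_SPEED_M_PER_S * MAX_WALK_SECONDS;  // ~420 m

    public static boolean withinWalkingRange(double distanceMeters) {
        return distanceMeters <= MAX_DISTANCE_M;
    }

    // Keep only distances the user could plausibly walk to in time.
    public static List<Double> visibleDistances(List<Double> distancesMeters) {
        return distancesMeters.stream()
                .filter(WalkingRangeFilter::withinWalkingRange)
                .collect(Collectors.toList());
    }
}
```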
If the experiences really diverge a lot, then at that point there’s no point in having a single application support both; it’s better to just create another one. Besides auditory feedback, haptic feedback can also help blind and low-vision users, and it won’t interfere with audio cues when it’s used right. We use a gentle vibration to indicate that something is hovered. Before we can describe a place to the user, we have to fetch additional information from our server, which means that when the user hovers over something on the screen, they have to wait a few seconds before we’re ready to announce anything.
If we announced “loading” every time during this wait, it would be annoying, because we detect a lot of places. Instead, we use haptic feedback, so that over time the user learns that whenever they feel this small vibration, they need to wait a little bit before they can hear the information.
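The choice between the two feedback channels reduces to a tiny decision on hover: pulse while details are still being fetched, speak once they arrive. The names below are hypothetical; this is not the actual Lens in Maps API.

```java
// Minimal sketch of the feedback choice on hover: a haptic pulse while
// place details are still loading, speech once they arrive. Names are
// illustrative assumptions, not the actual Lens in Maps API.
public class HoverFeedback {
    public enum Feedback { HAPTIC_PULSE, SPEAK_DESCRIPTION }

    public static Feedback onHover(boolean detailsLoaded) {
        // Over time the user learns that a small vibration means "hold on,
        // information is coming", instead of hearing "loading" repeatedly.
        return detailsLoaded ? Feedback.SPEAK_DESCRIPTION : Feedback.HAPTIC_PULSE;
    }
}
```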
How to Apply Learnings
How can you apply our learnings to your situation? I won’t say that every AR app should work for users with visual impairments, because, again, AR is a visual-centric experience. In most cases, it works best for sighted users. However, it would be really great for you to at least think about whether your AR application could be useful or entertaining to blind and low-vision users if you made it accessible. As an example, the IKEA app has a very useful AR feature that lets the user overlay furniture in their room. The 3D furniture blends really well with the actual environment: the left sofa is a fake one, and the right chair is the real one.
As you can see here, it uses the lighting conditions of the room and surroundings; it looks almost like it’s really there. People use this feature today to see whether furniture fits well in their space before deciding to buy. However, when I tried this feature on Android with TalkBack turned on, it didn’t describe what was happening in the AR scene. Of course, it covered all the 2D UIs, what they say and what they do, but there was no description of anything happening in the AR scene. I also couldn’t interact with the 3D model using the general interaction model provided by TalkBack. I would imagine that if this feature were made accessible, it would really help visually impaired users explore new furniture before they buy it. Once you have determined that your AR app can be useful or entertaining for blind and low-vision users, making it accessible doesn’t mean you have to change a lot.
Like I said before, it’s important to keep the behaviors in sync between screen reader and non-screen reader experience, so it doesn’t become a burden to maintain or improve in the future. Also, there’s no need to explain everything that’s going on. A picture is worth a thousand words, but the user doesn’t have the time to listen to a thousand words. Try to make it succinct and only extract the most important information the user needs to know at the moment. However, make sure you can also provide a way to get additional information if the user requests, so that they can explore further.
As part of the make-it-succinct principle, it’s a good idea to combine auditory feedback with haptic feedback, since they can be sensed simultaneously. Use haptic feedback, like a gentle vibration, when the meaning of the vibration is easy to figure out after a few tries. You may also change the strength of the vibration to give it a different meaning, but make sure you don’t overuse haptic feedback for many different meanings, because differences in vibration strength are very subtle to sense.
Real User Experience (Lens in Maps)
Now I’d like to show a short video from Ross Minor, who is an accessibility consultant and content creator. He shared how Lens in Maps helped him.
Minor: For the accessibility features that I really liked, I really love the addition of Lens in Maps. It’s honestly just a gamechanger for blind people, I feel, when it comes to mobility. I talked about it in my video. Just GPSs and everything, they’re only so accurate and so just being able to move my phone around and pretty much simulate looking, has already helped me so much. This is a feature I literally use all the time when going out and about. Some use cases that I really have benefited from is when I’m Ubering.
A lot of times I’ll get to the destination, and places can be wedged between two buildings, or buried, or whatever, and it’s difficult to find. In the past, my Uber drivers would always be like, “Is it right here, this is where you’re looking for?” I was like, “I can’t tell you that. I don’t know”. Now I’m able to actually move my phone around and say, yes, it’s over there, and saying it’s over there and pointing is like a luxury I’ve never had before. There have very much been cases where my Uber is about to drop me off at the wrong place and I’m like, no, I see it over there, it’s over that way. It’s a feature I use all the time. I’m just really happy to have it, and it works so well.
Oda: It’s really great and rewarding to hear this type of feedback from a user, that it’s a gamechanger for them.
Prepare Your Future-Self
Now we’re back to stats again. Roughly 43 million people are living with blindness, and 295 million people are living with moderate to severe visual impairment worldwide. You might be thinking that you are advancing technology for people with disabilities. That’s great, but remember: you’re not only helping others, you might be helping your future self. Let’s prepare for our future selves.
Lens in Maps Precision vs. Microsoft Soundscape
Dylan: Obviously, this is fantastic work. I’m really glad that it’s out there and improving people’s lives. I’m very curious to compare this feature to something like Microsoft Soundscape, which I think mostly used GPS to figure out that there’s stuff around you in this direction or that direction, and to help people explore and get a sense of a space. It feels like the major advantage this would have over that is the ability to be much more precise, to use those visual markers and understand that you are specifically looking at this. What are some of the specific things that that level of precision enables that an app like Soundscape may not be able to do?
Oda: As Ross shared in the video, for example, he was riding with his Uber driver. Soundscape uses GPS, compass, and all that information to tell you, these are the places around you. It may even tell you that your destination is 100 meters away. The thing is, it doesn’t have the ability to tell you in which direction, and that can sometimes be very difficult. One thing I learned from our internal ADI sessions is that users know when they’re near a destination, but the question is, where exactly is it? In a video they shared with us, someone reached the destination and had to wander around for 10 minutes to find where exactly that destination was. That made me think that we can provide exact precision based on your phone, which is basically the direction you’re facing, so you know it’s in that direction. This level of precision really helps in those last-mile cases.
Questions and Answers
Participant 1: Earlier you described that there was friction in holding up the camera. I was wondering if that was consistent around the world or if there are certain countries where Lens in Maps was less used because of that or any other reason.
Oda: I think that’s probably not the main reason the feature is used less. It’s more that people don’t understand they’re supposed to use this outside. Also, there are certain places in the world where we don’t have a lot of information, because the technology heavily depends on Street View collection. The way we detect where exactly you stand and which direction you’re facing is by comparing your image with Street View imagery, a technology called VPS. Of course, there is some social awkwardness: especially if people are in front of you while you’re holding your phone up, they may think you’re taking a video.
Actually, we felt intimidated when we were testing this feature outside, not just the accessibility feature but Lens in Maps in general: even though we’re actually facing the restaurant, people passing by sometimes think we’re taking a video of them. There’s definitely a certain level of friction there. The only thing is, it’s really hard to know from the metrics gathered in production whether people stopped using it because of social awkwardness or something else. This is really just our guess, based on our own experience. From the data we can gather, we only know whether people are using this feature inside or outside.
Participant 1: You also mentioned that it was good to focus on one thing at a time. If there was too much on screen, how did you decide what to focus on and how to limit what to focus on?
Oda: We assign a priority to each type of announcement, and whichever we think is most important at the moment is what we describe first. Anything that poses a danger to the user is the highest priority, for example, if they are near an intersection and we don’t want them to cross carelessly. They are very careful, but we still want to add extra caution. The place you hover over is also considered more important than telling you there’s something else on your left or right. I think for any app, you can think about what is most important even when there are multiple things. For our very specific use cases, that was the ranking of what we thought was important, and we only describe the one with the highest priority.