Month: April 2025
Nebula Research & Development LLC Purchases New Holdings in MongoDB, Inc. (NASDAQ:MDB)

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
Nebula Research & Development LLC purchased a new stake in MongoDB, Inc. (NASDAQ:MDB – Free Report) during the fourth quarter, according to its most recent 13F filing with the SEC. The firm purchased 5,442 shares of the company’s stock, valued at approximately $1,267,000.
Other institutional investors and hedge funds have also bought and sold shares of the company. Strategic Investment Solutions Inc. IL acquired a new position in shares of MongoDB in the 4th quarter valued at $29,000. Hilltop National Bank boosted its stake in shares of MongoDB by 47.2% during the 4th quarter. Hilltop National Bank now owns 131 shares of the company’s stock worth $30,000 after acquiring an additional 42 shares in the last quarter. NCP Inc. acquired a new position in shares of MongoDB during the 4th quarter worth about $35,000. Versant Capital Management Inc raised its position in shares of MongoDB by 1,100.0% during the 4th quarter. Versant Capital Management Inc now owns 180 shares of the company’s stock worth $42,000 after purchasing an additional 165 shares during the last quarter. Finally, Wilmington Savings Fund Society FSB acquired a new position in shares of MongoDB during the 3rd quarter worth about $44,000. 89.29% of the stock is owned by hedge funds and other institutional investors.
MongoDB Stock Up 0.1%
Shares of MDB stock traded up $0.18 during trading hours on Tuesday, hitting $174.69. 1,468,630 shares of the company were exchanged, compared to its average volume of 1,842,724. The firm has a market cap of $14.18 billion, a PE ratio of -63.76 and a beta of 1.49. MongoDB, Inc. has a 1 year low of $140.78 and a 1 year high of $387.19. The firm has a fifty day moving average of $192.74 and a 200 day moving average of $247.82.
MongoDB (NASDAQ:MDB – Get Free Report) last posted its quarterly earnings results on Wednesday, March 5th. The company reported $0.19 EPS for the quarter, missing the consensus estimate of $0.64 by ($0.45). MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The business had revenue of $548.40 million during the quarter, compared to analysts’ expectations of $519.65 million. During the same quarter in the previous year, the business posted $0.86 EPS. Analysts expect that MongoDB, Inc. will post -1.78 earnings per share for the current year.
Insider Activity at MongoDB
In other news, CEO Dev Ittycheria sold 18,512 shares of the business’s stock in a transaction on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total transaction of $3,207,389.12. Following the transaction, the chief executive officer now directly owns 268,948 shares of the company’s stock, valued at approximately $46,597,930.48. The trade was a 6.44% decrease in their ownership of the stock. The transaction was disclosed in a document filed with the SEC. Also, CAO Thomas Bull sold 301 shares of the firm’s stock in a transaction on Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total transaction of $52,148.25. Following the transaction, the chief accounting officer now directly owns 14,598 shares of the company’s stock, valued at approximately $2,529,103.50. This represents a 2.02% decrease in their ownership of the stock. This sale was also disclosed in an SEC filing. Over the last quarter, insiders have sold 39,345 shares of company stock worth $8,485,310. Company insiders own 3.60% of the company’s stock.
Wall Street Analysts Forecast Growth
MDB has been the subject of a number of recent research reports. Canaccord Genuity Group cut their price objective on shares of MongoDB from $385.00 to $320.00 and set a “buy” rating for the company in a report on Thursday, March 6th. Truist Financial decreased their target price on MongoDB from $300.00 to $275.00 and set a “buy” rating on the stock in a research note on Monday, March 31st. Guggenheim upgraded MongoDB from a “neutral” rating to a “buy” rating and set a $300.00 price target on the stock in a report on Monday, January 6th. Royal Bank of Canada dropped their price objective on MongoDB from $400.00 to $320.00 and set an “outperform” rating on the stock in a report on Thursday, March 6th. Finally, Oppenheimer dropped their price target on MongoDB from $400.00 to $330.00 and set an “outperform” rating on the stock in a research note on Thursday, March 6th. Eight equities research analysts have rated the stock with a hold rating, twenty-four have given a buy rating and one has issued a strong buy rating to the company’s stock. According to data from MarketBeat, MongoDB presently has a consensus rating of “Moderate Buy” and an average target price of $294.78.
About MongoDB
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

MMS • Sergio De Simone
Article originally posted on InfoQ. Visit InfoQ

Docker has announced two new AI-focused tools—the Docker MCP Catalog and the Docker MCP Toolkit—to bring container-grade security and developer-friendly workflows to agentic applications, helping build a developer-centric ecosystem for Model Context Protocol (MCP) tools.
The Docker MCP Catalog is a centralized platform for developers to discover MCP tools. Docker’s COO Mark Cavage and head of engineering Tushar Jain compare the current AI landscape to the early days of cloud computing and containers, highlighting the need for standardized tooling and secure, scalable development workflows.
Back in the early days of the cloud, Docker brought structure to chaos by making immutability and isolation the standard, building in authentication, and launching Docker Hub as a central discovery layer. It didn’t just streamline deployment – it redefined how software gets built, shared, and trusted.
Docker has partnered with companies across cloud, developer tooling, and AI, to build a catalog of over 100 MCP servers, all hosted on Docker Hub. The catalog includes tools from Stripe, Elastic, Neo4j, and more. Each tool is curated, verified, and versioned to ensure reliability and consistency.
The Docker MCP Toolkit allows developers to run, authenticate, and manage MCP tools from the Docker MCP Catalog directly on their development machines using the new docker mcp CLI command.
With one-click launch from Docker Desktop, you can spin up MCP servers in seconds and connect them to clients like Docker AI Agent, Claude, Cursor, VS Code, Windsurf, continue.dev, and Goose – no complex setup required
The toolkit also includes built-in credentials and OAuth support along with a Gateway MCP Server that dynamically exposes enabled tools to compatible clients.
Introduced by Anthropic, the Model Context Protocol is an open standard for integrating external resources and tools into LLM-centered apps. Built on a client-server architecture, MCP enables an app to use an MCP client to connect to MCP servers that provide access to datasources or external tools. Anthropic’s official documentation shows how a developer can implement an MCP server using Python to wrap calls to a public weather service. Any MCP-compliant app, such as Claude for Desktop, can then access this server without modifications.
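The client-server exchange is easier to picture with a concrete message. MCP messages use JSON-RPC 2.0, and tool results come back as a list of content blocks; the toy dispatcher below sketches that shape in plain Python, in the spirit of the weather example. It is not the official SDK, and the get_forecast tool name and its canned reply are made up for illustration.

```python
import json

# Hypothetical tool standing in for a real weather lookup (the name
# get_forecast and its canned reply are made up for illustration).
def get_forecast(city: str) -> str:
    return f"Forecast for {city}: sunny"

TOOLS = {"get_forecast": get_forecast}

def handle(request_json: str) -> str:
    """Dispatch a JSON-RPC 2.0 'tools/call' request to a registered tool."""
    req = json.loads(request_json)
    name = req["params"]["name"]
    args = req["params"].get("arguments", {})
    text = TOOLS[name](**args)
    # MCP tool results are returned as a list of content blocks
    return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                       "result": {"content": [{"type": "text", "text": text}]}})

request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "get_forecast", "arguments": {"city": "Berlin"}},
})
print(handle(request))
```

A real MCP server would also implement initialization and tools/list so clients can discover what it offers; this sketch only shows the dispatch step.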
Since its introduction, MCP has seen wide adoption—most recently from GitHub and Cloudflare—and has inspired the creation of several static and dynamic MCP server catalogs.

MMS • Anthony Alford

Google released the Gemma 3 QAT family, quantized versions of their open-weight Gemma 3 language models. The models use Quantization-Aware Training (QAT) to maintain high accuracy when the weights are quantized from 16 to 4 bits.
All four Gemma 3 model sizes are now available in QAT versions: 1B, 4B, 12B, and 27B parameters. The quantized versions require as little as 25% of the VRAM needed by the 16-bit models. Google claims that the 27B model can run on a desktop NVIDIA RTX 3090 GPU with 24GB VRAM, while the 12B model can run on a laptop NVIDIA RTX 4060 GPU with 8GB VRAM. The smaller models can run on mobile phones or other edge devices. By using Quantization-Aware Training, Google was able to reduce the accuracy loss from quantization by as much as 54%. According to Google,
While top performance on high-end hardware is great for cloud deployments and research, we heard you loud and clear: you want the power of Gemma 3 on the hardware you already own. We’re committed to making powerful AI accessible, and that means enabling efficient performance on the consumer-grade GPUs found in desktops, laptops, and even phones…Bringing state-of-the-art AI performance to accessible hardware is a key step in democratizing AI development…We can’t wait to see what you build with Gemma 3 running locally!
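The VRAM figures are consistent with simple weight-storage arithmetic: memory scales linearly with bits per parameter, so 4-bit weights take roughly a quarter of the 16-bit footprint. A quick sketch (weights only; activations and KV cache add more):

```python
# Weight-storage arithmetic behind the VRAM claims: memory scales linearly
# with bits per parameter, so 4-bit weights need ~25% of the 16-bit footprint.

def weight_gb(params_billion, bits_per_param):
    """Approximate weight storage in GB (weights only, no activations/KV cache)."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

for size in (1, 4, 12, 27):
    print(f"Gemma 3 {size}B: {weight_gb(size, 16):5.1f} GB at 16-bit "
          f"-> {weight_gb(size, 4):5.1f} GB at 4-bit")
```

The 27B model drops from 54 GB to 13.5 GB of weights, which matches the roughly 13 GB mentioned later in the article.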
InfoQ covered Google’s initial launch of the Gemma series in 2024, which was quickly followed by Gemma 2. The open-weight models achieved performance competitive with models 2x larger by incorporating design elements from Google’s flagship Gemini LLMs. The latest iteration, Gemma 3, has performance improvements that make it the “top open compact model,” according to Google. Gemma 3 also added vision capabilities, except in the 1B size.
While the unquantized Gemma 3 models exhibit impressive performance for their size, they still require substantial GPU resources. For example, the unquantized 12B model requires an RTX 5090 with 32GB of VRAM. To allow the quantization of model weights without sacrificing performance, Google used QAT. This technique simulates inference-time quantization during training, instead of simply quantizing the model after it’s trained.
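The core idea behind QAT can be illustrated with “fake quantization”: during training, weights are rounded to the nearest representable low-bit level in the forward pass (gradients typically flow through via a straight-through estimator), so the model learns to tolerate the quantization error. The sketch below is a simplified symmetric per-tensor scheme, not Google’s actual recipe:

```python
import random

# 'Fake quantization': round weights to the nearest low-bit level during the
# forward pass so training adapts to the quantization error. This is a
# simplified symmetric per-tensor scheme, not Google's actual QAT recipe.

def fake_quantize(weights, bits=4):
    qmax = 2 ** (bits - 1) - 1                    # 7 for 4-bit symmetric
    scale = max(abs(w) for w in weights) / qmax   # map largest magnitude to qmax
    out = []
    for w in weights:
        q = round(w / scale)                      # nearest representable level
        q = max(-qmax - 1, min(qmax, q))          # defensive clip to int4 range
        out.append(q * scale)                     # dequantize back to float
    return out

random.seed(0)
ws = [random.gauss(0.0, 1.0) for _ in range(1000)]
deq = fake_quantize(ws)
levels = len(set(round(v, 9) for v in deq))
print(f"distinct 4-bit levels used: {levels} (at most 16)")
```

Because the rounding happens while the weights are still trainable floats, the network can shift its weights to sit closer to the representable levels, which is what reduces the accuracy loss compared to quantizing only after training.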
Google dev Omar Sanseviero wrote about using the QAT models in a thread on X and suggested there was still room for improvement:
We still recommend playing with the models (e.g. we didn’t quantize the embeddings, some people even did 3-bit quantization and it was working better than naive 4 bits)
Users praised the QAT models’ performance in a discussion on Hacker News:
I have a few private “vibe check” questions and the 4 bit QAT 27B model got them all correctly. I’m kind of shocked at the information density locked in just 13 GB of weights. If anyone at Deepmind is reading this — Gemma 3 27B is the single most impressive open source model I have ever used. Well done!
Django Web Framework co-creator Simon Willison wrote about his experiments with the models and said:
Having spent a while putting it through its paces via Open WebUI and Tailscale to access my laptop from my phone I think this may be my new favorite general-purpose local model. Ollama appears to use 22GB of RAM while the model is running, which leaves plenty on my 64GB machine for other applications.
The Gemma 3 QAT model weights are available on HuggingFace and in several popular LLM frameworks, including Ollama, LM Studio, Gemma.cpp, and llama.cpp.

MMS • Tiani Jones

Transcript
Jones: First of all, why patterns? Why do I talk about patterns, why do I care about them, and why do I think they’re so interesting to explore? These are the reasoning steps I go through when I think about patterns and when I talk about them with organizations and teams. The first is that every system is producing patterns. It’s a function of the interactions between people and the use of technology in the system. Every system is optimized to do something. Every system design is perfect, you could say, for whatever it’s designed to do, whether we acknowledge or realize it or not.
The behavior of the system is revealed in patterns, so we can then conclude that the system is optimized to produce patterns. What I’ve noticed in my work, across the many different domains I’ve worked in, and why patterns are important, is that many organizations pick practices and then think that those will be silver bullets to solve problems. Some of those problems might be slow delivery of value; it could be that the product is expensive for the market and they’re trying to figure out how to make it less expensive. Whatever the problem, they throw practices at it and just have a go from there.
Why A Game of Patterns?
Why do I call this a game of patterns in systems and organizations? In minimal gameplay, we ask, did we follow the rules? If you follow the rules in a game without thinking, are you really playing? I don’t think you are. I think the primary game in complex, adaptive systems, which I’ll define more fully later, is recognizing patterns: understanding the patterns that are produced in the organization and how the organization is optimized to produce them. When you play games, you have good gameplay and poor gameplay, but you’re practicing in order to perform; it’s practice to performance.
The ultimate thing that you’re looking for is performance. Performance is a sign of good play. You need to know in order to get to good performance, what to notice, what to measure to determine that something’s well done. When something’s not well done, or the performance is suffering, a deeper look into patterns and behaviors, generated by what you’re doing and how you’re interacting, will give you a clue into where to start. It will give you insight and awareness.
I like to bring this to life with an example. A few months back, I started playing chess again. I had not played for many years. I could move all the pieces correctly, but very clunkily. I would get checkmated within minutes, because I really didn’t know how to play. I felt confused, and sometimes I had low confidence. I couldn’t recognize the patterns and use the grid on the board. I would get cornered. I would blunder. I knew how to move the rook, the knight, and the queen. I knew exactly how to move them and what those rules were.
My performance overall in the game was poor, even though I could move these pieces perfectly. I saw Nakamura play with a YouTuber; they had put a video out of them playing together. After playing a round or two, Nakamura noticed what was missing. He said, you don’t know the opening patterns. You haven’t mastered any closing techniques. There were a few times you should have won the game. That’s where I think this brings to life the idea of understanding what good performance is and what it actually means to play a game.
Systems
Now that I’ve laid out that idea a little bit, I’d like to talk about systems, and then get back to how we bring patterns and behaviors together. I think it’s important to understand systems, their properties, and their nature to have a deeper conversation about patterns. The first thing about systems, and this holds true for all systems, is that they’re never just the sum of their parts; they’re a product of the interactions. No matter what kind of system, this holds true. Systems thinking generally focuses more on steady-state systems, with constraints on the parts, where interactions and behaviors are knowable.
The best example I could find was a feedback control system, which takes me back to my electrical engineering days. You have three kinds of control in a feedback control system: proportional, integral, and derivative. Proportional control applies a multiplier, known as a gain value, to the direct difference between the measured state and the desired state. This is the easiest type of feedback control to implement. Integral control integrates the error between the desired state and the measured state over time, and scales this by a gain value. When used in conjunction with proportional control, the integration acts as a low-pass filter and eliminates steady-state errors in the system. It reduces noise in the derivative error and therefore smooths out the control signal.
Derivative control acts on the current rate of change of the error, scaled by a gain value, and allows the controller to anticipate the future trend of the error. Derivatives amplify signal noise. There’s latency in systems, which is a time delay between when a real-world event occurs and when the data is fed back into the controller. There’s noise in systems, which is a disturbance on the signal. Mechanical and electrical signals produce these, and it’s unavoidable. Noise can come from the environment, from defects, or from design implementation decisions. They produce tiny shifts in voltage. All these elements work together and interact to produce something. Looking at them in isolation doesn’t really mean much. It’s how they work together to perform the design intent that’s most important. It’s the sum of the interactions that stands out boldly.
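As a concrete illustration of the three terms, here is a minimal discrete PID loop driving a toy first-order plant toward a setpoint. The gains, time step, and plant dynamics are arbitrary illustration choices, not tuned for any real system:

```python
# Minimal discrete PID controller: the three feedback terms described above.
# Gains, time step, and plant dynamics are arbitrary illustration choices.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0     # accumulated error (integral term)
        self.prev_error = 0.0   # previous error (for the derivative term)

    def update(self, setpoint, measured):
        error = setpoint - measured                        # desired - measured
        self.integral += error * self.dt                   # integrate error over time
        derivative = (error - self.prev_error) / self.dt   # rate of change of error
        self.prev_error = error
        # control signal: sum of the three gain-scaled terms
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order plant toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
state = 0.0
for _ in range(500):
    control = pid.update(1.0, state)
    state += control * pid.dt * 0.5   # toy plant: state moves with the control signal
print(f"final state: {state:.3f}")
```

Dropping the integral term leaves a steady-state offset, and raising the derivative gain makes the loop jumpier on noisy measurements, which mirrors the trade-offs described above.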
In terms of complex, adaptive systems, there is limited knowability of the critical aspects. For example, you can probably know that certain behaviors and patterns are present in an organization based on how they do budgeting, how they fund projects, or what they incentivize. Anthro-complexity is a key concept in a complex, adaptive system, which is the people element. Peter Checkland defined it in Soft Systems Methodology as the human activity system, with uncertainty and unknowability because of the human element. There are other types of complex, adaptive systems in nature. A beehive is complex. An anthill is complex. The ants respond to constraints. How they respond doesn’t really change over time except through adapting to the environment.
If you put ants in different environments, they’ll do what they do. They’re not self-reflective. Humans can actually change the rules of the game. We can change the dynamics. It’s another layer of complexity in a system. It’s not just reconfiguration, it’s reshaping the constraints. Take football as an example. Humans created rules over time, but they’ve changed. We have an official rule book, but what happens live in a match? The referees interpret what they see live on the field.
Then recently, VAR was introduced. A ref can make a call, but they can now be challenged by a VAR ref. This gives deeper understanding to the game. The socio-technical system is another way of describing a complex, adaptive human activity system. There’s a really great white paper on socio-technical systems, “The Evolution of Socio-technical Systems”, by Trist. The analysis of these systems includes three levels: the primary work system, the whole organization, and the macro-social phenomena. Organizations which are primarily socio-technical are dependent on their material means and resources for their outputs. Their core interface consists of the relations between a non-human system and a human system.
The macro-social phenomena are interesting because when you have work done, say, in a town or in a geographical area, people know each other, and perhaps they live in the same neighborhood. There are friendships. There are cultural elements. In the study around socio-technical systems, there is an insight that those social elements actually influence the ability to do work and had a part to play in an organization. In a large conglomerate, you could also say that macro-social phenomena are manifest, or at play, when you analyze bigger bounded groupings of the organization, the geography, the company history. When one organization is purchased and melded into the larger group, the culture, the languages, all of that factors in.
I was speaking with a product leader one time who shared an example that brings this to life. We were actually chatting about Conway’s Law, which is also known as the mirroring hypothesis. He mentioned a company he was working with that had purchased another company in another U.S. state, and the smaller company handled some packaging of their data. The main company was larger and often forgot to communicate with the smaller company. They would forget to include them in meetings. Everyone was co-located in one location except the smaller company in Maine, and they were always frustrated. They would sometimes just get voicemails and hope someone had picked up. They would be forgotten in meetings. Calls would be dropped, so would emails. There was just no communication.
When he poked into the software architecture and the product architecture of this company, the communication breakdown between the larger company and the purchased smaller company was perfectly mirrored in the product’s performance. There were data losses and defects directly mirroring that structure. That’s an interesting way of thinking about macro-social phenomena, the whole organization, and the primary work system when analyzing what happens in the socio-technical system. There’s also the idea of internal networks. Those could be considered maybe micro-social.
Another example comes from Lean. A lot of times, people thought that Americans would not understand how to work in a Lean way and in the Toyota Production System, because in Japan, from a cultural standpoint, you get lifelong employment. There are employment guarantees. Because of that, there was a feeling of safety and an expectation that you can pull the Andon cord and report problems without fear of retribution, just because you happen to be the person standing next to the problem. Sometimes, when you do have that fear of retribution and you don’t have the freedom to pull the Andon cord, you might bury mistakes instead of exposing them because of low safety. Those are some of the things you see when you start analyzing the socio-technical system and what’s actually happening.
Patterns and Behaviors
I’d like to talk a little bit about patterns and behaviors. I like to use a tool that gives us a way to start explaining them. A colleague of mine and I were thinking through this concept of what it actually means for an organization to behave in a certain way. What are the patterns you might then see? How do you link those to practices? This is the beginning of that exploration, and this is what we came up with. We used Westrum’s Typology, which I really like as a starting point; it’s a great framework for recognizing organizational cultures. There are three types: pathological, bureaucratic, and generative. We thought this through a little and concluded that the behavior in each case is either power-oriented, rule-oriented, or performance-oriented. Then there are constituent patterns that are manifest, that reinforce, and that are seen based on that behavior.
Way on the left, you see in the pathological environment, there’s low cooperation, bridging is discouraged, bridging across teams and groups and roles and responsibilities. Novelty is crushed. It’s very much like, stay in your lane, and responding to the hierarchy. That’s the type of organization. If you go all the way to the right, you see the opposite of that in generative culture, where if it’s focused and centered on the mission, focused on performance, then you see patterns of high cooperation, bridging, co-working. If there’s a failure, it leads to inquiry and curiosity, what’s happening? How do we understand this and how do we work together to work through these failures? Novelty is enacted. Of course, in a large organization, you can have pockets of this, but this is a good way just to start thinking and talking about behaviors and patterns.
The thing about patterns is that they’re just patterns until they aren’t. Consider this outcome statement. This is an outcome statement that an organization actually shared with us one time. This idea that leaders can enable a generative culture where technology, products, people, and the operating system continuously evolve to enact innovation in the marketplace. We took this idea from Westrum’s Typology around rule-oriented behavior, and thought, what are some other ways you could think about patterns that you might see?
In an organization, if you see monitoring and complying patterns, a focus and overemphasis on monitoring and complying throughout the organization, some of the practices that might be linked to that would be traditional project management and siloed information processing. This is where you see companies who are data-rich but information-poor, with lots of data collection. There’s also designing for security, where you have things locked down for security and safety in a way that covers over and buries problems instead of surfacing them and designing in a different way. This behavior, as manifested in these patterns, is likely antithetical to this outcome, and therefore, in this case, you would say that these are now anti-patterns. Because the minute you have a stated outcome and then analyze what the patterns and behaviors are, if they don’t coincide, you say that those are anti-patterns.
Let’s reconsider the outcome. In this case, we have the performance-oriented organization. What you might see then is guiding and enabling types of patterns. The practices that are reinforced by that would be adaptive ways of working, designing for trust, being insights-focused, automation, digital threads, that kind of thing, value stream convergence. The point is to recognize that there are a few elements at play when it comes to behaviors, patterns, and practices.
First of all, there’s social practice. Many of the things we do in organizations, we practice socially, and that’s an integration of meaning, know-how or skill, and technology and material. Patterns emerge from how we use technology and how we structure the organization. There’s a relationship between your goal and the materialized patterns. That’s really the point to take away from here. I would like to focus on agility as an example to bring this to light, now that we’ve gone through these higher concepts, of how we understand good performance, patterns, and behaviors, and that it’s not just following the rules, like we said at the outset. There’s another white paper that I found when we were doing some research on this, called “Agile Base Patterns in the Agile Canon”. It’s this idea of base patterns of agility, based on pattern language.
Regardless of the organization, and whatever methods, frameworks, or agility techniques it might be employing, these are the base patterns to look for. This idea of patterns came through very strongly in this paper, and I thought it was a really good way to think about it. The first three, measuring economic progress, proactively experimenting, and limiting work in process, give you agility in an organization. If you want a resilient organization, you need to embrace collective responsibility. If you want to move beyond the actor boundary to the whole organization, you solve systemic problems. That’s less focus on local optimization and more focus on how the system is behaving, what’s actually happening, and going from there.
I’ll go through these just to give some explanation of why these patterns are interesting to me. The pattern about measuring economic progress is about being insight-focused versus data-collection-focused, which is one of the patterns and practices links that we discussed before. Many organizations will measure and report everything that they think is interesting, all the way up to executives. This increases cognitive load and decreases decision-making quality. Optimal outcomes can only be achieved through informed decision-making. Therefore, it is crucial to employ improved methods of measuring progress that provide better insights. These insights are essential to clearly understand the situation at hand and make confident decisions that lead to success. One of the techniques that we’ve used before to help teams and organizations start to design well thought-out metrics that help them measure economic progress is the ODIM technique.
The idea behind the ODIM technique is an evolving metric suite with a low number of metrics, where measurements can be performed frequently and are balanced so you can expose or reduce the effect of gaming, where you can identify when metrics are subjective, reducing bias, and where you consider other factors around measures. Those are the main points about that. On proactively experimenting to improve: much product development is iterative and involves some experimentation to achieve value. Still, change in a complex, adaptive system must also involve evolutionary and revolutionary experiments on the system itself: understanding the nature of the system, what’s happening, what the patterns and behaviors are, and then running experiments and tests to achieve the outcomes that you want in the system itself. I like that the white paper called out this idea of becoming an improvement scientist in the organization, and that everyone plays a part in that. Then, limiting work in process.
If you’re familiar with Kanban and Lean, this is one topic you’re probably familiar with. Work in process is work that’s started but not finished. It can be code that’s waiting for testing. It could be a design that’s finished but hasn’t been approved. It could be a requirement you’ve started to look at but don’t understand. It’s anything that you’ve started but haven’t finished, and it adds to your cognitive load. It’s mentally exhausting, and it’s another thing that causes us to mistake activity for accomplishment. It hides process problems and wastes that are lurking in the system. If we don’t limit our work in process, we can’t see where the bottlenecks in the system are. If we don’t identify bottlenecks, then how can we fix them? These are probing questions that simply following an Agile method won’t answer for you; you answer them by diving into these questions, solutions, and tactics, and by modifying constraints as you look into the system.
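The mechanics of a WIP limit are simple to sketch: a column refuses new work until something already in progress finishes, which is exactly what makes a bottleneck visible instead of hidden in a growing queue. The column names and limit below are illustrative:

```python
# Toy Kanban column with a WIP limit: pulling new work is refused until
# something in progress finishes. Names and the limit are illustrative.

class Column:
    def __init__(self, name, wip_limit):
        self.name, self.wip_limit = name, wip_limit
        self.items = []

    def pull(self, item):
        if len(self.items) >= self.wip_limit:
            return False              # at the limit: finish something first
        self.items.append(item)
        return True

    def finish(self, item):
        self.items.remove(item)

testing = Column("testing", wip_limit=2)
print(testing.pull("feature-A"))      # True
print(testing.pull("feature-B"))      # True
print(testing.pull("feature-C"))      # False - work backs up, exposing the bottleneck
testing.finish("feature-A")
print(testing.pull("feature-C"))      # True
```

The refused pull is the signal: instead of piling a third item into testing, the team is pushed to swarm on finishing what is already there.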
Resiliency, which is around embracing collective responsibility. This is the domain of shared responsibility for the system, for the work. Also, there’s a couple of ideas from a complexity and a social practice standpoint, one of them is inter-predictability. Inter-predictability is this way that you increase the transmission and sharing and development of skills and knowledge through how you work together. It could be through peer-to-peer co-working, not asynchronously, but working together, building together at the same time. The reason why this is important is because of this idea around tacit knowledge.
Say you had a project or something that you were working on, and someone asked, how do you do X, Y, or Z? You explain it to them. Then, when they watch you do that thing, they say, “But you didn’t mention that. I just saw you do something different”. Then you sit back and you think, yes, there are some things I know how to do that I don’t know how to explain; it just comes to me when I’m doing that task, or doing that work. That’s tacit knowledge, and that’s what you gain with inter-predictability: when you’re working together with your teammates, you’re able to build bridges through that tacit knowledge, through observing, sharing, and then building new things together. Deming was also famous for saying that quality is everyone’s responsibility, so that’s another way of embracing collective responsibility. In the Agile world, we hear a lot about collective code ownership, where the team is the base unit of value delivery. This is also the domain of skills liquidity.
Just like financial liquidity, where you can move money around and use it how you wish and how you need to, imagine the same thing with skills in an organization: being able to move them around, because you’ve created this inter-predictability. If you’ve read “The Phoenix Project”, there’s also the concept of de-Brenting the organization, where you have single points of failure. In The Phoenix Project, Brent had his hands on the keyboard and he did everything, and the minute he stepped back and helped others learn, it unblocked the organization, because that tacit knowledge was then being shared, and other people were learning how to do that work.
Then solving systemic problems, moving beyond the actor boundary. This is hunting for where things are out of balance. Where Lean says to eliminate waste everywhere, and paints all waste with an equal brush, Theory of Constraints says to hunt for waste. If you think of the socio-technical system, diving into the system and looking for waste across it, or for how things are linked and connected in interesting and hidden ways, that is where this domain comes to life.
Case Study – Patterns in a Cyber-Physical Environment (The Fan Team)
I’d like to give you an example now of a team that I worked with. It’s in a cyber-physical product development environment, and it’s a non-software example, a step away from some of the typical work that I’ve done: a team of rocket scientists who wanted to figure out how to work in a different way. It brings to life this concept of patterns and behaviors and their link to practices. In this case study, I call them the fan team. They’re research scientists in the aerospace industry. They did initial calculations and modeling around how to build an engine fan blade in a new material. They needed funding, and so they assembled a year-long plan on a Gantt chart with all the usual tasks, dependencies, and SWAGs of how long each phase would take.
The leader who sponsored this effort was concerned. He said, could we present this plan in a different way? Our goal is, one, to prove that we can build a fan blade out of this different material, but it’s also to prove that we can work in a completely different way in this research center, and to be more effective as an organization. Is Agile the answer? Should we follow some kind of Agile methodology? How do we do this? Where do we start?
I helped the team get together, and first of all, just describe a new plan for their funding. Instead of this Gantt chart, this rigid plan, where they tried to think of all the tasks in sequence that went out for maybe a year, I helped them create a presentation on how they would perform the following experiments. The workshop was as simple as it could be. I had a whiteboard with risks, assumptions, issues, dependencies on the board, on giant Post-its. They had a place for open questions, and then the whiteboard where they mapped out the experiments.
I asked them, what are you actually trying to accomplish here? Let’s start there, stripping away everything that you’ve done so far, how you had initially started, and how you thought you were going to do this proposal. You’ve done some simulations. You’ve made some calculations. You think there’s something here. What are you trying to do? They said, we’re trying to prove that we can put a fan blade made of this material in an aircraft and that it will work. I said, if you were going to test that in real life, if you were going to produce this fan blade, how would you do that? What would you do? They said, we can go over to this shop over here; they can actually manufacture it for us. We have a lab where we can throw that fan blade on a rotor and blast it at the power they would expect it to perform at. I said, write that down. They wrote that down.
Then the first question was, will the fan fail when pushed at the required power? They described in their first experiment the test bench that they would set up, the experiment details, what they’re measuring for, and where they’re going to focus. I said, if test number one goes well, if experiment number one succeeds, what will you do in experiment number two? They said, the next step would then be to put a housing on it. Just like you see on an aircraft, the blade spins within a housing, and that would mimic the next step of the real-life test. I said, so the question is, if it doesn’t fail, will it fail when put into a housing mimicking a typical engine fan housing? They said, yes, that’s it. That became experiment number two, with some modifications: what they’re going to measure, what they’re going to do, how they’re going to run that experiment. I said, if experiment number two goes well, then what are you going to do? They said, if the housing doesn’t fail, then we have some additional things that we can do to get as close as possible to 100% efficiency.
We’ll begin the tooling of the blades, refining and getting the blades smooth to get the perfect efficiency, and some other tweaks. I said, write that down. They presented this plan instead of the Gantt chart plan: three experiments, who the team was, where they would do the tests, and where they would source the fan blade, and they got funding. The other thing is, from a complexity standpoint, they preserved a little bit of optionality, prioritizing certain options based on their expiry. In this case, they deferred commitment on the blade tooling design because they knew they didn’t have to answer that up front, and they had found that they often got caught up and spent too long on the blade tooling design.
Then we kicked off the project. The first thing we talked through was this idea of visualizing work in progress. They visualized their workflow on an Excel Kanban, and they had a system for updating it as a team. Why did they do it this way? First of all, this was a top-secret program. They couldn’t put anything on the wall. In their context, they couldn’t have a team room. They weren’t used to actually working in team rooms; they all had their own separate desks and would often just hand things off to each other. That was a completely foreign concept, and they couldn’t actually do it given the top-secret nature of the product. They didn’t have any electronic tools. They didn’t have Trello. They didn’t have Jira. They couldn’t really use those either, because of security concerns. They asked me, could we use Excel? I said, of course. Let’s just design a starting workflow together. We designed a starting workflow.
Two weeks later, I came in to help them again, and we discussed their challenges, and I gave them some rules of thumb. By then, they had added a color-coding system, and they had one person who would update the board in Excel when they got together in their morning session, a.k.a. standup, before their morning coordination meeting three times a week. They developed this system for knowing what was in progress, what was up next, what was blocked, who was working on what, and that kind of thing. They just leveraged Excel, after some priming on how to use a Kanban.
The next thing is they worked cross-functionally, and met several times a week. I gave them a simple rule. I said, for work that you typically hand off between roles, look for opportunities to pair up, to parallelize, to work together, to sit next to each other literally, and collapse that work into one set of tasks, one moment of work together. Perform it simultaneously. If it helps, even mark that work on your board. If that sounds familiar, that’s like pairing, maybe like mobbing, but definitely embracing collective responsibility and limiting work in process.
The other thing they did was have a rule for how many work items would be in progress at a given time. For them, this was so different, because they used to just throw a ton of things in progress and hand things off, from a systems engineer to a mechanical engineer. Now they were actually thinking intentionally about how to achieve flow, how to maximize flow, and how to maximize problem solving together with the tools they had at hand. The next thing is they worked on solving systemic problems. This one was a complete surprise to me. There was one person who was responsible for, and had the knowledge of, how to set up the test lab. That person was overbooked; they were booked a month at a time. You couldn’t get time with them to set up the lab, and it was blocking this team from doing their work, even when the lab was open.
One person from the team paired up with this person, documented the setup, and published it somewhere everyone could view it. That person gave them the thumbs up: yes, if you follow these steps for the setup, running the lab, and the shutdown of the lab, this is safe. It removed this bottleneck for the future. I couldn’t have anticipated that. By getting together and working in this new way, the team learned how to solve a systemic problem together and to achieve value delivery together.
The other thing that happened was that the leaders who lit the spark, who put this team together, wanted to achieve both the mission of solving this problem around the engine fan blade and the mission of changing how they work and their methods for working. The leaders would just unblock the team when there was something in the way. They didn’t tell them what to do exactly. They gave them no direction other than the strategic vision to demonstrate the new way of working, based on principles of decentralization of some decision-making and control, small batches and experiments, and resiliency and responsiveness. The team, in collaboration with the leadership, rid themselves of review gates. In this case, they usually followed the V-model of engineering, which has lots of stage gates and review steps, lots of inspecting quality in versus building it in.
In this case, they had live demos in the lab with the physical product, with stakeholders, with leaders, and with other scientists working on the problem. They were focused on the problem-solving together. In record time, they were able to prove the viability of this material in a near-real-world application. The other thing that happened is that they caught a major design flaw and were able to correct it very early. The chief rocket scientist noted that in a similar project previous to this one, they were months in and so far downstream when they uncovered a design flaw of a similar nature; he estimated they saved themselves maybe eight months by working in this different way. That was the success they had from that approach.
What were the observed patterns? What emerged? How did this actually work? In anthro-complexity, because humans both work with constraints and change the constraints, I could not assume that this team would go into the room and build something. I couldn’t assume that they would get somewhere. My small part was just to help set the conditions to make it likely that they would be successful, through some simple rules and heuristics: constraints management by modulating the constraints, not telling people what to do.
Then, once we established those heuristics and constraints and put them into action, the questions we ask are: are the patterns I expect to play out actually playing out? Can I make some changes to the constraints to see if the patterns I want begin to emerge? This is not just an ethical statement: the minimum amount of process produces better outcomes for everyone. The minimum constraints to get started equals better results. It’s better to actually start somewhat under-constrained. What that means is this idea of incremental elaboration. Every intervention you make in a system causes other problems. As those problems happen, we begin to notice new things if we practice modulating constraints.
Now we might notice the work in progress is going higher, or the work item size is too large and we need to break the work down smaller. Or we have an external dependency that we need to grapple with: can we untangle from this dependency? Can we reduce dependencies in some way? It’s the practice-to-performance idea that I mentioned at the outset: you start to notice things when you work this way, when you’re under-constrained and the problem solving on the system is actually within the team’s grasp. When solutions are presented to teams and forced onto them, that’s where you get things like learned helplessness, because they’re trying solutions to things that they cannot yet observe or perceive.
I tried to map this on a Wardley map to think about this in a different way. The map is meant to explain value chains and the current state for an applicable market. I assumed that the organization, a whole organization, is an applicable market from this standpoint. From an intervention standpoint, we need to stop the value chain way over on the right from functioning. The way we might do that is by intervening where you have the Gantt charts and the schedule data, the data collection focus. If we’re talking about certainty and ubiquity in the organization, there are certain practices that are standard and assumed and just done, versus all the way over to the left in genesis, which are the more emergent practices in an organization: something new, never been tried, or floundering. In the example I gave previously, rule-oriented behavior depends on the monitoring pattern, which depends on data collection focus practices.
Say you have perfect authority: you could outlaw the use of spreadsheet data in presentations, and you could outlaw Gantt charts and schedule data. For the fan team, for example, the outlawing would be presented as a challenge to them, and it would erode support for the data collection focus; that outlaw or challenge can then be framed as a constraint for a given context for the team. This opens up the opportunity space for alternative practices. It opens up opportunity space for doing things in different ways.
At the end of this project, the experienced rocket scientist said, I remember how we used to work. We’d just start building until we got it right, but somewhere along the way, all this red tape and standard work came in and we began to slow down. At some point, standard work made sense for this organization, but then it was used everywhere for everything, including engineering and research and development, and it slowed it down and stifled it. Imagine research and development being so focused on the schedule and scope. It seems a bit crazy when the whole point was problem solving a design problem for new technology and how to do that as effectively as possible.
Conclusion
In my conclusion, I state a few things, the same idea in different ways. The first complexity topic here would be bounded applicability: beware of addiction to practices, the silver-bullet solution to problems applied without knowing if it’s meant for your context, and the fact that interventions cause problems. The point is to learn and build skills in what to measure and what to notice, all centered on the desired outcome: the relationship between your goal and the materialized pattern. There’s a really great blog post about this by Jabe Bloom, where he wrote, the failure to recognize the bounded applicability of our tools results in less effective utilization of those tools. Teams using the wrong process for the wrong problem may lose confidence in the tool, which is very useful in other domains, and which they will now avoid.
The other concluding idea I had was around learning how to play well, and going back to the idea that performance is the sign of good play. You need to know what to notice and what to measure to determine something is well done. Principles, like Agile principles and practices, can create constraints that allow habitudes. You need to leverage these practices as constraints in order to observe and notice the patterns that emerge in your context, and what emerges as you use them is something unique to your organization.
The awareness of anthro-complexity in these systems changes our expectations: what we expect of ourselves and what we expect of others in product development as leaders and team members. We can’t expect each other always to have the right answers and consistently get results, especially if we haven’t created the conditions for learning through an understanding of our patterns and moving constraints to change the possibility space. I really can’t emphasize enough the importance of creating conditions for learning. One brief anecdote about this: on a team that I worked with very early in my Agile experience, the tech lead was on the verge of firing a young developer. He called him a poor performer. After some team training on XP, the lead, along with the team, decided to play with a couple of constraints, one being, let’s test pair programming and rotating the pairs. That young developer began to flourish in the new collaboration format, to the point that the lead thought he was a whole new person.
The fact of the matter is that the constraints had been modulated so that this young developer could now develop and build together with his team, versus getting an assignment in isolation in their respective cubicles. It was a step towards being more performance-centered. There was less siloed information processing. Really, there is a human level to all this that can make work better for everyone.
Java News Roundup: Gradle 8.14, JBang Jash, Hibernate, Open Liberty, Spring Cloud Data Flow

MMS • Michael Redlich
Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for April 21st, 2025 features news highlighting: the GA release of Gradle 8.14; JBang introduces Jash, a Java library for shell scripts; the first release candidate of Hibernate ORM 7.0; the April edition of Open Liberty; and the end of open-source support for Spring Cloud Data Flow.
OpenJDK
Two JEPs have been elevated from Candidate to Proposed to Target for JDK 25, announced here and here, respectively, namely: JEP 512, Compact Source Files and Instance Main Methods, and JEP 511, Module Import Declarations. Their reviews are expected to conclude on Monday, April 28, 2025 and details for each JEP may be found in this InfoQ news story.
JEP 513, Flexible Constructor Bodies, has been elevated from its JEP Draft 8344702 to Candidate status. This JEP proposes to finalize this feature, without change, after three rounds of preview, namely: JEP 492, Flexible Constructor Bodies (Third Preview), delivered in JDK 24; JEP 482, Flexible Constructor Bodies (Second Preview), delivered in JDK 23; and JEP 447, Statements before super(…) (Preview), delivered in JDK 22. This feature allows statements that do not reference the instance being created to appear before the this() or super() calls in a constructor, and preserves existing safety and initialization guarantees for constructors. Gavin Bierman, Consulting Member of Technical Staff at Oracle, has provided an initial specification of this JEP for the Java community to review and provide feedback.
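As a sketch of what the finalized feature permits (an illustrative class, not from the JEP text; compiling it requires JDK 25, or JDK 22-24 with preview features enabled):

```java
// Illustrative example of JEP 513, Flexible Constructor Bodies:
// argument validation may now appear before the explicit super() call,
// as long as those statements do not reference the instance being created.
public class Range {
    final int low;
    final int high;

    public Range(int low, int high) {
        if (low > high) {                       // runs before super()
            throw new IllegalArgumentException("low > high");
        }
        super();                                // explicit superclass constructor call
        this.low = low;
        this.high = high;
    }
}
```

Before this feature, such validation either had to be squeezed into a static helper invoked inside the super()/this() arguments or deferred until after the superclass constructor had already run.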
JDK 25
Build 20 of the JDK 25 early-access builds was made available this past week featuring updates from Build 19 that include fixes for various issues. More details on this release may be found in the release notes.
For JDK 25, developers are encouraged to report bugs via the Java Bug Database.
GlassFish
GlassFish 7.0.24, the twenty-fourth maintenance release, delivers bug fixes, dependency upgrades and new features such as: support for JDK 24; and faster deployment time with improved file discovery by using the walkFileTree() method defined in the Java Files class. More details on this release may be found in the release notes.
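For readers unfamiliar with the API mentioned above, Files.walkFileTree() from java.nio.file traverses a directory tree and invokes a visitor callback per entry. This small self-contained sketch (the demo directory and the countFiles helper are illustrative, not GlassFish code) counts regular files under a temporary directory:

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class WalkDemo {
    // Counts regular files by visiting every entry once,
    // without building intermediate directory listings.
    static int countFiles(Path start) throws IOException {
        final int[] count = {0};
        Files.walkFileTree(start, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                count[0]++;
                return FileVisitResult.CONTINUE;
            }
        });
        return count[0];
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("walk-demo");
        Files.createFile(dir.resolve("app.war"));
        System.out.println("files visited: " + countFiles(dir));
    }
}
```

The visitor pattern lets the caller react per file as the tree is walked, which is the kind of single-pass discovery the release note credits for the faster deployment time.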
Spring Framework
It was a busy week over at Spring as the various teams have delivered first release candidates of Spring Boot, Spring Data 2025.0.0, Spring Security, Spring Authorization Server, Spring Session, Spring Integration, Spring Modulith and Spring Web Services. There were also second milestone releases of Spring Data 2025.1.0 and Spring for Apache Kafka and a first milestone release of Spring Vault. Further details may be found in this InfoQ news story.
The Spring Cloud Data Flow team has announced the end of open-source support for this project along with Spring Cloud Deployer and Spring Statemachine. The reasoning for this includes:
Spring Cloud Data Flow came out of the roots for Spring XD eight years ago for orchestrating both batch and streaming workloads and has shown great success with our customers over those years. However, in order to keep Spring Cloud Data Flow and related ecosystem projects going into the future in a way that is sustainable, we have made the decision to only release Spring Cloud Data Flow as a commercial offering.
Future releases, after versions 2.11.x, 2.9.x and 4.0.x, respectively, will only be made available to Tanzu Spring customers.
Open Liberty
IBM has released version 25.0.0.4 of Open Liberty featuring: support for Java 24; the ability to collect Liberty audit logs, via their Audit 2.0 feature, and send them to a configured OpenTelemetry exporter; and InstantOn support for the J2EE Management 1.1, Application Client Support for Server 1.0, Jakarta Application Client Support for Server 2.0 and Web Security Service 1.1 features. There were also resolutions to CVE-2025-25193 and CVE-2025-23184 that may cause a denial-of-service due to vulnerabilities from Netty versions up to and including 4.1.118.Final and Apache CXF versions before 3.5.10, 3.6.5 and 4.0.6, respectively.
Quarkus
Quarkus 3.21.4, the fourth maintenance release, ships with notable changes such as: a resolution to a StackOverflowError using a retry policy from the SmallRye implementation of the MicroProfile Fault Tolerance specification; and the addition of a warning or error when attempting to create an instance of the HttpSecurityPolicy interface with a duplicated name. More details on this release may be found in the release notes.
Helidon
The release of Helidon 4.2.1 provides bug fixes and notable changes such as: the use of base units from the Timer interface for improved metrics reporting, in JSON format, in the toString() method defined in the MTimer class; and support for configurable buffering added to the TcpClientConnection class to prevent small write chunks. More details on this release may be found in the release notes.
Hibernate
The first candidate release of Hibernate ORM 7.0.0 delivers new features such as: a new QuerySpecification interface that provides a common set of methods for all query specifications, allowing iterative, programmatic building of a query; and a migration from Hibernate Commons Annotations (HCANN) to the new Hibernate Models project for low-level processing of an application domain model. There is also support for the Jakarta Persistence 3.2 specification, the latest version targeted for Jakarta EE 11. The team anticipates this as the only release candidate before the GA release. More details on this release may be found in the release notes and the migration guide.
JBang
The JBang team has introduced Jash, a new Java library that provides a way to execute process or shell scripts that are “fluent, predictable and with a great developer experience.” Jash, pronounced “Jazz,” handles the behind-the-scenes tasks with the complexities of using multiple threads. More details on this initial release may be found in the release notes and InfoQ will follow up with a more detailed news story.
Gradle
After three release candidates, the release of Gradle 8.14 delivers new features such as: support for JDK 24; an introduction to lazy dependency configuration initialization for improved configuration performance and use of memory; and a new integrity check mode for improved debugging in the configuration cache. More details on this release may be found in the release notes.


MongoDB, Inc. (NASDAQ: MDB) today announced the appointment of Mike Berry as Chief Financial Officer, effective May 27, 2025. Berry will lead MongoDB’s accounting, FP&A, treasury and investor relations efforts and partner with other senior leaders to set and deliver on the company’s long-term strategic and financial objectives.
Berry joins MongoDB from NetApp, where he served as CFO for the past five years. A seven-time CFO, Berry previously held that role at McAfee, FireEye, Informatica, IO, SolarWinds, and i2 Technologies. Berry brings to MongoDB over 30 years of experience in the technology and software industry and a proven track record of driving profitable growth.
“Mike’s unique combination of strategic, operational, and financial expertise makes him a key addition to the MongoDB leadership team,” said Dev Ittycheria, President and CEO of MongoDB. “His industry expertise and proven ability to drive efficient growth aligns perfectly with our vision for the future. This is an incredibly exciting time for MongoDB as customers are in the very early stages of harnessing GenAI to build new applications and modernize their vast installed base of legacy workloads. Mike’s experience with consumption models and history of successfully scaling businesses to $5 billion in revenue and beyond make him the ideal choice to serve as MongoDB’s next CFO.”
“I’m thrilled to join MongoDB at such an exciting moment in its growth journey,” said Berry. “The company’s incredible track record of product innovation and established leadership position in one of the largest, most strategic markets in software provides significant growth drivers that we expect to benefit our business for years to come. While it was not my intention to pursue another CFO role when we announced my retirement from NetApp, the opportunity to join a company the caliber of MongoDB was incredibly compelling. I can’t wait to get started in late May and I look forward to working with the team to create long-term value for our customers, shareholders, and employees.”
About MongoDB
Headquartered in New York, MongoDB’s mission is to empower innovators to create, transform, and disrupt industries with software. MongoDB’s unified database platform was built to power the next generation of applications, and MongoDB is the most widely available, globally distributed database on the market. With integrated capabilities for operational data, search, real-time analytics, and AI-powered data retrieval, MongoDB helps organizations everywhere move faster, innovate more efficiently, and simplify complex architectures. Millions of developers and more than 50,000 customers across almost every industry—including 70% of the Fortune 100—rely on MongoDB for their most important applications. To learn more, visit mongodb.com.

Gilder Gagnon Howe & Co. LLC lessened its holdings in shares of MongoDB, Inc. (NASDAQ:MDB – Free Report) by 1.3% during the 4th quarter, according to its most recent disclosure with the SEC. The institutional investor owned 364,573 shares of the company’s stock after selling 4,810 shares during the quarter. Gilder Gagnon Howe & Co. LLC owned about 0.49% of MongoDB worth $84,876,000 as of its most recent filing with the SEC.
Other hedge funds and other institutional investors have also made changes to their positions in the company. Strategic Investment Solutions Inc. IL acquired a new stake in MongoDB in the 4th quarter valued at about $29,000. Hilltop National Bank increased its stake in shares of MongoDB by 47.2% during the fourth quarter. Hilltop National Bank now owns 131 shares of the company’s stock worth $30,000 after buying an additional 42 shares during the period. NCP Inc. purchased a new position in MongoDB in the fourth quarter worth approximately $35,000. Versant Capital Management Inc lifted its stake in MongoDB by 1,100.0% in the fourth quarter. Versant Capital Management Inc now owns 180 shares of the company’s stock worth $42,000 after acquiring an additional 165 shares during the last quarter. Finally, Wilmington Savings Fund Society FSB acquired a new position in MongoDB during the 3rd quarter worth approximately $44,000. Hedge funds and other institutional investors own 89.29% of the company’s stock.
Insider Buying and Selling
In other news, CEO Dev Ittycheria sold 18,512 shares of the stock in a transaction that occurred on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total value of $3,207,389.12. Following the sale, the chief executive officer now directly owns 268,948 shares in the company, valued at $46,597,930.48. This represents a 6.44% decrease in their ownership of the stock. The transaction was disclosed in a document filed with the Securities & Exchange Commission, which is available through the SEC website. Also, CAO Thomas Bull sold 301 shares of MongoDB stock in a transaction that occurred on Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total value of $52,148.25. Following the transaction, the chief accounting officer now owns 14,598 shares of the company’s stock, valued at $2,529,103.50. This represents a 2.02% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders have sold a total of 47,680 shares of company stock worth $10,819,027 in the last quarter. Company insiders own 3.60% of the company’s stock.
Analyst Ratings Changes
Several equities research analysts have recently issued reports on the stock. Robert W. Baird lowered their target price on shares of MongoDB from $390.00 to $300.00 and set an “outperform” rating for the company in a research note on Thursday, March 6th. Wells Fargo & Company downgraded MongoDB from an “overweight” rating to an “equal weight” rating and dropped their price objective for the company from $365.00 to $225.00 in a report on Thursday, March 6th. Oppenheimer decreased their target price on MongoDB from $400.00 to $330.00 and set an “outperform” rating for the company in a research note on Thursday, March 6th. Wedbush dropped their price target on shares of MongoDB from $360.00 to $300.00 and set an “outperform” rating on the stock in a research note on Thursday, March 6th. Finally, Rosenblatt Securities reissued a “buy” rating and set a $350.00 price objective on shares of MongoDB in a report on Tuesday, March 4th. Eight analysts have rated the stock with a hold rating, twenty-four have issued a buy rating and one has assigned a strong buy rating to the company. Based on data from MarketBeat.com, MongoDB has an average rating of “Moderate Buy” and a consensus target price of $294.78.
Check Out Our Latest Analysis on MDB
MongoDB Trading Up 0.2%
Shares of MDB stock opened at $173.50 on Friday. MongoDB, Inc. has a 12-month low of $140.78 and a 12-month high of $387.19. The company has a market cap of $14.09 billion, a price-to-earnings ratio of -63.32 and a beta of 1.49. The stock has a 50 day simple moving average of $195.15 and a 200-day simple moving average of $249.13.
MongoDB (NASDAQ:MDB – Get Free Report) last issued its earnings results on Wednesday, March 5th. The company reported $0.19 EPS for the quarter, missing the consensus estimate of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The company had revenue of $548.40 million for the quarter, compared to analysts’ expectations of $519.65 million. During the same quarter in the previous year, the firm posted $0.86 earnings per share. On average, equities analysts forecast that MongoDB, Inc. will post -1.78 earnings per share for the current fiscal year.
About MongoDB
MongoDB, Inc., together with its subsidiaries, provides a general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
See Also
This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.
Article originally posted on mongodb google news. Visit mongodb google news
Berry joins MongoDB with more than three decades of expertise in software and cloud businesses
NEW YORK, April 28, 2025 /PRNewswire/ — MongoDB, Inc. (NASDAQ: MDB) today announced the appointment of Mike Berry as Chief Financial Officer, effective May 27, 2025. Berry will lead MongoDB’s accounting, FP&A, treasury and investor relations efforts and partner with other senior leaders to set and deliver on the company’s long-term strategic and financial objectives.
- MongoDB (MDB, Financial) appoints Mike Berry as the new Chief Financial Officer, effective May 27, 2025.
- Berry brings over 30 years of experience in the technology and software industry, previously serving as CFO at major firms such as NetApp and McAfee.
- MongoDB to report Q1 FY2026 financial results on June 4, 2025, after the markets close.
MongoDB, Inc. (MDB) has announced the appointment of Mike Berry as its Chief Financial Officer, effective May 27, 2025. Berry joins MongoDB from NetApp, where he served as CFO for the past five years. With a rich career spanning over three decades, Berry has held CFO roles at technology firms including McAfee, FireEye, Informatica, IO, SolarWinds, and i2 Technologies.
In his new role at MongoDB, Berry will oversee accounting, financial planning and analysis (FP&A), treasury, and investor relations. He is also expected to work closely with senior leadership to set and achieve the company’s strategic and financial goals. Berry’s experience with consumption models and scaling businesses to over $5 billion in revenue is particularly valuable as MongoDB focuses on growth in the GenAI space and legacy workload modernization.
MongoDB’s CEO, Dev Ittycheria, expressed confidence in Berry’s ability to contribute to the company’s long-term vision, noting that his expertise aligns perfectly with MongoDB’s objectives in tapping into new opportunities with GenAI applications.
In addition to this leadership announcement, MongoDB is scheduled to release its financial results for the first quarter of fiscal year 2026 on June 4, 2025. Following the release, the company will hold a conference call at 5:00 p.m. Eastern Time to discuss the results and business outlook.
MongoDB (MDB, Financial) has announced the appointment of Mike Berry as its new Chief Financial Officer, starting on May 27. Berry will be responsible for overseeing the company’s accounting, financial planning and analysis, treasury, and investor relations departments. Additionally, he will collaborate with MongoDB’s senior leadership to establish and implement long-term strategic and financial goals.
Berry transitions to MongoDB from NetApp, where he held the CFO position for the past five years. His extensive experience in financial leadership is expected to support MongoDB’s growth and strategic initiatives.
Wall Street Analysts Forecast
Based on the one-year price targets offered by 34 analysts, the average target price for MongoDB Inc (MDB, Financial) is $273.96, with a high estimate of $520.00 and a low estimate of $160.00. The average target implies an upside of 57.90% from the current price of $173.50. More detailed estimate data can be found on the MongoDB Inc (MDB) Forecast page.
Based on the consensus recommendation from 38 brokerage firms, MongoDB Inc’s (MDB, Financial) average brokerage recommendation is currently 2.0, indicating “Outperform” status. The rating scale ranges from 1 to 5, where 1 signifies Strong Buy, and 5 denotes Sell.
Based on GuruFocus estimates, the estimated GF Value for MongoDB Inc (MDB, Financial) in one year is $432.68, suggesting an upside of 149.38% from the current price of $173.50. GF Value is GuruFocus’ estimate of the fair value at which the stock should trade. It is calculated based on the historical multiples the stock has traded at, past business growth, and future estimates of the business’ performance. More detailed data can be found on the MongoDB Inc (MDB) Summary page.
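The upside figures quoted above follow from simple arithmetic: (target price − current price) ÷ current price. As an illustrative check only (the `implied_upside` helper is not from any cited source; the dollar values are taken from this article):

```python
def implied_upside(target: float, price: float) -> float:
    """Percentage gain implied by a price target relative to the current price."""
    return (target - price) / price * 100

current_price = 173.50  # MDB price quoted in the article

# Average analyst target of $273.96 vs. current price
print(f"{implied_upside(273.96, current_price):.2f}%")  # → 57.90%

# GF Value estimate of $432.68 vs. current price
print(f"{implied_upside(432.68, current_price):.2f}%")  # → 149.38%
```

Both results match the percentages reported in the article.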
MDB Key Business Developments
Release Date: March 05, 2025
- Total Revenue: $548.4 million, a 20% year-over-year increase.
- Atlas Revenue: Grew 24% year-over-year, representing 71% of total revenue.
- Non-GAAP Operating Income: $112.5 million, with a 21% operating margin.
- Net Income: $108.4 million or $1.28 per share.
- Customer Count: Over 54,500 customers, with over 7,500 direct sales customers.
- Gross Margin: 75%, down from 77% in the previous year.
- Free Cash Flow: $22.9 million for the quarter.
- Cash and Cash Equivalents: $2.3 billion, with a debt-free balance sheet.
- Fiscal Year 2026 Revenue Guidance: $2.24 billion to $2.28 billion.
- Fiscal Year 2026 Non-GAAP Operating Income Guidance: $210 million to $230 million.
- Fiscal Year 2026 Non-GAAP Net Income Per Share Guidance: $2.44 to $2.62.
For the complete transcript of the earnings call, please refer to the full earnings call transcript.
Positive Points
- MongoDB Inc (MDB, Financial) reported a 20% year-over-year revenue increase, surpassing the high end of their guidance.
- Atlas revenue grew 24% year over year, now representing 71% of total revenue.
- The company achieved a non-GAAP operating income of $112.5 million, resulting in a 21% non-GAAP operating margin.
- MongoDB Inc (MDB) ended the quarter with over 54,500 customers, indicating strong customer growth.
- The company is optimistic about the long-term opportunity in AI, particularly with the acquisition of Voyage AI to enhance AI application trustworthiness.
Negative Points
- Non-Atlas business is expected to be a headwind in fiscal ’26 due to fewer multi-year deals and a shift of workloads to Atlas.
- Operating margin guidance for fiscal ’26 is lower at 10%, down from 15% in fiscal ’25, due to reduced multi-year license revenue and increased R&D investments.
- The company anticipates a high-single-digit decline in non-Atlas subscription revenue for the year.
- MongoDB Inc (MDB) expects only modest incremental revenue growth from AI in fiscal ’26 as enterprises are still developing AI skills.
- The company faces challenges in modernizing legacy applications, which is a complex and resource-intensive process.