MongoDB (MDB US): Fast-Exit from Nasdaq100 in May 2025 – Dimitris Ioannidis

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news



Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



Java News Roundup: OpenJDK JEP Updates, Spring AI, Quarkus, LangChain4j, JReleaser, WildFly

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for April 28th, 2025 features news highlighting: four JEPs proposed to target and targeted for JDK 25; three new candidate JEPs; the eighth milestone release of Spring AI 1.0.0; Quarkus 3.22.0; the first release candidate of LangChain4j 1.0.0; the release of JReleaser 1.18.0; and WildFly joining the Commonhaus Foundation.

OpenJDK

Two JEPs have been elevated from Proposed to Target to Targeted for JDK 25, namely: JEP 512, Compact Source Files and Instance Main Methods, and JEP 511, Module Import Declarations, announced here and here, respectively.

Two JEPs have been elevated from Candidate to Proposed to Target for JDK 25, namely: JEP 513, Flexible Constructor Bodies, and JEP 505, Structured Concurrency (Fifth Preview), announced here and here, respectively. Their reviews are expected to conclude by May 8, 2025.

Details for each of these four JEPs may be found in this InfoQ news story.

JEP 517, HTTP/3 for the HTTP Client API, has been elevated from its JEP Draft 8291976 to Candidate status. This JEP proposes to “update the HTTP Client API to support the HTTP/3 protocol, so that libraries and applications can interact with HTTP/3 servers with minimal code change.”

JEP 515, Ahead-of-Time Method Profiling, has been elevated from its JEP Draft 8325147 to Candidate status. This JEP proposes to improve application warmup time by “making method-execution profiles from a previous run of an application instantly available, when the HotSpot JVM starts.” This allows the JIT compiler to immediately generate native code upon application startup as opposed to waiting for profiles to be collected.

JEP 470, PEM Encodings of Cryptographic Objects (Preview), has been elevated from its JEP Draft 8300911 to Candidate status. This JEP previews “an API for encoding objects that represent cryptographic keys, certificates, and certificate revocation lists into the widely-used Privacy-Enhanced Mail (PEM) transport format, and for decoding from that format back into objects.” This feature will support conversions between PEM text and cryptographic objects in PKCS #8 and X.509 binary formats.
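For context, the PEM format that JEP 470 targets is simply Base64-encoded DER data wrapped between BEGIN/END boundary lines. The following is a minimal sketch of producing such a block by hand using the Base64 API available in current JDKs; it illustrates the format itself, not the new preview API:

```java
import java.security.KeyPairGenerator;
import java.util.Base64;

public class PemSketch {
    public static void main(String[] args) throws Exception {
        // A public key's default encoding is X.509 (SubjectPublicKeyInfo) DER.
        var keyPair = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        byte[] der = keyPair.getPublic().getEncoded();

        // PEM wraps the Base64 body at 64 characters per line between
        // "-----BEGIN ...-----" and "-----END ...-----" boundary lines.
        var mime = Base64.getMimeEncoder(64, "\n".getBytes());
        String pem = "-----BEGIN PUBLIC KEY-----\n"
                + mime.encodeToString(der)
                + "\n-----END PUBLIC KEY-----";
        System.out.println(pem);
    }
}
```

The preview API is intended to handle this wrapping, and the reverse decoding, for keys, certificates, and certificate revocation lists directly, so hand-rolled code like the above would no longer be necessary.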

JDK 25

Build 21 of the JDK 25 early-access builds was made available this past week featuring updates from Build 20 that include fixes for various issues. More details on this release may be found in the release notes.

For JDK 25, developers are encouraged to report bugs via the Java Bug Database.

Spring Framework

The eighth milestone release of Spring AI 1.0.0 features “several significant changes [that] would become breaking changes in an [upcoming] RC1 release.” This additional milestone release serves as a transition, providing the deprecated APIs alongside their corresponding replacement APIs. More details on this release may be found in the upgrade notes and release notes.

The first release candidate of Spring Cloud 2025.0.0, codenamed Northfields, features bug fixes and notable updates to sub-projects: Spring Cloud Kubernetes 3.3.0-RC1; Spring Cloud Function 4.3.0-RC1; Spring Cloud Stream 4.3.0-RC1; and Spring Cloud Circuit Breaker 3.3.0-RC1. This release is based on Spring Boot 3.5.0-RC1. More details on this release may be found in the release notes.

Quarkus

The release of Quarkus 3.22.0 features: Compose Dev Services that discover Compose specification files in a Quarkus application; a dedicated user interface to execute Hibernate Query Language (HQL) queries; and an improved test class loading infrastructure using a runtime classloader. More details on this release may be found in the release notes.

LangChain4j

The first release candidate (along with the fourth beta release) of LangChain4j delivers five modules under the release candidate, namely: langchain4j-core; langchain4j; langchain4j-http-client; langchain4j-http-client-jdk; and langchain4j-open-ai, with the remaining modules still under the milestone 4 release. Breaking changes include: a rename of the ChatLanguageModel and StreamingChatLanguageModel interfaces to ChatModel and StreamingChatModel, respectively; and a renaming and reshuffling of some internal utility classes that the team recommends should not be used directly, even though they are public, as these classes are now annotated with @Internal. More details on this release may be found in the release notes.

JReleaser

Version 1.18.0 of JReleaser, a Java utility that streamlines creating project releases, has been released to deliver: support for Forgejo, a self-hosted lightweight software forge; the ability for the native-image assembler to create FLAT_BINARY distributions; and support for deploying to the Sonatype Nexus 3 repository manager (NXRM3). More details on this release may be found in the release notes.

Commonhaus Foundation

The Commonhaus Foundation, a non-profit organization dedicated to the sustainability of open source libraries and frameworks, has announced that WildFly has joined the foundation this past week as a member project. In a blog post published in early-February 2025, Brian Stansberry, Senior Principal Software Engineer at Red Hat, described their rationale to transition to the foundation, writing:

WildFly has been a successful project for a long time now, and I believe that’s largely because we are passionate about serving our community. To help us continue on this path, we are considering moving WildFly to a vendor-neutral software foundation. Our hope is that by doing this we could further expand our community, improve our openness and transparency, refresh our governance model, and encourage more participation by contributors not affiliated with Red Hat.

Other notable projects that have joined the foundation include: Infinispan, Debezium, JReleaser, JBang, OpenRewrite, SDKMAN, EasyMock, Objenesis and Feign.



Instance Main Methods Move from Preview to Final in JDK 25

MMS Founder
MMS A N M Bazlur Rahman

Article originally posted on InfoQ. Visit InfoQ

JEP 512, Compact Source Files and Instance Main Methods, has been integrated for JDK 25 following a comprehensive four-round preview cycle beginning with JDK 21. Previously known as Implicitly Declared Classes and Instance Main Methods, these features are now finalized for JDK 25. This evolution introduces refined concepts such as Compact Source Files, flexible instance main methods, a new console I/O helper class, java.lang.IO, and automatic imports for core libraries. The primary goal is to provide beginners with an accessible entry into the Java language, while also enabling experienced developers to craft scripts and prototypes with significantly reduced ceremony.

The initiative aligns with the vision articulated by Brian Goetz, Oracle’s Java Language Architect, in his September 2022 blog post, “Paving the On-Ramp.” Additionally, Gavin Bierman, Oracle’s Consulting Member of Technical Staff, recently published the initial specification draft for community review.

Traditionally, even the simplest Java program required explicit class declarations:

// Traditional "Hello, World!"
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}

JEP 512 addresses this complexity by introducing implicitly declared classes. If a source file (.java) contains methods or fields outside explicit class declarations, the Java compiler treats these contents as belonging to an unnamed, implicitly declared top-level class. This significantly simplifies beginner programs:

// "Hello, World!" using JEP 512 features
void main() {
    IO.println("Hello, World!");
}

Complementing implicitly declared classes, JEP 512 introduces support for instance methods as program entry points. For simpler scenarios, the historical requirement for a static entry point (public static void main(String[] args)) has been relaxed.

The Java launcher protocol now recognizes instance main methods. If the class selected for launching lacks a suitable static main method, the launcher searches for an instance main method. The preferred instance entry-point signature is straightforward:

void main() {
    // Program logic
}

An alternative signature, void main(String[] args), is also supported for scenarios involving command-line arguments. This approach eliminates beginners’ immediate need to grapple with the static keyword or the String[] args parameter. When utilizing an instance main method, the Java launcher automatically instantiates the class before invoking the main method.

Addressing another common point of complexity, particularly reading from System.in and printing via System.out.println, JEP 512 introduces a utility class, java.lang.IO. Residing in the java.lang package, it is implicitly available without explicit import statements. It provides essential static methods for basic console interactions:

public static void print(Object obj);
public static void println(Object obj);
public static void println();
public static String readln(String prompt);
public static String readln();

This facilitates simple, interactive programming:

// Simple interactive program using java.lang.IO
void main() {
    String name = IO.readln("Please enter your name: ");
    IO.print("Pleased to meet you, ");
    IO.println(name);
}

Notably, while the IO class itself requires no import, its static methods are no longer implicitly imported into compact source files as in earlier previews. Developers must explicitly qualify method calls (e.g., IO.println(...)) unless using explicit static imports. This adjustment ensures a smoother transition when evolving compact source files into regular classes, avoiding sudden additional requirements like static imports.

Further minimizing boilerplate, particularly beneficial for beginners unfamiliar with package structures, compact source files now automatically access all public top-level classes and interfaces from packages exported by the java.base module. This implicit import resembles a declaration (import module java.base;) proposed in a companion JEP, providing seamless access to common classes such as those in java.util, java.io, and java.math (e.g., List, ArrayList, File, BigDecimal). Thus, classes can be directly utilized without explicit imports:

// Compact source file using List without explicit import
void main() {
    var authors = List.of("Bazlur", "Shaaf", "Mike"); // List is auto-imported
    for (var name : authors) {
        IO.println(name);
    }
}

The finalization of Compact Source Files, instance main methods, the java.lang.IO class, and automatic imports from the java.base module in JDK 25 marks a substantial refinement to improve Java’s learning curve and simplify small program development. By reducing initial complexity, these enhancements facilitate a gradual introduction to Java without compromising the smooth transition to advanced programming constructs. Crucially, these features maintain compatibility and integrate seamlessly into the standard Java toolchain, reinforcing their place as core components rather than isolated dialects. If widely adopted, these improvements could profoundly influence Java education and developers’ approach when crafting simple utilities and prototypes.



How MCP could add value to MongoDB databases – InfoWorld

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Context-aware vibe coding via MCP clients

Another advantage of MongoDB integrating MCP with its databases is that it helps developers code faster, Flast said, adding that the integration will enable context-aware code generation via natural language in MCP-supported coding assistants, such as Windsurf, Cursor, and Claude Desktop.

“Providing context, such as schemas and data structures, enables more accurate code generation, reducing hallucinations and enhancing agent capabilities,” MongoDB explained in the blog, adding that developers can describe the data they need and the coding assistant can generate the MongoDB query along with application code that is needed to interact with it.

MongoDB’s efforts to introduce context-aware vibe coding via MCP clients, according to Andersen, will help enterprises reduce costs, both financial and technical debt, and sustain integrations with AI infrastructure.



MongoDB: High Growth Database Software Company (NASDAQ:MDB) | Seeking Alpha

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

This article was written by

7.71K Followers

Khaveen Investments is a global Investment Advisory Firm dedicated to serving the investment needs of clients worldwide including high-net-worth individuals, corporations, associations, and institutions. We provide comprehensive services ranging from market and security research to business valuation and wealth management. Our flagship Macroquantamental Hedge Fund maintains a diversified portfolio with exposure to hundreds of investments across various asset classes, geographies, sectors, and industries. We employ a multifaceted investment approach that integrates top-down and bottom-up analysis, blending three core strategies: global macro, fundamental, and quantitative. Our core expertise lies in disruptive technologies that are reshaping the landscape of modern industries including Artificial Intelligence, Cloud Computing, 5G, Autonomous and Electric Vehicles, FinTech, Augmented and Virtual Reality, and the Internet of Things (IoT). www.khaveen.com

Analyst’s Disclosure: I/we have a beneficial long position in the shares of MDB either through stock ownership, options, or other derivatives. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

No information in this publication is intended as investment, tax, accounting, or legal advice, or as an offer/solicitation to sell or buy. Material provided in this publication is for educational purposes only, and was prepared from sources and data believed to be reliable, but we do not guarantee its accuracy or completeness.

Seeking Alpha’s Disclosure: Past performance is no guarantee of future results. No recommendation or advice is being given as to whether any investment is suitable for a particular investor. Any views or opinions expressed above may not reflect those of Seeking Alpha as a whole. Seeking Alpha is not a licensed securities dealer, broker or US investment adviser or investment bank. Our analysts are third party authors that include both professional investors and individual investors who may not be licensed or certified by any institute or regulatory body.



Podcast: How To Improve the Quality of the Gen AI-Generated Code And Your Team’s Dynamics

MMS Founder
MMS Birgitta Böckeler

Article originally posted on InfoQ. Visit InfoQ

Transcript

Olimpiu Pop: Hello everybody. I’m Olimpiu Pop, an editor with InfoQ, and I have with me today Birgitta Böckeler, who is a subject matter expert with Thoughtworks in the generative AI space. Well, the title is a little bit more difficult than that for me, so I’ll ask Birgitta to introduce herself and to say what her role with Thoughtworks is related to generative AI.

Birgitta Böckeler: Yes, hi Olimpiu. Thanks for having me. Yes, indeed, the title of the role I have right now is quite a mouthful. We currently call it Global Lead for AI Assisted Software Delivery. And so indeed, as you said, it’s a subject matter expert role, and some parts of my role are kind of developer advocacy, both internally in Thoughtworks but also for our clients. And so, I talk to people at Thoughtworks about what they’re doing in this space. I try a lot of these tools myself. I write about them, I talk about them at conferences like QCon, for example, just recently.

Olimpiu Pop: Okay, thank you. Well, now, I have to have a cheeky comment because it sounds like a German name, your title. So probably it’s … Everybody says they’re very lengthy, so yes, it goes hand in hand.

Birgitta Böckeler: The joke would probably be that in German, it’s probably just one word and not multiple words, but I haven’t thought about that yet, yes.

A Snapshot of the Coding Assistants Landscape [01:40]

Olimpiu Pop: Thank you. As you mentioned, you typically write on Martin Fowler’s blog, where you provide updates on current developments in the generative AI space. So, let’s start by saying what’s new, because, in my opinion, the generative AI space is currently moving faster than JavaScript libraries. We just blinked, and probably two new tools in the generative AI space appeared in the meantime.

Birgitta Böckeler: So what’s new? That’s like the frequently asked question right now, right? And I also said this at the talk at QCon that even though I have this as a full-time job right now to stay on top of this space, even I cannot stay on top of everything, right? So nobody has to feel bad if they cannot do that next to their regular day job, right? And even one thing that I said during the talk, an hour after the talk, I saw on the internet that it’s out of date. So that’s the problem in this space right now, right? But of course, there are a few things that are starting to become more settled or there’s certain patterns that we have now in the tools.

And so, when you look back at the evolution of the features in the coding assistants: we had the auto-complete first, the auto-complete on steroids, then we got a simple AI chat, then the chat became more and more powerful. So in coding assistants like Copilot or Cursor or Windsurf or all of the many other ones, the chat now also has a lot more context of our code, and we can actually ask questions about the whole codebase. And in a lot of them, there are context providers for pulling in things like the current git diff, and more and more integration with things like “give me the text of my current JIRA issues”, stuff like that, right? And the models have evolved as well, of course, right?

Olimpiu Pop: Okay. Then let’s simplify things. Let’s consider tendencies, because, as you said, the tools are moving quickly, and I agree with that. Everything is moving so fast that it’s hard to stay on top of it, but then you still have the buckets. As you said, there was a way we interacted: I didn’t like having a chat in the browser and then moving from one side to another, or I didn’t like the part where we were just providing comments. It would be better if we look at it that way.

Currently, we are at the point where we interact with the model and we have an innovative auto insert. Or where should we position ourselves when we are just saying, “Let’s call it an autonomous junior developer next to us”? Are we there yet, or are we still not?

Birgitta Böckeler: Yes, and whether this autonomous junior developer is really autonomous or not, we can get to a little bit later. But indeed, the newest thing that has been happening since October, November last year is that this chat in the IDE, or in terminal sessions (there are also a bunch of coding assistants where you do this from the terminal, but let’s focus on the IDE right now), has gotten a lot more powerful, to the extent that you can actually drive your implementation from the chat, and the chat can now change multiple files at once.

It can run terminal commands. It can basically access a lot of the things that the IDE has access to. And that’s the thing that provides more automation, in the sense that, let’s say, my coding assistant generates code that doesn’t even compile or has syntax errors, right? So usually, in the past, I would have had to go there and tell it “this does not compile”, but now it can actually pick up, through the IDE, on things like linting errors or compile errors, and immediately react to that and correct itself.

Or it can say, okay, let’s run the tests, and then it will immediately pick up on whether the tests are red or green and be able to include that in what it’s doing. And so, that’s what now has led people to use the agent word, the A word, for these tools. So that makes it agentic, right? I think there’s still no good overall comprehensive definition of what an agent is. I think it’s a word that we always have to redefine in the context that we’re in, right? But in the context of coding assistants, it’s a chat in our IDE or in the terminal that can actually access all of these tools, like editing files, running terminal commands and so on. And then, yes, do things in a more automated way for us while we’re still watching them.

VS Code-Based Code Assistants vs Plugins of the JetBrains family IDEs [05:43]

Olimpiu Pop: Okay, that makes sense. So now, just to repeat what you said, if I got it correctly: currently, the tools are getting more autonomous. They require far fewer interactions, because before, at points, you needed multiple rounds of back and forth until you got what you wanted. But now, it seems that that’s happening iteratively without so much input from our side. And then let me ask you something else, because we’re discussing IDEs generically. Now, from my understanding, or the way I look at the field, there are a couple of big players in this space. You have the JetBrains ones, so IntelliJ IDEA for the Java world, you have PyCharm and all the family.

Those were quite big in terms of IDEs. Then you have VS Code, which is the little brother of Visual Studio, also from Microsoft, and it is widely used. And then, there are the new kids on the block that are coming and bringing those new features. I haven’t tried them yet, such as Windsurf and Cursor, and so on. How are they ranked based on your experience in terms of how they introduce these new usage styles? First, a better question might be whether they are bringing support from external models or external plugins. I’m thinking now about JetBrains, and on the other hand, they have native support.

Birgitta Böckeler: Most of the coding assistant action right now is actually happening in Visual Studio Code, especially if you consider Windsurf and Cursor also to be Visual Studio Code, because they are actually forks of Visual Studio Code. And the reason why they forked or cloned, as far as I understand it, is that it gives them more access to the core of the IDE so they can build more powerful features, because a lot of this progress, I would say, is about the integration with the IDE and the developer experience, the user experience in the IDE.

And when you have full access to the core of the IDE, you can do a lot more things. So Microsoft and GitHub have the advantage that they own Visual Studio Code. For GitHub Copilot, they could also delve into the core of that. But then Cursor and Codeium, who are building Windsurf, forked it so that they have more control over it, right? And then, there are things happening on the JetBrains side of things, which for me and my colleagues is a big deal, because JetBrains has been the favourite IDE, especially for Java, Kotlin and other JVM-based languages, for a long time.

So, big organisations started paying for IDE licenses because it was so good, right? It was always free before that, wasn’t it, with Eclipse and similar tools? Unfortunately, in the JetBrains ecosystem, things are not moving as quickly. For example, the GitHub Copilot plugin for JetBrains is often lagging behind in features compared to the VS Code plugin. The things that JetBrains themselves are building are also still in progress, so they’re not there with an agent yet, for example, and it’s moving a little bit slower. So that’s one of the things that sometimes slows down the adoption of coding assistants.

This keeps some developers from experimenting because they’re part of the JetBrains ecosystem and prefer it. There’s also a lot of stuff still to be discovered around where JetBrains’ own assistance is actually powerful enough; in some use cases, maybe you don’t even need the AI, right? But yes, the JetBrains ecosystem is a little bit behind. Most of the action at the moment is happening in Visual Studio Code. There are also terminal-based coding assistants. Anthropic recently released Claude Code, for example, which you run in the terminal.

There are some open source tools, like Goose and Aider, that do that. And in terms of the models used, at this point almost all of the coding assistants I’ve used allow you to plug in your own API key to use models as well, for example your Anthropic API key or your OpenAI API key. And in particular, all of them support some kind of access to the Claude Sonnet model series, either by you bringing your API key or by them providing it from Anthropic, because the Claude Sonnet series has been shown to be really, really good at coding.

So when I try out different tools, I usually use Claude Sonnet as the model, so that at least that part is stable and I can compare, because you always have to use these tools a few times until you get a feeling for … Just a feeling: does this feel better than that other tool, right? It’s really hard to just do one or two tests and then say this is now better or this is now worse, right? So yes, the Claude Sonnet model series. Cursor and Windsurf, I would say, are among the most popular, plus an open source VS Code extension called Cline and a variation of Cline called Roo Code. So those four, I would say, are the most popular ones in the agentic space right now.

Olimpiu Pop: Let me summarise that. On the tooling side, so on the hammer side, currently the forks and Microsoft’s GitHub are the ones that are leading. And we can also see why, given that GitHub Copilot already has a couple of years’ head start, having started a long time ago. And then, there are other new kids on the block appearing, while more traditional tools, like the JetBrains family, are behind when it comes to agentic coding, with the proper quotation marks.

Birgitta Böckeler: And by the way, what we’re also seeing when you look at Cursor and Windsurf: Cursor, since they’ve existed, have always come out with really interesting new ideas about the user experience. And then, a few months later, you see GitHub Copilot with a similar feature, right? So there’s also this type of dynamic where Copilot has a lot of adoption because a lot of organizations already host their code on GitHub, so they already trust GitHub with their code. So it’s easier to do that, right? And then, they’re often followers of the interesting features in other IDEs, right? That’s at least what it seems like from the outside.

Olimpiu Pop: Yes, that sounds quite close to what I felt as well. I didn’t follow the space as much as I did last year. I didn’t know that Windsurf is built or developed by Codeium, but I know the tools that Codeium had before: a lot of tools in the review space, a lot of tools in the testing space. And it always felt like they were ahead of the curve, and now they’re closing the gap somehow. But now that I’m thinking about it, we just jumped into the middle of the problem, because we started discussing coding.

But actually, if you look at the cycle of a project, usually you’ll start with the ideation side and so on and so forth. And then you start by bootstrapping the project, which happens only once. But I’m thinking about this because at some point, as a company or consultancy, we had to bootstrap a new project at least a couple of times per year. And then, that pushes me to ask the question about these no-code tools, or as they used to call them in the medieval days of coding assistants, things like Lovable or, I don’t know, some others. I also tried bootstrapping with Copilot some time ago.

“No Code” Tools Like Replit Are Good for Prototyping, Especially Their Intersection With Serverless [12:56]

What’s your feeling? Would the tools that you mentioned, Cursor, Windsurf, or even VS Code, be as good for bootstrapping new projects, or should we go to something else, have a prototype, see how it feels on the market, and then get to a more, let’s call it traditional, space where you just get into coding?

Birgitta Böckeler: Yes, I haven’t used those tools like Lovable, Bolt, Replit. I haven’t used them as much because they also usually come with a platform as a service, let’s say, right? So you can immediately deploy. And usually, with the types of clients we have, that’s not the deployment environment they have. So they have their own environments. But what I’ve mostly seen my colleagues use these tools for is prototyping, right? And also, the non-coding people, like a designer or something, using this to really quickly put together a real working prototype. And then most of them I’ve heard say that when they look at the code, they feel like it’s still very “prototypey” code, so they wouldn’t want to bring that into production.

But still everybody is really impressed by what those things can do in the prototyping space, but I haven’t tried it myself.

Olimpiu Pop: Okay, well, that’s my feeling as well. To summarise and conclude, we can utilise them to expedite processes on the ideation side. So, rather than having mocks in the traditional space where you have limited options, you can quickly build something like a prototype, gather feedback, and then return to the more conventional space where you make the features upfront.

Birgitta Böckeler: It also depends somewhat on the user. I read an article a few weeks or months ago about Replit and how they have consciously decided not to target coders as their primary audience, but rather non-coders. And part of that strategy is also to lean into the serverless part and the PaaS underneath this, right? So they can also spin up a database for you and then connect everything, right? Because that is something that non-coders really struggle with: they use coding assistants like Cursor, Windsurf or Cline, which build up all of the code, and then they don’t know, yes, but what do I do with the database, and will that be secure, right?

So I find that space really interesting to monitor, in terms of how you can actually create a safe environment with a PaaS and with serverless for a non-coder to be on top of that and create something with AI. I don’t know if it will get there, but it can certainly fill some of the gaps in terms of infrastructure knowledge and all of that, which non-coders, of course, don’t have experience with.

Using Generative AI Through the Various Phases of the SDLC [15:22]

Olimpiu Pop: And that leads me to one of the, let’s say, anecdotes from your presentation, about the guy who built a SaaS in hours and then started crying when he realized that his code was not production-ready. That’s how we should treat these kinds of tools: they produce prototypes, but ones that leave aside things like operational safety, security, scalability and so on. So it’s better to split the work into smaller chunks and build it as we know how, to make it safe and robust enough for real-life usage.

Okay, and now we’ve touched on what we currently have, but again, the cycle is lengthier than that, and that’s usually the SDLC, which we have to call classical now, even though we never saw it properly adopted. How important is the SDLC now? In my mind, it was always a set of best practices: you had to test it, you had to put the unit tests in place. Do you think a properly implemented SDLC would help with the adoption of agentic coding, or whatever it’s called nowadays, or should we rely on the AI and somebody will just take over our problems?

Birgitta Böckeler: For me, SDLC, or software delivery lifecycle, is just a description of the fact that there are multiple phases, for lack of a better word. Of course, by that we don’t mean waterfall, but many, many phases that go round and round in small iterations in an agile way, right? For me, that’s just a description of how we usually do things, right? I guess what we now have to question as a profession is: now that we have these new tools, does that change anything, right? For example, can we skip any of those stages, or are some of them becoming even more important or less important? Which of these are best supported with AI and which less so, right?

So in that sense, the concept is still really useful. And it’s actually interesting: I hadn’t heard the acronym SDLC in a long time, and when AI came up, suddenly everybody started using it again, at least around me. And I was like, “Oh, what did that mean again?” I think it’s because, once more as a profession, which we do in cycles all the time, right, we’re looking again at everything that we’re doing and questioning everything, really trying to go back to: what is it actually that we are doing, and what does AI mean for that?

So it’s a real reflection moment once more in our profession; we seem to be doing this every 10 years or so. So far, I haven’t seen anything … I mean, of course, there are a lot of experiments about let’s automate the whole thing, or let’s automate the whole dev part, but then the specification that humans do becomes really super important. And what about the verification? Do we also want AI to do that, or is it this whole thing where humans specify, the machine builds, and then we verify again? Will it be that, right?

And with what we have right now, and also some of the challenges that we still have with it that we haven’t even talked about yet, because those coding assistants cannot really autonomously build a whole feature for you at the moment, but we can get to that in a few minutes maybe. But yes, with what we have right now, I don’t see a way to automate the whole thing away, right? In particular when it comes to specification, when it comes to testing and really knowing that the test is really checking what we’ve really done, right? So testing, I think, is the space that has to change a lot, or is changing at the moment, right?

Yes, all of these questions: should I have AI generate my tests, and then do I know what they actually do, right? However, you can definitely utilise AI in some form or another for all of these mini phases and task areas, as it’s very language-based work.

Olimpiu Pop: I have to relate that to one of the keynotes from last year. We had Trac Bannon talking about the SDLC space. She’s looking a lot at how we can augment it; she has similar research going on her side, from what I remember, related to what you’re saying. She said, okay, fix your SDLC, meaning keep the dot on the horizon in the continuous deployment space, where you need to have automated testing, you need to have checks in place that ensure that all the phases are properly followed, and then you should be on the safer side.

And about testing, it was: either generate the tests and write the code yourself, or the other way around. Don’t do both of them using generative AI, because then it’ll be very skewed towards generative AI. Don’t use it left and right.

Birgitta Böckeler: Or it depends on the situation, maybe, yes. And also, I see some of our clients asking, “Oh, how can we use AI to fix all of the problems that we have in our path to production?”, right? Path to production is maybe also a good word for SDLC. And I think it can be really dangerous when you use it as a band-aid while you actually have underlying problems where you have to fix the root cause in your delivery pipelines or in your testing process, right? Because otherwise, it can become a band-aid that actually makes things worse. Yes, because gen AI amplifies indiscriminately.

It can potentially amplify what you have. And if that is really bad, or there’s something wrong at the core, then that can be problematic. And the other thing is that, because what we do is a complex system of things, if we just increase the throughput of one of them, like the coding, for example, with coding assistants, then we’ll get bottlenecks and other second-order effects in other places, right? So if you can code faster, can you also review the code faster? Can you fill the backlog faster? Can you create the designs faster? Can you deploy it fast enough? Will you have more technical debt? If you have higher feature throughput, how are you doing product management around that, right?

And some of those things we can mitigate in other areas with AI as well, but some of them we can’t; you can’t just speed up the machine if you don’t have a good underlying process with low friction.

Olimpiu Pop: So as you said, AI is an accelerator. If you’re going in the right direction, you’ll arrive there faster. But if you have a lot of potholes in front of you, you’ll just get more broken knees and broken ankles, because it’ll drive the same way, only faster, and you’ll have more problems, right?

Birgitta Böckeler: Yes, there’s a much higher risk also because of the non-deterministic nature that if you don’t know what you’re doing, then it might actually make it worse, yes.

How to Use Agentic AI in Day-to-Day Coding [21:47]

Olimpiu Pop: But we did touch on a point earlier where you said there are challenges. So now, thinking about the normal day-to-day job of a developer, a classical developer who knows what she’s doing, what are the limitations? What can we do, and what can’t we do?

Birgitta Böckeler: Let’s focus maybe on these agentic modes that we started talking about in the beginning, which are now even more powerful than before: companions in the IDE where I can say, okay, I need a new button on the page, and when the user clicks it, the following things happen, and then the AI goes and changes one, two, three, ten files for me, and creates a test for me, and so on, right? So first of all, I would say it cannot be autonomous. It has to be supervised. As a developer, I still sit in front of it and actually watch the steps, intervening when I see it going in a direction I don’t want it to take.

So especially for larger, non-trivial things, I haven’t seen an agent do anything autonomously without my intervention yet, right? I mean, the simplest thing is if the result doesn’t even work, but that’s obvious; there are more insidious things as well, about the design that might make it less maintainable and extensible in the future, or, as I said, about tests. It is quite good sometimes at generating tests, but that can also give a false sense of security, right? I’ve seen it generate not enough tests, and then, on the flip side, redundant tests: too many assertions or too many test methods, which then make the tests very brittle.

And in the future, every time I change the code, suddenly I get maybe 30 tests that are red, right? I mean, who hasn’t been in that situation with a codebase? Often in tests, there’s too much mocking. It puts the tests in the wrong places, so I might not find them in the future. And it’s also really hard to get it to first write a red test and actually show me the red test, so I can use that as a review mechanism to see if the test makes sense; that would give me a lot of security when reviewing this, right? But it just doesn’t do that.

It immediately goes for the implementation. So there’s this whole testing space, right? And like I said, the design also sometimes just gets too spaghetti-like. So it is like a junior developer, like you said before. And yes, at this point, I’ve had many examples where I’ve had a light bulb above my head, like, “Oh, okay. Yes, I see how it could get that wrong”. And then the next day when you do it, it gets it right. That’s the other thing.

Olimpiu Pop: Okay, so it’s learning based on your feedback or how-

The Probability of the Coding Assistants Getting It Right: What You Need [24:17]

Birgitta Böckeler: No, no, it’s not. No, no, it’s statistics. It’s a probability. One time, it works; one time, it doesn’t. It has nothing to do with learning. Yes.

Olimpiu Pop: Okay, so you’re just throwing the dice.

Birgitta Böckeler: A little bit, yes. And so our job as developers becomes, one, we of course have to assess: is it worth using the AI in this situation? Is it making me faster or slower? So we also have to know when to quit. I often just throw something at it where I already know I haven’t described it very well, but let’s just see what it does. And that helps me think through the design, right? And then I revert and do it again, either myself or … So, two, we have to assess how we can increase the probability that it’s doing what I want, which we can do through prompting.

We can also do that through features like custom rules, or through having good documentation in our codebase, stuff like that. So how can I increase the probability? But we can never have a 100% guarantee that it always gives us what we want.

Olimpiu Pop: Okay, fair enough. For a long period of time, I looked at the space and just picked up ideas like pair programming with the AI: I’m writing something, and it’s generating the test. But then I started thinking and challenging myself, because if it looks only at my code and generates the test, as you said, it’ll generate a green test based only on what I’m doing. So if I introduce a flaw in the code that I wrote, then I don’t have any way of knowing that it was problematic.

Birgitta Böckeler: Exactly. Yes. I’ve also seen it do things like I say, “Oh, the test is red”, and then, how does it know if it needs to fix the test or the code? And sometimes it does it the wrong way around, and then if you don’t pay attention, it actually introduces a bug in the code. Yes.

Olimpiu Pop: Okay. So how can I increase my probability of generating proper code? Would proper requirements, just given as context, help with that? Is it possible to provide: this is the requirement coming from the customer, or from the BA of the team? Would that be possible to bring into this mix to make it better?

Birgitta Böckeler: Just technically, there are more and more ways now to integrate with context providers, right? Like I said before, with a JIRA ticket or so, just to make it more convenient to pull that in. But then of course, it still depends on how that ticket is phrased. I mean, who hasn’t seen those JIRA tickets that just say, “Fix the button”, or, I don’t know, I can’t come up with a better example right now, right? So you increase the probability by being more specific about what you want. I’ve also heard this and confirmed it with colleagues who use these tools daily on codebases that are in production.

When they show me examples of things they implement with it and how they describe it to the AI, it’s often very specific, right? So: here are the five fields I need on the UI, this is the database schema we want. It’s relatively low-level, and that increases the probability, right? It also increases the amount of time, of course, that you have to spend describing it in natural language instead of in code. But in those situations, it still often feels to me like it’s reducing my cognitive load, and it’s worth using the AI and spending the time describing all the details, right?

That is one way to increase the probability: having a plan already.

Olimpiu Pop: Great, thank you. Now, I have to relate back to something you said earlier. You mentioned the verification phase in the SDLC, but we also spoke about tests. What I’m thinking is that if I’m thinking about code, I’ll think about tests; if I’m thinking at the solution level, I’ll think about verification. And my feeling, after QCon and some other conferences that I attended in this period, is that verification is becoming very important, especially in the AI space, because when we’re talking about the black boxes that AI systems are, we cannot talk about testing.

Because we don’t have a proper interface; we talk about verification instead. And that pushes us to a whole different level, from my perspective. Any thoughts on that?

Birgitta Böckeler: Yes. In general, when we use these tools, we always have to think about: what is my feedback loop, right? When the AI does something for me, what is my feedback loop to quickly know that it’s the right thing? And it can be small things; the IDE now takes care of some of those things for me. When the syntax is totally wrong, the IDE will tell me, just a very low-level example, right? And the higher the abstraction level gets, the more I as the human have to be involved. This even goes all the way to the idea of: should we be the ones writing the tests, and then the AI does everything else, right?

There’s a video, for example, on Dave Farley’s Continuous Delivery YouTube channel, which I think has now been renamed to the Modern Software Engineering YouTube channel. I don’t remember the title, but he was speculating: will tests be the specification of the future, the coding of the future, so that we write the tests and everything else is done by the AI? Because we wrote the tests, and we have to be very specific in tests, we give them as a very specific specification. That’s how we know that it works, and that’s all we do in the future, right?

Writing tests will be coding. So that was one of his speculations, which I found quite interesting.

Olimpiu Pop: Well, I cannot say that I disagree with that, because in the end, there has been a lot of movement. If you think about Kent Beck and his TDD, that was very good at the low-level side, where you’re writing the code in very small increments rather than the whole architecture. Now, we are probably looking at a different level. It’ll probably be like BDD, behavior-driven development, where you’re thinking about everything upfront. And I have to admit, I tried to work with a couple of product owners to do that, where they’re trying to mimic some kind of DSL and generate the big chunks.

But more than that, my feeling is that it’s more important than ever to have a product development mindset, where you’re thinking about the whole thing and you understand it. But all the things that we are discussing now are things that you learn from trial and error. You need to be a seasoned developer to understand this. And when I say seasoned, I’m thinking about two things. One of them is how to do the coding and how to build the software itself; but also, you understand the industry that you’re operating in, and you understand the rules and how things work.

The Software Development Career Ladder Might Change [30:32]

You cannot be a junior and graduate straight to that. So what are we doing with the junior developers? Have we just eliminated the title of junior developer, so that everybody is a senior developer, or how do you help them get to this position?

Birgitta Böckeler: The meaning of seniority might change, right? Yes, but I mean, this is definitely a frequently asked question. When I was at the QCon conference, people asked me this all the time, and in part, I also don’t know; we have to see. I always say that I don’t want to romanticize how I learned back in the day and now say, “Oh, the young people, they’re going to do it wrong”. I copied so much stuff just from the internet and just tried, well, does it work or not, before even looking at the code that I pasted, to be honest. So I guess there’s just more throughput of this now, more speed to it, right?

So we’ll have to see how teams can buffer that. Often it is buffered by senior people on a team: if somebody junior makes a mistake because of something they didn’t know yet, then the other people on the team might catch it, right? But now, if the throughput of the mistakes becomes higher, can you still buffer that within your team? I guess that’s one of the questions, although I would hope that today, you still learn the way we did back then: by doing something wrong, and then it gets caught by the safety net around you, both automated and in terms of people.

And then, I learned from that. So I hope that it’ll continue like that. Also, it’s often the people who are worried about the junior developers who haven’t actually used the AI tools themselves that much. So I think it’s super important that especially experienced people use these things, even when you’re skeptical, and I get it; I also go back and forth between being excited and skeptical about this. We have to use these things and understand what they mean, because you can’t just learn it from a conference talk or a manual. Because beginners who are coming in are all using these tools, let’s be honest.

Then when we tell them, no, but you can’t do it that way with AI, they’ll say, why should I trust you? You’re not even using these tools, right? So we have to use them ourselves so we understand where the risks are and can help the new people coming in, and we all go through this cognitive shift with each other and hopefully evolve our practices in a responsible way.

Olimpiu Pop: Okay, fair enough. While listening to you, at some point I found myself imagining ways of bringing people up to speed while keeping a proper ratio of younger folks to more senior developers. And one of the things I was envisioning at that point was, rather than generating tons of documentation on how to do the database, how to do the coding and so on, incorporating all that information into, let’s call it a CLI, if that helps. I don’t know, let’s make a database in the cloud: you give it a size, you give it a name, and then you just generate it.

And I envision it as something like this: a junior developer, a younger person, or even someone who just joined the project, can initially use the tool to get something done, or to validate it if it’s about coding. And then, as the days pass, they start understanding it, they get the perspective, and since the tool can be open source, they can look at how it is made; they find a bug, they fix it. Would approaching it like that work, where we have guardrails around everything we are doing? Initially, you just follow the rules, and then, by the point you get to the level where you understand the status quo, you can challenge it. Would that fix it, in your opinion?

Birgitta Böckeler: Yes, maybe not fix it, but definitely, I mean, the safety nets that we’ve always had, we need to double down on them even more than before, in our pipelines and whatever we can automate. But you’re actually bringing up a good point. People often fixate on the risks with junior developers coming in, but there are also so many new opportunities for them to learn faster and to discover information faster, without always having to ask somebody more senior, right? So you were giving the example of descriptions and documentation of things that you can then also feed into the AI.

So as a more senior developer on the team, by giving everybody else custom rules for the coding assistant, you can actually amplify certain conventions, and people can read those custom rules as humans as well, to learn about how this team is coding. At Thoughtworks, we’re also experimenting with prompt libraries for practices, not just for coding. My favorite example is always threat modeling, because it’s security, it’s a daunting practice: a lot of people don’t know how to do it, so they procrastinate or don’t do it, right?

But you can actually describe certain practices in threat modeling, like the STRIDE model, in a prompt. And then, if you give it the context of what you’re working on, the AI can give you examples of these things in your particular context, so you don’t have to read the theory and apply it to your particular problem yourself. So that is one example where AI can actually help us understand things a lot faster and apply them to our context, which might also be super helpful for junior people coming in, and might actually help them learn things faster than we ever did, right?
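As an illustration of the idea described here, a practice prompt of this kind might look something like the following sketch. This is a hypothetical example, not one of Thoughtworks’ actual prompt-library entries, and the system context in it is invented:

```
You are helping a team run a STRIDE threat-modeling session.
Context: a REST API that accepts file uploads and stores them in
object storage, with JWT-based authentication.

For each STRIDE category (Spoofing, Tampering, Repudiation,
Information disclosure, Denial of service, Elevation of privilege):
1. List two or three concrete threats for this system.
2. Suggest one possible mitigation per threat.
```

The point of such a prompt is that the practice (STRIDE) is encoded once, while the project-specific context is filled in per session, so the AI applies the theory to the team’s concrete system.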

Olimpiu Pop: Okay. Thank you. Is there anything else that you think is important to touch upon? Something in our conversation that we need to underline?

Finding the Right Balance Between Gen AI Sceptics and Enthusiasts [35:41]

Birgitta Böckeler: Something that I talk about a lot at the moment is the risk to quality with the increased level of automation, with the agents that generate even more code for us. I personally feel it all the time when I use these tools: on the one hand, it feels great, and I use them all the time, not just because it’s my role, but also because I like it; but you get sucked into this temptation of “it works, maybe I can just push it”, right? And I always find things that, as a responsible developer, I should still change before I push, because I also work on codebases right now where other people work as well.

So I have to think about how this affects them, right? Will they actually understand how this works? Will this get in the way of this other thing they’re working on? Will we be able to maintain this in the future? But the temptation is really high to get lazy and complacent and just think, “Ah, it’s going to be fine”. And yes, there are just too many things that I find before I push. You always have to review, review, review, really pay attention. It’s like driving an autonomous car: your attention goes down because you just let it drive, and then when something happens, your attention isn’t up.

It’s not a great analogy, because the risk profiles are very different, but I find myself just watching the agents going, doing things and doing things, and it feels like such an extra effort and barrier to then go over and review all of this code, because we like creating code, not reviewing code, right? So there’s a real reckoning that might be coming at some point here, not just for junior developers, but also senior developers, if we just don’t think along anymore.

Olimpiu Pop: I think it’s already coming, if you look at the way malicious code in the supply chain is growing. The threats in the software supply chain are growing each year; I think we’re already there. I had the numbers in front of me before our discussion: in 2024, we had 700,000 malicious packages, while in the previous year we had only 250,000. And that 250,000 was already double the previous three years combined. So the trend is growing exponentially, probably even more than that. So I think, as you said, the reckoning is coming, and we have to fix things.

Birgitta Böckeler: And that’s one psychological side of it. Another psychological thing that I see right now is this culture in some organizations of really going: everybody has to use AI, or, why are you so slow? You have AI now. And on the other hand, skeptics who say, “This is stupid. Why are you even trying this?” So these are two polar ends, with people in the middle, right? And I think we need cultures right now where the enthusiasts pull us up and the skeptics pull us back down a little bit, so we learn about this and don’t move too fast and just run into all of these new security vulnerabilities and attack vectors and so on.

But at the same time, we can’t ignore it and say this is going to go away, because it’s not going to go away. So we all have to use it to figure out how to do it responsibly. And I think there needs to be a balance, and the hypers and the skeptics have to work together somehow.

What To Adopt, Trial, Assess or Hold According to the Thoughtworks Technology Radar [38:59]

Olimpiu Pop: Okay, thank you. And to wrap up, given that you are a Thoughtworks representative, and the Technology Radar from Thoughtworks is the north star of a lot of folks in the field (it was even copied by so many), let’s try to place some of the things in the AI space on the radar and see what we should try and what not. Let’s see: agentic coding, how do you see it? Should we hold on it, trial it, assess it? How should we embrace it?

Birgitta Böckeler: Yes, I was just thinking I could actually check what we have on the Radar right now. So thanks for the plug, right? The Thoughtworks Technology Radar, where twice a year we put together our snapshot of what we see right now on our projects and put it into these rings: Adopt, Trial, Assess and Hold. We don’t have anything in the Adopt ring in this particular space. In the coding assistants space, I think GitHub Copilot, Cursor and Cline, for example, are in Trial right now. Trial is the ring where we put things that we’ve used, quote-unquote, in production, which in the case of coding assistants means we are actually using them on projects with our clients for production code.

Windsurf is also on there, in Assess. There are also tools like the ones you mentioned, Replit and Bolt and all of that; we have v0, which is something from Vercel, in Assess as well, because some of our teams have tried it. We have a few things in the Hold ring, which sometimes means don’t do it, or can also mean just proceed with caution. One of those is complacency with AI-generated code, which I’ve talked about quite a bit just now. We also have in Hold: replacing pair programming with AI. Thoughtworks has actually always been a big proponent of pair programming.

And while AI agents can cover some of the pair programming dynamics, so that you actually have two brains instead of one, now with the agents there’s maybe also this dynamic of: the agent is doing the tactical thinking and I’m doing the strategic thinking. So it does cover some of those things, but pair programming is actually a practice to make the team better. It’s also about collaboration and collective code ownership and context sharing and all of those things. So we don’t think that can be replaced with AI.

Also, for some of the risks that we talked about, pair programming with AI can actually be one mitigation. So having a pair work together with the AI assistant can be a really interesting technique when you have juniors and seniors on a team, and you actually want to see how each of you is using the assistants and learn from each other, right?

Olimpiu Pop: Okay. So from our whole conversation, it feels that we are not redundant just yet. Maybe tomorrow we might be, but for now it seems that it is more important than ever to use our brains, push ourselves in the right direction, and find the common ground between skepticism and enthusiasm: playing nicely and carefully with everything, but also embracing the change. Would that sum it up?

Birgitta Böckeler: Yes, that’s a good summary.

Olimpiu Pop: Well, Birgitta, have a nice day, and talk to you soon.

Birgitta Böckeler: Thank you for the conversation, Olimpiu.




Article: Best Practices for Managing Shared Libraries in .NET Applications at Scale

MMS Founder
MMS Sergio Vanin

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • While improving efficiency and consistency, shared libraries can become bottlenecks for scalability if not properly managed, especially in microservices architectures.
  • Centralized dependency management tools like .NET’s Central Package Management (CPM) streamline version control across multiple projects, reducing maintenance overhead in mono-repo setups.
  • Using Git submodules in combination with CPM allows multi-repository environments to maintain centralized control over dependencies, but requires disciplined developer workflows and CI/CD integration.
  • Umbrella packages offer a clean, repository-independent way to centralize dependencies, though they still require consumer projects to manually update package versions to benefit from updates.
  • Automated CI/CD pipelines with robust testing (including regression tests) are critical to safely propagate dependency updates and maintain system stability at scale.
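To make the Central Package Management takeaway concrete, here is a minimal sketch of how CPM centralizes versions. The package names and versions are illustrative (`Contoso.SharedKernel` is a hypothetical internal library): a `Directory.Packages.props` file at the repository root declares all versions, and individual projects reference packages without a `Version` attribute.

```xml
<!-- Directory.Packages.props (repository root): single source of truth for versions -->
<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <PackageVersion Include="Newtonsoft.Json" Version="13.0.3" />
    <PackageVersion Include="Contoso.SharedKernel" Version="2.1.0" />
  </ItemGroup>
</Project>
```

```xml
<!-- Any .csproj in the repository: no Version attribute, resolved centrally -->
<Project Sdk="Microsoft.NET.Sdk">
  <ItemGroup>
    <PackageReference Include="Newtonsoft.Json" />
    <PackageReference Include="Contoso.SharedKernel" />
  </ItemGroup>
</Project>
```

With this layout, bumping a shared library version is a one-line change in `Directory.Packages.props` that applies to every project in the repository.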

This article discusses real-world cases of using shared libraries, their consequences, and possible solutions to blockers caused by using them in many dependent projects.

The challenges and solutions presented here are focused on .NET projects, but the suggested solutions could be adapted to other technologies and languages.

It also encourages software architects and developers to analyze the trade-offs before creating and using shared libraries.

Understanding Shared Libraries

Shared libraries are reusable bundles of code designed to perform common tasks across multiple projects. They save time, ensure consistency, and prevent developers from having to reinvent the wheel.

These libraries are also known as client libraries or packages. They can be internal or public, versioned (enabling developers to control changes), and require ways to manage and distribute them across different projects.

Best Use Cases for Shared Libraries

The primary motivation for using shared libraries is to standardize and simplify development processes. However, there are trade-offs that we must be aware of when deciding to create and use a shared library, as Ben Morris presents, and we summarize here:

Some expected benefits usually considered are:

  • Improved efficiency and code quality
  • Less duplicated work and more standardized solutions
  • Better modularity and abstraction
  • Improved collaboration

But shared libraries often create unexpected problems:

  • Introduce coupling between teams
  • Cause version compatibility challenges
  • Introduce the risk of breaking changes or regressions

Shared Library Management

Once an organization decides to adopt shared libraries as part of its architecture strategy, the next step is to establish an effective way to distribute and manage these libraries across multiple microservices. This is typically achieved by hosting an internal artifact repository. Solutions like AWS CodeArtifact, JFrog Artifactory, or equivalents are commonly used for this purpose. These systems allow teams to publish and consume packages (such as NuGet for .NET), ensuring shared components are accessible and version-controlled.

In the .NET ecosystem specifically, NuGet packages are the standard for managing and distributing shared libraries, making it easy for teams to integrate common functionality across services. However, while maintaining and publishing updates to the shared library itself is a manageable task, the real challenge lies in ensuring that all dependent services consistently adopt these updates.

As the system landscape expands, outdated dependencies scattered across services can quickly evolve from a minor inconvenience into a critical operational risk. These stale versions may lead to inconsistencies in behavior, expose security vulnerabilities, and cause integration issues between services. If left unmanaged, the situation becomes a scaling bottleneck, complicating deployments and slowing down the overall delivery pipeline.

Real-World Challenges

To understand the complexities of managing shared library versions in different projects, let’s analyze the following real-world cases involving internal shared libraries in .NET projects.

Real-World Scenario: Managing Authentication in AWS Lambda

  • Scenario: The team developed a shared library to handle authentication logic for 10 Lambda functions triggered by AWS API Gateway endpoints. These functions were built using .NET 8 and used the shared library to verify whether the customer and user requesting data were trustworthy. The decision to use a shared library was made to consolidate common functionality and reduce code duplication. However, the library evolved to include not only authentication classes but also API content-negotiation classes and error-handling logic, which introduced additional dependencies across multiple services.
  • Challenge: A new authentication parameter was introduced during development, requiring updates to all dependent Lambda functions before deployment.
  • Impact: This dependency delayed the rollout of the new functionality, highlighting the risk of tightly coupled services. Had this occurred in production, the change could not have shipped without breaking external APIs already consuming those endpoints.

RabbitMQ Integration: A Scalability Challenge

  • Scenario: In this case, an internal shared library was built to handle RabbitMQ connections and queue management. About 50 microservices, all managed by the same team, run in a Kubernetes cluster and must integrate with RabbitMQ. The background for this decision is a small company that rewrote its monolithic system as many microservices, with few changes after they went into production.
  • Challenge: RabbitMQ must be upgraded from version 3.13.7 to 4.0.4 to ensure ongoing support and access to newer queueing models, notably quorum queues. However, this upgrade introduces a breaking change: queue declarations now require a new parameter, incompatible with the current shared library and all existing microservice implementations.
  • Impact: There is a deadlock situation: If RabbitMQ is upgraded before the microservices, all services will fail when attempting to connect or create queues. If microservices are upgraded before RabbitMQ, they will attempt to use quorum queues not yet supported by the current RabbitMQ instance, breaking message flow immediately. This creates a zero-tolerance migration window, requiring all services to be upgraded in sync. This is a highly complex and risky operation given current resource constraints. The upgrade to RabbitMQ 4.0.4 has been postponed until the team finds the best way to migrate all microservices simultaneously. Without a clear and executable migration strategy, the team remains blocked from adopting critical RabbitMQ improvements. This increases technical debt and risks a catastrophic outage in the event of an uncoordinated upgrade or library change.

Insights: Dependency on shared libraries can become a critical blocker in scalability scenarios.

Keeping Services Synchronized with Shared Library Changes

1. Manual Updates: A Traditional Approach

The first method that comes to mind for handling updates across dependent services is to manage them manually: each service has a team responsible for maintaining it and for incorporating library updates into its development process.

However, this approach can take a long time before all services are updated, since every team must be aligned and the tasks must be prioritized in each team’s backlog. It is also error-prone.

2. Centralized Dependency Version Management

An alternative to the manual approach is to manage the shared library versions that services depend on in a centralized way, using external configuration files or tools that support this. Such tools, combined with established Continuous Integration/Continuous Deployment (CI/CD) pipelines that include regression tests and automated deployment, make centralized version control practical.

When updating a shared library, there is always a risk that the new version introduces breaking changes or unexpected behavior. Regression tests ensure that existing functionality remains intact after updates, reducing the chances of introducing new bugs when integrating a new library version.

Managing dependencies dynamically means frequent updates to services that rely on shared libraries. Automated deployment pipelines ensure that these updates are efficient, consistent, and error-free, minimizing manual intervention and reducing deployment risks.

CI/CD pipelines with regression tests and automated deployment are essential to enable centralized dependency version control. They help to validate dependency updates before they reach production. They automate tasks such as:

  • Running regression tests
  • Checking for compatibility issues
  • Deploying new versions with minimal downtime

The goal is to accelerate updates to dependent projects, enabling new features and tool upgrades while the system scales and delivery keeps a fast pace.

Development teams usually decide on a repository structure at the beginning of a project: either a single repository (known as a mono repo) holding one solution with one or more projects, or multiple repositories, each holding a separate solution of related projects divided by domain or responsibility.

In either case, we can manage dependencies using a .NET feature called Central Package Management, available since .NET 6, which manages the dependencies of all projects in a solution and makes it easier to control libraries and versions from a single place. We’ll refer to it as CPM for the rest of the article.

3. Strategies Based on Repository Structure

Mono-repo Strategy with CPM

When using CPM, we can manage all package versions from a single file (Directory.Packages.props), which must be located in the same folder as the solution file (.sln). This .props file contains a list of all packages (with name and version), and each project file references the packages it needs by name only. If one of the projects needs a version different from the one defined in the .props file, it can override that version specifically.

As we can see, CPM is primarily focused on a single solution containing multiple projects for managing dependencies. It simplifies package version management, as you can specify versions centrally in a single file. This improves solution performance, since all projects reference the same package version, reducing unnecessary restores and downloads. Additionally, it reduces maintenance effort, as updating a package version requires changing it only in a single file.
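As a brief sketch of this mechanism (the package name and both versions are hypothetical), the central file declares the version once, and an individual project can still pin a different one with the VersionOverride attribute:

```xml
<!-- Directory.Packages.props (central, one per solution) -->
<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <PackageVersion Include="Newtonsoft.Json" Version="13.0.3" />
  </ItemGroup>
</Project>

<!-- MyService.csproj: references by name only; this project alone overrides the version -->
<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" VersionOverride="12.0.3" />
</ItemGroup>
```

Overrides should stay the exception: each one reintroduces exactly the version drift that CPM is meant to eliminate.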

Multi-repo Strategy: CPM with Git Submodules

The real-world cases presented cover different areas and domains, but in both cases, each microservice has its own solution and respective git repository. When there are scenarios like the RabbitMQ Integration case, where multiple solutions consume the same internal shared library, a suggested approach to help manage these dependencies is to combine CPM with git submodules, leveraging a single central Directory.Packages.props across all solutions.

As Git documentation explains, “Submodules allow you to keep a Git repository as a subdirectory of another Git repository. This lets you clone another repository into your project and keep your commits separate”.

For example, imagine you’re building a backend application that relies on a shared authentication module developed in a separate Git repository. Instead of copying the authentication code into your app, which makes it harder to track changes or get updates, you can add it as a Git submodule using git submodule add. This pulls the external repository into your project as a subdirectory, while keeping its commit history and versioning independent. You can then reference a specific version of the authentication module in your app, update it when needed, and maintain a clean separation between your core backend logic and the shared component, just like using a common internal library across multiple microservices.

The main idea is that one repository stores only the dependencies props file used by CPM (no project or solution code, just the props file), while each service repository keeps only its own code, so pulling a service repository still yields a working build. We want to treat the two repositories as separate, yet still be able to use one from within the other.

The first step in implementing this idea is to create an empty repository containing only one file called Directory.Packages.props. This file will keep all packages and versions used by other solutions. Its content looks like this (the package names and versions below are illustrative):

<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <!-- example entries; use your own packages and versions -->
    <PackageVersion Include="Newtonsoft.Json" Version="13.0.3" />
    <PackageVersion Include="Serilog" Version="3.1.1" />
    <PackageVersion Include="RabbitMQ.Client" Version="6.8.1" />
  </ItemGroup>
</Project>
Then, we can have one repository that contains the solution and projects with the code itself. In the example below, we can see the structure of a solution with two projects, sharedlibrarypost and sharedlibraryui. Each project contains its Directory.Packages.props too.

In these projects, the .props file only imports the main Directory.Packages.props file (the one that contains the package names and versions). These .props files look like below (the relative path assumes the submodule folder sits at the solution root):

<Project>
  <Import Project="..\packagereferences\Directory.Packages.props" />
</Project>

In this solution, we can include the first repository as a git submodule through git commands. To achieve that, in the solution folder, we can use the following command:

git submodule add {.props github address} {folder}

An example would be: git submodule add github/example/centralizedpackages packagereferences where github/example/centralizedpackages is the .props github address and packagereferences is the folder.

After running that, the solution structure will look like below:

- /sharedlibrarypost
    ├── sharedlibrarypost.sln
    ├── packagereferences
    │   └── Directory.Packages.props
    ├── sharedlibrarypost
    │   ├── Directory.Packages.props
    │   ├── Program.cs
    │   ├── appsettings.json
    │   ├── sharedlibrarypost.csproj
    │   └── sharedlibrarypost.http
    └── sharedlibraryui
        ├── Directory.Packages.props
        ├── Pages
        ├── Program.cs
        ├── appsettings.json
        ├── sharedlibraryui.csproj

Once we have the structure ready as above, we must always run the following commands to build the solution, and include the same commands as part of the build pipeline:

git submodule update --init --remote
dotnet build

To use this method, it’s recommended to have CI/CD pipelines with at least build, test, and publish stages. Tests are extremely important and must include unit tests, integration tests, and regression tests, fully covering the components. When a new version of the centralized Directory.Packages.props file is committed, it triggers the dependent projects’ pipelines, and each solution pulls the latest version before building. With the pipelines running, developers will know immediately if an updated package version breaks any builds or tests.
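A minimal sketch of such a pipeline, assuming GitHub Actions and a solution at the repository root (job and step names are hypothetical):

```yaml
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: recursive        # pull the centralized Directory.Packages.props
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      - run: git submodule update --init --remote   # take the latest central versions
      - run: dotnet build --configuration Release
      - run: dotnet test --configuration Release    # unit, integration and regression suites
```

The explicit `git submodule update --remote` step is what makes the pipeline build against the newest committed central versions rather than the pinned submodule commit.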

An important caveat when using submodules arises when working locally. New versions of the Directory.Packages.props file are picked up automatically by the pipelines of dependent services, but developers working locally must explicitly run a submodule update to incorporate the latest version. In practice, this can easily lead to situations where developers forget to pull and commit updated submodule references, or unknowingly continue working with outdated versions. Such oversights can introduce inconsistencies and slow down development workflows.
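The local workflow can be sketched end-to-end with two throwaway repositories (repository names follow the earlier example; this is a local sandbox, not a production setup):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# 1. "Central" repo holding only the shared props file.
git init -q centralizedpackages
git -C centralizedpackages config user.email demo@example.com
git -C centralizedpackages config user.name demo
cat > centralizedpackages/Directory.Packages.props <<'EOF'
<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
</Project>
EOF
git -C centralizedpackages add .
git -C centralizedpackages commit -qm "central package versions"

# 2. Service repo that consumes the central file as a submodule.
git init -q sharedlibrarypost
git -C sharedlibrarypost config user.email demo@example.com
git -C sharedlibrarypost config user.name demo
cd sharedlibrarypost
git -c protocol.file.allow=always submodule add "$tmp/centralizedpackages" packagereferences
git commit -qm "add centralized packages as submodule"

# 3. The sync step developers must remember after the central repo changes:
git -c protocol.file.allow=always submodule update --init --remote

# The shared props file is now vendored inside the service repo.
cat packagereferences/Directory.Packages.props
```

The `protocol.file.allow=always` setting is only needed because this sandbox uses local file paths; with a hosted Git remote the plain `git submodule add <url> <folder>` from the article applies.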

4. Umbrella Package Strategy

An alternative approach to Central Package Management and Git submodules is the use of Umbrella Packages (also called meta-packages). This technique involves creating a dedicated NuGet package that groups and declares dependencies on other internal or external packages with specific versions.

By referencing this umbrella package, each project implicitly brings along all the necessary dependencies at predefined versions. This approach centralizes dependency control without needing to manage multiple configuration files or repositories.

For example, instead of having each microservice explicitly reference common packages like MyCompany.Logging, MyCompany.Security, or Newtonsoft.Json, they simply reference a single umbrella package, such as MyCompany.Platform.

The .nuspec file of this umbrella package defines dependencies as follows (versions are illustrative):

<dependencies>
  <dependency id="MyCompany.Logging" version="2.1.0" />
  <dependency id="MyCompany.Security" version="3.0.1" />
  <dependency id="Newtonsoft.Json" version="13.0.3" />
</dependencies>
Once the umbrella package is published, let’s consider a microservice project that requires three shared libraries: MyCompany.Logging, MyCompany.Security, and Newtonsoft.Json. Below, we can see the relevant part of the project file for each case (versions are illustrative):

Without the Umbrella Package:

<ItemGroup>
  <PackageReference Include="MyCompany.Logging" Version="2.1.0" />
  <PackageReference Include="MyCompany.Security" Version="3.0.1" />
  <PackageReference Include="Newtonsoft.Json" Version="13.0.3" />
</ItemGroup>

With the Umbrella Package:

<ItemGroup>
  <PackageReference Include="MyCompany.Platform" Version="1.0.0" />
</ItemGroup>
Using an umbrella package, the project only needs to reference a single package (MyCompany.Platform) that internally includes all required dependencies at the specified versions. This reduces clutter in the project file, promotes consistency, and simplifies version management.

When new versions of the dependencies are released, only the umbrella package needs to be updated and republished. However, it is important to highlight that consuming/dependent projects must still update the version of the umbrella package they reference in order to benefit from these updates. While this does not happen automatically, it simplifies maintenance since only the umbrella package version needs to be managed, rather than multiple individual package references.
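Adopting a new umbrella release is then a one-line change in each consumer project file (version numbers hypothetical):

```xml
<!-- before -->
<PackageReference Include="MyCompany.Platform" Version="1.0.0" />

<!-- after: transitively pulls the updated Logging, Security and Json versions -->
<PackageReference Include="MyCompany.Platform" Version="1.1.0" />
```

Because the change is this small and mechanical, it is a natural candidate for automated dependency-bump tooling in each consumer repository.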

⚠️ Note: Although the umbrella package centralizes dependency management, consumer solutions must manually update their reference version to receive the latest dependencies.

Advantages:

  • Simplifies dependency management across multiple repositories.
  • Reduces the risk of inconsistent versions between services.
  • Keeps project files clean, as they reference only the umbrella package.
  • Avoids the need to manage shared configuration files or submodules.
  • Promotes standardization and internal dependency control.

Considerations:

  • Requires maintaining and publishing the umbrella package regularly.
  • Projects must update their references to the umbrella package version to receive updates to dependencies.
  • Potential for version conflicts if consumer projects also reference packages independently.

5. Comparative Analysis

Trade-offs Between CPM and Umbrella Packages

Both Umbrella Packages and Central Package Management (CPM) aim to simplify dependency version control across multiple projects, but they differ in approach and suitability depending on the architecture.

Each approach offers distinct benefits and limitations that align with team workflows, repository structures, and release practices in different ways.

Explicitly outlining these trade-offs helps teams make informed architectural decisions based on their specific context, rather than adopting tools based solely on technical familiarity or convenience.

  • Repository structure — Umbrella package: works well for multi-repo environments; no need for shared files across repos. CPM: best suited for mono-repos or tightly linked solutions; requires shared configuration files (such as Directory.Packages.props).
  • Project references — Umbrella package: projects reference only the umbrella package, keeping project files clean and minimal. CPM: projects reference packages individually but take versions from a centralized file.
  • Version updates — Umbrella package: requires updating the umbrella package version in each consumer to adopt new dependencies. CPM: requires updating the central .props file and ensuring all consumers pull the latest version (with a Git submodule or similar strategy).
  • Automation — Umbrella package: can benefit from tools like Dependabot or Renovate to automate version bumps of the umbrella package in consumers. CPM: updates propagate when the central .props file is updated and consumed, but require Git submodule sync in multi-repo setups.
  • Flexibility — Umbrella package: less flexible if services need varying versions of the same package. CPM: more granular control; individual projects can override versions if needed.
  • Maintenance overhead — Umbrella package: requires maintaining and publishing the umbrella package on internal feeds. CPM: requires maintaining the .props file and ensuring sync across repos, especially in multi-repo scenarios.

Use Umbrella Packages when working in multi-repository environments where minimizing configuration overhead in individual projects is essential. This approach is ideal for teams that prefer referencing a single meta-package that encapsulates all necessary dependencies, promoting consistency without managing multiple package references. It is particularly useful when repository independence is a priority and when teams are comfortable manually updating the umbrella package version to incorporate changes. Umbrella packages eliminate the need for shared configuration files or Git submodules, offering a clean and standardized way to manage dependencies across diverse services.

Use Central Package Management (CPM) in mono-repo setups or tightly coupled repository structures where fine-grained control over package versions is needed. CPM allows teams to manage all dependencies centrally through a Directory.Packages.props file, streamlining updates, and ensuring consistency across multiple projects within the same repository. This approach simplifies maintenance, as version updates only require a single change, and is well-suited for teams that prioritize strong alignment and efficient dependency management within a unified codebase.

Use CPM with Git Submodules when operating in multi-repo environments, but still seeking centralized control over package versions across repositories. By combining CPM with Git submodules, teams can share a centralized Directory.Packages.props file across multiple repositories, ensuring consistent dependency versions while maintaining repository autonomy. This method requires disciplined workflows to keep submodules updated both locally and in CI/CD pipelines, but it offers a balance between centralized version management and flexible, distributed development practices.

Conclusion

Shared libraries can either accelerate development or become a significant scalability bottleneck if not properly managed. To mitigate these risks, teams should adopt structured strategies tailored to their repository architecture.

Central Package Management (CPM) offers an easier approach for mono-repo setups, allowing centralized version control through a single configuration file. In multi-repository environments, integrating CPM with Git submodules provides centralized control while maintaining repository independence, though it demands disciplined workflows and consistent CI/CD pipeline support.

Alternatively, umbrella packages offer a repository-agnostic solution, encapsulating all dependencies in a single package for simpler integration. However, consumers must still manually update their package references.

By leveraging these strategies with automated CI/CD pipelines and rigorous regression testing, teams can maintain system stability and scalability while efficiently managing shared dependencies.



Schonfeld Strategic Advisors LLC Has $5.95 Million Stock Holdings in MongoDB, Inc …

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Schonfeld Strategic Advisors LLC decreased its holdings in MongoDB, Inc. (NASDAQ:MDB) by 19.1% during the 4th quarter, according to the company in its most recent filing with the SEC. The institutional investor owned 25,556 shares of the company’s stock after selling 6,018 shares during the quarter. Schonfeld Strategic Advisors LLC’s holdings in MongoDB were worth $5,950,000 at the end of the most recent quarter.

Other institutional investors have also bought and sold shares of the company. Strategic Investment Solutions Inc. IL bought a new stake in shares of MongoDB during the 4th quarter valued at about $29,000. Hilltop National Bank raised its stake in MongoDB by 47.2% in the fourth quarter. Hilltop National Bank now owns 131 shares of the company’s stock worth $30,000 after buying an additional 42 shares in the last quarter. NCP Inc. purchased a new position in MongoDB during the fourth quarter valued at $35,000. Versant Capital Management Inc boosted its stake in MongoDB by 1,100.0% during the fourth quarter. Versant Capital Management Inc now owns 180 shares of the company’s stock worth $42,000 after acquiring an additional 165 shares in the last quarter. Finally, Wilmington Savings Fund Society FSB purchased a new stake in shares of MongoDB in the 3rd quarter worth approximately $44,000. Institutional investors own 89.29% of the company’s stock.

Insider Buying and Selling at MongoDB

In other news, Director Dwight A. Merriman sold 3,000 shares of the stock in a transaction dated Monday, February 3rd. The shares were sold at an average price of $266.00, for a total transaction of $798,000.00. Following the transaction, the director now owns 1,113,006 shares of the company’s stock, valued at $296,059,596. This represents a 0.27% decrease in their position. The transaction was disclosed in a document filed with the Securities & Exchange Commission, which is available through this link. Also, CEO Dev Ittycheria sold 18,512 shares of the firm’s stock in a transaction on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total transaction of $3,207,389.12. Following the completion of the transaction, the chief executive officer now directly owns 268,948 shares in the company, valued at $46,597,930.48. This trade represents a 6.44% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders have sold 39,345 shares of company stock worth $8,485,310 over the last 90 days. Company insiders own 3.60% of the company’s stock.

Wall Street Analysts Forecast Growth

A number of equities analysts have recently issued reports on the company. Canaccord Genuity Group dropped their price target on MongoDB from $385.00 to $320.00 and set a “buy” rating on the stock in a research note on Thursday, March 6th. Barclays dropped their target price on shares of MongoDB from $330.00 to $280.00 and set an “overweight” rating for the company in a research report on Thursday, March 6th. Robert W. Baird decreased their price objective on MongoDB from $390.00 to $300.00 and set an “outperform” rating on the stock in a report on Thursday, March 6th. Oppenheimer lowered their price target on shares of MongoDB from $400.00 to $330.00 and set an “outperform” rating for the company in a report on Thursday, March 6th. Finally, Loop Capital cut their price objective on shares of MongoDB from $400.00 to $350.00 and set a “buy” rating on the stock in a research report on Monday, March 3rd. Eight analysts have rated the stock with a hold rating, twenty-four have given a buy rating and one has issued a strong buy rating to the company. According to MarketBeat.com, the stock currently has an average rating of “Moderate Buy” and an average price target of $294.78.

View Our Latest Research Report on MDB

MongoDB Price Performance

Shares of NASDAQ MDB traded down $0.55 during midday trading on Friday, hitting $171.64. The stock had a trading volume of 1,905,966 shares, compared to its average volume of 1,854,248. The firm has a market cap of $13.94 billion, a P/E ratio of -62.64 and a beta of 1.49. MongoDB, Inc. has a fifty-two week low of $140.78 and a fifty-two week high of $379.06. The company’s 50-day simple moving average is $184.75 and its 200 day simple moving average is $244.98.

MongoDB (NASDAQ:MDB) last announced its quarterly earnings results on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing the consensus estimate of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The business had revenue of $548.40 million during the quarter, compared to the consensus estimate of $519.65 million. During the same quarter last year, the company earned $0.86 earnings per share. Equities analysts expect that MongoDB, Inc. will post -1.78 EPS for the current fiscal year.

MongoDB Company Profile

(Free Report)

MongoDB, Inc., together with its subsidiaries, provides a general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

See Also

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)




Advent International L.P. Increases Stock Position in MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS


Advent International L.P. raised its holdings in shares of MongoDB, Inc. (NASDAQ:MDB) by 51.4% during the 4th quarter, according to the company in its most recent filing with the Securities and Exchange Commission. The fund owned 54,500 shares of the company’s stock after purchasing an additional 18,500 shares during the period. MongoDB makes up about 0.3% of Advent International L.P.’s investment portfolio, making the stock its 18th largest holding. Advent International L.P. owned 0.07% of MongoDB worth $12,688,000 at the end of the most recent quarter.

A number of other institutional investors and hedge funds have also modified their holdings of the business. Morse Asset Management Inc purchased a new stake in MongoDB during the 3rd quarter valued at $81,000. Wilmington Savings Fund Society FSB bought a new stake in shares of MongoDB during the 3rd quarter valued at about $44,000. Tidal Investments LLC increased its position in shares of MongoDB by 76.8% during the 3rd quarter. Tidal Investments LLC now owns 7,859 shares of the company’s stock valued at $2,125,000 after purchasing an additional 3,415 shares during the last quarter. Principal Financial Group Inc. increased its position in shares of MongoDB by 2.7% during the 3rd quarter. Principal Financial Group Inc. now owns 6,095 shares of the company’s stock valued at $1,648,000 after purchasing an additional 160 shares during the last quarter. Finally, Versant Capital Management Inc lifted its stake in shares of MongoDB by 1,100.0% in the 4th quarter. Versant Capital Management Inc now owns 180 shares of the company’s stock worth $42,000 after purchasing an additional 165 shares during the period. Institutional investors own 89.29% of the company’s stock.

Wall Street Analysts Forecast Growth

A number of brokerages have commented on MDB. Rosenblatt Securities reaffirmed a “buy” rating and set a $350.00 target price on shares of MongoDB in a report on Tuesday, March 4th. China Renaissance started coverage on MongoDB in a research note on Tuesday, January 21st. They issued a “buy” rating and a $351.00 price objective for the company. Mizuho lowered their target price on MongoDB from $250.00 to $190.00 and set a “neutral” rating on the stock in a research report on Tuesday, April 15th. Oppenheimer reduced their price target on MongoDB from $400.00 to $330.00 and set an “outperform” rating for the company in a research report on Thursday, March 6th. Finally, Cantor Fitzgerald assumed coverage on MongoDB in a research report on Wednesday, March 5th. They set an “overweight” rating and a $344.00 price objective on the stock. Eight investment analysts have rated the stock with a hold rating, twenty-four have assigned a buy rating and one has issued a strong buy rating to the stock. Based on data from MarketBeat, MongoDB presently has an average rating of “Moderate Buy” and a consensus price target of $294.78.

Get Our Latest Stock Report on MongoDB

Insider Activity at MongoDB

In related news, CEO Dev Ittycheria sold 18,512 shares of the business’s stock in a transaction that occurred on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total value of $3,207,389.12. Following the completion of the transaction, the chief executive officer now owns 268,948 shares of the company’s stock, valued at $46,597,930.48. The trade was a 6.44% decrease in their ownership of the stock. The sale was disclosed in a filing with the SEC, which is available at the SEC website. Also, CAO Thomas Bull sold 301 shares of the company’s stock in a transaction on Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total value of $52,148.25. Following the sale, the chief accounting officer now owns 14,598 shares of the company’s stock, valued at approximately $2,529,103.50. This represents a 2.02% decrease in their position. The disclosure for this sale can be found here. Insiders sold 39,345 shares of company stock worth $8,485,310 over the last three months. 3.60% of the stock is owned by corporate insiders.

MongoDB Price Performance

MDB stock opened at $172.19 on Friday. MongoDB, Inc. has a 12 month low of $140.78 and a 12 month high of $380.94. The firm’s 50-day moving average is $186.51 and its 200 day moving average is $245.99. The company has a market capitalization of $13.98 billion, a PE ratio of -62.84 and a beta of 1.49.

MongoDB (NASDAQ:MDB) last issued its earnings results on Wednesday, March 5th. The company reported $0.19 EPS for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). The company had revenue of $548.40 million for the quarter, compared to analysts’ expectations of $519.65 million. MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. During the same period last year, the business posted $0.86 earnings per share. Sell-side analysts forecast that MongoDB, Inc. will post -1.78 EPS for the current fiscal year.

About MongoDB

MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Further Reading

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.





Swift 6.1 Enhances Concurrency, Introduces Package Traits, and More

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

Swift 6.1, included in Xcode 16.3, introduces several improvements to the language and the Swift Package Manager, including type-wide global actor inference control, support for trailing comma in lists, package traits for conditionally exposing features based on the platform, and enhancements to Swift Testing.

Prior to Swift 6.1, the nonisolated keyword could be applied to a property or function to indicate that it was safe to call from any concurrent context. Swift 6.1 extends this support to types and extensions. There are two main use cases: an actor-isolated type whose extensions contain methods that do not require isolation, and opting out of actor isolation for a type on which it is globally inferred.

For the first case, consider the following code:

@MainActor
struct IsolatedStruct {
  // mutable state requiring isolation
}

extension IsolatedStruct: CustomStringConvertible, Equatable {
  // properties and methods not requiring isolation
  var description: String {
    ...
  }

  static func ==(lhs: IsolatedStruct, rhs: IsolatedStruct) -> Bool {
    ...
  }
}

The methods in the extension do not need to be actor-isolated, yet the Swift 6 compiler still requires them to run on @MainActor unless they are explicitly marked nonisolated. In Swift 6.1, you can instead write nonisolated extension IsolatedStruct:... to achieve the same effect for the entire extension.
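As a minimal, self-contained sketch of the new syntax (Counter is a hypothetical type, not taken from the Swift release notes):

```swift
@MainActor
struct Counter {
    var value = 0  // mutable state isolated to the main actor
}

// Swift 6.1: a single nonisolated marker covers every member of the
// extension, so no per-member annotations are needed
nonisolated extension Counter: CustomStringConvertible {
    var description: String { "Counter(\(0))" }
}
```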

For the second case, without diving into the complex rules governing global actor inference, it’s enough to say that if you want to override the compiler’s automatic decision about which global actor a type is assigned to, perhaps due to a property wrapper or another hard-to-trace dependency, you can mark the entire type as nonisolated.
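Where a type would otherwise pick up a global actor through inference, the opt-out is a single keyword on the declaration. A minimal sketch, using a hypothetical Settings type:

```swift
// All members of Settings are nonisolated, regardless of what global
// actor inference would otherwise conclude for this type
nonisolated struct Settings {
    var verbose = false
    func flag() -> String { verbose ? "-v" : "" }
}
```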

A minor but welcome change to the language syntax is the ability to use trailing commas in tuples, parameter and argument lists, generic parameter lists, closure capture lists, and string interpolations, in addition to the previously supported collection literals. This will be especially useful in plugins and macros, as generating comma-separated lists no longer requires a special case for the final element.
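For illustration, a couple of the newly allowed positions (the declarations are hypothetical):

```swift
// A tuple and a parameter list, each ending in a trailing comma,
// both newly permitted in Swift 6.1
let origin = (
    x: 0,
    y: 0,
)

func distance(
    from a: (x: Int, y: Int),
    to b: (x: Int, y: Int),
) -> Double {
    let dx = Double(a.x - b.x)
    let dy = Double(a.y - b.y)
    return (dx * dx + dy * dy).squareRoot()
}
```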

Package traits are a new Swift Package Manager feature used to define a set of traits a package offers. This makes it possible to express conditional compilation and optional dependencies, making it easier to offer different APIs and features when used in specific environments, such as Embedded Swift and WebAssembly.
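A hedged sketch of what a manifest declaring a trait might look like; the trait name Wasm and the package name MyLib are invented here, and the exact API is described in the Swift Package Manager documentation:

```swift
// swift-tools-version: 6.1
import PackageDescription

let package = Package(
    name: "MyLib",
    // An opt-in trait that consumers of the package can enable
    traits: [
        "Wasm",
    ],
    targets: [
        .target(name: "MyLib"),
    ]
)
```

Code inside the package can then be guarded with #if Wasm ... #endif, so the corresponding API only exists when the trait is enabled.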

In Swift Testing, the new TestScoping protocol lets you define custom test traits that modify the scope in which a test executes. This can be useful, for example, to bind a local value to a mocked resource. Moreover, Swift Testing now provides updated versions of the #expect(throws:) and #require(throws:) macros, which return the error they caught.
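For example, the error returned by #expect(throws:) can be inspected with further expectations (ParseError and parse are hypothetical names used for illustration):

```swift
import Testing

struct ParseError: Error {
    let line: Int
}

func parse(_ input: String) throws -> Int {
    guard let value = Int(input) else { throw ParseError(line: 1) }
    return value
}

@Test func parsingFailureReportsLine() {
    // In Swift 6.1, #expect(throws:) returns the caught error (if any)
    let error = #expect(throws: ParseError.self) {
        try parse("not a number")
    }
    #expect(error?.line == 1)
}
```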

Swift 6.1 includes many more improvements than we can cover here, so be sure to check out the official Swift evolution documentation for the full details.

As mentioned, Swift 6.1 is included in Xcode 16.3. It can also be installed on Linux using swiftly and on Windows using WinGet or Scoop.
