MongoDB Target of Unusually High Options Trading (NASDAQ:MDB) – Defense World


MongoDB, Inc. (NASDAQ:MDB) saw unusually large options trading on Wednesday. Stock investors bought 36,130 call options on the stock, an increase of approximately 2,077% compared to the average daily volume of 1,660 call options.

Insider Transactions at MongoDB

In other news, CAO Thomas Bull sold 301 shares of the firm’s stock in a transaction that occurred on Wednesday, April 2nd. The shares were sold at an average price of $173.25, for a total value of $52,148.25. Following the transaction, the chief accounting officer now owns 14,598 shares in the company, valued at approximately $2,529,103.50, a 2.02% decrease in their ownership of the stock. The sale was disclosed in a filing with the Securities & Exchange Commission. Also, Director Dwight A. Merriman sold 3,000 shares of the stock in a transaction on Monday, February 3rd. The shares were sold at an average price of $266.00, for a total transaction of $798,000.00. Following the sale, the director now directly owns 1,113,006 shares of the company’s stock, valued at $296,059,596, a 0.27% decrease in their position. Over the last 90 days, insiders sold 39,345 shares of company stock valued at $8,485,310. Corporate insiders own 3.60% of the stock.

Institutional Investors Weigh In On MongoDB

A number of institutional investors have recently bought and sold shares of MDB. Cloud Capital Management LLC purchased a new stake in MongoDB in the 1st quarter worth $25,000. Strategic Investment Solutions Inc. IL purchased a new stake in MongoDB in the 4th quarter worth $29,000. Hilltop National Bank increased its stake in MongoDB by 47.2% in the 4th quarter. Hilltop National Bank now owns 131 shares of the company’s stock worth $30,000 after buying an additional 42 shares in the last quarter. NCP Inc. purchased a new position in shares of MongoDB during the fourth quarter valued at about $35,000. Finally, Versant Capital Management Inc grew its position in shares of MongoDB by 1,100.0% during the fourth quarter. Versant Capital Management Inc now owns 180 shares of the company’s stock valued at $42,000 after purchasing an additional 165 shares in the last quarter. Hedge funds and other institutional investors own 89.29% of the company’s stock.

MongoDB Stock Performance


Shares of MongoDB stock opened at $172.19 on Friday. MongoDB has a fifty-two week low of $140.78 and a fifty-two week high of $380.94. The company has a market capitalization of $13.98 billion, a P/E ratio of -62.84 and a beta of 1.49. The company has a fifty day moving average of $186.51 and a 200 day moving average of $245.99.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings results on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The business had revenue of $548.40 million during the quarter, compared to analysts’ expectations of $519.65 million. During the same quarter in the previous year, the company posted $0.86 EPS. Analysts forecast that MongoDB will post -1.78 EPS for the current year.

Analyst Upgrades and Downgrades

Several research firms have recently commented on MDB. Royal Bank of Canada dropped their target price on shares of MongoDB from $400.00 to $320.00 and set an “outperform” rating for the company in a research report on Thursday, March 6th. Rosenblatt Securities reaffirmed a “buy” rating and issued a $350.00 target price on shares of MongoDB in a research report on Tuesday, March 4th. Truist Financial dropped their target price on shares of MongoDB from $300.00 to $275.00 and set a “buy” rating for the company in a research report on Monday, March 31st. Macquarie lowered their price target on shares of MongoDB from $300.00 to $215.00 and set a “neutral” rating for the company in a research report on Friday, March 7th. Finally, China Renaissance started coverage on shares of MongoDB in a research report on Tuesday, January 21st. They set a “buy” rating and a $351.00 price target for the company. Eight equities research analysts have rated the stock with a hold rating, twenty-four have issued a buy rating and one has issued a strong buy rating to the company. Based on data from MarketBeat, the company currently has a consensus rating of “Moderate Buy” and an average target price of $294.78.


MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.




Presentation: Moving Your Bugs Forward in Time: Language Trends That Help You Catch Your Bugs at Build Time Instead of Run Time


Transcript

Price: I’m going to be talking about moving your bugs forward in time. This is the topic that I’ve been thinking about on and off for many years. Before we get into the meat of the talk, how many folks are up to date on the Marvel Cinematic Universe? Those of you who are not, no problem. I’m going to start with a little story related to it. In the recent movies and TV shows, they’ve been building on this concept of the multiverse, where there are all these different parallel universes that have different timelines from one another. They differ by small details along the timelines. There’s one show in particular called Loki. He’s kind of a hero/villain. In his show, there are all these different timelines where the difference in each timeline is like Loki is slightly different in each one. In one of them, for example, Loki is an alligator, which, as you might imagine, leads to all sorts of shenanigans.

In that show, there’s this concept of the sacred timeline. There’s the one timeline that they’re trying to keep everything in line with. All of the other timelines somehow diverge into some weird apocalyptic situation, so they’re working to try to keep everything on this sacred timeline. I’m going to talk about what a bug might look like on this sacred timeline. We start off with: a developer commits a bug. We don’t like for this to happen, but it’s inevitable. It happens to all of us. What’s important is what happens after that. This is a little sample bit of toy Python code where we’ve got a function called divide_by_four. It takes in an argument.

Then it just returns that argument divided by four. Somewhere else in our codebase, some well-meaning developer creates a variable that is actually a string variable, and they try to call this function and pass it as the argument. What happens in Python is that’s a runtime error: you get an error that says unsupported operand type(s) for the division operator. On the sacred timeline, we’ve got something in our CI that catches this after that commit goes in. We may have some static analysis tool that we’re running. We may have test coverage that exercises that line of code. The important thing is our CI catches it, and that prevents us from shipping this bug to production. What happens next? The developer fixes the bug.

Then the CI passes, and they’re able to successfully ship that feature to prod. This is a pretty short, pretty simple to reason about timeline. The cost of that bug was basically like one engineer, like one hour of their time, something on that order. Fixing the bug probably actually took less than an hour, but dealing with monitoring the CI, monitoring the deployment, maybe it takes an hour of their time. Still not a catastrophic expense for our business.

Now we’re going to look at that bug on an alternate timeline. We’ll refer to this as the alligator Loki timeline. In this timeline, the developer commits the bug. For whatever reason, we’re not running the static analysis tool or we don’t have test coverage, and the bug does not get caught by our CI. Then our continuous delivery pipeline goes ahead and deploys this bug to production. Say we’re a regional service that deploys to regions one at a time to reduce blast radius, so we deploy this bug to U.S.-west-1. Then, for whatever reason, this code path where the bug exists isn’t something that gets frequently exercised by all of our users, so we don’t notice it. Maybe a day passes, and the bug ends up getting deployed to U.S.-east. Some more time passes, we still haven’t noticed there’s a bug. It deploys to Europe. Some more time passes.

Then alligator Loki eats the original developer, or maybe something more realistic happens, like they transfer to another team, or get a promotion, or whatever. That developer is not around anymore. Our bug keeps going through the pipeline and deploys to Asia. Now we have this big problem. It turns out that in that region, there is a customer who uses that code path that we didn’t catch in the earlier regions when we deployed. Now we’ve gotten this alert from this very important customer that they’re experiencing an outage and we’ve got to do something about it. Our operator gets paged. The manager gets paged because the operator doesn’t immediately know what’s going on. Maybe some more engineers get added to a call to try to address the situation. They start going through the version control history to figure out where this bug might have come in. They identify the bad commit, so now they know where the bug came in, but this has been days.

Several other commits have probably come in since then, and now they have to spend some time thinking about whether or not it’s safe to roll back to the commit prior to that, or whether that’s going to just cause more problems. They spend some time talking about that, decide if the rollback is safe. Then, they decide it’s safe. They do the rollback in that one region, and then they confirm that that customer’s impact was remediated. That’s great. We’re taking a step in the right direction. Now we have to deal with all those other regions that we rolled it out to. Got to do rollbacks in those as well. Depending on how automated our situation is, that may be a lot of work. Then this could just keep going for a long time, but I’m going to stop here.

When we think about the cost of this bug, compared to the one on the sacred timeline, the first and most important thing is there was a visible customer outage. Depending on how big your company is and how important that customer was, that can be a catastrophic impact for your business. We also spent time and money on the on-call being engaged, the manager being engaged, additional engineers being engaged, executing all these rollbacks however much time that ended up taking. Re-engineer the feature. Now we have to assign somebody new to go figure out what that original developer was trying to achieve, redo the work in a safe way, get it fixed. They’ve also got to make sure that whatever other commits got rolled back in that process, that we figure out how to get those reintroduced safely as well.

Then the one that we don’t talk about enough is opportunity cost. Every person who was involved in this event could have spent that time on something else that was more valuable to your business, working on other features, whatever it may be. When we compare the cost of these two timelines, the first timeline looks so quaint in comparison. It looks so simple. The cost was really not that big of a deal. On the second timeline, it bubbled into this big giant mess that sucked up a whole bunch of people’s time and potentially cost us a customer. The cost is just wildly different between the two. We want to really avoid this alligator timeline. What’s the difference between those two timelines? The main difference is that in the sacred timeline, we caught that bug at build time. In the alligator timeline, we caught it at runtime. That subtle difference is the key branching factor that ends up determining where you end up between these two scenarios.

Background

That’s what my talk’s going to be about: when I say moving your bugs forward in time, I’m talking about moving them from runtime to build time. Thankfully, I think that a lot of modern programming languages have been building more features into the language to help make sure that you can catch these bugs earlier. That’s what I want to talk about. My name is Chris Price. I am a software engineering manager/software engineer at Momento. We’re a serverless caching and messaging company. Previous to that, I worked at AWS with a lot of other folks that are at Momento now. I worked on video streaming services and some of us worked at DynamoDB. Before that, I worked at Puppet doing infrastructure as code.

Maintainability

Then, zooming out before I get into the weeds on this: the phenomenon I’m talking about, moving bugs from runtime to build time, is really a subset of maintainability. As I’ve progressed through my career as a software engineer, I’ve come to see maintainability as one of the most important things you can strive for, and one of the most important skills you can have as a software engineer.

When I first got started straight out of college, I thought that the only important thing about my job was how quickly I could produce code, how fast can I get a feature out the door, how many features can I ship and how quickly. As I got more experience in the industry and worked on larger codebases with more diverse teammates, what I realized is that that’s not really the most important skill for a software engineer. It’s way more important to think about what your code can do tomorrow and how easy it’s going to be for your teammates and your future teammates that you haven’t even met yet to be able to understand and modify and have confidence in their changes that they’re making to your code. That’s going to be the central theme of this talk.

Content, and Language Trends

These are the six specific language features that I’m going to dive into. First, we’ll talk about static types and null safety. Then we’re going to talk about immutable data and persistent collections. Then we’ll wrap up by talking about errors as return types and exhaustive pattern matching. These are some of the languages that have influenced the points I’ll be making in this talk. I spent a lot of time working in Clojure a while back, and that is where I really got the strong sense for how valuable it is to use immutable data structures, how much that improves the maintainability of your code. Rust is one of the places where I really got used to doing a lot of pattern matching statements. Go is the first language that I worked in that really espoused this pattern of treating errors as return types rather than exceptions.

Then, Kotlin is a language that I really love because I feel like it takes a lot of these ideas that come from some of these more functional programming languages, and it makes them really approachable and really accessible in a language that runs on the JVM. You can adopt it in your Java codebase without boiling the ocean. You can ease your way into some of these patterns without having to switch out of a completely object-oriented mindset overnight. It’s a really awesome, approachable language. Two engineers have had a lot of influence on my thinking: Rich Hickey, the creator of Clojure, and Martin Odersky, the creator of Scala.

If you get a chance to watch any talks that these gentlemen have given in the past, I highly recommend them. They’re always really informative, and they’ve been really foundational for me. I also highly recommend if you can find a way to buy yourself some time to do a side project in a functional programming language. The time that I spent writing Clojure, I think, was more formative for me and improved my skills as an engineer more than any other time throughout my whole career, even though I haven’t written a line of Clojure code in quite some time now.

Static Types, Even in Dynamic Languages

We’ll start off with static types, and I’m saying even in dynamic languages. I realize that that may be controversial to some folks. We’re going to go back to this bug that we started off with on the sacred timeline and the alligator timeline, where we passed the wrong data type into this Python function. A lot of times when I try to talk to people about opting into static typing in some of these dynamic languages, I hear responses like this, “I can build things faster with dynamic types, and I can spend my time thinking about my business logic rather than having to battle with this complicated type system”. Or, another thing I hear is, “I can avoid those runtime type errors that you’re talking about as long as I have good test coverage that exercises all the code”. I used to believe these two things, and they’re still definitely very reasonable opinions to have, but I’ve drifted away from these.

Working at AWS was probably the place where I really started to drift away from these. Inside of AWS, there’s a lot of language and a lot of shared vocabulary that gets used to try to give people a shared context about how you’re thinking about your work. One of the ones that really stuck with me was this one, “Good intentions don’t work, mechanisms do”. This is a Jeff Bezos quote, but it’s really widely spread through a lot of AWS blogs and other literature. Mechanisms here just means some kind of automated process that takes a little bit of the error-prone decision-making stuff out of the hands of a human and makes sure that the thing just happens correctly. It takes away your reliance on the good intentions of engineers. That’s going to be another key theme of this talk is that a lot of these types of bugs that we’re talking about, they come to play when you have something in your codebase that relies on the good intentions of your engineers to stick to the best pattern.

We’ve got these beloved dynamic languages like Clojure, JavaScript, Ruby, Python. My claim is that if you opt into the static type systems in these languages, you just completely avoid shipping that class of bug to production, no ifs, ands, or buts about it. That particular bug that we started with on the sacred timeline and the alligator timeline just goes away. What’s really powerful about it is you’re taking away this reliance on the good intentions of your engineers. You may have best practices established in your engineering work that whenever you’re using a dynamic language, you better make sure you have thorough test coverage that’s going to prevent you from having one of these kinds of bugs go to production, but you’re relying on the engineers to adhere to that best practice.

Then you hire new people to your team and they don’t know the best practices yet and they’re prone to making mistakes sometimes. Putting that power in the hands of the compiler instead of the humans just eliminates that class of bugs. That doesn’t mean that we have to abandon our favorite dynamic languages. Pretty much all of these languages have added opt-in tools that you can use to get static analysis and static typing. Python has mypy. For JavaScript, TypeScript has obviously become much more popular over the last five years or so. Ruby has a system called RBS. Clojure has several things, including Typed Clojure. Whenever you opt into one of these, you can usually do it pretty gradually. You don’t have to boil the ocean with your codebase. It really just boils down to adding a few little type hints to the method signatures. That little action changes this bug from a runtime bug to a build time bug, where mypy is going to catch this up front and say you can’t pass a string to this function. That allows us to avoid that alligator timeline.
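A minimal sketch of that example (reconstructed from the talk’s description, not the speaker’s verbatim slides):

```python
# Adding type hints turns the runtime TypeError into a build-time error.
def divide_by_four(number: int) -> float:
    return number / 4

value = "20"  # a well-meaning developer creates a string variable
divide_by_four(value)
# mypy: Argument 1 to "divide_by_four" has incompatible type "str";
# expected "int"
```

Running mypy in CI fails the build on that call; without the annotations, the same code would only fail at runtime.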

Null Safety

Second one I’ll talk about is null safety. You’ve probably all heard the phrase about this being the billion-dollar mistake in programming. If you’ve written any Java, you’re probably really familiar with this pattern where every time you write a new function, there’s 15 lines of boilerplate checking all the arguments for nulls up front. Same thing in C#. These are again relying on good intentions. The first thing is you’re relying on your developers to remember to put all those null checks into place. Then, even worse, if they do put the null checks into place, it’s still a runtime error that’s getting thrown, so you’re still subject to the same kind of bugs that led us to the alligator timeline. A lot of the newer languages like Kotlin have started almost taking away support for assigning nulls to normally typed variables. In Kotlin, if you declare a variable of type String, you just can’t assign a null to it. That won’t compile.

If you know that you need it to accept null, then you can put this special question mark operator on the type definition, and that allows you to assign a null to it. Once you’ve done that, you can no longer call the normal string methods directly on that object. The compiler will fail right there. Instead, the compiler will enforce that you’ve either done an if-else and handled the null case, or you can use these special question mark operators to say that you’re willing to just tolerate passing the null along. In either case, the compiler has made you make an explicit decision upfront about what you’re going to do in case it’s null, rather than you essentially finding out about this bug at runtime. Rust is another language where there is no null.

In Rust, the closest thing you have to null is this Option type. Any Option in Rust is either an instance of None or an instance of Some. This is similar to Optional in Java, but in Rust, it’s much more of a first-class concept. In this code here, you can see I declared this function called foo, and I said its argument is a string. I cannot call that function and pass a None in. That’s a compile time error. For bar, I said it’s an Option of string. I can pass a None in or I can pass a Some in, but again I’ve had to be explicit about it and make the decision upfront. Compile time null safety: most languages have some support for this these days.

The languages that have been around the longest like Python, C#, Java, those languages have to deal with a lot of backward compatibility concerns. They can’t just flip a switch and adopt this behavior. In those languages, you’ll probably have to work a little bit harder to figure out how to configure your build tools to disallow nulls, but they all have some support for it. An experiment that I suggest is just writing an intentional bug where you pass a null to something that you know should not accept a null, and then play with your build tool configuration until it catches that at build time rather than allowing it to possibly happen at runtime.
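A minimal version of that experiment in Python with mypy (the function names are illustrative), mirroring the Kotlin and Rust behavior described above:

```python
from typing import Optional

def shout(message: str) -> str:
    return message.upper()

def maybe_shout(message: Optional[str]) -> str:
    # The Optional type forces an explicit decision about the None case
    # before the value can be used as a plain str.
    if message is None:
        return ""
    return message.upper()  # mypy has narrowed the type to str here

shout(None)
# mypy: Argument 1 to "shout" has incompatible type "None"; expected "str"
```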

Immutable Variables, and Classes

Now we’ll move on to number three, which is immutable variables and classes. There are very few things that I’ve worked with in my career that I feel improve the maintainability of my code as much as leaning into immutable variables as much as humanly possible. The main reason for this is that they dramatically reduce the amount of information that you need to keep in your brain when you’re reading a piece of code in order to reason about it and make assertions about it. As an example of that, here’s some Java code. I’ve got this function called doSomething that takes in a foo as an argument. Foo is just a regular POJO in this case. Then it calls doSomethingElse and passes that foo along. Now here’s some calling code that appears in some other file where I construct an instance of the foo, and then I pass it to that doSomething function. Then imagine we have maybe 100 lines of code right here or maybe even more than that.

Then we eventually get to this line of code where we print out foo. If I’m an engineer working on a feature in this codebase and the change that I want to make is somewhere around this line that’s doing the print statement, what can I assert about the state of my foo at this point in the code? Were there any statements in between those two that might have modified my foo? It’s certainly possible, so I’m going to have to read all that code to find out. Was my foo passed by reference to any functions that might’ve mutated it? Yes, it was passed to doSomething, and that passed it along to doSomethingElse. Does that mean that I need to go examine the source code of all of those functions in order to be able to reason about the state that this variable is going to be in when I get to this line of code? The answer to that is basically yes. Without knowing what’s happening in every one of those pieces of code, I have no idea whether this variable got mutated in between those two points in time.

Then the situation gets infinitely more difficult if you have concurrency in your program. If this doSomethingElse function is potentially passing that reference to some pool of background threads that may be doing work in the background, then you can imagine a scenario where I add another print line here, just two print lines in a row printing this variable out twice. I can’t even assert that it’s going to have the same value in between those two print statements because some background thread might’ve changed it in between the two.

Again, I have to go read all of the code everywhere in my application to know what I can and can’t assume about this variable at this point in time. That just slows me down a lot. An alternate way to handle this with newer versions of Java is rather than foo being a POJO, we use this new keyword called record, which basically makes it a data class. It means that it’s going to have these two properties on it and they can’t change ever. It’s an immutable piece of data.

Then I also add this final keyword, which says that nobody can reassign this variable anywhere else in this scope here. With those two changes in place, I know that nobody can have reassigned my foo to a different foo object because that would have been a compile time error. I also know that nobody can have modified this inner property, this myString, because that also would be a compile error. I don’t care anymore that we passed a reference to this variable to the Bar.doSomething method, because no matter what it does, it can’t have modified my data. I don’t have to worry about that. Also, if there’s 100 lines of code here, I know that they can’t have modified it. I no longer have to spend any time thinking about the state that might have changed in between these lines of code.

When I get down here to this print statement, I know exactly what it’s going to print. That means I can just move on with my changes that I want to make to the code without getting distracted by having to page all of the rest of this application into my brain and think about it. Most languages have some support for this these days. Kotlin definitely has data classes. Clojure, everything’s immutable by default. Java has records and final. You can find this in pretty much any programming language. TypeScript and Rust, you have to roll your own a little bit, but it’s definitely possible to follow these patterns.
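A rough sketch of the record-plus-final idea in Python terms (a frozen dataclass; the Foo name just mirrors the talk’s example):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Foo:
    my_string: str

def do_something(foo: Foo) -> None:
    # Whatever this function does, it cannot mutate foo's fields.
    print(foo.my_string)

foo = Foo(my_string="hello")
do_something(foo)
foo.my_string = "changed"  # mypy flags this at build time; at runtime
                           # it raises dataclasses.FrozenInstanceError
```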

Persistent Collections, and Immutable Collections

That leads us into a related but slightly different topic, which is about collections. I also want my collections to be immutable for the same reasons, but that’s a little bit harder. You can see this line of code here where I’m constructing a Java ArrayList. I’m using this final keyword because I want this to be immutable. I want to be able to make those assumptions about the state of my list without having to spend a bunch of time reading my other code. The problem is this ArrayList provides these mutation functions, the .add, .remove, whatever else. I’m right back in the world where I was before, where these other functions that I’m calling, these other lines of code that might happen here, they can mutate that list in any number of ways.

Again, I cannot make any mental assertions about what this list has in it by the time I get to this point in my code. Recent versions of Java have added some stuff, like this new List.of factory function that actually does produce an immutable list, which is what I want. Now I don’t have to worry about the fact that I’ve called doSomething, because I know that this list is immutable. I do, again, know that by the time I get to this print statement, I know what my list has in it. The flaw with that is you’ll notice this List.of factory function is still returning the normal List interface.

That List interface provides these mutating functions like add, remove, whatever else. Even worse, if I call those now, it’s a runtime error. The compiler won’t detect that this is a problem, but the program will throw an error at runtime. Now I’m back to the world of relying on good intentions. Now I’ve got this immutable list, which is what I wanted, but if I’m passing it around to all these other functions and only advertising it as a List, then they may try to call the mutation functions on it and then we get a bug at runtime.

Some of the more modern languages like Kotlin have solved this problem by making collections immutable by default. If I say listOf, then I get an immutable list and it doesn’t have any methods on it like add. Again, compile time error if somebody tried to call that. Kotlin does also have mutable variants of those collections. If I really need one, I can have one, but the key here is that it has a separate interface for the two. I can lean into the immutable interface in all the places where I want to make sure that I don’t have to worry about somebody modifying the collection underneath me.
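Python’s typing module draws a similar read-only/mutable split; a minimal sketch, assuming mypy:

```python
from typing import Sequence

def total(xs: Sequence[int]) -> int:
    # Sequence is the read-only interface: it has no .append or .remove,
    # so mypy rejects any attempt to mutate xs inside this function.
    return sum(xs)

items: tuple[int, ...] = (1, 2, 3)  # a genuinely immutable collection
print(total(items))
```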

Whenever I talk about this concept of these immutable collections, people ask me, what about performance? Your code is going to make changes to the collection over time, otherwise your code’s not doing anything interesting. Doesn’t that mean that we have to clone the whole collection every time we need to make a modification to it, and isn’t that super slow and memory intensive? The answer to that is, no, thankfully. There’s a really cool talk from QCon 2009 from Rich Hickey, the author of Clojure, about persistent data structures, which are the data structures that he built in as the defaults in Clojure. They present themselves to you as a developer as immutable at all times. When you have a reference to one, it’s guaranteed to be immutable. It provides modifier functions like add, remove, but what they do is produce a new data structure and give you a reference to it.

Now you can have two references, one to the old one, one to the new one, and neither one of them can be modified by other code out from underneath you. The magic is, behind the scenes, they’re implemented via trees and they use structural sharing to share most of the memory that makes up the collection. It’s actually not nearly as expensive as you might fear. This was a hard thing for me to wrap my head around when I first started writing Clojure. I was like, that can’t possibly be performant. It’s a really nice solution to the problem. In practice, the way that they’re implemented, you almost never need to clone more than about four nodes in the tree in order to make a modification to it, even if there’s millions of nodes in the tree. This is a slide from Rich’s QCon talk where he talks about how these are implemented. What you can see here is two trees. The one on the left with the red outline, that’s the root node of the original collection. It has all these values in it.

On the right, he shows what happens when we want to add a new child node to the purple node with the red outline. To implement that, we just clone all the parent nodes on the path down to that one, and we add the new child node there. Then the rest of the child nodes of all of these new nodes that we’ve created just point back to the same exact memory from the original data structure. We’ve cloned four tiny little objects and retained 99% of the memory that we were using from the original collection. With this pattern, you can have your cake and eat it too. You can have a collection that presents itself to you as immutable so that you know that it can’t be modified out from underneath you while you’re working on it. You don’t have to sacrifice performance when other threads, for example, need to change it.

This is hugely powerful in concurrent programming because there’s all kinds of problems that you can run into with shared collections across multiple threads in your concurrent code, where you either have to do a lot of locking to make sure that one thread doesn’t modify it while another thread is using it, or you can end up just running into these weird race conditions that cause runtime errors. With this pattern, once a thread grabs a reference to this collection, you know that that collection’s not going to change while you’re consuming it. After it’s done with it, it can go grab a new reference to the collection, which might have been updated somewhere else, but again, that one will be immutable, and we don’t have to worry about it being modified out from underneath us either. Clojure and Scala have these kinds of collections built right into their standard library, but every other programming language that I’ve looked into has great libraries available on GitHub for this, and they’re usually widely used and battle-tested.
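In Python, for example, the third-party pyrsistent library provides persistent collections in this style; a small sketch:

```python
from pyrsistent import pvector

v1 = pvector(range(1_000_000))
v2 = v1.append(42)  # returns a new vector; v1 is untouched

# Structural sharing means v2 reuses almost all of v1's internal tree
# nodes, so this "copy" allocates only a handful of new nodes.
assert len(v1) == 1_000_000
assert v2[1_000_000] == 42
```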

Errors as Return Types – Simple, Predictable Control Flows

Now we’re going to move on to errors as return types. This one has mostly to do with control flow. When I’m talking about this one, I like to reflect on the history of Java and how at the beginning of Java, it was really common for us to have these checked exceptions versus unchecked exceptions. Method signatures would be really weird depending on whether they’re using checked or unchecked exceptions. These are trying to do exactly what I’m advocating for in this talk. They were trying to give us compile time safety to make sure that we were handling these errors that might happen.

In practice, we just collectively decided we did not like the ergonomics of how it was implemented and we drifted away from it over time. I think one of the funniest examples of that evolution is in the standard library of Java itself, the basic URI class that you use for everything that has to do with networks. It throws a checked exception called URISyntaxException whenever you call its constructor, which means you literally cannot construct one of these objects without the compiler forcing you to put this try-catch there, or without you changing your method signature to advertise that you’re going to rethrow that.

Then everybody else who’s calling your function now has to deal with the same problem. Everybody hated that because the odds that we were going to actually pass something in there that would cause one of these exceptions were really low, and it drove people crazy. A couple of releases later, Java added a static factory function called create, and literally all it does is call the constructor, catch the exception, and rethrow it as a runtime exception. They put that into the standard library. That was an interesting trend to observe. Likewise, all of the JVM languages that have appeared in the last 10, 15 years, Kotlin, Scala, Clojure, have all basically gotten rid of these checked exceptions in favor of runtime exceptions. That means now all of our errors are runtime errors. That again is really against the grain of what I’m pitching in this talk. It means now we have to go read the docs or the code for every function we’re calling and make sure we know what kinds of exceptions it could be throwing, and handle them successfully. We’re back to good intentions.

Go is the first language in recent memory that tickled something in my brain about different ways to solve this problem. Go really leaned into the syntax of: if you’re going to call a function that might cause some kind of error, instead of there being an exception with weird control flow semantics relying on this weird try-catch syntax, it just returns a tuple instead. You either get your result back or your error back. One of those is going to be nil whenever you call this function. Then the compiler can force that you’ve done some checking on that nil, and you’ve decided how to handle it.

This is, again, the compiler doing this work rather than relying on good intentions. The other thing that I really like about this is we’re just using an if-else statement to interact with this error. It’s not a new special language construct that differs from how we’re dealing with all the other pieces of data in our code, like a try-catch is. It’s just the same type of code we’d write for any other piece of data. We get clearer control flow. It allows the compiler to enforce more explicit handling and prevents us from silently swallowing exceptions. Again, we can use our normal language constructs rather than the special try-catch stuff. Here’s a Rust equivalent of that. In Rust, there’s this type called Result. Any instance of Result is either an Err or an Ok.

It’s also a generic type. If it’s a success, an Ok, then the type is going to be a 32-bit integer. If it’s an error, then the value is going to be a string. Then we can use this pattern match statement and say, if it’s Ok, then I’m going to do something with the success case. If it’s an Err, then I’m going to do something with the error case. In these case statements, we get back the types that we declared in the Result declaration.
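The same idea ports to other languages as well; here is a sketch of a Result type in Python using 3.10’s match statement (the Ok and Err names are borrowed from Rust, not a standard library API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ok:
    value: int

@dataclass(frozen=True)
class Err:
    message: str

Result = Ok | Err

def parse_port(raw: str) -> Result:
    # The error is an ordinary return value, not an exception with
    # special control flow; the caller must decide what to do with it.
    if raw.isdigit():
        return Ok(int(raw))
    return Err(f"not a number: {raw!r}")

match parse_port("8080"):
    case Ok(value):
        print(f"listening on {value}")
    case Err(message):
        print(f"bad config: {message}")
```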

Exhaustive Pattern Matching, and Algebraic Data Types

Errors as return types help us move our bugs from runtime to build time. I’ve shown you that they’re pretty ingrained in the languages in both Go and Rust, but can we do this in other languages? That leads me into my last topic, which is exhaustive pattern matching and algebraic data types. I’m going to explain what those are a little bit, and then I’m going to close the loop on the error handling part of this. What is an algebraic data type? It’s basically like a polymorphic class. You can imagine if you had a parent class called shape, and then you had child classes called circle, square, octagon. It’s basically just that, except the compiler knows upfront all of the existing subtypes that can exist rather than it being open-ended. Most modern languages have some way of expressing these now.

Then they have these pattern matching statements that you can use to branch on which ones of the types that you end up getting. Here’s an example in TypeScript. You can see I’ve declared this type called shape, and this little or operator just means I’m unioning together several other different types. The key in TypeScript is that I have this common property, which I happen to call type, but you could call it whatever you wanted. As long as all of the types that you’re declaring have that property, and they all have a unique value for it, then the compiler can tell the difference between all of these types. Then I can do a pattern match statement on that variable, and then I can do these case statements to handle the individual branch. This is really cool because the compiler is smart enough to know once I get inside this circle branch, that I’m going to have a radius property available, and I’m not going to have a width and height. If I tried to reference width or height here, the compiler would fail, and it wouldn’t allow me to write that code.

Conversely, the same thing with the rectangle. It gives me a lot of type safety. Exhaustive pattern matching is basically just that same concept, but the compiler can give you a build time error if your pattern match statement doesn’t cover all the possible cases. This is why algebraic data types are important, because we want the compiler to know all of the legal types that are available. Most of the languages that have this stuff, they have the support for an exhaustive pattern match statement. Not all of them have it enabled by default. In TypeScript, you’ve got to turn that on as a compiler option. If you turn it on, then this becomes an exhaustive pattern match statement. What that means is if I go modify the definition of my shape type, and I add a third one in here called square, now this shape definition may be in one file somewhere in my code, and I may have these pattern match statements scattered throughout lots of other places in my code. They’re not guaranteed to live right next to each other.

As an engineer, if I come in here and I add in this new square type, then the next thing I got to do is search all over my codebase and find all the places where I might have been doing one of these pattern matches and make sure that I add support for the square. If I don’t do that, then we can get some weird runtime failure. With an exhaustive pattern match, you’re telling the compiler that you want it to fail if it finds a pattern match statement where you’re not explicitly handling all the cases. This would fail to compile in TypeScript because I don’t have a handling for the square case here, and I have to go add it before it’ll build. That’s really powerful.

Similar concept in Kotlin. In Kotlin, these algebraic data types are called sealed classes. You can see here I’ve got one where it can either be a success1 or a success2. I’ve got this when statement that I can use as a pattern match on it. What I want to show here is, if the function that I’m using to get this result might throw an error, this is where I’m going to tie this back into the error handling, this thing might cause an error. I have to put in this try-catch statement, and I have to know what exception type might get thrown here. We’re in good intentions land again. I might forget to put that try-catch statement in there. I might not handle all of the different types of exceptions that could possibly get thrown by that function.

If I make a small change to the way I model this, and I just add the error in as a different branch of this Result sealed class, then I get to take advantage of all this other stuff that I’ve just shown you. This is what my code looks like now. I just have a new branch in my pattern match statement that handles the error case. Now I’m not relying on good intentions to put the try-catch into the code. The compiler, because this is an exhaustive pattern match statement, will fail to build if I haven’t added the branch to handle this error. This code is just cleaner and simpler. It doesn’t involve this extra level of nesting and weird special case code. I highly recommend looking into the support for this in various languages. This is one of the more recent trends that I’ve seen. In Java, it didn’t come in until Java 17. In Python, it was introduced in Python 3.10. You can find something that will allow you to do this in pretty much whatever programming language you’re using.
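In Python, the equivalent combines a union type, the match statement, and assert_never (in typing from 3.11, or typing_extensions before that); a minimal sketch using the talk’s shapes:

```python
from dataclasses import dataclass
from typing import assert_never

@dataclass(frozen=True)
class Circle:
    radius: float

@dataclass(frozen=True)
class Rectangle:
    width: float
    height: float

Shape = Circle | Rectangle

def area(shape: Shape) -> float:
    match shape:
        case Circle(radius):
            return 3.14159 * radius * radius
        case Rectangle(width, height):
            return width * height
        case _:
            # If a new variant (say, Square) is added to Shape but not
            # handled above, mypy fails this line at build time.
            assert_never(shape)
```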

Key Takeaways

Allowing bugs to surface at runtime can be really expensive. That can put us on that alligator timeline that we’re trying to avoid. Modern language trends are giving us really cool tools to catch bugs at build time instead of allowing ourselves to be subject to this problem. These same trends, I think, have this nice side benefit that they make the code more maintainable and easy to reason about anyway. It’s like a double win. More maintainable code obviously leads to increased developer productivity. It makes it easier for your teammates, present and future, to understand your codebase and feel confident about making changes to it. What we’re really trying to do here is find places where we can avoid relying on good intentions to solve problems.

The specific language features that I’m advocating here: leaning into type checkers for dynamic languages; configuring your build tools to disallow nulls; using immutable variables and data classes wherever you can; finding a persistent collections library in your language if it’s not built into the standard library; surfacing errors as return values, not exceptions; and using exhaustive pattern matching with algebraic data types to let the compiler make sure that you’re handling all the cases whenever you can. It also allows you to model your business logic a little bit more concretely. It’s really nice.

Questions and Answers

Participant 1: Are there any good examples in the open-source world that you could point to that use a lot of the best practices you were talking about?

Price: I’ve found that the way to find the good examples is to find projects that are built in these languages that make this stuff be core constructs. Any project that you find in Rust is going to be forced to follow a lot of these paradigms just because that’s how the language was designed. Many or most Kotlin projects that I’ve seen really lean into the immutable variables and the pattern matching stuff. Mostly I think about that by language more so than by specific projects.

Participant 2: Regarding the principles that you mentioned, let’s say I’m primarily a Java shop, and Java checks many of those boxes. Would you suggest that I try Go or Rust and start moving my platform to a mix of these languages, or would you say stick to just one language that checks everything?

Price: It probably depends a lot on your team and their interest and willingness to branch out into different languages. There is obviously a cost for managing codebases in multiple different languages. With Java in particular, like inside of AWS over the last five years, there’s been a really big shift towards teams that had big existing Java applications starting to add new features to them using Kotlin, because Kotlin has really good JVM interop, and so you don’t have to rewrite the rest of your code. You can just start adding Kotlin classes to it. When you write your Kotlin code initially, you can choose to write it in a way that makes it look almost exactly like Java code, so it’s really familiar to your engineers that already have experience with that. You can start experimenting with the features over time and gradually migrating things over.

I think in most big existing codebases, that’s always going to be a more successful long-term strategy than trying to just cut everything over all at once because that just ends up usually not being practical given the business requirements for delivering new features and stuff like that. That’s one thing to consider. If you have some isolated project, like a new microservice that isn’t tightly coupled to your existing codebase, then that’s a reasonable place to consider trying a new language. Then, like I mentioned before, just finding some little toy side project when you have the time and interest to play around with these different languages and get a sense for how you feel about them. That really helps you decide whether it’s something that you want to lean into or not.

Participant 3: When you talked about exhaustive pattern matching and you showed the switch statement, my mind immediately went to traditional interfaces and virtual classes. Why would I want to add a new enumeration to my result rather than adding a new implementation that does the different things? It was more when you did the shapes thing. When I wanted to add a square, why couldn’t I just have a shape interface and three different implementations? When I want to add a polygon or whatnot, I don’t have to go everywhere; I just have an area method on the interface that I would have to implement.

Price: There’s a lot of ways to skin this cat. The thing that I’m really advocating for here is choosing one that gives you the exhaustive pattern matching, so that when you do make that addition to the parent type, the compiler can automatically catch all the places in the code where you haven’t added support for it. There are definitely other ways to handle that besides this one. I like this best in TypeScript because in TypeScript, if you use interfaces or subclasses, then you have to start using this weird instanceof keyword, and it gets into that realm of JavaScript where it behaves really differently for one type of data than it does for another type.

In JavaScript, if you ever had a piece of code that’s trying to check and see if a variable is a string, it may be like, if is instanceof string, or thing.type equals string, or several other conditions that you have to check just because JavaScript gets wonky when you start trying to do reflection type stuff on that. Of the different patterns that I have personally played around with in TypeScript that really work well with this exhaustive pattern matching, this is the one that has been the most foolproof for me. This is the one that Google uses in their implementation of Protobuf. Protobuf has this concept of OneOf where you can say that a piece of data is either this thing, or this thing, or this thing. If you look at the way that Google’s Protobuf libraries generate TypeScript code to handle those OneOfs, this is the way that they do it. It’s just worked really well for us when we tried it out. It’s not the only solution though.




Cultivating a Culture of Resilience in Software Organizations

By Ben Linders

Resilience helps individuals and organizations respond to challenges. According to Kathleen Vignos, personal resilience is built through adapting, technical resilience by mastering a variety of tools, and organizational resilience through flexibility and strong networks. In fast-changing software industries, recognizing tech shifts and fostering learning, flexibility, and collaboration enhances resilience.

Kathleen Vignos gave a talk about cultivating a culture of resilience at QCon San Francisco.

Something that is resilient can bend without breaking. You find out how resilient something is once you’ve stretched, pushed, and tested it, Vignos said.

Personal resilience is built when things don’t go the way you planned, and you have to learn how to quickly adjust, Vignos mentioned. Technical resilience is built when you have to continually switch familiar tools with new ones, and organizational resilience is built along with your professional network so that if one organization fails, you have more options at hand, she added.

Any company on the edge of rapid innovation needs resilience to survive, as Vignos explained:

Few industries change as quickly as software. The skills and tools evolve, the players evolve, the funding evolves, and even the users evolve. Constant external factors influence the business and require nimble adjustments to roadmaps, org charts, and skill sets.

Technology evolves in waves and patterns; understanding these cycles can make us more resilient to change. Historically, we’ve seen major shifts such as the rise of the internet, mobile, cloud computing, and now AI, Vignos mentioned. Each wave follows a pattern: emergence, hype, disruption, maturity, and commoditization. Organizations and individuals that recognize these phases early can position themselves to be resilient to these shifts, she said.

To become more resilient to technology shifts, it helps to employ modular architectures with pluggable APIs, to keep abreast of emerging technology trends, and to inspire a culture of continuous learning, Vignos said.

To build a resilient culture in organizations, organizational leaders need to enable adaptability, learning, and sustained performance during periods of change within their organizations, Vignos mentioned. She recalled that from 2016 to 2018 many companies needed to respond to the General Data Protection Regulation (GDPR), set to be enforced in 2018. In order to operate in the EU, companies needed to comply with rules that gave users the right to access their data, erase their data, restrict the use of their data, and be informed about the use of their data:

At that time, I was involved in a company-wide project to prepare for GDPR. The teams and individuals most important to the success of the project exhibited curiosity and a growth mindset to learn how to protect users; a willingness to change processes and development lifecycles to adapt; and a collaborative spirit to join together with new teams and divisions to coordinate getting a massive amount of work done in a relatively short period of time. We avoided multi-hundred-million dollar fines and maintained the ability to serve tens of millions of users in the EU.

Promoting a growth mindset and experimentation, fostering autonomy, and strengthening cross-functional collaboration all contributed to a resilient culture, paving the way to compliance and ensuring the company’s ongoing financial survival, Vignos said.

Artificial intelligence can help enhance resilience, Vignos mentioned, enabling faster decision-making by analyzing large data sets, automating tedious tasks, and supplying predictive insights, helping organizations and employees adapt, recover, and thrive in the face of change.

Every hardship and challenge throughout my career has provided an opportunity for me to build resilience, Vignos said. I like the way Oprah says it: “Turn your wounds into wisdom”, she concluded.

InfoQ interviewed Kathleen Vignos about cultivating a culture of resilience.

InfoQ: Why does resilience matter for software companies?

Kathleen Vignos: As an example, I can remember when the Clubhouse app became popular in 2020. Suddenly tech entrepreneurs, venture capitalists and celebrities were chatting informally with each other and attracting large audiences. Meanwhile, at Twitter, we had acquired Periscope a few years before, and had used the technology internally to host all company meetings with thousands of users (and speaking of scaling, shoutout to the engineer who could spin up a new AWS region for Periscope with about 15 lines of code).

Clubhouse initially could not scale beyond 1000 listeners, creating a prime opportunity for Twitter to leverage Periscope technology to launch Twitter Spaces in just a few months. With massive scaling capabilities and the advantage of a network of over 300M monthly active users to draw upon for listeners, Twitter Spaces (and other platforms like Facebook Live Audio) contributed to Clubhouse’s decline.

This story captures a point in time when one company’s leaders, employees, and technology exhibited resilience and agility over another. The one best equipped to respond to change survived – at least for a couple more years.

InfoQ: How can software developers increase their personal resilience?

Vignos: I can think of several examples through my development career when I got partway through delivering a project, only to have the roadmap change. Whether it was because of funding changes, or a leadership change, or some other reason, it felt demotivating not to be able to finish the work I’d started and become invested in.

The advice is to hold the work lightly. For long projects, identify milestones that can give you a sense of shorter-term accomplishment, no matter the long-term outcome. Stay flexible to hand off work to others, and be able to shift to take on new work when those opportunities arise.

I used to be paid hourly as a freelance developer – so I reminded myself that I got paid even when the client changed their mind and scrapped the work. Let it go!



Cloudflare AutoRAG Streamlines Retrieval-Augmented Generation

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

Cloudflare has launched a managed service for using retrieval-augmented generation (RAG) in LLM-based systems. Now in beta, Cloudflare AutoRAG aims to make it easier for developers to build pipelines that integrate rich context data into LLMs.

Retrieval-augmented generation can significantly improve how accurately LLMs answer questions involving proprietary or domain-specific knowledge. However, its implementation is far from trivial, explains Cloudflare product manager Anni Wang.

Building a RAG pipeline is a patchwork of moving parts. You have to stitch together multiple tools and services — your data storage, a vector database, an embedding model, LLMs, and custom indexing, retrieval, and generation logic — all just to get started.

To make matters worse, the whole process must be repeated each time your knowledge base changes.

To improve on this, Cloudflare AutoRAG automates all steps required for retrieval-augmented generation: it ingests the data, automatically chunks and embeds it, stores the resulting vectors in Cloudflare’s Vectorize database, performs semantic retrieval, and generates responses using Workers AI. It also monitors all data sources in the background and reruns the pipeline when needed.

The two main processes behind AutoRAG are indexing and querying, explains Wang. Indexing begins by connecting a data source, which is ingested, transformed, vectorized using an embeddings model, and optimized for queries. Currently, AutoRAG supports only Cloudflare R2-based sources and can process PDFs, images, text, HTML, CSV, and more. All files are converted into structured Markdown, including images, for which a combination of object detection and vision-to-language transformation is used.

The querying process starts when an end user makes a request through the AutoRAG API. The prompt is optionally rewritten to improve its effectiveness, then vectorized using the same embeddings model applied during indexing. The resulting vector is used to search the Vectorize database, returning the relevant chunks and metadata that help retrieve the original content from the R2 data source. Finally, the retrieved context is combined with the user prompt and passed to the LLM.
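To make the query flow concrete, here is a minimal sketch of calling AutoRAG's AI Search over HTTP. It is illustrative only: the endpoint path follows Cloudflare's published AutoRAG REST API, but the `my-rag` instance name, the credentials, and the exact response shape are assumptions to verify against the current documentation.

```python
import os
import requests

# Assumed AutoRAG AI Search endpoint; confirm the path in Cloudflare's docs.
ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]
API_TOKEN = os.environ["CF_API_TOKEN"]
RAG_NAME = "my-rag"  # hypothetical AutoRAG instance name

url = (
    "https://api.cloudflare.com/client/v4/accounts/"
    f"{ACCOUNT_ID}/autorag/rags/{RAG_NAME}/ai-search"
)

# AutoRAG rewrites and embeds the query, searches Vectorize, and passes
# the retrieved chunks plus the question to the LLM in a single call.
resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"query": "What is our refund policy?"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```

Inside a Cloudflare Worker, the same search is exposed through the Workers AI binding rather than raw HTTP, which avoids handling an API token in application code.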

On LinkedIn, Stratus Cyber CEO Ajay Chandhok noted that “in most cases AutoRAG implementation requires just pointing to an existing R2 bucket. You drop your content in, and the system automatically handles everything else”.

Another benefit of AutoRAG, says BBC senior software engineer Nicholas Griffin, is that it “makes querying just a few lines of code”.

Some skepticism surfaced on X, where Poojan Dalal pointed out that “production grade scalable RAG systems for enterprises have much more requirements and components than just a single pipeline”, adding that it’s not just about semantic search.

Engineer Pranit Bauva, who successfully used AutoRAG to create a RAG app, also pointed out several limitations in its current form: few options for embedding and chunking, slow query rewriting, and an AI Gateway that only works with Llama models—possibly due to an early-stage bug. He also noted that retrieval quality is lacking and emphasized that, for AutoRAG to be production-ready, it must offer a way to evaluate whether the correct context was retrieved to answer a given question.



Investors eye activist intervention as MongoDB struggles with growth and valuation

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Investing.com — MongoDB (NASDAQ:MDB), once a high-flying software darling, is facing calls from investors for a shake-up, as shares remain down more than 50% from their 52-week high and full-year guidance reveals a sharp deceleration in growth. With pressure building, investors are increasingly hoping that a large activist investor will step in to force operational changes and potentially push the company to explore a sale of the business.

The database platform provider reported solid fiscal Q4 results, beating earnings and revenue expectations with EPS of $1.28 versus consensus estimates of $0.66 and revenue of $548.4 million against $520.5 million expected. However, the market’s focus quickly shifted to FY 2026 guidance, which disappointed on both the top and bottom line. Revenue is projected at $2.24 billion to $2.28 billion, below the $2.32 billion analysts had forecast, while full-year EPS is expected to come in between $2.44 and $2.62, well under the $3.39 consensus estimate.

The stock reaction has been stark. Despite MongoDB’s long record of innovation and substantial revenue growth since its 2017 IPO, the market is now digesting the reality that the company is transitioning from a high-growth narrative to a more mature, slower-growth business model. And with that transition, investors say, should come a reassessment of cost structure and strategy.

One area of particular focus is MongoDB’s operating expenses, which remain steep relative to its current cash flow profile. The company spent nearly $600 million on research and development in fiscal 2025, roughly four times the $150 million it generated in operating cash flow. General and administrative costs totaled an additional $220 million. Investors see an opportunity for significant margin improvement through more aggressive cost controls.

Randian Capital, an investor who has followed MongoDB since its early days as a public company, pointed to this misalignment between growth and expense as a key issue, in exclusive comments made to Investing.com. “MDB is spending almost $600mm per year on R&D, relative to a company that generated $150mm in cash from operations in 2025,” Randian wrote. The firm believes “the time is right for MDB to cut costs meaningfully across R&D and the $220mm in annual G&A costs.”

Beyond cost discipline, investors, such as Randian, believe MongoDB should consider strategic alternatives, including a possible sale. With a growing list of slowing software businesses becoming acquisition targets, some argue that MongoDB’s product and market position make it highly attractive to both strategic buyers and private equity. Large tech players such as Amazon (NASDAQ:AMZN), Oracle (NYSE:ORCL), IBM (NYSE:IBM), and SAP have been floated as potential suitors, and an LBO has also been seen as a viable option.

“MongoDB should explore a sale process,” Randian added, noting that “MDB presents a rare case of a business that has a large cost cut opportunity and clear visibility of many years of growth ahead.”

While MongoDB’s leadership under CEO Dev Ittycheria has garnered praise for guiding the company from a niche open-source project to a widely adopted enterprise platform, the business has entered a more mature phase. FY 2026 marks what may be the first year of consistent low double-digit revenue growth after years of 30%-plus expansion. For some investors, that inflection point makes the case for external involvement to reassess capital allocation and long-term positioning.

A properly executed turnaround, paired with a potential monetization event, could help rebuild investor confidence, many argue. MongoDB’s highly differentiated technology, particularly its appeal to developers working on flexible, scalable applications, remains valuable in a software market looking for durable platforms.

For now, no activist investor has taken a public stake, but the conditions (profitability potential, underperformance, and strategic interest) are increasingly aligned. With renewed scrutiny on costs and a growing call to evaluate all options, the company may soon be forced to respond to the pressures building from its investor base.


Article originally posted on mongodb google news. Visit mongodb google news



MongoDB, Inc. (NASDAQ:MDB) Shares Acquired by Boothbay Fund Management LLC

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Boothbay Fund Management LLC increased its stake in MongoDB, Inc. (NASDAQ:MDB) by 352.6% during the 4th quarter, according to its most recent Form 13F filing with the Securities and Exchange Commission. The institutional investor owned 10,066 shares of the company’s stock after acquiring an additional 7,842 shares during the period. Boothbay Fund Management LLC’s holdings in MongoDB were worth $2,343,000 at the end of the most recent reporting period.

Several other hedge funds have also modified their holdings of MDB. Morse Asset Management Inc acquired a new stake in shares of MongoDB in the third quarter valued at $81,000. Virtu Financial LLC raised its stake in shares of MongoDB by 351.2% in the third quarter. Virtu Financial LLC now owns 10,016 shares of the company’s stock valued at $2,708,000 after acquiring an additional 7,796 shares in the last quarter. Wilmington Savings Fund Society FSB acquired a new stake in MongoDB in the third quarter valued at $44,000. Tidal Investments LLC raised its stake in MongoDB by 76.8% in the third quarter. Tidal Investments LLC now owns 7,859 shares of the company’s stock valued at $2,125,000 after buying an additional 3,415 shares in the last quarter. Finally, Principal Financial Group Inc. raised its stake in MongoDB by 2.7% in the third quarter. Principal Financial Group Inc. now owns 6,095 shares of the company’s stock valued at $1,648,000 after buying an additional 160 shares in the last quarter. 89.29% of the stock is currently owned by institutional investors and hedge funds.

MongoDB Stock Performance

NASDAQ:MDB opened at $173.50 on Monday. The stock has a fifty day moving average price of $195.15 and a two-hundred day moving average price of $248.65. The stock has a market capitalization of $14.09 billion, a price-to-earnings ratio of -63.32 and a beta of 1.49. MongoDB, Inc. has a 1 year low of $140.78 and a 1 year high of $387.19.

MongoDB (NASDAQ:MDB) last released its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing the consensus estimate of $0.64 by ($0.45). The business had revenue of $548.40 million during the quarter, compared to analysts’ expectations of $519.65 million. MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. During the same quarter in the previous year, the company posted $0.86 EPS. On average, analysts predict that MongoDB, Inc. will post -1.78 EPS for the current year.

Analyst Ratings Changes

A number of research firms have weighed in on MDB. Needham & Company LLC cut their price target on MongoDB from $415.00 to $270.00 and set a “buy” rating on the stock in a research note on Thursday, March 6th. Piper Sandler cut their price target on MongoDB from $280.00 to $200.00 and set an “overweight” rating on the stock in a research note on Wednesday, April 23rd. Canaccord Genuity Group cut their price target on MongoDB from $385.00 to $320.00 and set a “buy” rating on the stock in a research note on Thursday, March 6th. Redburn Atlantic upgraded MongoDB from a “sell” rating to a “neutral” rating and set a $170.00 price target on the stock in a research note on Thursday, April 17th. Finally, Wells Fargo & Company cut MongoDB from an “overweight” rating to an “equal weight” rating and cut their price target for the company from $365.00 to $225.00 in a research note on Thursday, March 6th. Eight investment analysts have rated the stock with a hold rating, twenty-four have issued a buy rating and one has issued a strong buy rating to the company’s stock. According to MarketBeat.com, the stock currently has an average rating of “Moderate Buy” and an average price target of $294.78.


Insider Buying and Selling at MongoDB

In related news, CAO Thomas Bull sold 301 shares of the stock in a transaction on Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total transaction of $52,148.25. Following the sale, the chief accounting officer now owns 14,598 shares of the company’s stock, valued at $2,529,103.50. This trade represents a 2.02% decrease in their ownership of the stock. The transaction was disclosed in a legal filing with the Securities & Exchange Commission, which is available at the SEC website. Also, CFO Srdjan Tanjga sold 525 shares of the stock in a transaction on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total value of $90,961.50. Following the sale, the chief financial officer now directly owns 6,406 shares in the company, valued at approximately $1,109,903.56. The trade was a 7.57% decrease in their position. The disclosure for this sale can be found here. Over the last 90 days, insiders have sold 47,680 shares of company stock valued at $10,819,027. Insiders own 3.60% of the company’s stock.

MongoDB Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.


Article originally posted on mongodb google news. Visit mongodb google news



MongoDB, Inc. (NASDAQ:MDB) Given Consensus Recommendation of “Moderate Buy” by Analysts

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Shares of MongoDB, Inc. (NASDAQ:MDB) have been given an average recommendation of “Moderate Buy” by the thirty-three research firms that are currently covering the company, Marketbeat Ratings reports. Eight equities research analysts have rated the stock with a hold recommendation, twenty-four have given a buy recommendation and one has issued a strong buy recommendation on the company. The average twelve-month target price among analysts that have updated their coverage on the stock in the last year is $294.78.

A number of research analysts have weighed in on the company. Rosenblatt Securities reiterated a “buy” rating and set a $350.00 target price on shares of MongoDB in a report on Tuesday, March 4th. Truist Financial decreased their price objective on MongoDB from $300.00 to $275.00 and set a “buy” rating for the company in a report on Monday, March 31st. Monness Crespi & Hardt upgraded shares of MongoDB from a “sell” rating to a “neutral” rating in a report on Monday, March 3rd. Scotiabank reiterated a “sector perform” rating and issued a $160.00 price target (down from $240.00) on shares of MongoDB in a research note on Friday. Finally, KeyCorp lowered shares of MongoDB from a “strong-buy” rating to a “hold” rating in a research note on Wednesday, March 5th.


Insider Buying and Selling


In other news, Director Dwight A. Merriman sold 3,000 shares of the business’s stock in a transaction on Monday, February 3rd. The shares were sold at an average price of $266.00, for a total transaction of $798,000.00. Following the completion of the sale, the director now owns 1,113,006 shares of the company’s stock, valued at $296,059,596. The trade was a 0.27% decrease in their position. The sale was disclosed in a legal filing with the SEC, which is available at this link. Also, CAO Thomas Bull sold 301 shares of the firm’s stock in a transaction on Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total value of $52,148.25. Following the completion of the transaction, the chief accounting officer now directly owns 14,598 shares in the company, valued at approximately $2,529,103.50. This represents a 2.02% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders sold 39,345 shares of company stock valued at $8,485,310 over the last 90 days. Company insiders own 3.60% of the company’s stock.

Institutional Trading of MongoDB

Hedge funds and other institutional investors have recently bought and sold shares of the stock. Cloud Capital Management LLC bought a new stake in MongoDB during the first quarter valued at about $25,000. Strategic Investment Solutions Inc. IL purchased a new stake in shares of MongoDB during the fourth quarter worth about $29,000. Hilltop National Bank raised its stake in MongoDB by 47.2% during the 4th quarter. Hilltop National Bank now owns 131 shares of the company’s stock valued at $30,000 after purchasing an additional 42 shares during the period. NCP Inc. purchased a new position in MongoDB in the 4th quarter worth approximately $35,000. Finally, Versant Capital Management Inc boosted its stake in MongoDB by 1,100.0% in the 4th quarter. Versant Capital Management Inc now owns 180 shares of the company’s stock worth $42,000 after purchasing an additional 165 shares during the period. Hedge funds and other institutional investors own 89.29% of the company’s stock.

MongoDB Price Performance

NASDAQ MDB opened at $174.69 on Wednesday. The business’s 50-day moving average is $190.43 and its 200 day moving average is $247.31. The stock has a market capitalization of $14.18 billion, a PE ratio of -63.76 and a beta of 1.49. MongoDB has a 1 year low of $140.78 and a 1 year high of $387.19.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The business had revenue of $548.40 million during the quarter, compared to the consensus estimate of $519.65 million. During the same quarter in the prior year, the business posted $0.86 earnings per share. As a group, equities research analysts anticipate that MongoDB will post -1.78 earnings per share for the current year.

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Analyst Recommendations for MongoDB (NASDAQ:MDB)




Article originally posted on mongodb google news. Visit mongodb google news



Nebula Research & Development LLC Purchases New Holdings in MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Nebula Research & Development LLC purchased a new stake in MongoDB, Inc. (NASDAQ:MDB) during the fourth quarter, according to its most recent 13F filing with the SEC. The firm purchased 5,442 shares of the company’s stock, valued at approximately $1,267,000.

Other institutional investors and hedge funds have also bought and sold shares of the company. Strategic Investment Solutions Inc. IL acquired a new position in shares of MongoDB in the 4th quarter valued at $29,000. Hilltop National Bank boosted its stake in shares of MongoDB by 47.2% during the 4th quarter. Hilltop National Bank now owns 131 shares of the company’s stock worth $30,000 after acquiring an additional 42 shares in the last quarter. NCP Inc. acquired a new position in shares of MongoDB during the 4th quarter worth about $35,000. Versant Capital Management Inc raised its position in shares of MongoDB by 1,100.0% during the 4th quarter. Versant Capital Management Inc now owns 180 shares of the company’s stock worth $42,000 after purchasing an additional 165 shares during the last quarter. Finally, Wilmington Savings Fund Society FSB acquired a new position in shares of MongoDB during the 3rd quarter worth about $44,000. 89.29% of the stock is owned by hedge funds and other institutional investors.

MongoDB Stock Up 0.1 %

Shares of MDB stock traded up $0.18 during trading hours on Tuesday, hitting $174.69. 1,468,630 shares of the company were exchanged, compared to its average volume of 1,842,724. The firm has a market cap of $14.18 billion, a PE ratio of -63.76 and a beta of 1.49. MongoDB, Inc. has a 1 year low of $140.78 and a 1 year high of $387.19. The firm has a fifty day moving average of $192.74 and a 200 day moving average of $247.82.

MongoDB (NASDAQ:MDB) last posted its quarterly earnings results on Wednesday, March 5th. The company reported $0.19 EPS for the quarter, missing the consensus estimate of $0.64 by ($0.45). MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The business had revenue of $548.40 million during the quarter, compared to analysts’ expectations of $519.65 million. During the same quarter in the previous year, the business posted $0.86 EPS. Analysts expect that MongoDB, Inc. will post -1.78 earnings per share for the current year.

Insider Activity at MongoDB

In other news, CEO Dev Ittycheria sold 18,512 shares of the business’s stock in a transaction on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total transaction of $3,207,389.12. Following the transaction, the chief executive officer now directly owns 268,948 shares of the company’s stock, valued at approximately $46,597,930.48. The trade was a 6.44% decrease in their ownership of the stock. The transaction was disclosed in a document filed with the SEC, which is available at this hyperlink. Also, CAO Thomas Bull sold 301 shares of the firm’s stock in a transaction on Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total transaction of $52,148.25. Following the transaction, the chief accounting officer now directly owns 14,598 shares of the company’s stock, valued at approximately $2,529,103.50. This represents a 2.02% decrease in their ownership of the stock. The disclosure for this sale can be found here. Over the last quarter, insiders have sold 39,345 shares of company stock worth $8,485,310. Company insiders own 3.60% of the company’s stock.

Wall Street Analysts Forecast Growth

MDB has been the subject of a number of recent research reports. Canaccord Genuity Group cut their price objective on shares of MongoDB from $385.00 to $320.00 and set a “buy” rating for the company in a report on Thursday, March 6th. Truist Financial decreased their target price on MongoDB from $300.00 to $275.00 and set a “buy” rating on the stock in a research note on Monday, March 31st. Guggenheim upgraded MongoDB from a “neutral” rating to a “buy” rating and set a $300.00 price target on the stock in a report on Monday, January 6th. Royal Bank of Canada dropped their price objective on MongoDB from $400.00 to $320.00 and set an “outperform” rating on the stock in a report on Thursday, March 6th. Finally, Oppenheimer dropped their price target on MongoDB from $400.00 to $330.00 and set an “outperform” rating on the stock in a research note on Thursday, March 6th. Eight equities research analysts have rated the stock with a hold rating, twenty-four have given a buy rating and one has issued a strong buy rating to the company’s stock. According to data from MarketBeat, MongoDB presently has a consensus rating of “Moderate Buy” and an average target price of $294.78.


About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)


Article originally posted on mongodb google news. Visit mongodb google news



Docker Bridges Agents and Containers with New MCP Catalog and Toolkit

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

Docker has announced two new AI-focused tools—the Docker MCP Catalog and the Docker MCP Toolkit—to bring container-grade security and developer-friendly workflows to agentic applications, helping build a developer-centric ecosystem for Model Context Protocol (MCP) tools.

The Docker MCP Catalog is a centralized platform for developers to discover MCP tools. Docker’s COO Mark Cavage and head of engineering Tushar Jain compare the current AI landscape to the early days of cloud computing and containers, highlighting the need for standardized tooling and secure, scalable development workflows.

Back in the early days of the cloud, Docker brought structure to chaos by making immutability and isolation the standard, building in authentication, and launching Docker Hub as a central discovery layer. It didn’t just streamline deployment – it redefined how software gets built, shared, and trusted.

Docker has partnered with companies across cloud, developer tooling, and AI, to build a catalog of over 100 MCP servers, all hosted on Docker Hub. The catalog includes tools from Stripe, Elastic, Neo4j, and more. Each tool is curated, verified, and versioned to ensure reliability and consistency.

The Docker MCP Toolkit allows developers to run, authenticate, and manage MCP tools from the Docker MCP Catalog directly on their development machines using the new docker mcp CLI command.

With one-click launch from Docker Desktop, you can spin up MCP servers in seconds and connect them to clients like Docker AI Agent, Claude, Cursor, VS Code, Windsurf, continue.dev, and Goose – no complex setup required

The toolkit also includes built-in credentials and OAuth support along with a Gateway MCP Server that dynamically exposes enabled tools to compatible clients.

Introduced by Anthropic, the Model Context Protocol is an open standard for integrating external resources and tools into LLM-centered apps. Built on a client-server architecture, MCP enables an app to use an MCP client to connect to MCP servers that provide access to datasources or external tools. Anthropic’s official documentation shows how a developer can implement an MCP server using Python to wrap calls to a public weather service. Any MCP-compliant app, such as Claude for Desktop, can then access this server without modifications.
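As a flavor of how small such a server can be, the sketch below follows the shape of that weather example using the official `mcp` Python SDK; the tool body is a stub rather than a real weather call, and the server name and tool are hypothetical.

```python
# pip install mcp
from mcp.server.fastmcp import FastMCP

# FastMCP wires tools, their schemas, and the MCP protocol plumbing together.
mcp = FastMCP("weather")

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a short weather forecast for the given city."""
    # Stub: a real server would call a public weather service here.
    return f"Forecast for {city}: sunny, 22°C"

if __name__ == "__main__":
    # The stdio transport lets an MCP client such as Claude for Desktop
    # launch this server as a subprocess and exchange messages with it.
    mcp.run(transport="stdio")
```

Once registered with a client, the model can discover `get_forecast` from its type hints and docstring and invoke it without any app-specific glue code.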

Since its introduction, MCP has seen wide adoption—most recently from GitHub and Cloudflare—and has inspired the creation of several static and dynamic MCP server catalogs.



Google’s Gemma 3 QAT Language Models Can Run Locally on Consumer-Grade GPUs

MMS Founder
MMS Anthony Alford

Article originally posted on InfoQ. Visit InfoQ

Google released the Gemma 3 QAT family, quantized versions of their open-weight Gemma 3 language models. The models use Quantization-Aware Training (QAT) to maintain high accuracy when the weights are quantized from 16 to 4 bits.

All four Gemma 3 model sizes are now available in QAT versions: 1B, 4B, 12B, and 27B parameters. The quantized versions require as little as 25% of the VRAM needed by the 16-bit models. Google claims that the 27B model can run on a desktop NVIDIA RTX 3090 GPU with 24GB VRAM, while the 12B model can run on a laptop NVIDIA RTX 4060 GPU with 8GB VRAM. The smaller models can run on mobile phones or other edge devices. By using Quantization-Aware Training, Google was able to reduce the accuracy loss from quantization by as much as 54%. According to Google,

While top performance on high-end hardware is great for cloud deployments and research, we heard you loud and clear: you want the power of Gemma 3 on the hardware you already own. We’re committed to making powerful AI accessible, and that means enabling efficient performance on the consumer-grade GPUs found in desktops, laptops, and even phones…Bringing state-of-the-art AI performance to accessible hardware is a key step in democratizing AI development…We can’t wait to see what you build with Gemma 3 running locally!

InfoQ covered Google’s initial launch of the Gemma series in 2024, which was quickly followed by Gemma 2. The open-source models achieved performance competitive with models 2x larger by incorporating design elements from Google’s flagship Gemini LLMs. The latest iteration, Gemma 3, has performance improvements that make it the “top open compact model,” according to Google. Gemma 3 also added vision capabilities, except in the 1B size.

While the unquantized Gemma 3 models exhibit impressive performance for their size, they still require substantial GPU resources. For example, the unquantized 12B model requires an RTX 5090 with 32GB of VRAM. To allow the quantization of model weights without sacrificing performance, Google used QAT. This technique simulates inference-time quantization during training, instead of simply quantizing the model after it’s trained.
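To illustrate the core trick, here is a generic sketch of the fake-quantization step that QAT inserts into the forward pass. It is a textbook straight-through-estimator formulation in PyTorch, not Google's actual training code, and the symmetric int4 scheme is an assumption made for the example.

```python
import torch

def fake_quant_int4(w: torch.Tensor) -> torch.Tensor:
    """Simulate 4-bit quantization during training.

    Forward: weights are snapped to one of 16 int4 levels.
    Backward: the rounding is bypassed (straight-through estimator),
    so gradients still update the underlying float weights.
    """
    scale = w.abs().max().clamp(min=1e-8) / 7.0  # symmetric int4 grid
    w_q = torch.clamp(torch.round(w / scale), -8, 7) * scale
    return w + (w_q - w).detach()  # value of w_q, gradient of w

# A layer trained this way "feels" the int4 rounding error, so its float
# weights drift toward values that survive real quantization.
layer = torch.nn.Linear(16, 16)
x = torch.randn(4, 16)
y = x @ fake_quant_int4(layer.weight).T + layer.bias
loss = y.pow(2).mean()
loss.backward()  # gradients reach layer.weight despite the rounding
```

Because the model is optimized under these simulated int4 conditions, converting the final weights to true 4-bit values costs far less accuracy than quantizing a model trained purely in 16-bit.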

Google developer Omar Sanseviero wrote about using the QAT models in a thread on X and suggested there was still room for improvement:

We still recommend playing with the models (e.g. we didn’t quantize the embeddings, some people even did 3-bit quantization and it was working better than naive 4 bits)

Users praised the QAT models’ performance in a discussion on Hacker News:

I have a few private “vibe check” questions and the 4 bit QAT 27B model got them all correctly. I’m kind of shocked at the information density locked in just 13 GB of weights. If anyone at Deepmind is reading this — Gemma 3 27B is the single most impressive open source model I have ever used. Well done!

Django Web Framework co-creator Simon Willison wrote about his experiments with the models and said:

Having spent a while putting it through its paces via Open WebUI and Tailscale to access my laptop from my phone I think this may be my new favorite general-purpose local model. Ollama appears to use 22GB of RAM while the model is running, which leaves plenty on my 64GB machine for other applications.

The Gemma 3 QAT model weights are available on Hugging Face and in several popular LLM frameworks, including Ollama, LM Studio, Gemma.cpp, and llama.cpp.
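For local experimentation, one option is loading a QAT GGUF checkpoint through llama-cpp-python, as sketched below. The repository and file names are assumptions about how Google publishes the GGUF variants, so confirm the exact identifiers on the Hugging Face model card before running it.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Repo id and filename are assumed; check the model card for exact names.
llm = Llama.from_pretrained(
    repo_id="google/gemma-3-4b-it-qat-q4_0-gguf",
    filename="gemma-3-4b-it-q4_0.gguf",
    n_ctx=4096,  # context window to allocate
)

out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Explain quantization-aware training in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

The 4B checkpoint should fit comfortably within the 8GB-class GPUs (or even CPU RAM) mentioned above, which is the point of the QAT release.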
