MongoDB Acquires Voyage AI to Enhance AI-Powered Search and Retrieval

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, a database for contemporary apps, has announced the acquisition of Voyage AI, a company specializing in embedding and reranking models for AI-powered applications. This integration will strengthen MongoDB’s database capabilities by improving information retrieval accuracy within AI applications. Businesses often face challenges with AI-generated inaccuracies, particularly in critical fields including healthcare, finance, and legal services. Voyage AI’s technology addresses this issue by ensuring that AI models extract precise and relevant data, reducing the risk of incorrect or misleading outputs. The company’s models, recognized for their high performance, will help organizations apply AI more effectively across specialized domains, including legal and financial documents, enterprise knowledge bases, and unstructured data.


MongoDB plans to integrate Voyage AI’s retrieval capabilities into its database platform, allowing businesses to build more reliable AI applications. According to MongoDB CEO Dev Ittycheria, this acquisition redefines the role of databases in AI by enabling trustworthy and meaningful AI-driven solutions. Voyage AI’s technology will remain accessible through its platform, AWS Marketplace, and Azure Marketplace, with additional integrations expected later this year. This acquisition reinforces MongoDB’s commitment to advancing AI applications by providing businesses with enhanced data retrieval and accuracy, making AI solutions more practical for real-world use cases.






Presentation: Recommender and Search Ranking Systems in Large Scale Real World Applications

MMS Founder
MMS Moumita Bhattacharya

Article originally posted on InfoQ. Visit InfoQ

Transcript

Bhattacharya: We’re going to talk about large-scale recommender and search systems. First, I’ll motivate why we need recommendation and search systems. Then I’ll motivate further with one example use case from Netflix for each of recommendations and search, and identify some common components between these ranking systems. Then I’ll cover what it usually takes to successfully deploy a large-scale recommendation or search system, and, finally, wrap up with some key takeaways.

Motivation

I think it’s no secret that most products, especially B2C products, have built-in search and recommendation systems. Whether it’s a video streaming service such as Netflix, a music streaming service such as Spotify, or an e-commerce platform such as Etsy or Amazon, they usually have some form of recommendations and some form of search. The catalogs in each of these products are ever-growing. The user base is growing. The complexity of these models, architectures, and overall systems keeps growing.

This whole talk is about trying to motivate, with examples, what it takes to build one of these large recommendation or search systems at scale in production. The reality of any B2C, business-to-consumer product is that, depending on the product, there could be 100 million-plus users (Netflix, for example, has more than 280 million) and 100 million-plus products. In general, ranking that many products for that many users at an admissible latency is almost impossible. There are some tricks we use in industry to keep the items we show relevant to our users while remaining realistic about the time it takes to render the service.

Typically, we break any ranking system, whether a recommendation system or a search system, into two steps. The first is candidate set selection, oftentimes also referred to as first pass ranking, wherein instead of millions of items you narrow down to hundreds or thousands of items for a user. That’s called candidate set selection: you are selecting candidates that you can then rank. In candidate set selection, we typically try to retain recall; we want to ensure it’s a high-recall system. Then, once these hundreds or thousands of items are selected, a more complex machine learning model does second pass ranking. That leads to the final set of recommendations, or results for a search query, that gets shown to the user. Beyond this stratification into first pass and second pass rankers, there are many more things to consider.
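As a rough sketch of this two-stage funnel, here is a minimal Python illustration. All names, sizes, and the dot-product scorer are invented for the example; a real system would use learned retrieval and a much richer second-pass model.

```python
import numpy as np

def first_pass(user_embedding, item_embeddings, k=500):
    """Cheap retrieval: dot-product similarity, keep top-k (unordered).
    Tuned for recall: the relevant items only need to survive this stage."""
    scores = item_embeddings @ user_embedding
    return np.argpartition(-scores, k)[:k]

def second_pass(candidate_ids, score_fn):
    """A more expensive model scores only the surviving candidates."""
    scored = [(cid, score_fn(cid)) for cid in candidate_ids]
    return [cid for cid, _ in sorted(scored, key=lambda t: -t[1])]

rng = np.random.default_rng(0)
items = rng.normal(size=(100_000, 32))  # stand-in for a catalog of millions
user = rng.normal(size=32)              # stand-in for a user representation
candidates = first_pass(user, items)    # ~500 items survive the first pass
ranking = second_pass(candidates, lambda cid: float(items[cid] @ user))
```

Here the second pass reuses the same dot product only for brevity; the point is the funnel shape, not the scorer.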

Here is an overview of certain components that we think about and look into, irrespective of whether it’s a search system or a recommendation system. First is the first pass ranking that I showed before. The second is the second pass ranking. For the first pass ranking, depending on the latency requirement, one needs to decide whether we can have a machine learning model or whether we build rules and heuristics: for a query, you can have lexical components that retrieve candidates, whereas for recommendation, you can have a simple non-personalized model that retrieves candidates. The second pass ranking is where a lot of the heavy machine learning algorithms are usually deployed. There, again, there are many subcomponents: what should the data for training the model be? What are the features, architecture, objectives, rewards? I’m going to go into much more detail on some example second pass rankers.

Then there is the whole system of offline evaluation. What kind of metrics should we use? Should we use ranking metrics? Should we use human-in-the-loop, like human annotation, for quality assessment? Then there is the aspect of bias: once we deploy a system, all our users see results from that system, so a selection bias creeps in. We typically address that by adding some explore data; how can we set up this explore data without hurting the performance of the model? Then there is the inference component: once the model is trained, we want to deploy it and do inference within the acceptable latency and throughput. What is the compute cost, GPU, and so on? Ultimately, any experience in a B2C product is typically A/B tested as well.

Then, in the A/B test, we need to think about the metrics. I wanted to show this slide first. If you take away one thing, take away this: these are the different things you need to think about and consider. During this talk, I’m going to focus on second pass ranking, offline evaluation, and the inference setup, with some examples. Just in case you were not able to see some of the details in the previous slide, here are the sub-bullet points: data, features, model architecture, evaluation metrics, explore data. All of these are crucial for any of these ranking systems.

Recommendation: A Netflix Use Case

Now let’s take a recommendation use case from Netflix. When a user comes to Netflix, usually there is a lot to choose from, and we often hear in our user research that it sometimes feels overwhelming: how do I find what I want to watch in the moment? Netflix oftentimes has so many recommendations on the homepage that it just feels overwhelming. One approach our members take is to go to search and type something, for example, show me stand-up comedy. The results from these search ranking systems are either personalized or just relevant to the query. I think 60% of our users are on TV, and typing a query there is still a very tedious task. Just to motivate the search use case: most discovery on Netflix still happens on the homepage, but 20% to 30% of discovery happens from search, second only to the homepage. There is a nice paper, linked here, that talks about the challenges of search in the context of Netflix.

Usually, when a member comes to search, there are three types of member intent. Either the member knows exactly what they want: they type Stranger Things because they specifically want Stranger Things. Or they know something, but not exactly; that’s find intent. Then there is explore intent, where you type something as broad as, it’s a Saturday evening, show me comedy titles. Depending on these different intents, how the search ranking system responds differs.

Going back to the member coming from the homepage to search and having to type a long query on a TV remote, which is very tedious: what if we could anticipate what the member is about to search and update the recommendations before the member even starts typing? That’s why I’m referring to this particular example as a recommendation use case, even though it happens after you click on the search page. Internally, we refer to it as pre-query, but in industry it is often also referred to as a no-query system. This is a personalized recommendation canvas that also tries to capture a member’s search intent. Let me motivate the purpose of this canvas a little more. On a Thursday evening, Miss M is trying to decide whether to go to Netflix, HBO, Hulu, and so on.

She comes to Netflix because she recently heard from her friend that Netflix has some really good Korean content. There is a model that understands this member’s long-term preference, but in the moment her intent has changed. She browsed the homepage for some horror titles, then for some Korean titles, and still didn’t find a title she wanted to start watching, so she went to search. In this moment, if we are able to understand this user’s long-term preference but also her short-term intent, that she’s looking for a Korean movie, then before she has to type Korean horror or something, we can just update the recommendations to show her a mix of Korean and Korean horror movies on the no-query, pre-query canvas. We can capture her intent without the struggle of typing.

If you imagine building a system like this, there are, of course, modeling considerations, but a large part of it is also software and infrastructural considerations, and that’s what I’m going to try to highlight. This is anticipatory search, because we want to anticipate before the user has to search, based on the in-session signals and browsing behavior from the member’s current session. Overall, pre-query recommendation needs an approach that not only learns from long-term preference but also utilizes short-term preference. We have seen in industry that leveraging in-session browsing signals helps the model capture user short-term intent, while the research question is: how do we balance the short-term and long-term intent and not make the whole recommendation just Korean horror movies for this member?

There are some advantages to these kinds of in-session signals. One is, as you can imagine, freshness: if the model is aware of the user’s in-the-moment behavior, it will not sink into a filter bubble of only showing a certain taste, so you can break out of that filter bubble. It can help inject diversity. It introduces novelty, because you’re not showing the same old long-term preference to the member. It makes for easier findability, because you’re tuning the model to be attuned to the user’s short-term intent. It also helps with user and title cold-starting. Ultimately, as we call it at Netflix, it sparks member joy. We see in real experience, and this is a real production model, that it ultimately reduces abandoned sessions. In the machine learning literature, there is a ton of research on how to trade off user long-term interest against short-term interest.

In chronological order, from research done many years ago to more recent work: earlier, people used Markov chains and Markovian methods; then there are some papers that try to use reinforcement learning; more recently, there are many transformer and sequence models that capture the user’s long-term preference history while also adding short-term intent as part of the sequence, balancing the tradeoff. I’m not going into detail about these previous studies, but some common considerations if you want to explore this area are: what sequence length to consider, that is, how far back in the history we should go to capture user long-term and short-term interest; and the relative importance of different types of actions. In the context of e-commerce, for example, purchase is the most valuable action, add-to-cart a little less so, and a click might be much less informative than a purchase.

What is the solution we built? I’ll go into the model itself later, but first I wanted to show the infrastructure overview. A member comes to Netflix, and the client tells the server that the member is on the pre-query canvas: fetch the recommendations. In the meantime, as this just-in-time (JIT) server request happens, we are also, in parallel, accessing every engagement the member did elsewhere on the product. There has to be a data source that can tell us that one second ago the member thumbed up a title and two seconds ago clicked on a title. That information needs to arrive just in time to be sent to the server. We also need to train future models, of course, so we set up logging as well.

Ultimately, this server makes a real-time call, with the in-session signals as well as the historical information, to the online model, which is hosted somewhere. This online model was trained previously and has been hosted, but it’s capable of taking these real-time signals to make the prediction. The model then returns a ranked list of results within a very acceptable latency, in this case lower than 40 milliseconds, and ultimately sends the results to the client. In the process, we are also saving the server and client engagement into logging so that offline model training can happen in the future. There is a RecSys paper on this particular topic; if you’re interested, feel free to dig deeper. That is the overall architecture.
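The serving flow described here could be sketched roughly like this, with hypothetical names for the session store, profile store, and model; the real infrastructure is far more involved.

```python
import time

# Latency budget mentioned in the talk (~40 ms); illustrative only.
LATENCY_BUDGET_MS = 40

def handle_prequery_request(member_id, session_store, profile_store, model):
    """Hypothetical just-in-time handler for the pre-query canvas."""
    start = time.perf_counter()
    # Long-term preference features, typically precomputed offline.
    profile = profile_store[member_id]
    # In-session signals (thumbs, clicks, browses), fetched just in time;
    # caching is removed or TTLs minimized so these are seconds-fresh.
    session_events = session_store.get(member_id, [])
    # Real-time inference with both long-term and short-term signals.
    scores = model(profile, session_events)
    ranked = sorted(scores, key=scores.get, reverse=True)
    elapsed_ms = (time.perf_counter() - start) * 1000
    # Engagement and latency would also be logged here, for offline
    # training and SLA monitoring.
    return ranked, elapsed_ms
```

A caller would compare `elapsed_ms` against `LATENCY_BUDGET_MS` and fall back (for instance, to a precomputed long-term ranking) on timeout rather than show an empty page.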

Here are some considerations we had to think through when implementing a system like that. One of the key things was this just-in-time server call. We really have to make the server call, or access the model, when the member is on that canvas, and return the result before the member even realizes, because we want to bring all the in-session signals from the member’s browsing in that session to the model. Otherwise we lose the context. In the Korean horror movie example, the member is looking at a Korean horror movie and immediately goes to search; if we’re not aware of the interactions the member had on the homepage, we will not really be able to capture the member’s intent, and the recommendations will not be relevant to the member’s short-term intent. The server call pattern was the most important thing we needed to figure out in this work.

More interestingly, and I don’t know if this is the case at other companies, in this particular case different platforms had different server call patterns. How do you figure that out and work together with engineers and infra teams to change the service call pattern, and make sure that the model latency and the end-to-end latency are within acceptable bounds, so that the member doesn’t realize how much is happening within those few milliseconds? Throughput SLAs become even more important depending on the platform, the region, and so on. Because we want to do absolutely real-time inference to capture the user’s in-session browsing signals, we had to remove caching. Any kind of caching had to be removed, or the TTL had to be reduced a lot. These three components really differentiate work like this from more traditional recommendation, where you can prefetch the recommendations for a member and do offline computation.

The infrastructural and software constraints are much more lenient in a more traditional prefetching recommendation, whereas this system has to be truly real-time. Then, of course, there are the regular things like logging: we need to make sure client-side and server-side logging is done correctly, and that the near-real-time browsing signal is available through some data source, a Kafka stream or similar, while also making sure those streams have very low latency, so that the real-time browsing signal becomes available to the model at inference time without much delay. Ultimately, then comes the model, which needs to handle these long-term and short-term preferences and predict relevant recommendations. There is a reason I listed priorities like that: the first three components, server call, latency, and caching, are really more important than the model itself in this particular case.

What is the model? The model in this case is a multi-task, deep learning architecture, very similar to a traditional content-based recommendation model, where a bunch of different types of signals go into the model. It’s a few-layer deep learning model with some residual connections and some real sequence information about the user. There is profile context: where the user is from, country, language, and so on. There is video-related data as well, things like the tenure of the title, how new or old the title is, and so on. There is the synopsis and other information about the video itself. Then, more importantly, there is video-and-profile information, the engagement data. Those are really powerful signals: whether the member thumbed up a title in the past, or whether this is a re-watch versus a new discovery, a new title the member is finding. In this particular work, we added browsing signals on top of that.

This is where the short-term member intent is captured: in real time, we know whether the member added a title to My List, thumbed it up, or thumbed it down. Negative signal is also super important. That immediately feeds into the model at inference time, letting the model trade off between short-term and long-term. We do have some architectural considerations here for trading off between short-term and long-term; unfortunately, that’s something I cannot talk about, so I’ll just leave you with the thought that this tradeoff is important in the model architecture. Overall, with this improvement of absolutely real-time inference, plus the model architecture incorporating in-session browsing signals, offline we saw over 6% improvement, and it is currently 100% in production. For all Netflix users, when you go to the pre-query canvas, this is the model that shows you your recommendations.
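Purely as a structural sketch (not the actual Netflix architecture; the shapes, layer sizes, and feature groups are made up for illustration), the feature groups described above might feed one scoring network like this:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def score(profile_ctx, video_feats, engagement, browsing, weights):
    """Concatenate the feature groups, apply a small MLP with a residual
    connection, and emit a logit for positive engagement."""
    x = np.concatenate([profile_ctx, video_feats, engagement, browsing])
    h = relu(weights["w1"] @ x)
    h = relu(weights["w2"] @ h) + h        # residual connection
    return float(weights["w_out"] @ h)

rng = np.random.default_rng(1)
dims = dict(profile=8, video=8, engagement=4, browsing=4)  # made-up sizes
d_in = sum(dims.values())
weights = {
    "w1": rng.normal(scale=0.1, size=(16, d_in)),
    "w2": rng.normal(scale=0.1, size=(16, 16)),
    "w_out": rng.normal(scale=0.1, size=16),
}
s = score(rng.normal(size=8), rng.normal(size=8),
          rng.normal(size=4), rng.normal(size=4), weights)
```

The browsing-signal group is the piece that is refreshed in real time at inference; the other groups change much more slowly.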

Here is a real example, on a test user. This was the initial recommendation when the user session started. Then the member did a bunch of browsing on the homepage, on shows and movies with a woman in the lead role. When they came back to the no-query, pre-query page, their recommendations, immediately within that session, were updated to include shows like Emily in Paris and New Girl, which have a woman as the lead character. Then they again went back to a category page or the homepage and browsed some shows related to cooking or baking. Ultimately, in the same session, when they went back to the search page, their recommendations immediately changed. You can see it’s a combination of all three; it’s not swamping the whole page with baking shows or something. This is where the tradeoff between short-term and long-term preference comes into play: you want to capture what the member is doing in the session, but you don’t want to overpower the whole recommendation with that short-term intent alone.

Challenges and Other Considerations

What were some of the challenges and other considerations we took into account? Something I alluded to in the previous slide: filter bubbles and the concentration effect are a big problem, and still an open question in recommendation and search. How do we make sure that when we understand a member need, we are not saying this is the only need you have, so that the whole page, the whole product, gets overwhelmed with that one taste profile or one kind of recommendation? Here the short-term versus long-term tradeoff is important, but explore-exploit and reinforcement learning are also areas usually explored to break out of the filter bubble and avoid the concentration effect. Because this is such a real-time system, as you would imagine, depending on the latency and the region the model is served in, sometimes there are increased timeouts, which lead to an increased error rate. What we don’t want is for a member to see an empty page.

There was a lot of infrastructural debugging we had to do to make sure the error rate and timeouts were not affected, which included multi-GPU inference, but also thinking about how we deploy the model, and additional considerations like feature computation: whether some features that don’t need to be real-time can be cached, and so on. Overall, we also want to be careful about not making the recommendations too dynamic. We do want to capture the user’s short-term intent and hence update the recommendations in real time, but we also don’t want to completely pull the floor out from under the member’s feet by changing the page every time the member comes to that part of the product. We want a tradeoff between how much we change and how much we keep constant. Because it’s such a real-time system, and because it’s so dynamic, it is more difficult to debug.

It also becomes more susceptible to network unreliability, which can ultimately cause a degraded user experience. Another important thing depends on how you build these short-term browsing signals: some of them are very sparse. How many times do you actually thumbs-up a show when you enjoy something on Netflix, or Hulu, or elsewhere? Signals like thumbs up, My List add, or thumbs down are usually very sparse. Typically, we need to do something in the model to generalize these otherwise very sparse signals, and make sure the model is not over-anchoring on one signal versus another. That was my recommendation use case.

Defining Ranking – How and When It Was Right

Participant 1: You mentioned ranking, and I’m assuming that after you computed and had a list of things, you had to rank them, because that page is limited, so you can only show so much. How did you go about defining that ranking? How did you know it was right, or when did you know it was right?

Bhattacharya: This is where that happens. When we train the model, that’s the example I shared, the deep learning model with short-term and long-term intent and so on. Then we have offline evaluation, where we evaluate for ranking. Some metrics for ranking are NDCG and MRR. What ranking typically means is that the model generates a likelihood of something, in this case, say, the likelihood of playing a title. Then we order by that likelihood in decreasing order and cut it off. Say top 20: if you just want to show the member the top 20 titles, we rank the probability scores and take the top 20 for a given context. In this case, let’s imagine the context is just the profile ID.

Then we take that top-k and use some metric, for example NDCG or MRR, to evaluate how well the model is doing. There’s something called a golden test set here: usually we build a temporally independent dataset, independent in time from the training data, to evaluate how well the model is doing. That’s the offline metric. Then we go to the A/B test, which tells us whether what we saw offline matches what our members are seeing. The A/B test gives us a real test.
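The two ranking metrics mentioned, NDCG and MRR, are straightforward to implement. This is a generic sketch with binary relevance against a held-out golden set, not Netflix’s evaluation code.

```python
import math

def ndcg_at_k(ranked, relevant, k):
    """Normalized discounted cumulative gain with binary relevance."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal else 0.0

def mrr(ranked, relevant):
    """Reciprocal rank of the first relevant item."""
    for i, item in enumerate(ranked):
        if item in relevant:
            return 1.0 / (i + 1)
    return 0.0

ranked = ["t3", "t1", "t7", "t2"]  # model's ordering by predicted play prob.
golden = {"t1", "t2"}              # temporally held-out positives
ndcg = ndcg_at_k(ranked, golden, k=4)
rr = mrr(ranked, golden)           # first hit is at rank 2, so 0.5
```

In practice MRR is averaged over many contexts (profiles or queries), and graded relevance labels replace the binary hit test.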

Balancing What Is Happening During Searches vs. Tagging with Metadata

Participant 2: Customers are changing the language of how they search, and so is the metadata associated with all the content available inside Netflix. It seems like there is constant change, and maybe some metadata was missed, because what was being pulled back in terms of recall and precision wasn’t matching what the customer’s language was trying to represent. How are you trying to balance how things are tagged with metadata versus what is taking place during searches?

Bhattacharya: We usually try to incorporate some of that metadata in the model as features, so that the correspondence between the query, or the user’s engagement with other titles, and the metadata is used by the model to learn the connection. Usually the metadata is static but the query is dynamic. When the query comes in, the titles whose metadata the model thinks makes them relevant results for that query get pulled into the top-k ranking. In general, there are also certain lexical components, filters, and guardrails in the actual production system. Some boosting or lexical matching happens as well, to make sure the model does not surface something that is completely irrelevant to the query or the context.

Search Use Case: A Netflix Use Case

The next use case is a search use case, although it is actually a search and recommendation use case. We built a model called UniCoRn, the Unified Contextual Recommender; I’ll get to why it’s called UniCoRn in a couple of slides. Similar to what I already motivated, many products, especially B2C products, have both search and recommendation use cases. In the context of Netflix, we have traditional search, which is a query-to-video ranker. For example, if you type P-A-R-I, we want to make sure the model shows you Emily in Paris or other P-A-R-I-related titles.

Then there is the purely recommendations use case, which is what we saw in the previous slides; an example is no-query or pre-query. Then there are other kinds of recommendations, such as title-title or video-video recommendation. E-commerce has similar canvases, and in the context of Netflix it’s the more-like-this canvas, wherein you click on a title, here Emily in Paris, and you see other titles that are similar to it. That’s a recommendation use case as well.

The overarching thesis of this work was: can we build a single model for both the search and recommendation tasks? Is it possible? Do we need different bespoke models, or can we build one model? The answer is yes, we can build one model; we don’t need different models, because the search and recommendation tasks are two sides of the same coin. They are ultimately ranking tasks, and ultimately we want, for a given context, the top-k results relevant to that context. What really changes is the context, and part of this example is how we went about identifying the differences between search and recommendation tasks and how we built one model.

What are the typical differences between search and recommendation tasks? The most important difference is the context itself. When we think about search, we think about the query: you type a query, you see results. The query is the context. Whereas when we think about recommendation, we usually think about the person; it’s usually personalized, so the context is the profile ID. Similarly, for more-like-this, video-video, or title-title recommendation, the context is the title itself: you are looking at Emily in Paris and want to see shows similar to it. The next big difference is the data itself, which is a manifestation of the product. These experiences live in different parts of the product, and the data collected from engagement differs.

For example, when you go to search, you type a query, see the results, engage with a result, and start watching or purchasing it. Versus on the homepage, you are seeing a laid-back recommendation and then engaging with it. How the data is logged, and what the user engagement is, differs. The third difference is candidate set retrieval itself: for a query, you might want to ensure lexical relevance, whereas for personalization, a purely recommendation task, the candidate set, the first pass, could be different. Ultimately, as in the previous question, there is usually canvas-specific or product-specific business logic that puts guardrails on what is allowed in that part of the product. What we did was first identify these differences, and then set out to unify them.

Overall, the goal is to develop a single contextual recommender system that can serve all search and recommendation canvases. Not two different models or five different models: just one model that is aware of all these different contexts. What are the benefits? The first is that these different tasks learn from each other. On Netflix, compare the results you see when typing Stranger Things with the recommendations you see on more-like-this when you click on Stranger Things. What we see from our members is that they don’t want different results for the same context on different parts of the canvas. Or do they? We want the model to learn this information, and we want to leverage each task to benefit the other tasks.

Then, innovation applied to one task can immediately scale to the other tasks. The most important benefit is that instead of five models, we now have to maintain one. It’s much reduced tech debt and much lower maintenance cost. Overall engineering cost goes down, and PagerDuty and on-call become easier, because instead of debugging five models and their issues, you’re debugging one. It’s an overall pretty big win-win.

How do we go about doing it? Essentially, we unify the differences. The first important difference was context. Instead of having a small context, training one model, and gathering data and features for that small context, we expand the context, then do the same things, gathering data and features, for the whole context. Instead of just a query or just a profile ID as context, we build a model with this large context: query, country, entity (in the context of Netflix, a video ID), and a task type. The task type tells the model that this is a search task, this is a more-like-this task, this is a pre-query task, and so on. In a way, we are injecting this information into the data while giving each task all the information it needs, as one dataset. In this particular case, in the context of Netflix, entity refers to more than just videos; we also have out-of-catalog entities.

For example, we often get queries like Game of Thrones, and we have to tell our users that we don’t have Game of Thrones. To tell our users that, we first need to identify what Game of Thrones is: an out-of-catalog entity. Similarly for people: users search Tom Cruise, and we need to understand that Tom Cruise is not a title but a person. Similarly for genres, and so on. An example of context for a specific task: for search, the context is query, country, and language, and the task is search. For title-title recommendation, the context is the source video ID, in our example the ID of Emily in Paris, plus country and language, and the task is title-title recommendation. They’re different tasks. Then the data, which is what we have logged in different parts of the product, is all merged together, with this context and task type added as part of data collection. Ultimately, we know which engagement comes from which task, or which part of the product is associated with which task, but we let the model learn those tradeoffs.
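The expanded-context idea can be pictured as training rows where every engagement record carries the full context plus a task-type tag, with fields that don’t apply to a given task left empty. The field names here are illustrative, not the actual schema.

```python
# Hypothetical training-row builder for a unified search/recommendation
# dataset: one schema, with the task type telling the model which fields
# carry the context for this example.

def make_row(task, query="", source_entity_id="", country=None,
             language=None, target_entity_id=None, label=0):
    return {
        "task": task,                          # "search", "more_like_this", ...
        "query": query,                        # set only for search tasks
        "source_entity_id": source_entity_id,  # set for title-title tasks
        "country": country,
        "language": language,
        "target_entity_id": target_entity_id,  # the entity being ranked
        "label": label,                        # positive engagement or not
    }

rows = [
    make_row("search", query="pari", country="US", language="en",
             target_entity_id="emily_in_paris", label=1),
    make_row("more_like_this", source_entity_id="emily_in_paris",
             country="US", language="en",
             target_entity_id="new_girl", label=1),
]
```

One model trained on mixed rows like these can learn the per-task tradeoffs from the data itself, rather than from separate per-task objectives.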

Finally, there's the target itself: whether it's a video ranker, an entity ranker, or, now that Netflix also has games, a game ranker. We unify that as well and make the model rank the same entities for all these different tasks.

Here's the setup. We basically build a multi-task learning model, but multi-task via model sharing. Actually, I'm not sure how many people here have built multi-task learning models. Typically, you would have different parts of the objective. An example would be training a model to learn play, thumbs up, and click: there are three parts of the objective, and we are asking the model to learn all three objectives and the tradeoffs between them. Whereas in our case, we did the multi-task through data, where we mix all the data, with the context and the task type tagged onto the data, and we ask the model to learn the tradeoffs between these different tasks from the data itself, without explicitly calling out the objectives. Similar to the previous recommendation use case I showed, here too there are different types of features, the big one being the entity features, which are basically the video features or the game features.
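The contrast can be sketched as follows: rather than a weighted sum of per-objective losses, a single binary cross-entropy loss is applied over the mixed, task-tagged data, and the task identity is just another input feature. The model and numbers below are illustrative stand-ins, not the production setup:

```python
import math

# Sketch: multi-task through data rather than through a multi-part
# objective. One binary cross-entropy loss over the mixed dataset;
# the task is identified by an input feature, not by a separate loss term.
def bce_loss(model, batch):
    total = 0.0
    for example in batch:
        p = model(example)  # P(positive engagement | context, entity, task)
        y = example["label"]
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(batch)

# A stand-in "model" for illustration only.
def dummy_model(example):
    return 0.8 if example["task_type"] == "search" else 0.6

batch = [{"task_type": "search", "label": 1},
         {"task_type": "more_like_this", "label": 1}]
loss = bce_loss(dummy_model, batch)
```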

Now, a big difference compared to a traditional recommendation or search system is that here we have context features, and the context is much larger. We have query-based features, profile features, video ID features, and task-specific features as well. Because the context is so broad, this information has to be expanded. Then we have the context and entity features. Among all these different types, a numeric feature gets fed into the model in one way, while a categorical feature has corresponding embeddings in the model. Ultimately, the model has a similar architecture to the previous one: a large deep learning model with a lot of residual connections, some sequence features, and so on. The target, or the objective, of this model is to learn the probability of positive engagement for a given profile, context, and title, because we are ultimately ranking the titles.
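A minimal sketch of the feature-handling split described above, with hypothetical feature names and a toy embedding table (not the production architecture): categorical features go through embeddings, numeric features are fed in directly after scaling, and everything is concatenated into one input vector.

```python
import random

random.seed(0)

# Sketch (not Netflix's actual architecture): categorical features are
# looked up in embedding tables, numeric features pass through directly,
# and everything is concatenated into one input vector.
EMB_DIM = 4
embedding_tables = {}  # (feature_name, value) -> vector

def embed(feature_name, value):
    key = (feature_name, value)
    if key not in embedding_tables:
        embedding_tables[key] = [random.uniform(-0.1, 0.1) for _ in range(EMB_DIM)]
    return embedding_tables[key]

def featurize(example):
    vec = []
    # categorical context features -> embeddings
    for name in ("task_type", "country", "language"):
        vec.extend(embed(name, example[name]))
    # numeric features -> fed in directly (after scaling)
    vec.append(example["query_length"] / 100.0)
    return vec

x = featurize({"task_type": "search", "country": "US",
               "language": "en", "query_length": 4})
```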

Let's take an example. When a user comes to Netflix and types a query, P-A-R-I, the same model takes that query context, creates all these features, and ultimately generates the likelihood of all the videos that are relevant to the query P-A-R-I. Then, when the same model is used on the more-like-this canvas, when a user clicks on Emily in Paris, it generates all these features for the context 12345 (let's say that's the ID of Emily in Paris) and generates the likelihood of all the titles in our catalog that are similar to Emily in Paris. That's the power of unifying this whole model: even though these are different canvases of the product, we are using the same infra and the same ML model to make inferences and ultimately generate a ranked list for a given task and context.
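The two inference paths can be sketched with one shared ranking function whose behavior differs only by the context it receives. The scoring function below is a toy stand-in, not the actual model:

```python
# Sketch: one unified second-pass ranker serving two canvases. Only the
# context differs between tasks, not the model or the serving code.
def rank(model, context, candidates):
    scored = [(model(context, c), c) for c in candidates]
    return [c for _, c in sorted(scored, reverse=True)]

# Toy stand-in scorer, for illustration only.
def toy_model(context, candidate):
    if context["task_type"] == "search":
        # reward lexical match with the typed prefix
        return 1.0 if candidate.startswith(context["query"]) else 0.0
    # more-like-this: pretend similarity to the source title
    return 0.9 if candidate == "emily in paris 2" else 0.1

catalog = ["paris is us", "emily in paris 2", "dark"]
search_top = rank(toy_model, {"task_type": "search", "query": "pari"}, catalog)[0]
mlt_top = rank(toy_model, {"task_type": "more_like_this",
                           "source_video_id": 12345}, catalog)[0]
```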

How is this magic happening? Here are some hypotheses based on a lot of ablation studies we have done. I think the key benefit of an effort like this is that each task benefits from the others, from each of the auxiliary tasks. In this case, search, as one of the tasks, is benefiting from all these different recommendation tasks. This model replaced four different ML models: we were able to sunset and deprecate four different ML models and replace them with one. Clearly, there was benefit flowing from one task to another. The task type as a context was very important. The features specific to these different tasks allowed the model to learn tradeoffs between them. Another key thing is how we handle these different contexts and their missingness. We took an approach of imputing the missing context.

For example, in the context of more-like-this, we don't really have a query, but we can think of some heuristic and impute one. Things like feature crossing, a specific ML architecture consideration, also helped. With this unification, we were able to achieve either a lift or parity in performance on the different tasks. As a first step, we just wanted to be able to replace the four different models without taking a hit in performance. Once we were able to do that, we brought in all sorts of innovation, which was immediately applicable to four different parts of the product rather than one. Here's an example: initially we replaced pre-query, search, and the more-like-this canvas with this one model. Then we also brought personalization into it. This is the traditional UniCoRn model. Then we took a user foundation model that was trained separately and merged it with this UniCoRn model.
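A sketch of the context-imputation and feature-crossing ideas above. The specific heuristic (using the source title as a pseudo-query) and the cross encoding are illustrative assumptions; the talk does not disclose the actual heuristics used:

```python
# Sketch: impute a missing query for the more-like-this task, plus a
# simple feature cross. Both are illustrative assumptions.
def impute_context(example, title_lookup):
    ctx = dict(example)
    if ctx.get("query") is None and ctx.get("source_video_id") is not None:
        # heuristic: pretend the user "searched" for the source title
        ctx["query"] = title_lookup[ctx["source_video_id"]]
    return ctx

def cross(feature_a, feature_b):
    # feature cross: one categorical feature per (a, b) combination,
    # letting the model learn task-specific behavior per country, etc.
    return f"{feature_a}__x__{feature_b}"

titles = {12345: "emily in paris"}
ctx = impute_context({"task_type": "more_like_this",
                      "source_video_id": 12345, "query": None}, titles)
task_country = cross(ctx["task_type"], "US")
```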

Then, immediately, we were able to bring personalization to pre-query, to search, and to more-like-this. In the previous world, where we had three different models for three different tasks, we would have had to bring these similar features to three different models. Instead of taking three quarters, we ended up doing it in one quarter. Again, there is a recent paper on this work in RecSys; feel free to take a look. Offline, we got improvements of 7% and 10% on the search and recommendation tasks by combining these, which makes the point that these different tasks benefit from each other.

This is a redundant slide, just showing, the way I showed you before, that we are able to merge a personalization signal, a separate TF graph, into the UniCoRn model to bring personalization to all canvases. Here is an example. After we deployed UniCoRn in production, we deployed the personalized version of UniCoRn. I usually don't watch kids' shows on my profile, so I typed s as a query. Before personalization, I was getting some kids' shows here, like Simon and Sahara. After the personalized model was pushed, all those kids' shows disappeared, and these were very relevant, personalized titles for me for the very broad query s. Go give it a try, because currently Netflix production search, more-like-this, and entity suggestion are powered by this specific model, UniCoRn.

Considerations

What were the considerations? In addition to the infra considerations I shared in the previous use case, here, because we are merging search and recommendation, a very big consideration is how we manage the tradeoff between personalization and relevance. What does relevance mean here? Relevance to the context. If you type s and on Netflix you see a lot of titles that start with a, I think you'll find it a pretty bad experience. If you're typing s, you would expect results containing s: that's lexical relevance. Similarly, if you're typing a genre like romance and you start seeing a lot of action movies in the results, that's irrelevant. Even if those movies are very personally relevant to you, in the context of the query they're irrelevant. We want to make sure that we trade off between personalization and relevance pretty well.
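One simple way to encode such a tradeoff (illustrative only; the talk does not disclose Netflix's actual merging scheme) is to gate personalization by lexical relevance, so that personalization only reorders candidates that are already relevant to the query:

```python
# Sketch: gate the personalization score by lexical relevance so that
# personally appealing but irrelevant titles cannot win. The gating
# scheme and the 0.5 weight are illustrative assumptions.
def blended_score(query, title, personalization_score):
    lexically_relevant = title.lower().startswith(query.lower())
    relevance = 1.0 if lexically_relevant else 0.0
    # personalization only reorders among relevant candidates
    return relevance + 0.5 * relevance * personalization_score

relevant = blended_score("s", "Shutter Island", personalization_score=0.9)
irrelevant = blended_score("s", "Adventure Time", personalization_score=0.99)
```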

Then, because the query is real-time and all these engagements are real-time, we want to make sure that our model is really good but is not hurting latency. We don't want our members to wait around for 5 minutes after typing a query. In fact, the latency constraint is very strict, something around 40 to 100 milliseconds at P50. Similarly, depending on the region in which the Netflix app is opened, throughput becomes important. Handling missing context is important in this particular case because we are expanding the context quite a lot. Features specific to the context, and ultimately what kind of task-specific retrieval logic we have, become important. One thing to note is that we unified only the second pass ranker, not the first pass ranker: the retrieval logic remains different for the different tasks.

Some additional considerations. In general, when you're building ranking systems, in addition to everything I showed, there are things like negative sampling. What should the negative sampling be? Should you use random negative sampling, or should you use impressions as negatives? There's overall sample weighting: is one action more important than another? Then, a very important thing is the cost of productization. Even if it's a winning experience during an A/B test, we might not be able to productize it because it's too expensive; we might have trained it on more GPUs than the company can support. With multi-GPU training, and even GPUs used for inference, the cost of productization becomes a very critical consideration. Then, during online A/B testing: what kind of metrics to look at, how we analyze and tell the story of what is really happening, and debugging what the members really like about an experience all become very important.
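The negative-sampling choices mentioned above can be sketched like this; the action weights and the sampling strategy are illustrative assumptions, not the production configuration:

```python
import random

random.seed(1)

# Sketch: two common negative-sampling choices for ranking data, plus
# per-action sample weights. All values here are illustrative.
ACTION_WEIGHTS = {"play": 3.0, "thumbs_up": 2.0, "click": 1.0}

def impression_negatives(impressions, engaged_ids):
    # titles shown but not engaged with serve as negatives
    return [i for i in impressions if i not in engaged_ids]

def random_negatives(catalog, engaged_ids, k):
    # randomly sampled non-engaged titles serve as negatives
    pool = [c for c in catalog if c not in engaged_ids]
    return random.sample(pool, k)

impressions = [101, 102, 103, 104]
engaged = {102}
neg_imp = impression_negatives(impressions, engaged)
neg_rand = random_negatives(range(100, 200), engaged, k=3)
play_weight = ACTION_WEIGHTS["play"]
```

Impression-based negatives are harder (the user saw them and passed), while random negatives are cheaper and less biased by the existing ranker; the right mix is an empirical question.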

Key Takeaways

Overall, it's beneficial to identify common components among production ranking systems, because then we can unify the differences, reduce tech debt, improve efficiency, and have fewer on-call issues. A single model aware of diverse contexts can perform and improve both search and recommendation tasks. The key advantages of consolidating these tech stacks and models are that the different tasks benefit from one another, tech debt is reduced, and innovation velocity is higher. Real-time in-session signals are important to capture a member's short-term intent, while also trading off against long-term interest.

Overall, infrastructural considerations are just as important as the model itself. I know that machine learning modeling is the cool part, but for a real production model, infrastructure becomes even more important. Oftentimes, I've seen that be the bottleneck rather than the model itself. Latency, throughput, and training and inference efficiency are all super critical considerations when we build something at scale for real-time production traffic.

Questions and Answers

Participant 3: You mentioned a lot about how having one model can benefit the search and recommendation system. Besides the training and CPU considerations, what were the real drawbacks of condensing four different models into just one?

Bhattacharya: The first point here is the biggest drawback, or something to keep in mind. We spent a bunch of time trying to ensure that personalization does not overpower relevance, because recommendation tasks typically over-anchor on personalization, whereas a search task is more relevance-oriented. How we merged matters in this particular context: in the same picture, the left-hand side is bringing personalization and the right-hand side is the relevance server. How we merge these two is very important, because if we do some of these merges very early on, it could hurt relevance, and if we don't merge with the right architectural considerations, it might not bring personalization to the relevant queries. Going back to this, I think the first one is the personalization-relevance tradeoff, which is a difficult thing to achieve, and you have to do a bunch of experimentation.

Then, in general, a bigger model helps, but bigger models come with higher latency. How do we handle that? We have a few tricks that we used to address latency, which I cannot share because we haven't written them up publicly in the paper. Latency becomes a big consideration and can be one of the blockers to combining models.

Participant 4: In terms of unifying the models between search and recommender systems, the number of entities in the context of Netflix is limited: a number of genres, movie titles, and people. If it were something like X, a social media platform, where the entities would be unlimited in number, would the approach of unifying those models still scale for those kinds of applications?

Bhattacharya: I think that's where the disclaimer matters: this is a second pass ranker unification. Prior to Netflix, I was at Etsy, an e-commerce platform where the catalog size was much bigger than Netflix's. We usually do first pass ranking and then second pass ranking, and this unification is of the second pass ranker. I believe this would scale to other product applications: as long as we have first pass ranking that retrieves the right set of candidates with high recall, the candidate set for second pass ranking is much smaller. As for unifying the first pass, or retrieval phase, there are actually a few papers now on generative retrieval, but this work did not focus on that.
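The two-stage setup described in the answer can be sketched as follows, with toy stand-ins for both the retrieval heuristic and the second-pass scorer. The point is that the expensive unified ranker only ever sees a small, high-recall candidate set, which is why catalog size is not the limiting factor:

```python
# Sketch: a cheap first-pass retrieval narrows the catalog to a small,
# high-recall candidate set; only that set reaches the unified
# second-pass ranker. Both scoring functions are illustrative stand-ins.
def first_pass(catalog, context, k):
    # cheap heuristic retrieval, tuned for recall
    matches = [c for c in catalog if context["query"] in c]
    return matches[:k]

def second_pass(candidates, context):
    # expensive unified ranker runs only on the small candidate set;
    # sorting by length stands in for model scoring here
    return sorted(candidates, key=len)

catalog = [f"title {i}" for i in range(10_000)] + ["paris story", "paris"]
candidates = first_pass(catalog, {"query": "paris"}, k=100)
ranked = second_pass(candidates, {"query": "paris"})
```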




Java News Roundup: Jakarta NoSQL 1.0, Spring 7.0-M3, Maven 4.0-RC3, LangChain4j 1.0-beta2

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

This week’s Java roundup for March 10th, 2025 features news highlighting: OpenJDK JEPs targeted and proposed to target for JDK 25; the release of Jakarta NoSQL 1.0; the third milestone release of Spring Framework 7.0; the third release candidate of Maven 4.0; and the second beta release of LangChain4j 1.0.

OpenJDK

JEP 502, Stable Values (Preview), has been elevated from Proposed to Target to Targeted for JDK 25. Formerly known as Computed Constants (Preview), this JEP introduces the concept of computed constants, defined as immutable value holders that are initialized at most once. This offers the performance and safety benefits of final fields, while offering greater flexibility as to the timing of initialization.

JEP 503, Remove the 32-bit x86 Port, has been elevated from Candidate to Proposed to Target for JDK 25. This JEP proposes to “remove the source code and build support for the 32-bit x86 port.” This feature is a follow-up from JEP 501, Deprecate the 32-bit x86 Port for Removal, to be delivered in the upcoming release of JDK 24. The review is expected to conclude on March 18, 2025.

JDK 24

Build 36 remains the current build in the JDK 24 early-access builds. Further details may be found in the release notes.

JDK 25

Build 14 of the JDK 25 early-access builds was also made available this past week featuring updates from Build 13 that include fixes for various issues. More details on this release may be found in the release notes.

For JDK 24 and JDK 25, developers are encouraged to report bugs via the Java Bug Database.

GlassFish

GlassFish 7.0.23, the twenty-third maintenance release, delivers bug fixes, dependency upgrades and improvements such as: SSH managed node connections on both Linux and Windows environments; and support for the org.glassfish.envPreferredToProperties system property that, when set to true, allows environment variables to take precedence when resolving variable references in JVM options. Further details on this release may be found in the release notes.

Jakarta EE

In his weekly Hashtag Jakarta EE blog, Ivar Grimstad, Jakarta EE Developer Advocate at the Eclipse Foundation, provided an update on Jakarta EE 11, writing:

Jakarta NoSQL 1.0 has passed its release review and is now publicly available. This is a major milestone for the project. Congrats to the team!

The Jakarta EE 11 Web Profile is as good as ready for the release review ballot to start. The final version of the TCK has been staged, and Eclipse GlassFish passes it on both JDK 17 and JDK 21. I expect the ballot to start early next week, as soon as all the materials have been gathered.

The release of the Jakarta NoSQL 1.0 specification features notable changes such as: an improved Template interface that increases productivity on NoSQL operations; the removal of the Document, Key-Value and Column Family APIs as they are now maintained in the Jakarta Data specifications; and the addition of new annotations, @MappedSuperclass, @Embeddable, @Inheritance, @DiscriminatorColumn and @DiscriminatorValue for improved support of NoSQL databases. More details on this release may be found in the changelog.

The road to Jakarta EE 11 included four milestone releases, the release of the Core Profile in December 2024, and the potential for release candidates as necessary before the GA releases of the Web Profile in 1Q 2025 and the Platform in 2Q 2025.

Spring Framework

The third milestone release of Spring Framework 7.0.0 delivers bug fixes, improvements in documentation, dependency upgrades and new features such as: first-class support for registering an instance of the GenericApplicationContext class via the new BeanRegistrar interface; and support for the Java Optional class with null-safety and Elvis operators defined in the Spring Expression Language (SpEL). Further details on this release may be found in the release notes.

Similarly, the release of Spring Framework 6.2.4 and 6.1.18 ship with bug fixes, improvements in documentation, dependency upgrades and new features such as: avoid unnecessary CGLIB processing on classes annotated with @Configuration that do not declare, or inherit, any instance-level methods annotated with @Bean; and improvements to the BeanFactory and ObjectProvider interfaces to select only one default candidate among non-default candidates if the bean name is volatile or not visible to application. More details on these releases may be found in the release notes for version 6.2.4 and version 6.1.18.

The second milestone release of Spring Data 2025.0.0, also known as Spring Data 3.5.0, provides new features such as: Interface Projections are now properly throwing a NullPointerException if a getter method return value is null even if the method is defined to return a non-nullable value; and allow the use of bean validation callbacks in reactive flows with the Spring Data MongoDB ValidatingEntityCallback and ReactiveValidatingEntityCallback classes. Further details on this release may be found in the release notes.

Four days after the release of version 0.4.0, the release of Spring gRPC 0.5.0 provides notable changes such as: the addition of a Spring Boot compatibility check workflow; and a fix in the docs.yml workflow to add the package command. More details on this release may be found in the release notes.

Open Liberty

IBM has released version 25.0.0.3-beta of Open Liberty featuring compliance with FIPS 140-3, Security Requirements for Cryptographic Modules, for the IBM SDK, Java Technology Edition 8.

LangChain4j

The second beta release of LangChain4j 1.0.0 provides notable changes such as: a migration to using the Java HttpClient class as a first step towards decoupling modules from the OkHttpClient class; and support for the OpenAI Java Library. Breaking changes include: removal of the deprecated generate() and onNext()/onComplete() methods in the ChatLanguageModel and TokenStream interfaces, respectively. Further details on this release may be found in the release notes.

Micrometer

The third milestone release of Micrometer Metrics 1.15.0 delivers bug fixes, dependency upgrades and new features such as: allow the TimedAspect and CountedAspect classes to inject a Java Function interface to create tags based on method result; and improvements to the OtlpMetricsSender interface that removes a possible inconsistency where the sender could be given an instance of the OtlpConfig interface that differs from the one passed to OtlpMeterRegistry class. More details on these releases may be found in the release notes.

The third milestone release of Micrometer Tracing 1.5.0 provides notable dependency upgrades such as: Micrometer Metrics 1.14.5; Zipkin Brave 6.1.0; and Testcontainers for Java 1.20.6. Further details on this release may be found in the release notes.

Piranha Cloud

The release of Piranha 25.3.0 delivers bug fixes, dependency upgrades, improvements in documentation and notable changes such as: support for JDK 24 in the experimental workflow; and various Jakarta EE Core Profile TCK certifications for the Piranha Core Profile. More details on this release may be found in the release notes, documentation and issue tracker.

Project Reactor

The first milestone release of Project Reactor 2025.0.0 provides dependency upgrades to reactor-core 3.8.0-M1, reactor-netty 1.3.0-M1 and reactor-pool 1.2.0-M1. There was also a realignment to version 2025.0.0-M1 with the reactor-addons 3.5.2, reactor-kotlin-extensions 1.2.3 and reactor-kafka 1.3.23 artifacts that remain unchanged. Further details on this release may be found in the release notes.

Similarly, Project Reactor 2024.0.4, the fourth maintenance release, provides dependency upgrades to reactor-core 3.7.4 and reactor-netty 1.2.4. There was also a realignment to version 2024.0.4 with the reactor-addons 3.5.2, reactor-pool 1.1.2, reactor-kotlin-extensions 1.2.3 and reactor-kafka 1.3.23 artifacts that remain unchanged. More details on this release may be found in the changelog.

Maven

The third release candidate of Maven 4.0.0 ships with notable changes such as: a migration from the Java EE 8 javax.inject package to Maven Dependency Injection; support for ${project.rootDirectory} property in GitHub repositories; and improve validation error messages and removal of direct support for ${project.baseUri} property in the DefaultModelValidator class. Further details on this release may be found in the release notes.

About the Author



Swiss National Bank Trims Holdings in MongoDB, Inc. (NASDAQ:MDB) – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Swiss National Bank cut its stake in MongoDB, Inc. (NASDAQ:MDB) by 4.1% in the fourth quarter, according to the company in its most recent filing with the Securities and Exchange Commission. The firm owned 208,700 shares of the company’s stock after selling 9,000 shares during the period. Swiss National Bank owned approximately 0.28% of MongoDB worth $48,587,000 as of its most recent filing with the Securities and Exchange Commission.

Several other hedge funds and other institutional investors have also recently modified their holdings of MDB. Jennison Associates LLC boosted its stake in shares of MongoDB by 23.6% during the 3rd quarter. Jennison Associates LLC now owns 3,102,024 shares of the company’s stock worth $838,632,000 after buying an additional 592,038 shares during the last quarter. Raymond James Financial Inc. acquired a new position in MongoDB during the fourth quarter valued at approximately $90,478,000. Amundi increased its position in shares of MongoDB by 86.2% during the 4th quarter. Amundi now owns 693,740 shares of the company’s stock worth $172,519,000 after purchasing an additional 321,186 shares during the last quarter. Assenagon Asset Management S.A. raised its stake in shares of MongoDB by 11,057.0% during the 4th quarter. Assenagon Asset Management S.A. now owns 296,889 shares of the company’s stock worth $69,119,000 after purchasing an additional 294,228 shares in the last quarter. Finally, Avala Global LP acquired a new stake in shares of MongoDB in the 3rd quarter valued at approximately $47,960,000. Institutional investors and hedge funds own 89.29% of the company’s stock.

Insider Activity

In other news, CAO Thomas Bull sold 169 shares of the company’s stock in a transaction dated Thursday, January 2nd. The shares were sold at an average price of $234.09, for a total transaction of $39,561.21. Following the transaction, the chief accounting officer now owns 14,899 shares in the company, valued at approximately $3,487,706.91. This represents a 1.12% decrease in their ownership of the stock. The sale was disclosed in a document filed with the Securities & Exchange Commission. Also, CFO Michael Lawrence Gordon sold 1,245 shares of the company’s stock in a transaction that occurred on Thursday, January 2nd. The stock was sold at an average price of $234.09, for a total value of $291,442.05. Following the transaction, the chief financial officer now directly owns 79,062 shares in the company, valued at approximately $18,507,623.58. This trade represents a 1.55% decrease in their ownership of the stock. Insiders have sold 44,314 shares of company stock valued at $11,642,583 in the last ninety days. 3.60% of the stock is owned by company insiders.

MongoDB Stock Performance

Shares of MDB stock traded up $7.68 on Monday, hitting $193.05. The company had a trading volume of 2,039,974 shares, compared to its average volume of 1,683,732. MongoDB, Inc. has a twelve month low of $173.13 and a twelve month high of $387.19. The firm has a market cap of $14.38 billion, a P/E ratio of -70.46 and a beta of 1.30. The stock’s 50-day moving average is $256.72 and its two-hundred day moving average is $272.55.

MongoDB (NASDAQ:MDB) last released its quarterly earnings results on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing the consensus estimate of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The firm had revenue of $548.40 million for the quarter, compared to the consensus estimate of $519.65 million. During the same quarter in the previous year, the business posted $0.86 EPS. Equities research analysts predict that MongoDB, Inc. will post -1.78 EPS for the current year.

Analyst Ratings Changes

Several research firms have issued reports on MDB. China Renaissance initiated coverage on shares of MongoDB in a research report on Tuesday, January 21st. They set a “buy” rating and a $351.00 price target on the stock. Monness Crespi & Hardt upgraded MongoDB from a “sell” rating to a “neutral” rating in a report on Monday, March 3rd. Canaccord Genuity Group lowered their price target on MongoDB from $385.00 to $320.00 and set a “buy” rating on the stock in a research report on Thursday, March 6th. Truist Financial reduced their target price on MongoDB from $400.00 to $300.00 and set a “buy” rating for the company in a research note on Thursday, March 6th. Finally, Barclays dropped their price target on MongoDB from $330.00 to $280.00 and set an “overweight” rating on the stock in a research note on Thursday, March 6th. One analyst has rated the stock with a sell rating, seven have given a hold rating and twenty-three have issued a buy rating to the company’s stock. According to data from MarketBeat.com, the company presently has an average rating of “Moderate Buy” and an average target price of $319.87.

View Our Latest Report on MDB

MongoDB Profile


MongoDB, Inc., together with its subsidiaries, provides a general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Stories

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)



Article originally posted on mongodb google news. Visit mongodb google news



Atria Investments Inc Sells 686 Shares of MongoDB, Inc. (NASDAQ:MDB) – Defense World

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Atria Investments Inc decreased its stake in MongoDB, Inc. (NASDAQ:MDB) by 31.5% during the 4th quarter, according to the company in its most recent disclosure with the SEC. The firm owned 1,489 shares of the company’s stock after selling 686 shares during the period. Atria Investments Inc’s holdings in MongoDB were worth $347,000 as of its most recent filing with the SEC.

A number of other hedge funds and other institutional investors have also recently added to or reduced their stakes in MDB. Jennison Associates LLC increased its holdings in MongoDB by 23.6% in the 3rd quarter. Jennison Associates LLC now owns 3,102,024 shares of the company’s stock worth $838,632,000 after acquiring an additional 592,038 shares during the last quarter. Geode Capital Management LLC boosted its position in MongoDB by 2.9% during the third quarter. Geode Capital Management LLC now owns 1,230,036 shares of the company’s stock worth $331,776,000 after purchasing an additional 34,814 shares during the period. Westfield Capital Management Co. LP increased its stake in shares of MongoDB by 1.5% in the third quarter. Westfield Capital Management Co. LP now owns 496,248 shares of the company’s stock worth $134,161,000 after purchasing an additional 7,526 shares in the last quarter. Holocene Advisors LP raised its position in shares of MongoDB by 22.6% in the third quarter. Holocene Advisors LP now owns 362,603 shares of the company’s stock valued at $98,030,000 after purchasing an additional 66,730 shares during the period. Finally, Assenagon Asset Management S.A. lifted its stake in shares of MongoDB by 11,057.0% during the 4th quarter. Assenagon Asset Management S.A. now owns 296,889 shares of the company’s stock valued at $69,119,000 after buying an additional 294,228 shares in the last quarter. 89.29% of the stock is currently owned by hedge funds and other institutional investors.

Analyst Ratings Changes

A number of research firms recently weighed in on MDB. Monness Crespi & Hardt upgraded shares of MongoDB from a “sell” rating to a “neutral” rating in a research note on Monday, March 3rd. The Goldman Sachs Group reduced their price target on MongoDB from $390.00 to $335.00 and set a “buy” rating for the company in a research report on Thursday, March 6th. Guggenheim upgraded MongoDB from a “neutral” rating to a “buy” rating and set a $300.00 price objective on the stock in a research note on Monday, January 6th. Bank of America cut their target price on MongoDB from $420.00 to $286.00 and set a “buy” rating for the company in a research note on Thursday, March 6th. Finally, China Renaissance started coverage on MongoDB in a research report on Tuesday, January 21st. They set a “buy” rating and a $351.00 price target on the stock. One research analyst has rated the stock with a sell rating, seven have assigned a hold rating and twenty-three have given a buy rating to the company’s stock. Based on data from MarketBeat, MongoDB currently has an average rating of “Moderate Buy” and an average target price of $319.87.


Check Out Our Latest Report on MDB

Insiders Place Their Bets

In related news, Director Dwight A. Merriman sold 885 shares of the firm’s stock in a transaction dated Tuesday, February 18th. The stock was sold at an average price of $292.05, for a total value of $258,464.25. Following the sale, the director now directly owns 83,845 shares of the company’s stock, valued at $24,486,932.25. This trade represents a 1.04% decrease in their ownership of the stock. The transaction was disclosed in a document filed with the Securities & Exchange Commission, which is available through this link. Also, Director Hope F. Cochran sold 1,175 shares of the business’s stock in a transaction dated Tuesday, December 17th. The shares were sold at an average price of $266.99, for a total value of $313,713.25. Following the transaction, the director now owns 17,570 shares in the company, valued at approximately $4,691,014.30. This represents a 6.27% decrease in their position. The disclosure for this sale can be found here. Insiders have sold 44,314 shares of company stock valued at $11,642,583 in the last ninety days. Company insiders own 3.60% of the company’s stock.

MongoDB Stock Performance

MDB stock opened at $185.37 on Monday. The firm has a market cap of $13.80 billion, a P/E ratio of -67.65 and a beta of 1.30. The firm’s fifty day moving average is $256.72 and its 200 day moving average is $272.55. MongoDB, Inc. has a 52 week low of $173.13 and a 52 week high of $387.19.

MongoDB (NASDAQ:MDB - Get Free Report) last announced its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). The company had revenue of $548.40 million for the quarter, compared to the consensus estimate of $519.65 million. MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. During the same period last year, the business posted $0.86 earnings per share. As a group, analysts forecast that MongoDB, Inc. will post -1.78 EPS for the current year.

MongoDB Profile

(Free Report)

MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Recommended Stories

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)




Article originally posted on mongodb google news. Visit mongodb google news



Steward Partners Investment Advisory LLC Increases Stock Position in MongoDB, Inc …

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Steward Partners Investment Advisory LLC grew its position in MongoDB, Inc. (NASDAQ:MDB - Free Report) by 12.9% in the 4th quarter, according to the company in its most recent Form 13F filing with the Securities and Exchange Commission (SEC). The fund owned 1,168 shares of the company’s stock after purchasing an additional 133 shares during the period. Steward Partners Investment Advisory LLC’s holdings in MongoDB were worth $272,000 at the end of the most recent reporting period.

Several other institutional investors and hedge funds also recently made changes to their positions in the company. Janney Montgomery Scott LLC purchased a new stake in MongoDB during the third quarter worth about $861,000. Principal Financial Group Inc. increased its holdings in MongoDB by 2.7% during the third quarter. Principal Financial Group Inc. now owns 6,095 shares of the company’s stock worth $1,648,000 after buying an additional 160 shares during the last quarter. Atria Investments Inc increased its holdings in MongoDB by 6.6% during the third quarter. Atria Investments Inc now owns 2,175 shares of the company’s stock worth $588,000 after buying an additional 135 shares during the last quarter. Versor Investments LP purchased a new stake in MongoDB during the third quarter worth about $404,000. Finally, GSA Capital Partners LLP increased its holdings in MongoDB by 38.0% during the third quarter. GSA Capital Partners LLP now owns 1,598 shares of the company’s stock worth $432,000 after buying an additional 440 shares during the last quarter. Hedge funds and other institutional investors own 89.29% of the company’s stock.

Insider Buying and Selling

In related news, CFO Michael Lawrence Gordon sold 5,000 shares of MongoDB stock in a transaction on Monday, December 16th. The shares were sold at an average price of $267.85, for a total value of $1,339,250.00. Following the sale, the chief financial officer now owns 80,307 shares in the company, valued at $21,510,229.95. The trade was a 5.86% decrease in their ownership of the stock. The transaction was disclosed in a legal filing with the SEC, which is available at this link. Also, insider Cedric Pech sold 287 shares of the business’s stock in a transaction on Thursday, January 2nd. The shares were sold at an average price of $234.09, for a total transaction of $67,183.83. Following the sale, the insider now owns 24,390 shares in the company, valued at approximately $5,709,455.10. This trade represents a 1.16% decrease in their position. The disclosure for this sale can be found here. Insiders have sold a total of 49,314 shares of company stock valued at $12,981,833 over the last three months. 3.60% of the stock is owned by company insiders.

Analysts Set New Price Targets


A number of analysts recently issued reports on the stock. Loop Capital decreased their price objective on shares of MongoDB from $400.00 to $350.00 and set a “buy” rating for the company in a report on Monday, March 3rd. Morgan Stanley cut their target price on shares of MongoDB from $350.00 to $315.00 and set an “overweight” rating for the company in a research report on Thursday, March 6th. Truist Financial cut their target price on shares of MongoDB from $400.00 to $300.00 and set a “buy” rating for the company in a research report on Thursday, March 6th. KeyCorp lowered shares of MongoDB from a “strong-buy” rating to a “hold” rating in a research report on Wednesday, March 5th. Finally, China Renaissance began coverage on shares of MongoDB in a research report on Tuesday, January 21st. They set a “buy” rating and a $351.00 target price for the company. One analyst has rated the stock with a sell rating, seven have issued a hold rating and twenty-three have assigned a buy rating to the stock. According to data from MarketBeat.com, MongoDB currently has an average rating of “Moderate Buy” and a consensus price target of $319.87.

View Our Latest Report on MongoDB

MongoDB Price Performance

Shares of MDB opened at $185.37 on Friday. The firm’s 50 day simple moving average is $256.72 and its 200 day simple moving average is $272.06. The company has a market capitalization of $13.80 billion, a P/E ratio of -67.65 and a beta of 1.30. MongoDB, Inc. has a 1-year low of $173.13 and a 1-year high of $387.19.

MongoDB (NASDAQ:MDB - Get Free Report) last posted its quarterly earnings results on Wednesday, March 5th. The company reported $0.19 EPS for the quarter, missing the consensus estimate of $0.64 by ($0.45). The business had revenue of $548.40 million for the quarter, compared to analysts’ expectations of $519.65 million. MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. During the same period last year, the business posted $0.86 earnings per share. On average, equities analysts anticipate that MongoDB, Inc. will post -1.78 earnings per share for the current year.

MongoDB Company Profile

(Free Report)

MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Articles

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB - Free Report).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)




Article originally posted on mongodb google news. Visit mongodb google news



.NET 10 Preview 1: Updates in Runtime, SDK, Frameworks and More

MMS Founder
MMS Almir Vuk

Article originally posted on InfoQ. Visit InfoQ

At the end of February, .NET 10 Preview 1 was released, bringing several major updates and improvements across the platform. This first preview introduces enhancements to the .NET Runtime, SDK, libraries, C#, ASP.NET Core, Blazor, .NET MAUI, and more.

ASP.NET Core in .NET 10 now supports OpenAPI 3.1. This allows developers to generate OpenAPI documents with better integration for JSON Schema draft 2020-12. The release also introduces a simplified method for configuring OpenAPI versions. However, there are breaking changes that require updates to applications using document transformers.

Another noteworthy improvement is the ability to serve OpenAPI documents in YAML format. This provides a more concise alternative to JSON, helping developers manage longer descriptions more efficiently.

Regarding the future of this support, the .NET team states the following:

Support for YAML is currently only available when served at runtime from the OpenAPI endpoint. Support for generating OpenAPI documents in YAML format at build time will be added in a future preview.
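As a rough sketch of the runtime side (assuming a minimal ASP.NET Core app using the built-in Microsoft.AspNetCore.OpenApi package; the route string is illustrative), serving YAML reportedly comes down to mapping the OpenAPI endpoint with a `.yaml` suffix:

```csharp
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddOpenApi(); // register OpenAPI document generation

var app = builder.Build();

// Requesting the document with a ".yaml" suffix serves YAML instead of JSON.
app.MapOpenApi("/openapi/{documentName}.yaml");

app.Run();
```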

Additional updates include response descriptions for ProducesResponseType, URL validation with RedirectHttpResult.IsLocalUrl, and improved integration testing for applications using top-level statements.

Moving on to Blazor, QuickGrid now includes a RowClass parameter for conditional styling. Additionally, Blazor scripts are now served as static web assets with improved precompression. This change significantly reduces file sizes, offering a more efficient experience for developers.

As reported, .NET MAUI Preview 1 focuses on quality improvements for iOS, Mac Catalyst, Android, and other platforms. Notably, CollectionView handlers for iOS and Mac Catalyst are now enabled by default, improving both performance and stability.

The release also brings support for Android 16 (Baklava) Beta 1. It introduces new recommendations for the minimum supported Android API, now set to API 24. Additionally, JDK 21 support has been added. .NET Android projects can now run using the dotnet run command, simplifying the development process. Trimmer warnings are enabled by default for iOS, macOS, and tvOS applications, prompting developers to address potential trimming issues in their code.

On the database side, Entity Framework Core 10 Preview 1 introduces several new features. First, there is first-class LINQ support for the LeftJoin operator, simplifying queries that previously required complex LINQ constructs. The release also makes working with ExecuteUpdateAsync easier by supporting regular non-expression lambdas.
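To illustrate the simplification, here is a sketch using hypothetical Customer and Order entities (the entity names, the DbContext, and the exact LeftJoin signature are assumptions, not taken from the release notes):

```csharp
// Hypothetical Customer/Order entities and DbContext; only the
// LeftJoin call itself reflects the new first-class operator.

// Before EF Core 10: a left join needed GroupJoin + SelectMany + DefaultIfEmpty.
var before = context.Customers
    .GroupJoin(context.Orders,
               c => c.Id, o => o.CustomerId,
               (c, orders) => new { c, orders })
    .SelectMany(x => x.orders.DefaultIfEmpty(),
                (x, o) => new { x.c.Name, OrderId = (int?)o.Id });

// EF Core 10 preview: the same query with the new LeftJoin operator,
// where the inner element may be null for unmatched rows.
var after = context.Customers
    .LeftJoin(context.Orders,
              c => c.Id, o => o.CustomerId,
              (c, o) => new { c.Name, OrderId = (int?)o.Id });
```

Both forms are intended to translate to the same SQL LEFT JOIN; the second simply spells out the intent directly.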

Other optimizations include improvements to SQL Server scaffolding, date/time function translation, and performance for Count operations on ICollection. Additionally, smaller improvements address optimizations for MIN/MAX over DISTINCT and better handling of multiple consecutive LIMIT operations.

In C# 14, several new features have been added. One of the key updates is support for field-backed properties, providing a smoother path for developers transitioning from auto-implemented to custom properties. The nameof expression now supports unbound generics. Implicit conversions for Span&lt;T&gt; and ReadOnlySpan&lt;T&gt; also make working with these types more intuitive.
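A brief sketch of the first two features (preview-compiler syntax; the class and property names are illustrative):

```csharp
using System;
using System.Collections.Generic;

class Sketch
{
    // C# 14 preview: the contextual `field` keyword refers to the
    // compiler-generated backing field, so a custom accessor can be
    // added without declaring the field by hand.
    public string Message
    {
        get;
        set => field = value ?? throw new ArgumentNullException(nameof(value));
    }

    static void Main()
    {
        // nameof now also accepts unbound generic types.
        Console.WriteLine(nameof(List<>)); // prints "List"
    }
}
```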

Moreover, lambda expressions can now include parameter modifiers like ref and in without specifying parameter types. An experimental feature allows developers to change how string literals are emitted into PE files, offering potential performance benefits. The team states the following:

By turning on the feature flag, string literals (where possible) are emitted as UTF-8 data into a different section of the PE file without a data limit. The emit format is similar to explicit UTF-8 string literals.
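The lambda-modifier change can be sketched as follows (preview-compiler syntax; the delegate name is illustrative):

```csharp
using System;

class LambdaModifiers
{
    // Illustrative delegate type with an `out` parameter.
    delegate bool TryParseHandler(string text, out int result);

    static void Main()
    {
        // C# 14 preview: `out` may appear on a lambda parameter
        // without spelling out its type.
        TryParseHandler parse = (text, out result) => int.TryParse(text, out result);

        Console.WriteLine(parse("42", out var n) ? n : -1); // prints 42
    }
}
```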

Lastly, the .NET team invited viewers to watch an unboxing video discussing what is new in the preview release, featuring live demos from the dev team. The video is now available on demand, and for a complete overview, readers can explore the full release notes and dive into additional details about the first preview of .NET 10.

About the Author



MongoDB, Inc. (NASDAQ:MDB) Receives Average Recommendation of “Moderate Buy …

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Shares of MongoDB, Inc. (NASDAQ:MDB - Get Free Report) have been assigned an average recommendation of “Moderate Buy” from the thirty-one ratings firms that are covering the firm, Marketbeat reports. One investment analyst has rated the stock with a sell rating, seven have issued a hold rating and twenty-three have assigned a buy rating to the company. The average twelve-month price objective among brokerages that have issued ratings on the stock in the last year is $319.87.

Several equities analysts have issued reports on the company. Rosenblatt Securities reiterated a “buy” rating and issued a $350.00 price objective on shares of MongoDB in a research report on Tuesday, March 4th. DA Davidson lifted their price target on MongoDB from $340.00 to $405.00 and gave the company a “buy” rating in a report on Tuesday, December 10th. JMP Securities reissued a “market outperform” rating and set a $380.00 price objective on shares of MongoDB in a report on Wednesday, December 11th. Tigress Financial boosted their target price on shares of MongoDB from $400.00 to $430.00 and gave the stock a “buy” rating in a research note on Wednesday, December 18th. Finally, Morgan Stanley reduced their target price on shares of MongoDB from $350.00 to $315.00 and set an “overweight” rating on the stock in a research report on Thursday, March 6th.

Get Our Latest Research Report on MongoDB

Insiders Place Their Bets

In related news, CEO Dev Ittycheria sold 2,581 shares of the business’s stock in a transaction on Thursday, January 2nd. The shares were sold at an average price of $234.09, for a total value of $604,186.29. Following the sale, the chief executive officer now directly owns 217,294 shares of the company’s stock, valued at approximately $50,866,352.46. The trade was a 1.17% decrease in their position. The transaction was disclosed in a legal filing with the SEC, which can be accessed through this hyperlink. Also, insider Cedric Pech sold 287 shares of the company’s stock in a transaction on Thursday, January 2nd. The stock was sold at an average price of $234.09, for a total transaction of $67,183.83. Following the completion of the transaction, the insider now directly owns 24,390 shares of the company’s stock, valued at approximately $5,709,455.10. The trade was a 1.16% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders sold 49,314 shares of company stock worth $12,981,833 over the last ninety days. 3.60% of the stock is currently owned by corporate insiders.

Institutional Trading of MongoDB

A number of hedge funds and other institutional investors have recently made changes to their positions in the business. Norges Bank bought a new position in MongoDB in the fourth quarter valued at $189,584,000. Jennison Associates LLC boosted its stake in shares of MongoDB by 23.6% during the 3rd quarter. Jennison Associates LLC now owns 3,102,024 shares of the company’s stock worth $838,632,000 after buying an additional 592,038 shares during the last quarter. Marshall Wace LLP bought a new position in shares of MongoDB in the 4th quarter valued at about $110,356,000. Raymond James Financial Inc. acquired a new stake in shares of MongoDB in the fourth quarter valued at about $90,478,000. Finally, D1 Capital Partners L.P. bought a new stake in MongoDB during the fourth quarter worth about $76,129,000. 89.29% of the stock is currently owned by institutional investors and hedge funds.

MongoDB Stock Performance

MDB opened at $180.32 on Tuesday. The firm has a market cap of $13.43 billion, a price-to-earnings ratio of -65.81 and a beta of 1.30. The firm’s 50-day simple moving average is $260.61 and its 200-day simple moving average is $274.00. MongoDB has a twelve month low of $173.13 and a twelve month high of $387.19.

MongoDB (NASDAQ:MDB - Get Free Report) last issued its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share for the quarter, missing the consensus estimate of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The firm had revenue of $548.40 million for the quarter, compared to the consensus estimate of $519.65 million. During the same period in the prior year, the company posted $0.86 earnings per share. As a group, sell-side analysts anticipate that MongoDB will post -1.78 EPS for the current fiscal year.

About MongoDB

(Get Free Report)

MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Further Reading

Analyst Recommendations for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.


Article originally posted on mongodb google news. Visit mongodb google news



US Bancorp DE Buys 321 Shares of MongoDB, Inc. (NASDAQ:MDB) – Defense World

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

US Bancorp DE raised its position in MongoDB, Inc. (NASDAQ:MDB - Free Report) by 8.3% during the fourth quarter, according to its most recent filing with the SEC. The firm owned 4,190 shares of the company’s stock after purchasing an additional 321 shares during the period. US Bancorp DE’s holdings in MongoDB were worth $975,000 as of its most recent filing with the SEC.

Other institutional investors also recently modified their holdings of the company. Hilltop National Bank lifted its position in shares of MongoDB by 47.2% during the 4th quarter. Hilltop National Bank now owns 131 shares of the company’s stock valued at $30,000 after acquiring an additional 42 shares during the period. Brooklyn Investment Group acquired a new stake in MongoDB in the third quarter valued at approximately $36,000. Continuum Advisory LLC grew its holdings in shares of MongoDB by 621.1% during the third quarter. Continuum Advisory LLC now owns 137 shares of the company’s stock valued at $40,000 after buying an additional 118 shares during the last quarter. NCP Inc. bought a new stake in MongoDB during the 4th quarter worth approximately $35,000. Finally, Wilmington Savings Fund Society FSB purchased a new stake in MongoDB in the 3rd quarter valued at $44,000. 89.29% of the stock is owned by institutional investors.

Insider Buying and Selling

In other news, insider Cedric Pech sold 287 shares of the stock in a transaction that occurred on Thursday, January 2nd. The stock was sold at an average price of $234.09, for a total value of $67,183.83. Following the completion of the transaction, the insider now owns 24,390 shares of the company’s stock, valued at approximately $5,709,455.10. This represents a 1.16% decrease in their position. The sale was disclosed in a legal filing with the Securities & Exchange Commission, which is accessible through the SEC website. Also, Director Dwight A. Merriman sold 3,000 shares of MongoDB stock in a transaction that occurred on Monday, March 3rd. The shares were sold at an average price of $270.63, for a total value of $811,890.00. Following the sale, the director now owns 1,109,006 shares in the company, valued at $300,130,293.78. This represents a 0.27% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders have sold 49,314 shares of company stock valued at $12,981,833 over the last ninety days. 3.60% of the stock is owned by insiders.

Analyst Upgrades and Downgrades


Several research analysts recently weighed in on MDB shares. DA Davidson boosted their price target on shares of MongoDB from $340.00 to $405.00 and gave the stock a “buy” rating in a report on Tuesday, December 10th. KeyCorp downgraded shares of MongoDB from a “strong-buy” rating to a “hold” rating in a report on Wednesday, March 5th. Rosenblatt Securities reissued a “buy” rating and set a $350.00 target price on shares of MongoDB in a research report on Tuesday, March 4th. Citigroup boosted their price objective on MongoDB from $400.00 to $430.00 and gave the company a “buy” rating in a research note on Monday, December 16th. Finally, Cantor Fitzgerald assumed coverage on shares of MongoDB in a report on Wednesday. They issued an “overweight” rating and a $344.00 price objective for the company. One investment analyst has rated the stock with a sell rating, seven have assigned a hold rating and twenty-three have given a buy rating to the stock. According to data from MarketBeat, the company currently has a consensus rating of “Moderate Buy” and a consensus price target of $319.87.

Check Out Our Latest Analysis on MongoDB

MongoDB Stock Down 2.8%

MongoDB stock opened at $187.65 on Monday. The firm has a fifty day simple moving average of $261.68 and a two-hundred day simple moving average of $274.47. MongoDB, Inc. has a twelve month low of $181.05 and a twelve month high of $387.19. The company has a market capitalization of $13.97 billion, a PE ratio of -68.49 and a beta of 1.30.

MongoDB (NASDAQ:MDB - Get Free Report) last posted its quarterly earnings results on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing the consensus estimate of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The company had revenue of $548.40 million for the quarter, compared to the consensus estimate of $519.65 million. During the same quarter in the previous year, the business earned $0.86 earnings per share. As a group, equities analysts expect that MongoDB, Inc. will post -1.78 earnings per share for the current fiscal year.

MongoDB Profile

(Free Report)

MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Further Reading

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB - Free Report).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)




Article originally posted on mongodb google news. Visit mongodb google news



BerryComm expands fiber network in Central Indiana with Nokia technology – Technuter

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Nokia and BerryComm, a leading fiber-optic broadband provider in Central Indiana, announced the deployment of enhanced high-speed internet connectivity for thousands of homes and businesses. This initiative, powered by Nokia’s advanced optical networking technology, reinforces BerryComm’s mission to provide reliable, high-capacity broadband services to underserved communities.

The expansion utilizes Nokia’s 1830 Photonic Service Switch (PSS) with coherent optics and Reconfigurable Optical Add-Drop Multiplexer (ROADM) technologies, ensuring superior network scalability and reliability. This deployment allows BerryComm to maintain complete control over service quality while reducing dependence on external carriers for last-mile connectivity.

Beyond residential customers, the enhanced network supports businesses with mission-critical connectivity solutions, ensuring maximum uptime and operational efficiency. With this infrastructure, BerryComm can seamlessly scale to 100G and beyond as bandwidth demands continue to grow.


“The deployment of Nokia’s ROADM technology marks a significant milestone in our mission to bridge the digital divide across Central Indiana. This cutting-edge technology enhances our ability to deliver reliable, high-speed internet while positioning our network for future growth. We’re proud to partner with Nokia, a global leader in optical networking, to bring these transformative capabilities to the communities we serve,” said Cory Childs, President of BerryComm.

“Fiber internet can be life changing, so innovative service providers like BerryComm are key to reducing the digital divide in America. Nokia’s optical network portfolio enables rapid deployment of fiber to unconnected regions. We appreciate BerryComm’s trust in Nokia and look forward to future projects with them,” added Matt Young, Head of North American Enterprise Business at Nokia.

Article originally posted on mongodb google news. Visit mongodb google news
