
MMS • Bruno Couriol
Article originally posted on InfoQ. Visit InfoQ

Following up on plans to turn ESLint into a general-purpose linter, the ESLint team recently announced official support for the CSS language. The support comes in addition to recently added support for JSON and Markdown linting.
The built-in CSS linting rules include checking against duplicate @import rules, empty blocks, invalid at-rules, and invalid properties, as well as enforcing the use of @layer. CSS layers are a recent addition to the CSS standard (see CSS Cascading and Inheritance Level 5) that give designers more control over the cascade, so the resulting styles are applied predictably instead of being unexpectedly overridden by rules declared elsewhere. A 2020 survey by Scout APM found that developers spend over 5 hours per week on average debugging CSS issues, with cascade/specificity bugs being a major contributing factor.
However, the key lint rule addition is arguably the require-baseline rule, which lets developers specify which CSS features to check against, depending on their level and maturity of adoption across browsers.
Baseline is an effort by the W3C WebDX Community Group to document which features are available in four core browsers: Chrome (desktop and Android), Edge, Firefox (desktop and Android), and Safari (macOS and iOS). As part of this effort, Baseline tracks which CSS features are available in which browsers. Widely available features are those supported by all core browsers for at least 30 months. Newly available features are those supported by all core browsers for less than 30 months. An example of linter configuration for a widely available baseline is as follows:
import css from "@eslint/css";

export default [
    {
        files: ["src/css/**/*.css"],
        plugins: {
            css,
        },
        language: "css/css",
        rules: {
            "css/no-duplicate-imports": "error",
            "css/require-baseline": ["warn", {
                available: "widely"
            }]
        },
    },
];
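For illustration, a stylesheet that uses a feature Baseline classifies as newly available rather than widely available (for example, text-wrap: balance, which at the time of writing has been supported across the core browsers for less than 30 months) would presumably trigger a warning from the require-baseline rule under this configuration. The file path and selector below are hypothetical:

/* src/css/headings.css */
h1 {
    text-wrap: balance; /* newly available, so flagged when available is set to "widely" */
}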
CSS linting is accomplished using the @eslint/css plugin:
npm install @eslint/css -D
Developers are encouraged to review the release notes for a fuller account of the features included in the release, together with technical details and code samples.
ESLint is an OpenJS Foundation project and is available as open-source software under the MIT license. Contributions are welcome via the ESLint GitHub repository. Contributors should follow the ESLint contribution guidelines.

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
MongoDB, Inc. (NASDAQ:MDB – Get Free Report) was the recipient of unusually large options trading on Wednesday. Stock investors bought 23,831 put options on the stock. This represents an increase of 2,157% compared to the typical volume of 1,056 put options.
Wall Street Analysts Weigh In
Several analysts have recently weighed in on the stock. Tigress Financial increased their price target on shares of MongoDB from $400.00 to $430.00 and gave the company a “buy” rating in a research note on Wednesday, December 18th. Citigroup reissued a “buy” rating on shares of MongoDB in a research note on Thursday, March 6th. Robert W. Baird reduced their target price on MongoDB from $390.00 to $300.00 and set an “outperform” rating on the stock in a research report on Thursday, March 6th. Barclays lowered their price target on MongoDB from $330.00 to $280.00 and set an “overweight” rating for the company in a report on Thursday, March 6th. Finally, Wells Fargo & Company lowered shares of MongoDB from an “overweight” rating to an “equal weight” rating and reduced their price objective for the company from $365.00 to $225.00 in a report on Thursday, March 6th. Seven equities research analysts have rated the stock with a hold rating and twenty-three have assigned a buy rating to the company’s stock. Based on data from MarketBeat, the stock currently has a consensus rating of “Moderate Buy” and an average target price of $320.70.
Insider Buying and Selling
In related news, insider Cedric Pech sold 287 shares of the business's stock in a transaction dated Thursday, January 2nd. The shares were sold at an average price of $234.09, for a total transaction of $67,183.83. Following the transaction, the insider now directly owns 24,390 shares in the company, valued at approximately $5,709,455.10. The trade was a 1.16% decrease in their ownership of the stock. The transaction was disclosed in a legal filing with the SEC, which can be accessed through the SEC website. Also, CFO Michael Lawrence Gordon sold 1,245 shares of the firm's stock in a transaction that occurred on Thursday, January 2nd. The stock was sold at an average price of $234.09, for a total transaction of $291,442.05. Following the completion of the transaction, the chief financial officer now directly owns 79,062 shares in the company, valued at $18,507,623.58. This trade represents a 1.55% decrease in their ownership of the stock. The disclosure for this sale can be found here. Over the last ninety days, insiders sold 43,139 shares of company stock worth $11,328,869. 3.60% of the stock is owned by corporate insiders.
Hedge Funds Weigh In On MongoDB
Several large investors have recently bought and sold shares of MDB. 111 Capital acquired a new stake in shares of MongoDB in the fourth quarter valued at approximately $390,000. Lansforsakringar Fondforvaltning AB publ acquired a new stake in MongoDB during the 4th quarter worth $5,769,000. Centaurus Financial Inc. grew its position in MongoDB by 19.0% during the 4th quarter. Centaurus Financial Inc. now owns 2,499 shares of the company’s stock worth $582,000 after purchasing an additional 399 shares during the last quarter. Universal Beteiligungs und Servicegesellschaft mbH acquired a new position in MongoDB in the fourth quarter valued at $13,270,000. Finally, Azzad Asset Management Inc. ADV raised its holdings in shares of MongoDB by 17.7% in the fourth quarter. Azzad Asset Management Inc. ADV now owns 7,519 shares of the company’s stock valued at $1,750,000 after buying an additional 1,132 shares during the last quarter. Institutional investors and hedge funds own 89.29% of the company’s stock.
MongoDB Price Performance
Shares of NASDAQ MDB opened at $189.30 on Friday. MongoDB has a 1 year low of $173.13 and a 1 year high of $387.19. The company has a market capitalization of $14.10 billion, a price-to-earnings ratio of -69.09 and a beta of 1.30. The business has a 50-day moving average of $252.02 and a two-hundred day moving average of $270.33.
MongoDB (NASDAQ:MDB – Get Free Report) last announced its quarterly earnings results on Wednesday, March 5th. The company reported $0.19 EPS for the quarter, missing the consensus estimate of $0.64 by ($0.45). MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The company had revenue of $548.40 million during the quarter, compared to analysts’ expectations of $519.65 million. During the same quarter in the prior year, the company earned $0.86 earnings per share. As a group, equities research analysts expect that MongoDB will post -1.78 earnings per share for the current year.
About MongoDB
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
An investigation was announced for long-term investors in shares of MongoDB, Inc. (NASDAQ: MDB) concerning potential breaches of fiduciary duties by certain directors of MongoDB, Inc.
Current long-term investors in MongoDB, Inc. (NASDAQ: MDB) shares have certain options and should contact the Shareholders Foundation at mail@shareholdersfoundation.com or call +1 (858) 779-1554.
The investigation by a law firm on behalf of current long-term investors in NASDAQ: MDB stock follows a lawsuit filed against MongoDB, Inc. over alleged securities law violations. The investigation concerns whether certain MongoDB, Inc. directors are liable in connection with the allegations made in that lawsuit.
According to the complaint, filed in the U.S. District Court for the Southern District of New York, the plaintiff alleges that the Defendants made materially false and misleading statements and engaged in a scheme to deceive the market and a course of conduct that artificially inflated the price of MongoDB's common stock and operated as a fraud or deceit on purchasers of MongoDB, Inc. (NASDAQ: MDB) common shares between August 31, 2023 and May 30, 2024 by materially misleading the investing public.
Those who purchased shares of MongoDB, Inc. (NASDAQ: MDB) have certain options and should contact the Shareholders Foundation.
Contact:
Michael Daniels
Shareholders Foundation, Inc.
3111 Camino Del Rio North
Suite 423
San Diego, CA 92108
Tel: +1-(858)-779-1554
E-Mail: mail@shareholdersfoundation.com
About Shareholders Foundation, Inc.
The Shareholders Foundation, Inc. is a professional portfolio monitoring and settlement claim filing service, and an investor advocacy group, which does research related to shareholder issues and informs investors of securities lawsuits, settlements, judgments, and other legal related news to the stock/financial market. Shareholders Foundation, Inc. is in contact with a large number of shareholders and offers help, support, and assistance for every shareholder. The Shareholders Foundation, Inc. is not a law firm. Referenced cases, investigations, and/or settlements are not filed/initiated/reached and/or are not related to Shareholders Foundation. The information is provided as a public service. It is not intended as legal advice and should not be relied upon.
This release was published on openPR.

MMS • Ivan Burmistrov
Article originally posted on InfoQ. Visit InfoQ

Transcript
Burmistrov: I’ll start with a short story that happened to me. It was a normal working day. I came from work to join my wife and daughter for dinner. I’m really excited because the model our team has been working on has finally started showing promising results. I’m really eager to share this with my family. My wife is busy though, because she’s trying to feed our daughter who is a fast eater. She doesn’t want to talk about models or anything else at that point. Then, so excited I cannot wait, so I started telling her, “We had this model we built, and it wasn’t working. We’ve been debugging it for more than a month.
Finally, it turns out it was just stupid bugs, like typos, like a wrong feature name, this kind of stuff. We finally hunted enough of them down, so the model is now performing as we expected". I shared this with excitement and expected some reaction, but she's been busy. She barely listened, but she doesn't want to be rude, so she understands that she needs to do something, so she offers her comment, like, "Nice, this model behaves just like you do at times". I'm like, "What do you mean, how is it even relevant?" She said, "When you're mad at me, you don't simply tell me what's wrong. I have to spend so much time figuring out what's going on, just like you guys with this model". She's actually spot on.
Models are really hard to debug. If you build a model and it's underperforming, debugging it can be a nightmare. It can cost days, weeks, or months, like in our case. The least we can do to ease the pain is to ensure that the data we feed to the model is okay, so the data doesn't contain these bugs. This is what feature platforms aim to do: they aim to deliver good data to machine learning models. Feature platforms are what we will be discussing.
Background and Outline
My name is Ivan. I'm a staff engineer at ShareChat. ShareChat is an Indian company that builds a couple of social networks in India, the largest domestic social networks. I primarily focus on data for ML, and in particular on this feature platform. Before ShareChat, I had experience working at Meta and ScyllaDB. One of the social networks that ShareChat builds is called Moj. It's a short video app, pretty much like TikTok. It has a fully personalized feed, around 20 million daily active users, and 100 million monthly active users. The model in the story is actually the main model for this app, the main model that defines the ranking, that defines which videos we will show to users. We will be talking about how we built the feature platform for this app, or any app like it. We will cover some background about what a feature platform is and what features are, and the high-level architecture of any feature platform. The majority of time will be spent on challenges and how to overcome them, based on our examples. Finally, some takeaways.
Introduction to Feature Platform
Let's refresh our memory of what a feature is for ML. A feature is pretty much anything we can derive from data. Typically, it's attached to some entity in the system. For instance, it can be the user as an entity, or a post, or a creator, which is also a user, but the one who posts the content. There are multiple types of features one may be interested in. For instance, window counters: give me the number of likes for this post in the last 30 minutes, or give me the number of engagements from this user in the last one day. These are called window counters because they have a time window. There can be lifetime counters. Lifetime counters don't have windows, so the total number of likes for the given post, or the total number of engagements from the given user. There can be some properties, like date of birth for a user, or post language, this kind of stuff. Or something like last N: give me the last 100 interactions with this given post, or give me the last 1,000 engagements for the given user.
In this talk, we'll primarily focus on window features, which are somewhat the most interesting. To summarize, a feature platform is a set of services or tools that allows defining features via some feature API, listing features, and reading what they mean. It allows launching feature pipelines to compute them. Finally, there is a feature store to read the data when it's needed. One important aspect I wanted to stress is that it's really important that a feature platform helps engineering velocity, meaning that it doesn't get in the way. Instead, it allows for fast iteration. The people who build the model can easily define features, update them if needed, or read what they mean. Because when it comes to some kind of investigation, especially a model underperforming, let's say, it's really important that people can go and easily read: these are the features that we feed, and this is what they mean. It's pretty clear. It is not hidden somewhere in tens of files, in cryptic language or whatever.
Architecture – High-Level Overview of Feature Platform Components
The typical architecture for any feature platform starts with some streams of data. In a social network, these may be streams of likes, streams of views, video plays, this kind of stuff. There is some processing engine that gets these streams of data, computes something, and writes to some kind of database. Finally, in front of this database, there is some service that gets a request for features, transforms it into whatever queries the database requires, probably performs some last aggregations, and returns the result. Speaking about last aggregations, in particular for window counters, how do we serve a query like, give me the number of likes in the last 30 minutes or one day? Typically, the timeline is split into pre-aggregated buckets of different sizes, which we call tiles.
For instance, we can split the timeline into buckets of 1 minute, 30 minutes, and 1 day. When the request comes, let's say at 3:37 p.m., and we want the number of likes in the last two hours, we can cover this two-hour window with the relevant tiles, request those tiles from the database, get the response back, aggregate across the tiles, and return the result. It's a typical technique to power these window counters.
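As a rough illustration of this technique, here is a minimal sketch with illustrative names that covers a window with tiles of a single size and sums them; it is not ShareChat's code, and a real store would mix 1-minute, 30-minute, and 1-day tiles so that long windows need far fewer reads:

import java.time.Duration;
import java.time.Instant;

public class WindowCounter {

    /** Hypothetical lookup of one pre-aggregated tile; returns 0 when the tile is absent. */
    interface TileStore {
        long readTile(String entityId, String feature, Instant tileStart, Duration tileSize);
    }

    static long countInWindow(TileStore store, String entityId, String feature,
                              Instant now, Duration window) {
        Duration tile = Duration.ofMinutes(1);
        long tileMillis = tile.toMillis();
        // Align the start of the window down to a tile boundary, then sum the covering tiles.
        long start = (now.minus(window).toEpochMilli() / tileMillis) * tileMillis;
        long total = 0;
        for (long t = start; t < now.toEpochMilli(); t += tileMillis) {
            total += store.readTile(entityId, feature, Instant.ofEpochMilli(t), tile);
        }
        return total;
    }
}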
Challenges and Solutions: Story 1 (The Boring One)
That was it about architecture. Now let’s go to the challenges, solutions that I will be talking about in form of stories. The first story is a boring one. It’s about the choice of streaming platform and database. For streaming, we picked Redpanda. For database, we picked ScyllaDB. ScyllaDB and Redpanda, they are like siblings. They share a lot of similarities.
First, they were born with API compatibility with existing systems: Scylla was born with the Cassandra API, and later added a DynamoDB API. Redpanda is Kafka compatible. They're both obsessed with performance. They do this via the so-called shard-per-core architecture. Shard-per-core is when we split the data into shards, and each shard is processed by a given core in the system. This architecture allows eliminating synchronization overhead, locks, this kind of stuff. This helps them get to the performance numbers they desire. It's not easy to build applications using this shard-per-core technique. They both leverage the Seastar framework. It's an open-source framework built by the Scylla team that allows building these kinds of applications. Neither has autoscaling. That's actually not entirely true anymore for Redpanda; they just announced that they launched Redpanda serverless in one of their clouds. But if you install them in your own stack, they don't have autoscaling out of the box.
Despite that, they're extremely cost efficient. In our company, we use them not only for this feature platform, but actually migrated a lot of workloads: to Scylla we migrated from Bigtable, and to Redpanda we migrated from GCP Pub/Sub and in-house Kafka, and achieved really nice cost savings. They both have really cute mascots. If you work with the systems, you get to work with some swag, like this one. That's it for this story, because these systems are tragically boring. They just work. They really seem to deliver on the promise of operational ease, because when you run them, you don't need to tune them; they tend to get the best out of the given hardware. My view on this is a bit biased, because we use the managed versions. We tested the non-managed versions as well, and it's the same. We tested them, and we picked managed just because we didn't want to have a team behind this.
Story 2 (Being Naive)
Let's go to the less boring part. When it came to streaming and the database, there were a bunch of options. When it comes to the processing engine, especially at the moment when we made the decision, there were not so many options. In fact, there was only one option when you want real-time stream processing. What's important, we want real-time stream processing, and we want some expressive language for the stream processing.
Apache Flink is a framework that can process a stream of data in real time and has powerful capabilities. What's important, it has Flink SQL. One can define the job in SQL, and that's really important for a feature platform because of the property we want to deliver: it's easy to write features and, most importantly, easy to read them. We picked Flink. We can build features using this streaming SQL. What this query does is basically work on top of streaming data, select from it, and GROUP BY some entity ID and some identifier of the tile, because we need to aggregate these tiles. This query forms a so-called virtual table. Virtual, because it's not really materialized; it keeps updating as the data comes. For instance, some event comes, and now we've updated the shares and likes. Some new event comes, we update it again. This process continues. Super nice, we built this proof of concept pretty quickly, and we're happy. From a developer experience perspective, Flink is really nice and easy to work with in the developer workflow.
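A feature definition in this style might look roughly like the following Flink SQL. This is an illustrative sketch; the table, columns, and the 1-minute tile are assumptions rather than ShareChat's actual schema:

-- One virtual-table row per (post, 1-minute tile), continuously updated as events arrive.
SELECT
    post_id                                            AS entity_id,
    FLOOR(event_time TO MINUTE)                        AS tile_start,
    SUM(CASE WHEN action = 'like'  THEN 1 ELSE 0 END)  AS likes,
    SUM(CASE WHEN action = 'share' THEN 1 ELSE 0 END)  AS shares
FROM engagement_events
GROUP BY post_id, FLOOR(event_time TO MINUTE);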
Then the question comes: we build this query, we launch the job, it's running, it's updating this virtual table, all good. Now we want to update the query. Flink has these concepts called savepoints and checkpoints. Basically, because it's stateful processing, it has some state. What a savepoint does is that we can stop the job, take a snapshot of the state, then we can update the job, start again, and start from this snapshot. It continues working without a huge backlog or anything. This was the expectation. Nice, we have Flink, so Flink has savepoints. Now we have Flink SQL, so we can update this SQL and restore from a savepoint. Unfortunately, it doesn't work like this.
In fact, it's impossible to upgrade a Flink SQL job. For us, at that moment, as somewhat inexperienced Flink engineers, it came as a big shock. Because, like, come on guys, are you kidding? What are we supposed to do? We launch the job and should expect it never fails? Like, it works forever or what? When the first shock faded and we thought about this a little bit more, it's not that surprising. This query actually gets translated to a set of stateful operators. When we make even a slight change in the query, these stateful operators may completely differ. Of course, mapping one set of stateful operators to another is a super-difficult task. It's not yet implemented. It's not Flink's fault that it's not implemented. Knowing this doesn't make our life easier, because we want to provide a platform where users can go, update the SQL, and just relaunch the job. It should just pick up and continue working. The typical recommendation is to always backfill.
If we want to update Flink SQL, we compose the job in such a way that it first runs in batch mode over already processed data, and then continues on the new data. It's really inconvenient, and it's also costly. If we do this every single time we update these features, it will cost us a lot of money. It's just super inconvenient, because the backfill also takes time. We want the experience where we update the job, just relaunch it, and it continues working.
One thing where Flink shines is the so-called DataStream API. DataStream, in comparison to the SQL API, is where we express the job in some JVM language, like Java, Kotlin, or Scala. There is really nice interop between SQL and DataStream. In particular, when we want to get from SQL to DataStream, there is a thing called the Changelog. Basically, SQL will send so-called Changelog rows. What is it? It's a row of data with a marker, like in this list, +I. +I means that it's a new row in our virtual table. There can be -U, +U. They come in pairs. -U means that this was the row before the update, and +U means this is the row after the update. Once SQL is running, it keeps issuing these Changelog entries. What's interesting about this Changelog, if you look at it, is that the final values of our columns, our features, are an aggregation over this Changelog.
If you consider + operation is plus, and – operation is minus. Basically, here we see three rows. If we want to count shares, we do, 3 – 3 + 5, final result is 5. The same continues. We can keep treating these Changelog entries like commands, either + command or – command. If we want to express this in form of this DataStream, there will be a function like this. Super simple. We have row update. We have current state. We get the state. We see if it’s positive update or negative update, and we perform this update. Super simple. So far, it’s super obvious. What’s interesting about this model is that it survives job upgrades. Because when we upgrade the job, we lost SQL state, fine. This SQL will continue issuing these Changelog entries. We have this set of +I, -U, +U.
Then the job upgrades, and now this row, from the SQL engine's perspective, is a new row. It will start sending these +I entries. It's fine. This +I represents exactly the change from the previous data. For the logic that performs aggregation over the Changelog, nothing happens. It just keeps treating these entries as commands, like + command, – command. We had this aggregation, now a job upgrade happened, we don't care, we keep aggregating these Changelog entries. We can compose the job in this way. There is the SQL part, and there is the Changelog aggregation part. In the SQL part, we don't control the state, because of the magic and complexities of this SQL and so on. The Changelog aggregation part is expressed in DataStream. This is where we control the state. We can build the state in such a way that it survives job upgrades. This part of the job can be restored from a savepoint and continue. The flow will look like we updated the query and just relaunched the job from a savepoint. Computation will continue.
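The aggregation function described above can be sketched with Flink's DataStream API roughly as follows. This is a minimal illustration rather than ShareChat's actual code; the class name, field names, and the single "likes" counter are assumptions:

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.types.Row;
import org.apache.flink.types.RowKind;
import org.apache.flink.util.Collector;

// Rows arrive as a changelog stream from the SQL part of the job
// (for example via StreamTableEnvironment#toChangelogStream), keyed by entity ID.
public class ChangelogAggregator extends KeyedProcessFunction<String, Row, Row> {

    private transient ValueState<Long> likes;

    @Override
    public void open(Configuration parameters) {
        likes = getRuntimeContext().getState(
                new ValueStateDescriptor<>("likes", Types.LONG));
    }

    @Override
    public void processElement(Row row, Context ctx, Collector<Row> out) throws Exception {
        long current = likes.value() == null ? 0L : likes.value();
        long delta = (Long) row.getField("likes");

        // +I and +U act as "plus" commands, -U and -D as "minus" commands,
        // so the aggregate survives the SQL state being lost on a job upgrade.
        RowKind kind = row.getKind();
        boolean positive = (kind == RowKind.INSERT || kind == RowKind.UPDATE_AFTER);
        long updated = positive ? current + delta : current - delta;

        likes.update(updated);
        out.collect(Row.of(ctx.getCurrentKey(), updated));
    }
}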
Story 3 (Being Tired)
Now we have a system that we can write and also update. It's pretty exciting. The next problem that we might face is that when we launch the job, performance may decline over time. We call this the job getting tired. It's clear why: this is a stateful job, and we have state. The bigger the state, the less performant the job. It's a pretty clear dependency. The typical recommendation: for the state you have, just apply TTL. The state will expire, it will be constant size, and the job will not get tired. It's a fine recommendation, but it's not really clear what kind of TTL to pick. For instance, in our case, the largest tile that we use in this system is a five-day tile. If you want counters to be accurate, the TTL must be at least five days. That doesn't even entirely solve the problem of lifetime counters, where we don't have a window; it's not really clear what kind of TTL to pick there. Assuming we're fine with lifetime counters being inaccurate, and assuming we accept the five-day TTL in the context of lifetime counters, the problem is that five days is too much.
In our experience, a job processing hundreds of thousands of events per second and performing millions of operations per second against the state shows signs of being tired pretty quickly, just a few hours after launch. Five days is just too much. Of course, we can upscale the job, but it comes with a cost. What can we do about it? Remember that we now have two types of state: one is the SQL state, and the other is the Changelog aggregation state. The good news about the SQL state is that we don't need to do anything, because we already survived the job upgrade thanks to this Changelog mode.
From the SQL perspective, it doesn't matter whether it lost state because of a job upgrade or because of TTL. We just set TTL on the SQL state and keep treating these Changelog commands as commands; we don't need to do anything else. For the Changelog aggregation state, we can modify our function a bit. When we access the state and it has expired, we can just query our Scylla, because the data exists in Scylla. With this small update to our function, we can now set a TTL for the Changelog aggregation state as well. It will keep running and recovering itself from the main database. Now the jobs are no longer getting tired, so they have consistent performance because the state has a consistent size.
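A sketch of what this might look like, assuming Flink's state TTL plus a hypothetical read-through to ScyllaDB; the TTL value, names, and the TileReader interface are illustrative rather than the actual implementation:

import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;
import org.apache.flink.api.common.typeinfo.Types;

public class TiredJobMitigation {

    /** State whose entries expire well before the largest (five-day) tile would require. */
    static ValueStateDescriptor<Long> expiringCounterState() {
        StateTtlConfig ttl = StateTtlConfig.newBuilder(Time.hours(6))
                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
                .build();
        ValueStateDescriptor<Long> descriptor = new ValueStateDescriptor<>("likes", Types.LONG);
        descriptor.enableTimeToLive(ttl);
        return descriptor;
    }

    /** Hypothetical read-through: an expired entry is rebuilt from the main database. */
    interface TileReader {
        long readCounter(String entityId, long tileStart, String feature);
    }

    static long currentOrRecovered(Long stateValue, TileReader scylla,
                                   String entityId, long tileStart) {
        return stateValue != null ? stateValue : scylla.readCounter(entityId, tileStart, "likes");
    }
}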
Story 4 (Co-living Happily)
Good. Now we can launch the job, update the job, and it's not dying. The problem now, though, with the previous change, is that jobs not only write to the database but also read from it. It's actually a big deal for the job to read from the database, because for a database like Scylla or Cassandra or a similar kind, reads are somewhat more expensive than writes, especially cold reads. If we read something which is cold, which isn't in the database cache, a lot of stuff happens: we need to scan multiple files on disk to get the data, merge them, update the cache, a lot of stuff. What's interesting about jobs is that they are more likely to hit cold reads than the service that serves features. What would happen is that we do something on the job side, for instance, we launch a bunch of test jobs. We want to launch test jobs because we want to unlock this engineering velocity, and we want to experiment, and so on. Or maybe some job got a backlog for some reason and needs a huge backfill or whatever.
We launch the jobs, and they start hitting these cold reads, especially if they're backfilling and trying to process the backlog. That thrashes the main database and affects the latency of the service that accesses the features. What do we do about that? The first thought may be that we need some throttling. The problem, though, is that it's not really clear at which level we should apply the throttling. We cannot throttle at the individual worker's level in the Flink job, because we have multiple jobs at least, and new jobs can come and go.
Instead, we can have some kind of coordinator in between the jobs and Scylla, which is basically a proxy. It's a tricky thing, though, because Scylla and the Scylla clients are really super optimized. If you want the same efficiency for the reads, this proxy should be optimized at least as well as Scylla itself, which is rather tricky. Overall, this solution is complex. It actually has extra cost, because we need this component, which needs to be scaled appropriately. And it's likely not efficient, because it's not really easy to write this proxy in such a way that it is as efficient as Scylla itself.
The second option is data centers. Scylla, the same as Cassandra, has a data center abstraction. It's a purely logical abstraction; it doesn't need to be a real, physical data center. Basically, we can split our cluster into two logical data centers. Jobs will hit the data center for jobs, and the feature store will hit the data center for the feature store. It's pretty nice, because it has super great isolation. We can also independently scale these different data centers. The downside is that cluster management for Scylla becomes much more complex. It also comes with cost, because even though we can independently scale these data centers, it still means extra capacity. Also, the complexity of cluster management shouldn't be underestimated, especially if you want real data centers. For instance, we wanted our database to be in two data centers, and now it's in four. The complexity actually increases quite a lot.
The third option, the one that we ended up with, is so-called workload prioritization. There is a feature in Scylla called workload prioritization: we can define multiple service levels inside Scylla and attach different workloads to different service levels. How does it work? Any access to any resource in Scylla has a queue of operations. For instance, we have a job queue and a service queue: the job has 200 shares and the service has 1,000 shares. What does that mean? It means that for any unit of work for the job queries, Scylla will perform up to five units of work for service queries. There is a scheduler that picks from these queues and forms the final queue. What does that mean? It means we will finally have consistent latency. Of course, the job latency will be higher than the service latency. This is fine because the job is a background thing and doesn't care about latency that much. It cares about throughput.
The service, on the other hand, cares about latency. Summarizing this final solution: basically we don't do anything with the cluster, with Scylla itself. We just set up these different service levels. From the job, we access the cluster using a user attached to the job service level. From the feature service, we use a user attached to the service service level. This is super simple. No operational overhead. Best cost, because we don't need to do anything with the cluster. The only downside is that the job and the service connect to the same nodes in Scylla, so theoretically they are not entirely isolated. It's a nice tradeoff between cost and safety.
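In ScyllaDB, this setup is expressed with service levels attached to the roles each client authenticates with. A rough sketch, assuming ScyllaDB's workload prioritization feature and illustrative names, share values, and placeholder passwords (the exact statements may differ by version):

-- Service levels with relative shares: roughly 1 unit of job work per 5 units of service work.
CREATE SERVICE_LEVEL IF NOT EXISTS job_sl WITH shares = 200;
CREATE SERVICE_LEVEL IF NOT EXISTS serving_sl WITH shares = 1000;

-- Separate roles for the Flink jobs and the feature service.
CREATE ROLE IF NOT EXISTS flink_job WITH PASSWORD = 'changeme' AND LOGIN = true;
CREATE ROLE IF NOT EXISTS feature_service WITH PASSWORD = 'changeme' AND LOGIN = true;

ATTACH SERVICE_LEVEL job_sl TO flink_job;
ATTACH SERVICE_LEVEL serving_sl TO feature_service;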
Story 5 (Being Lazy)
The final story is about being lazy. Now we can launch the jobs, they are running, and they don't impact our main service. It's time to think about the data model in the database to serve the queries. We need to be able to query these tiles to aggregate window counters. The natural data model is like this. Scylla has the notion of partitions. A partition is basically a set of rows ordered by some keys. We can map each entity ID to a partition. Inside the partition, we can store each feature in its own row. Basically, a feature will be identified by the timestamp of the tile and the feature name. We have these rows. This schema is nice because we don't need to modify the schema when we add a feature. It's schemaless in the sense that we can add as many feature types as we want, and the schema survives. However, if you do some math: we have 8,000 feed requests per second. On average, we rank around 2,000 candidates. For each candidate, we need to query 100 features.
For these features, we need to query something like 20 tiles. Also, assuming that our feature service has some local cache with an 80% hit rate, then multiplying all of this, we get more than 7 billion rows per second. This is the load that our database needs to handle. This is totally fine for Scylla; it can scale to this number. The problem is that it will use some compute. Of course, our cloud provider, GCP, and Scylla itself will be happy to scale. Our financial officer might not be that happy. What can we do? Instead of storing each feature in an individual row, we can compact multiple features into the same row, so that rows are now identified only by the tile timestamp. The row value is basically bytes, some serialized list of pairs of feature name to feature value. This is nice, because now we don't have this 100 multiplier anymore, because we will query 100 times fewer rows, and the number of rows per second looks much better.
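The compacted layout might look roughly like this in CQL. This is an illustrative sketch, not the production schema; the keyspace, names, and types are assumptions:

-- One partition per entity, one row per (tile size, tile start), all features packed
-- into a single blob of serialized (feature id, feature value) pairs.
CREATE TABLE feature_store.entity_tiles (
    entity_id  bigint,
    tile_size  text,
    tile_start timestamp,
    features   blob,
    PRIMARY KEY (entity_id, tile_size, tile_start)
);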
The question may arise whether it's really a cost saving, because it may be that we just shifted the compute from the database layer to the job. We used to have this nice schema where we updated each feature independently. Now we have these combined features. In protobuf, it can be expressed in a message like this: we have FeaturesCombined with a map of string to some feature value. What does it mean? It means that whenever a feature is updated, we need to serialize all of them together, every single time. Basically, it may look like the cost of updating a single feature now gets 100 times bigger. It's a fair question. There are fairly easy steps to mitigate it. The first is a no-brainer: we don't need to store strings, of course. We can always store some kind of feature identifier; we can have a dictionary mapping feature names to IDs. Now we need to serialize a map of int to feature value, which, of course, is much easier for protobuf.
The second observation is that the protobuf format is pretty nice, in the sense that this map is actually equivalent to just a repeated set of key-value pairs. Basically, we have two messages: one is FeaturesCombined, and another is FeaturesCombinedCompatible, which is just a repeated MapFieldEntry. We can serialize FeaturesCombined and deserialize FeaturesCombinedCompatible, and vice versa. They're equivalent in the form of bytes that get produced. Moreover, they're also equivalent to just a repeated bytes field, so basically an array of arrays. All three messages, FeaturesCombined, FeaturesCombinedCompatible, and FeaturesCombinedLazy, are equivalent in the form of the bytes that get produced by protobuf. How does it help? It helps because in the Flink state we can store a map from feature ID to bytes, where the bytes are a serialized MapFieldEntry. When we need to serialize all of these features, we just combine these bytes together, get this array of arrays, form the FeaturesCombinedLazy message, and serialize it with protobuf.
This serialization is super easy because protobuf itself will just write these bytes one after another. It is extremely cheap, much cheaper than serializing the original message. In fact, when we implemented that, we didn't need to scale the job at all. In comparison to the other things the job is doing, this step is basically negligible.
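The three wire-compatible shapes described above might look roughly like this; an illustrative .proto sketch in which the field names and the FeatureValue type are assumptions:

syntax = "proto3";

// Plain map form: serializing it re-encodes every entry each time.
message FeatureValue {
    double value = 1;
}
message FeaturesCombined {
    map<int32, FeatureValue> features = 1;
}

// Wire-compatible expansion of the map: a map field is encoded as repeated entries.
message MapFieldEntry {
    int32 key = 1;
    FeatureValue value = 2;
}
message FeaturesCombinedCompatible {
    repeated MapFieldEntry features = 1;
}

// Also wire-compatible: each entry kept as pre-serialized bytes, so combining features
// is just concatenating already-encoded, length-delimited fields.
message FeaturesCombinedLazy {
    repeated bytes features = 1;
}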
Summary
Assuming that you decided to build a feature platform, first, good luck. It's going to be an interesting journey. Second, take a look at ScyllaDB and Redpanda. The odds are that they may impress you and become your friends. Third, Flink is still the king of real-time stream processing, but it takes time to learn and to use it in the most efficient way. Fourth, there are multiple vendors now who build SQL-only stream processing. My thought is that, in my opinion, SQL is not enough. I don't understand how we could build what we built without the power of the Flink DataStream API. It's probably possible via some user-defined functions or something, but it would likely look much uglier and be harder to maintain.
In my opinion, Flink's DataStream API is really powerful. Finally, the lazy protobuf trick is a pretty nice one that can be used pretty much anywhere. For instance, in addition to this place, we also use it in our service to cache gRPC messages. Basically, we have a gRPC server, and there is a cache in front of it. We can store serialized data in the cache. When we need to respond to a gRPC call, we just send these bytes over the wire without needing to deserialize the message and serialize it back.
Questions and Answers
Participant 1: You’re looking into C++ variants of various classically Java tools. Have you looked into Ververica’s new C++ version of Flink yet? Partially backed by some team at Alibaba, Ververica are launching a C++ rewrite of Flink. It will have similar properties to like Scylla’s rewrite of Cassandra. It’s called VERA, their new platform. Have you looked into using that as another way to get more performance out of Flink? There’s a new one called VERA by the Ververica team, which is a rewrite of the core Flink engine in C++.
Burmistrov: The question is, there is some solution, which is a Flink rewrite to C++, pretty much like what happened to Scylla, rewriting Cassandra, which is Java to C++, and Redpanda the same, like Kafka, which is in Java to C++. In fact, I don’t know about the solution in C++, but there are other solutions that claim to be Flink competitors, which are written in Rust. The downside of all of them that I mentioned, they claim to be all SQL-only. This is harder. We took a look at multiple of them. Performance may be indeed good, but how to adapt them with the same level of what we can do with Flink, we didn’t manage to figure out.
Participant 2: Can you comment a little bit on the developer experience of adding new features, and how often does that happen? You said you have 100 features. How often does that grow, and what’s the developer experience of that?
Burmistrov: What is the developer experience of adding features, and what does it look like in our solution? It's far from great. For each feature, we have some kind of configuration file. We split features into so-called feature sets, which are logically combined. Logically, for instance, we have one model, and user features for this model, post features for this model, this kind of stuff. This configuration file is basically YAML that contains some settings, and also a query. The query is in Flink SQL, but it is simple, so it's equivalent in any SQL dialect. This query is basically a select.
Then, people can do something with the select, like transform something, whatever. They define the query, and then basically they can launch the job. There is a deployment pipeline; they can push the button, and the job gets upgraded. That's the flow to define the features. There is also the process of how this feature becomes available for training. We still use the so-called log-and-wait approach. Basically, as the feature is accessed, we log the values, and using this log, the model gets trained. There is a process: when we add features, we start accessing them for logging. Then enough time passes, so the model can start being trained on this data.
Participant 3: Can you maybe elaborate on why did you choose Redpanda over Kafka?
Burmistrov: Why did we choose Redpanda over Kafka? The answer is cost. First of all, we wanted a managed offering, because we didn't want to manage it ourselves; we didn't have people to manage Kafka. We had the experience of managing Kafka in-house, we just didn't have a team to continue this. Then, we started evaluating the solutions. There is Confluent and other vendors. Then we compared prices, and Redpanda was the winner on cost. Also, there are a few other considerations, like they have remote read replicas. They're moving towards being a data lakehouse. Every vendor is actually moving in that direction. We just liked their vision.
Participant 4: The DataStream API for Flink is very similar to Spark structured streaming, in terms of the ability to do upserts on the tables. If jobs fail, we can use the checkpoints to trigger jobs. What about the disaster recovery strategies if the data is lost? What usually have you thought in terms of a data backup. Then the trouble becomes that the checkpoints are not portable to another location, because of issues of hardcoding of the table IDs and stuff like that. Have you thought about that?
Burmistrov: What are the best practices of using Flink or Spark streaming, which is equivalent, in terms of disaster recovery, like if job died or something happened? First of all, we have checkpoints enabled. They’re actually taken pretty regularly, like once per minute. Checkpoints get uploaded to cloud storage, S3 equivalent in GCP. We have a history of checkpoints. Also, we take savepoints once in a while. We have all this stuff ready for a job to recover from.
Sometimes, to be fair, with Flink at least, the state can actually get corrupted. The state can get corrupted in such a nasty way that all the checkpoints that we store, let's say the last 5, 10, whatever checkpoints, can all get corrupted, because the corruption can propagate from checkpoint to checkpoint. Now we have a job with unrecoverable state, what do we do? The good thing about the approach I described is that we don't do anything, we just start the job from scratch, because it will just recover from the main database by itself. There will be a little bit of incorrectness, probably due to the last minute or so of data. In general, it can get to the running state pretty quickly.
Participant 5: I have a follow-up question to the developer experience. I can imagine that when you’re trying to understand, especially the part where you talk about the Changelog and the Flink operators, as a developer, I would love to be able to interact with the state. I know that the queryable state feature from Flink was deprecated. I don’t know whether you were able to figure out a different way to see what’s in the state and help you in your ability to create new features and stuff.
Burmistrov: What's the developer experience when it comes to figuring out what's going on inside the job? Basically, we have two states: the SQL state and then this Changelog aggregation state. What do we do? It's in the works now for us. We didn't have this for a while; we relied basically on the feature log further down the line. When a feature is computed and accessed, we log it, and we can basically compute a match rate over raw data plus the feature log. Of course, it's pretty tricky. We actually want to dump these intermediate Changelog operations to some OLAP database like ClickHouse or similar. In this way, we will have the full history of what happened and the ability to query and see. It's not yet ready, so we're working on it too.

MMS • RSS
Posted on mongodb google news. Visit mongodb google news

MongoDB’s present stock price level is a return to January 2023. However, MDB stock is known for its rallies, having even come close to its ATH at $486 in February 2024.
Is this the moment retail investors have been waiting for, buying low in anticipation for another rally, or are MongoDB fundamentals less sound than they appear?
Not only does Big Tech boost Nvidia's valuation as hyperscalers for AI data centers, but it also offers database management services similar to MongoDB's. Case in point, Microsoft's Azure Cosmos DB offers low-latency support for multiple data models as a turnkey solution.
Similarly, Amazon (NASDAQ:AMZN) has DynamoDB as another serverless NoSQL solution that is fully managed, making it convenient for launching internet apps at scale. Given the deep pockets of these Big Tech companies, and their expected reliability, where does MongoDB fit into this market?
MongoDB should be understood as complementary, providing access to all major cloud infrastructures: Amazon's AWS, Microsoft's Azure, and Google's GCP. In fact, using the company's Atlas platform, clients could opt for all three simultaneously based on regional availability and performance.
Moreover, while MongoDB is cloud-native and attached to Big Tech, clients could opt for on-premises deployment in a private data center or a hybrid setup. The convenience factor is carried over, as MongoDB Atlas scales automatically by vertically allocating computational resources. Horizontally, the platform offers sharding of datasets across multiple servers.
Compounding this flexibility, MongoDB offers an attractive pay-as-you-go pricing model as a mid-range option, while the high range is only $30 per month, outside Enterprise Advanced pricing. And while not fully open source, MongoDB's Community Edition is free to download and use.
But the question is, has MongoDB found the right balance between pricing and profitability?
At the end of January 2025, the company had attracted over 54,500 clients, mostly using its platform for software development, machine learning, artificial intelligence, and web development. The bulk of MongoDB customers run small businesses of up to 50 employees, which makes sense given the attractive pricing.
In terms of financials, this translated to double-digit revenue growth to $548.4 million, up 20% year-over-year, according to the early March earnings report for fiscal Q4 2025. The company's Atlas platform made up 71% of total revenue, having grown 24% year-over-year.
However, this revenue expansion comes at a cost. Compared to the year-ago quarter, MongoDB's gross margin actually dropped, from 75% to 73%. Even though it seems minor, it does raise concern about whether MongoDB can keep up with running costs.
Likewise, the company has yet to enter sustained profitability territory. The latest reported fiscal Q4 '25 is MongoDB's first profitable quarter, at $15.8 million net income compared to the year-ago quarter's net loss of $55.5 million. But it is yet to be determined whether the following quarters will trend upward or downward.
What is clear is that the company is burning cash, having halved its free cash flow to $22.9 million from $50.5 million in the year-ago quarter. Overall, MongoDB has a $1.8 billion accumulated deficit against the equity value of $2.7 billion and total current liabilities worth $561.9 million.
This makes the current MDB valuation around 5x its book value.
MongoDB is still a mixed bag, but this makes it ripe for speculative high-growth investing. The sustained customer growth is positive, as well as the first profitable quarter, somewhat overshadowed by lower gross margin and free cash flow.
At the same time, MongoDB’s acquisition of Voyage AI this February is a significant step in the right direction. The AI hype keeps driving data center/management growth, but is also curtailed by the notorious AI confabulation. The Voyage AI team has been working hard to remedy that problem.
If MongoDB becomes known as the go-to database layer for robust AI app launches, this will be reflected in MDB stock. Following the recent wider stock market correction, this makes MDB a compelling exposure at the moment.
Keeping the risks in mind, the average MDB price target is substantially higher than the present price of $188.81, at $300.46 per share. Likewise, the low estimate is also higher, at $215, according to WSJ forecasting data.
***
Neither the author, Tim Fries, nor this website, The Tokenist, provide financial advice. Please consult our website policy prior to making financial decisions.
This article was originally published on The Tokenist. Check out The Tokenist’s free newsletter, Five Minute Finance, for weekly analysis of the biggest trends in finance and technology.

MMS • Ben Linders
Article originally posted on InfoQ. Visit InfoQ

DevOps streamlines software development with automation and collaboration between development and IT teams for efficient delivery. According to Nedko Hristov, testers’ curiosity, adaptability, and willingness to learn make them suited for DevOps. Failures can be approached with a constructive mindset; they provide growth opportunities, leading to improved skills and practices.
Nedko Hristov shared his experiences with DevOps as a quality assurance engineer at QA Challenge Accepted.
DevOps by definition is streamlining software development by automating and integrating tasks between development and IT operations teams, Hristov said. It fosters communication, collaboration, and shared responsibility for faster, more efficient software delivery and integration.
Hristov mentioned that DevOps is not typically an entry-level role. It requires a strong foundation in software development practices, including how software is designed, built, and tested. Fortunately, software quality engineers or software testers often possess this foundational knowledge, as he explained:
We understand the software development lifecycle and many of us have expertise in automation. This background allows us to quickly grasp the core concepts of DevOps.
While software testers may not have deep expertise in every technology used in a DevOps environment, they are familiar with the most common ones, Hristov said. They understand coding principles, deployment processes, and system architectures at a high level. He mentioned that this broad understanding is incredibly valuable for applying DevOps.
Software testers are often inherently curious, Hristov said. They are driven to learn and expand their knowledge base:
When I first became interested in DevOps, I proactively sought out developers to understand the intricacies of their work. I asked questions about system behavior, troubleshooting techniques, and the underlying causes of failures.
Software testers can leverage their existing skills, their inherent curiosity, and a proactive approach to learning to successfully transition into and thrive in DevOps roles, as Hristov explained.
One of the most crucial skills I’ve gained is adaptability. In the ever-evolving tech landscape, we constantly encounter new technologies. This involves identifying key concepts, finding practical examples, and focusing on acquiring the knowledge necessary for the task at hand.
A strong foundation in core technologies is essential, Hristov mentioned. This doesn’t necessitate deep expertise in every domain, but rather a solid understanding of fundamental principles, he added.
Failures provide invaluable opportunities for growth and deeper understanding, Hristov said. While it’s true that failures are not desirable, it’s crucial to approach them with a constructive mindset:
I consistently emphasize to those I mentor that failure is not inherently negative. Our professional development is fundamentally based on experience, and failures are among the most effective teachers.
Failures are essential stepping stones towards enhanced comprehension and improved working practices, Hristov said. They are not inherently good or bad, but rather mandatory components of growth, he concluded.
InfoQ interviewed Nedko Hristov about what he learned from applying DevOps as a software tester.
InfoQ: What skills did you develop and how did you develop them?
Nedko Hristov: My skill development is driven by a combination of adaptability, focused learning, effective communication, practical application, and knowledge sharing.
I’ve honed my communication skills, particularly the ability to ask effective questions. This is often overlooked, but crucial for knowledge acquisition.
When approaching a new technology I usually ask myself a ton of questions:
- What is the core purpose of this technology?
- What problem does it solve?
- How does it work, what are the underlying mechanisms and architecture?
- How can I integrate it into my current projects, tooling, and workflows?
- Are there any best practices or recommended patterns for using it?
- What are the common challenges or pitfalls associated with this technology?
- How does this technology compare to its alternatives?
InfoQ: What’s your approach to failures?
Hristov: When encountering a failure, we should ask ourselves three key questions:
- What happened – Objectively analyze the events leading to the failure
- Why it happened – Identify the root cause and contributing factors
- What is my takeaway – Determine the lessons learned and how to apply them in the future
The key is to extract valuable takeaways from each failure, ensuring that we approach similar situations with greater knowledge and preparedness in the future.
InfoQ: What have you learned, what would you do differently if you had to start all over again?
Hristov: I was fortunate to begin my DevOps journey under the guidance of a strong leader who provided me with a solid foundation. However, if I were to start over, I would focus on avoiding the pursuit of perfection. In the tech world, there will always be a more optimized solution or a 500ms faster request. Instead of chasing perfection, I would prioritize understanding the core business needs and identifying the critical pain points to address.
Early in my career, I often fell into the trap of trying to make everything perfect from the beginning. This is almost impossible and can lead to unnecessary delays and frustration. It’s more effective to iterate and improve incrementally, focusing on delivering value quickly and refining solutions over time.

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
PNC Financial Services Group Inc. decreased its holdings in shares of MongoDB, Inc. (NASDAQ:MDB – Free Report) by 11.7% during the fourth quarter, according to its most recent filing with the SEC. The firm owned 1,932 shares of the company’s stock after selling 255 shares during the period. PNC Financial Services Group Inc.’s holdings in MongoDB were worth $450,000 at the end of the most recent reporting period.
Other institutional investors have also added to or reduced their stakes in the company. B.O.S.S. Retirement Advisors LLC purchased a new position in MongoDB during the 4th quarter valued at about $606,000. Geode Capital Management LLC grew its position in shares of MongoDB by 2.9% in the 3rd quarter. Geode Capital Management LLC now owns 1,230,036 shares of the company’s stock worth $331,776,000 after buying an additional 34,814 shares during the last quarter. B. Metzler seel. Sohn & Co. Holding AG purchased a new position in shares of MongoDB in the 3rd quarter worth approximately $4,366,000. Charles Schwab Investment Management Inc. grew its position in shares of MongoDB by 2.8% in the 3rd quarter. Charles Schwab Investment Management Inc. now owns 278,419 shares of the company’s stock worth $75,271,000 after buying an additional 7,575 shares during the last quarter. Finally, Union Bancaire Privee UBP SA purchased a new position in shares of MongoDB in the 4th quarter worth approximately $3,515,000. Hedge funds and other institutional investors own 89.29% of the company’s stock.
Insider Activity
In related news, Director Dwight A. Merriman sold 3,000 shares of the firm’s stock in a transaction dated Monday, March 3rd. The shares were sold at an average price of $270.63, for a total transaction of $811,890.00. Following the completion of the sale, the director now directly owns 1,109,006 shares in the company, valued at approximately $300,130,293.78. This trade represents a 0.27% decrease in their ownership of the stock. The transaction was disclosed in a document filed with the Securities & Exchange Commission, which is available through the SEC website. Also, insider Cedric Pech sold 287 shares of the firm’s stock in a transaction dated Thursday, January 2nd. The stock was sold at an average price of $234.09, for a total transaction of $67,183.83. Following the sale, the insider now owns 24,390 shares of the company’s stock, valued at approximately $5,709,455.10. The trade was a 1.16% decrease in their ownership of the stock. The disclosure for this sale is also available on the SEC website. Insiders sold 43,139 shares of company stock worth $11,328,869 over the last three months. Insiders own 3.60% of the company’s stock.
Analysts Set New Price Targets
A number of research analysts have recently issued reports on the stock. Stifel Nicolaus cut their price objective on shares of MongoDB from $425.00 to $340.00 and set a “buy” rating on the stock in a research report on Thursday, March 6th. Tigress Financial lifted their price objective on shares of MongoDB from $400.00 to $430.00 and gave the stock a “buy” rating in a research report on Wednesday, December 18th. Guggenheim raised shares of MongoDB from a “neutral” rating to a “buy” rating and set a $300.00 price objective on the stock in a research report on Monday, January 6th. Rosenblatt Securities reissued a “buy” rating and set a $350.00 target price on shares of MongoDB in a research report on Tuesday, March 4th. Finally, Robert W. Baird dropped their target price on shares of MongoDB from $390.00 to $300.00 and set an “outperform” rating on the stock in a research report on Thursday, March 6th. Seven analysts have rated the stock with a hold rating and twenty-three have assigned a buy rating to the company. According to data from MarketBeat, the stock presently has an average rating of “Moderate Buy” and an average target price of $320.70.
Check Out Our Latest Analysis on MDB
MongoDB Price Performance
MDB opened at $190.06 on Thursday. The firm has a fifty day moving average price of $253.21 and a 200 day moving average price of $270.89. MongoDB, Inc. has a 1 year low of $173.13 and a 1 year high of $387.19. The stock has a market cap of $14.15 billion, a P/E ratio of -69.36 and a beta of 1.30.
MongoDB (NASDAQ:MDB – Get Free Report) last posted its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing the consensus estimate of $0.64 by ($0.45). The company had revenue of $548.40 million for the quarter, compared to analysts’ expectations of $519.65 million. MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. During the same period in the previous year, the business earned $0.86 earnings per share. Equities analysts expect that MongoDB, Inc. will post -1.78 EPS for the current year.
MongoDB Company Profile
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
Featured Stories
Article originally posted on mongodb google news. Visit mongodb google news

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
MongoDB, Inc. (NASDAQ:MDB – Get Free Report) saw some unusual options trading on Wednesday. Stock investors bought 36,130 call options on the company. This represents an increase of 2,077% compared to the typical daily volume of 1,660 call options.
MongoDB Stock Up 0.7 %
Shares of MongoDB stock opened at $190.06 on Thursday. MongoDB has a 12-month low of $173.13 and a 12-month high of $387.19. The company has a market capitalization of $14.15 billion, a price-to-earnings ratio of -69.36 and a beta of 1.30. The firm’s 50-day moving average price is $253.21 and its two-hundred day moving average price is $270.89.
MongoDB (NASDAQ:MDB – Get Free Report) last announced its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The firm had revenue of $548.40 million during the quarter, compared to the consensus estimate of $519.65 million. During the same period last year, the business earned $0.86 earnings per share. On average, sell-side analysts expect that MongoDB will post -1.78 earnings per share for the current year.
Wall Street Analysts Weigh In
Several equities analysts have recently weighed in on MDB shares. Mizuho raised their price objective on MongoDB from $275.00 to $320.00 and gave the company a “neutral” rating in a report on Tuesday, December 10th. Macquarie decreased their price target on MongoDB from $300.00 to $215.00 and set a “neutral” rating for the company in a research note on Friday, March 7th. Monness Crespi & Hardt upgraded MongoDB from a “sell” rating to a “neutral” rating in a research note on Monday, March 3rd. KeyCorp cut shares of MongoDB from a “strong-buy” rating to a “hold” rating in a research note on Wednesday, March 5th. Finally, Cantor Fitzgerald started coverage on shares of MongoDB in a research report on Wednesday, March 5th. They set an “overweight” rating and a $344.00 price target on the stock. Seven investment analysts have rated the stock with a hold rating and twenty-three have issued a buy rating to the company. Based on data from MarketBeat, the stock has an average rating of “Moderate Buy” and an average target price of $320.70.
View Our Latest Analysis on MongoDB
Insider Activity
In other news, Director Dwight A. Merriman sold 3,000 shares of the firm’s stock in a transaction that occurred on Monday, March 3rd. The shares were sold at an average price of $270.63, for a total transaction of $811,890.00. Following the completion of the transaction, the director now directly owns 1,109,006 shares of the company’s stock, valued at $300,130,293.78. This trade represents a 0.27% decrease in their position. The sale was disclosed in a legal filing with the Securities & Exchange Commission, which is available through the SEC website. Also, insider Cedric Pech sold 287 shares of the stock in a transaction on Thursday, January 2nd. The stock was sold at an average price of $234.09, for a total transaction of $67,183.83. Following the transaction, the insider now owns 24,390 shares of the company’s stock, valued at $5,709,455.10. The trade was a 1.16% decrease in their position. The disclosure for this sale is also available on the SEC website. Insiders have sold a total of 43,139 shares of company stock worth $11,328,869 over the last three months. Corporate insiders own 3.60% of the company’s stock.
Institutional Trading of MongoDB
Large investors have recently bought and sold shares of the business. Strategic Investment Solutions Inc. IL acquired a new stake in shares of MongoDB in the 4th quarter valued at approximately $29,000. Hilltop National Bank increased its holdings in shares of MongoDB by 47.2% during the fourth quarter. Hilltop National Bank now owns 131 shares of the company’s stock worth $30,000 after buying an additional 42 shares in the last quarter. NCP Inc. bought a new position in shares of MongoDB in the 4th quarter valued at $35,000. Brooklyn Investment Group acquired a new stake in shares of MongoDB during the 3rd quarter valued at $36,000. Finally, Continuum Advisory LLC boosted its holdings in shares of MongoDB by 621.1% during the 3rd quarter. Continuum Advisory LLC now owns 137 shares of the company’s stock valued at $40,000 after acquiring an additional 118 shares in the last quarter. 89.29% of the stock is owned by institutional investors and hedge funds.
About MongoDB
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
See Also
This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.
Article originally posted on mongodb google news. Visit mongodb google news

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
MongoDB, Inc. (NASDAQ:MDB – Get Free Report) was the target of some unusual options trading on Wednesday. Investors bought 23,831 put options on the stock. This is an increase of approximately 2,157% compared to the typical volume of 1,056 put options.
Insider Activity at MongoDB
In other MongoDB news, CFO Michael Lawrence Gordon sold 1,245 shares of the company’s stock in a transaction that occurred on Thursday, January 2nd. The stock was sold at an average price of $234.09, for a total transaction of $291,442.05. Following the completion of the sale, the chief financial officer now owns 79,062 shares of the company’s stock, valued at $18,507,623.58. This represents a 1.55% decrease in their ownership of the stock. The sale was disclosed in a document filed with the Securities & Exchange Commission, which is available through the SEC website. Also, insider Cedric Pech sold 287 shares of the company’s stock in a transaction that occurred on Thursday, January 2nd. The stock was sold at an average price of $234.09, for a total value of $67,183.83. Following the sale, the insider now directly owns 24,390 shares of the company’s stock, valued at $5,709,455.10. The trade was a 1.16% decrease in their position. The disclosure for this sale is also available on the SEC website. In the last quarter, insiders sold 43,139 shares of company stock valued at $11,328,869. 3.60% of the stock is owned by corporate insiders.
Institutional Inflows and Outflows
Institutional investors have recently made changes to their positions in the business. Strategic Investment Solutions Inc. IL acquired a new position in shares of MongoDB during the 4th quarter worth approximately $29,000. Hilltop National Bank grew its holdings in MongoDB by 47.2% in the 4th quarter. Hilltop National Bank now owns 131 shares of the company’s stock valued at $30,000 after buying an additional 42 shares in the last quarter. Brooklyn Investment Group acquired a new position in MongoDB in the 3rd quarter valued at $36,000. Continuum Advisory LLC grew its holdings in MongoDB by 621.1% in the 3rd quarter. Continuum Advisory LLC now owns 137 shares of the company’s stock valued at $40,000 after buying an additional 118 shares in the last quarter. Finally, NCP Inc. acquired a new stake in shares of MongoDB during the 4th quarter worth $35,000. 89.29% of the stock is owned by institutional investors and hedge funds.
Wall Street Analysts Forecast Growth
Several research firms have recently weighed in on MDB. UBS Group set a $350.00 price objective on MongoDB in a research report on Tuesday, March 4th. Stifel Nicolaus reduced their price objective on MongoDB from $425.00 to $340.00 and set a “buy” rating for the company in a research report on Thursday, March 6th. Wells Fargo & Company cut MongoDB from an “overweight” rating to an “equal weight” rating and reduced their price objective for the company from $365.00 to $225.00 in a research report on Thursday, March 6th. Rosenblatt Securities reissued a “buy” rating and issued a $350.00 price objective on shares of MongoDB in a research report on Tuesday, March 4th. Finally, Bank of America cut their target price on MongoDB from $420.00 to $286.00 and set a “buy” rating for the company in a research note on Thursday, March 6th. Seven analysts have rated the stock with a hold rating and twenty-three have issued a buy rating to the stock. According to data from MarketBeat.com, the stock presently has a consensus rating of “Moderate Buy” and a consensus target price of $320.70.
MongoDB Stock Up 0.7 %
Shares of MongoDB stock opened at $190.06 on Thursday. MongoDB has a 12-month low of $173.13 and a 12-month high of $387.19. The stock has a market cap of $14.15 billion, a P/E ratio of -69.36 and a beta of 1.30. The business’s 50 day moving average is $253.21 and its 200-day moving average is $270.89.
MongoDB (NASDAQ:MDB – Get Free Report) last announced its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The company had revenue of $548.40 million for the quarter, compared to the consensus estimate of $519.65 million. During the same quarter in the prior year, the company earned $0.86 EPS. On average, sell-side analysts expect that MongoDB will post -1.78 EPS for the current year.
About MongoDB
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
Further Reading
This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.
Article originally posted on mongodb google news. Visit mongodb google news

MMS • Bruno Couriol
Article originally posted on InfoQ. Visit InfoQ

The Inertia team recently released Inertia 2.0. New features include asynchronous requests, deferred props, prefetching, and polling. Asynchronous requests enable concurrency, lazy loading, and more.
In previous versions, Inertia requests were synchronous. Requests are now fully asynchronous and can run concurrently. This in turn enables features such as lazy loading data on scroll, infinite scrolling, prefetching, and polling. These features make a single-page application feel more interactive, responsive, and fast.
Link prefetching, for instance, improves the perceived performance of an application by fetching data in the background before the user requests it. By default, Inertia prefetches the data for a page when the user hovers over a link for more than 75ms, and caches the prefetched data for 30 seconds before evicting it. Developers can customize the cache duration with the cacheFor property. Using Svelte, this would look as follows:
<script>
  import { inertia } from '@inertiajs/svelte'
</script>

<a href="/users" use:inertia={{ prefetch: true, cacheFor: '1m' }}>Users</a>
<a href="/users" use:inertia={{ prefetch: true, cacheFor: '10s' }}>Users</a>
<a href="/users" use:inertia={{ prefetch: true, cacheFor: 5000 }}>Users</a>
Prefetching can also happen on mousedown, that is, when the user has pressed a link but has not yet released the mouse button. Lastly, prefetching can also occur when a component is mounted. Both modes are sketched below.
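The following is a minimal sketch of these two modes, assuming the Svelte adapter accepts the 'click' (mousedown) and 'mount' values for the prefetch option, as documented in the Inertia prefetching guide:

<script>
  import { inertia } from '@inertiajs/svelte'
</script>

<!-- Prefetch when the user presses the mouse button down on the link -->
<a href="/users" use:inertia={{ prefetch: 'click' }}>Users</a>

<!-- Prefetch as soon as the surrounding component is mounted -->
<a href="/users" use:inertia={{ prefetch: 'mount' }}>Users</a>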
Inertia 2.0 enables lazy loading data on scroll with the WhenVisible component, which under the hood uses the Intersection Observer API. The following code showcases a component that shows a fallback message while it is loading (examples written with Svelte 4):
<script>
  import { WhenVisible } from '@inertiajs/svelte'

  export let teams
  export let users
</script>

<!-- the "teams" prop is only requested from the server once this block scrolls into view -->
<WhenVisible data="teams">
  <svelte:fragment slot="fallback">
    <div>Loading...</div>
  </svelte:fragment>

  <!-- render the loaded teams here -->
</WhenVisible>
The full list of configuration options for lazy loading and prefetching is available in the documentation for review. Inertia 2.0 also features polling, deferred props, and infinite scrolling. Developers are encouraged to review the upgrade guide for more details.
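As an illustration of the new polling support, the following minimal sketch assumes the usePoll helper documented for the other adapters is also exported by @inertiajs/svelte:

<script>
  import { usePoll } from '@inertiajs/svelte'

  // Reload this page's props from the server every 2 seconds while the
  // component is mounted; polling stops when the component is unmounted.
  usePoll(2000)

  export let users
</script>

{#each users as user}
  <div>{user.name}</div>
{/each}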
Inertia targets back-end developers who want to create single-page React, Vue, and Svelte apps using classic server-side routing and controllers, that is, without the complexity that comes with modern single-page applications. Developers using Inertia do not need to implement client-side routing or build an API.
Inertia returns a full HTML response on the first page load. On subsequent requests, server-side Inertia returns a JSON response with the JavaScript component (represented by its name and props) that implements the view. Client-side Inertia then replaces the currently displayed page with the new page returned by the new component and updates the history state.
Inertia requests use specific HTTP headers to discriminate between full-page and partial refreshes. If the X-Inertia header is unset or false, the request is treated as a standard full-page visit and the server responds with a complete HTML document.
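For illustration, the following sketch shows what an Inertia visit might look like at the protocol level; the headers and field names follow the Inertia protocol documentation, while the concrete values and the /users endpoint are hypothetical:

// A subsequent (non-first) visit made by the Inertia client
const response = await fetch('/users', {
  headers: {
    'X-Inertia': 'true',           // marks the request as an Inertia visit
    'X-Inertia-Version': 'abc123'  // asset version currently known to the client
  }
})

// Instead of a full HTML document, the server answers with a page object
const page = await response.json()
// {
//   component: 'Users/Index',  // name of the client-side page component
//   props: { users: [] },      // data passed to that component
//   url: '/users',             // URL of the page
//   version: 'abc123'          // current asset version on the server
// }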
Developers can upgrade to Inertia v2.0 by installing the client-side adapter of their choice (e.g., Vue, React, Svelte):
npm install @inertiajs/vue3@^2.0
Then, it is necessary to upgrade the inertiajs/inertia-laravel package to the 2.x release:
composer require inertiajs/inertia-laravel:^2.0
Inertia is open-source software distributed under the MIT license. Feedback and contributions are welcome and should follow Inertia’s contribution guidelines.