MongoDB Director Dwight A. Merriman Sells 2,000 Shares – TradingView


Reporter Name: Merriman Dwight A
Relationship: Director
Type: Sell
Amount: $600,940
SEC Filing: Form 4

MongoDB Director Dwight A. Merriman sold 2,000 shares of Class A Common Stock on November 11 and 13, 2024, for a total sale amount of $600,940. The sales were executed at prices of $290.94 and $310.00 per share, respectively. Following these transactions, Merriman directly owns 1,126,006 shares and indirectly owns 610,959 shares of MongoDB, with the indirect ownership held through a trust and the Dwight A. Merriman Charitable Foundation.

SEC Filing: MongoDB, Inc. [ MDB ] – Form 4 – Nov. 13, 2024


Aigen Investment Management LP Purchases New Position in MongoDB, Inc. (NASDAQ:MDB)

Aigen Investment Management LP acquired a new position in shares of MongoDB, Inc. (NASDAQ:MDB) during the 3rd quarter, according to its most recent 13F filing with the SEC. The firm acquired 3,866 shares of the company’s stock, valued at approximately $1,045,000.

A number of other institutional investors and hedge funds have also made changes to their positions in MDB. MFA Wealth Advisors LLC purchased a new position in shares of MongoDB in the 2nd quarter valued at $25,000. J.Safra Asset Management Corp boosted its position in shares of MongoDB by 682.4% in the 2nd quarter. J.Safra Asset Management Corp now owns 133 shares of the company’s stock worth $33,000 after purchasing an additional 116 shares in the last quarter. Quarry LP boosted its position in shares of MongoDB by 2,580.0% in the 2nd quarter. Quarry LP now owns 134 shares of the company’s stock worth $33,000 after purchasing an additional 129 shares in the last quarter. Hantz Financial Services Inc. acquired a new stake in shares of MongoDB in the 2nd quarter worth $35,000. Finally, GAMMA Investing LLC boosted its position in shares of MongoDB by 178.8% in the 3rd quarter. GAMMA Investing LLC now owns 145 shares of the company’s stock worth $39,000 after purchasing an additional 93 shares in the last quarter. 89.29% of the stock is currently owned by hedge funds and other institutional investors.

Analyst Ratings Changes

A number of analysts have recently weighed in on the company. Oppenheimer raised their price objective on MongoDB from $300.00 to $350.00 and gave the stock an “outperform” rating in a research report on Friday, August 30th. Wells Fargo & Company lifted their price target on MongoDB from $300.00 to $350.00 and gave the stock an “overweight” rating in a research report on Friday, August 30th. Stifel Nicolaus lifted their price target on MongoDB from $300.00 to $325.00 and gave the stock a “buy” rating in a research report on Friday, August 30th. Bank of America lifted their price target on MongoDB from $300.00 to $350.00 and gave the stock a “buy” rating in a research report on Friday, August 30th. Finally, JMP Securities reiterated a “market outperform” rating and set a $380.00 price objective on shares of MongoDB in a research note on Friday, August 30th. One investment analyst has rated the stock with a sell rating, five have given a hold rating, nineteen have assigned a buy rating and one has assigned a strong buy rating to the company’s stock. According to data from MarketBeat, MongoDB currently has a consensus rating of “Moderate Buy” and a consensus price target of $334.25.

MongoDB Price Performance

Shares of MongoDB stock traded up $9.24 on Wednesday, hitting $300.89. 2,528,931 shares of the company were exchanged, compared to its average volume of 1,429,430. The stock has a market capitalization of $22.23 billion, a price-to-earnings ratio of -99.63 and a beta of 1.15. The company has a 50-day moving average price of $277.90 and a 200-day moving average price of $275.28. MongoDB, Inc. has a 1-year low of $212.74 and a 1-year high of $509.62. The company has a quick ratio of 5.03, a current ratio of 5.03 and a debt-to-equity ratio of 0.84.

MongoDB (NASDAQ:MDB) last released its quarterly earnings results on Thursday, August 29th. The company reported $0.70 EPS for the quarter, topping analysts’ consensus estimates of $0.49 by $0.21. MongoDB had a negative return on equity of 15.06% and a negative net margin of 12.08%. The company had revenue of $478.11 million for the quarter, compared to analysts’ expectations of $465.03 million. During the same period in the prior year, the firm earned ($0.63) earnings per share. The firm’s revenue was up 12.8% compared to the same quarter last year. As a group, sell-side analysts predict that MongoDB, Inc. will post -2.39 earnings per share for the current fiscal year.

Insider Transactions at MongoDB

In other MongoDB news, CFO Michael Lawrence Gordon sold 5,000 shares of the company’s stock in a transaction on Monday, October 14th. The shares were sold at an average price of $290.31, for a total value of $1,451,550.00. Following the completion of the transaction, the chief financial officer now owns 80,307 shares in the company, valued at $23,313,925.17. The transaction was disclosed in a filing with the Securities & Exchange Commission. Also, CEO Dev Ittycheria sold 3,556 shares of the stock in a transaction dated Wednesday, October 2nd. The stock was sold at an average price of $256.25, for a total transaction of $911,225.00. Following the sale, the chief executive officer now owns 219,875 shares of the company’s stock, valued at $56,342,968.75. Over the last 90 days, insiders have sold 24,281 shares of company stock valued at $6,657,121. 3.60% of the stock is currently owned by company insiders.

MongoDB Company Profile

MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


.NET Aspire 9.0 Now Generally Available: Enhanced AWS & Azure Integration and More Improvements

By Robert Krzaczynski

Article originally posted on InfoQ.

.NET Aspire 9.0 is now generally available, following the earlier release of version 9.0 Release Candidate 1 (RC1). This release brings several features aimed at improving cloud-native application development on both AWS and Azure. It supports .NET 8 (LTS) and .NET 9 (STS).

A key update in Aspire 9.0 is the integration of AWS CDK, enabling developers to define and manage AWS resources such as DynamoDB tables, S3 buckets, and Cognito user pools directly within their Aspire projects. This integration simplifies the process of provisioning cloud resources by embedding infrastructure as code into the same environment used for developing the application itself. These resources are automatically deployed to an AWS account, and the references are included seamlessly within the application.
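As a rough illustration, an AppHost using this integration might look like the following sketch. It assumes the Aspire.Hosting.AWS package; the resource names and the Projects.Frontend reference are illustrative placeholders, and the helper methods follow the published AWS samples rather than a verified API surface:

```csharp
// AppHost/Program.cs: sketch only, assuming the Aspire.Hosting.AWS package.
// Resource names and Projects.Frontend are hypothetical placeholders.
using Amazon;

var builder = DistributedApplication.CreateBuilder(args);

// Point the AWS integration at a credentials profile and region.
var awsConfig = builder.AddAWSSDKConfig()
                       .WithProfile("default")
                       .WithRegion(RegionEndpoint.USEast1);

// Declare a CDK stack and the resources it should provision on deploy.
var stack = builder.AddAWSCDKStack("dev-resources")
                   .WithReference(awsConfig);
var bucket = stack.AddS3Bucket("media-bucket");

// The bucket reference flows into the project's configuration automatically.
builder.AddProject<Projects.Frontend>("frontend")
       .WithReference(bucket);

builder.Build().Run();
```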

Azure integration has been upgraded in Aspire 9.0. It now offers preview support for Azure Functions, making it easier for developers to build serverless applications. Additionally, there are more configuration options for Azure Container Apps, giving developers better control over their cloud resources. Aspire 9.0 also introduces Microsoft Entra ID for authentication in Azure PostgreSQL and Azure Redis, boosting security and simplifying identity management.

In addition to cloud integrations, Aspire 9.0 introduces a self-contained SDK that eliminates the need for additional .NET workloads during project setup. This change addresses the issues faced by developers in previous versions, where managing different .NET versions could lead to conflicts or versioning problems. 

Aspire Dashboard also receives several improvements in this release. It is now fully mobile-responsive, allowing users to manage their resources on various devices. Features like starting, stopping, and restarting individual resources are now available, giving developers finer control over their applications without restarting the entire environment. The dashboard provides better insights into the health of resources, including improved health check functionality that helps monitor application stability.

Furthermore, telemetry and monitoring have been enhanced with expanded filtering options and multi-instance tracking, enabling better debugging in complex application environments. The new support for OpenTelemetry Protocol also allows developers to collect both client-side and server-side telemetry data for more comprehensive performance monitoring.

Lastly, resource orchestration has been improved with new methods such as WaitFor and WaitForCompletion, which help manage resource dependencies by ensuring that services are fully initialized before dependent services are started. This is useful for applications with intricate dependencies, ensuring smoother deployments and more reliable application performance.
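As a minimal sketch of how these methods read in an AppHost (the project references are hypothetical), a database migration job can gate an API’s startup:

```csharp
// Sketch of startup ordering with WaitFor/WaitForCompletion.
// Projects.Migrations and Projects.Api are hypothetical project references.
var builder = DistributedApplication.CreateBuilder(args);

var db = builder.AddPostgres("db");

// Start migrations only once the database reports healthy.
var migrations = builder.AddProject<Projects.Migrations>("migrations")
                        .WaitFor(db);

// Start the API only after the migrations resource has run to completion.
builder.AddProject<Projects.Api>("api")
       .WithReference(db)
       .WaitForCompletion(migrations);

builder.Build().Run();
```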

Community feedback highlights how much Aspire’s development experience has been appreciated. One Reddit user noted:

It is super convenient, and I am a big fan of Aspire and how far it has come in such a short time.

Full release details and upgrade instructions are available in the .NET Aspire documentation.


MongoDB Inc (MDB) Shares Up 7.15% on Nov 13 – GuruFocus

Shares of MongoDB Inc (MDB, Financial) surged 7.15% in mid-day trading on Nov 13. The stock reached an intraday high of $314.49, before settling at $312.50, up from its previous close of $291.64. This places MDB 38.68% below its 52-week high of $509.62 and 46.89% above its 52-week low of $212.74. Trading volume was 725,050 shares, 66.4% of the average daily volume of 1,092,047.

Wall Street Analysts Forecast

Based on the one-year price targets offered by 28 analysts, the average target price for MongoDB Inc (MDB, Financial) is $334.88 with a high estimate of $520.00 and a low estimate of $180.00. The average target implies an upside of 7.16% from the current price of $312.50. More detailed estimate data can be found on the MongoDB Inc (MDB) Forecast page.

Based on the consensus recommendation from 32 brokerage firms, MongoDB Inc’s (MDB, Financial) average brokerage recommendation is currently 1.9, indicating “Outperform” status. The rating scale ranges from 1 to 5, where 1 signifies Strong Buy, and 5 denotes Sell.

Based on GuruFocus estimates, the estimated GF Value for MongoDB Inc (MDB, Financial) in one year is $535.73, suggesting an upside of 71.44% from the current price of $312.50. GF Value is GuruFocus’ estimate of the fair value at which the stock should trade. It is calculated based on the historical multiples the stock has traded at previously, as well as past business growth and future estimates of the business’ performance. More detailed data can be found on the MongoDB Inc (MDB) Summary page.

This article, generated by GuruFocus, is designed to provide general insights and is not tailored financial advice. Our commentary is rooted in historical data and analyst projections, utilizing an impartial methodology, and is not intended to serve as specific investment guidance. It does not formulate a recommendation to purchase or divest any stock and does not consider individual investment objectives or financial circumstances. Our objective is to deliver long-term, fundamental data-driven analysis. Be aware that our analysis might not incorporate the most recent, price-sensitive company announcements or qualitative information. GuruFocus holds no position in the stocks mentioned herein.


Cultivating Speed and Scale for the Modern Application with MongoDB and Delphix by Perforce

Speed, performance, and efficiency are synonymous with business today; backed by modern applications, these attributes are possible, at least in theory. While modern applications are the key to boosting proprietary innovation, adaptability, and competitiveness, they necessitate transformations at the core of enterprise data infrastructures.

Ian Ward, head of scaled enterprise adoption at MongoDB, and Corey Brune, senior principal sales engineer at Delphix by Perforce, joined DBTA’s webinar, Powering Modern Applications: Data Management for Speed and Scale, to explore pivotal solutions for supporting today’s applications with flexibility and automation at the forefront.

Relational databases and legacy architectures lead teams to work around limitations, and AI is amplifying this challenge, according to Ward. From cloud “lift and shifts” to cloud sprawl, many enterprises are facing extensive complexities because of migrating legacy systems to the cloud, overprovisioning applications, and introducing AI into the fold.

“There is an exponential increase in this complexity—especially on the cloud—for bringing AI applications to life,” noted Ward.

MongoDB Atlas centralizes a variety of paradigms that transcend traditional transactional databases, offering OLTP, time series, full-text search, real-time analytics, and vector search within a single platform. Instead of having to piece these components together individually to power a modern application, each of these capabilities is unified in a single database and “built to be performant, resilient, and with security at top of mind,” according to Ward.

This centralization is powered by MongoDB’s ability to contain related data in a single, rich document that covers any use case. MongoDB’s document model offers the flexibility necessary to meet the evolving demands of tomorrow’s apps and is highly performant, scalable, and capable of accelerating developer workflows by mapping to the way developers naturally code.

Echoing Ward, Brune said powering modern applications is fraught with challenges. The biggest issue in migrating to the cloud, Brune identified, is data. Limited, shared environments, coupled with a lack of high-quality production data, make high-RPM development impossible.

With Delphix, enterprises can eliminate as much as 90% of their data migration payload, bridging production to cloud with non-disruptive sync. By replicating data from on-prem to the cloud or cloud-to-cloud, Delphix creates virtual databases with that data to test production scenarios and ensure optimal outcomes. After initial ingest, only changes in the database are transmitted, accelerating cloud migration, according to Brune.

Delphix also offers hybrid cloud application development, enabling enterprises to move secure data to the cloud in minutes or hours while ensuring comprehensive test coverage and cloud data compliance. Once in the cloud, Delphix values ephemerality to increase cost efficiency and cloud agility, eliminating the need to persist database host machines or storage-consuming backups of dev/test environments.

For the full, in-depth webinar featuring detailed explanations, a roundtable discussion, and more, you can view an archived version of the webinar here.


Presentation: Monorepos: Beyond the Technicalities

By Tiago Bento

Article originally posted on InfoQ.

Transcript

Bento: I think the first thing most people think when talking about monorepos is the opposite, which is the poly-repo setting. I’m going to start comparing both approaches. Here you can see some setups that you may recognize from diagrams that you may have in your company, in your organization. The white boxes there are the representation of a repo, and the blue cubes are representing artifacts that these repos produce. It can be like Docker images, or binaries, JARs, anything that you get out of the code that you have can be understood as the blue cube.

The first example here, you have a bunch of repos that have a very clearly defined dependency relationship, and they all together work to produce a single artifact. This is, I think, very common, where people have multiple repos to build a monolith. No mystery there. Then we move on to the next one, where we have a single repo, but we also produce a single artifact. That already has a question mark here from my side, because, is that a monorepo or not? Then, moving on, we have a more complicated setting, where you have multiple repos that have more dependency relationship. You have code sharing between them.

In the end, the combination is three artifacts. That, to me, is very clearly a poly-repo setting. Then the last one is what to me is also very clearly a monorepo setting. You have a single repo, and after a build, you have multiple artifacts. Then, other complicated cases where it’s not so obvious whether it’s a monorepo or a poly-repo setting. For example, here you have repos that in a straight line produce a single artifact, and they are not connected with each other in any way. Is that a poly-repo, if you don’t have sharing of code between multiple artifacts that, in the end, get produced? Conversely, if you have a single repo and the internal modules do not share any code, it’s the analogs of this situation. Can you say that this is a monorepo, or is it just a big repo?

Then, most of us work in a setup that is very similar to this, where you have everything mixed together and you cannot tell whether or not you’re using poly-repo or monorepo, or both, or neither. A simple way to understand them, in my view, what defines if a setting of repos that produce artifacts can be understood as a poly-repo or a monorepo setting is the way you share code between these pieces that you have to have to produce the artifacts. When you add the arrows that represent the dependency relationship between the repos that make artifacts share code, I think you can very clearly say that this is a poly-repo setting. The same is true for a monorepo setting where now you have code sharing.

Monorepos (Overview)

Giving a definition of what a monorepo can be understood as, in this presentation, and in my view, you can say that monolithic applications are not always coming from monorepos. Also, putting code together on the same repo does not characterize a monorepo as well, because if you don’t have the sharing relationship, then it’s just a bunch of modules that are disjoint and produce artifacts from the same place. Of course, to have a monorepo, we have to have a very well-defined relationship between them, so you can very distinctively draw the diagram that represent your dependency relationships.

Also, very important is that the modules on a monorepo are part of the same build system. Most of us, I think, come from a Java background, or a Go background here, and you’re very familiar with Maven builds. We have a lot of modules that are part of the same reactor build. This produces a single artifact in the end, or multiple smaller artifacts that may or may not be important to you, as far as publication to a registry goes, or for third parties, or even first parties, to consume. Then, in the blue box are things that are not very well defined when talking about monorepos, but I wanted to give you my opinion and what I understand as a monorepo.

Monorepos are not necessarily big in the sense of, there’s a lot of code there, a mess everywhere. Nobody knows what depends on what. No, that’s not the case. Also, for me, monorepos are repos that contain code that produces more than one artifact that is interesting to you after the build. If you’re publishing, for example, two Docker images from one single repo, in my view, that’s a monorepo already. Of course, the code sharing. You need to have the ability for a single module in your monorepo to be reused by the final artifacts that you have. That’s the definition. We’ll build from there.

Right off the bat, you can do this claim. You can tell me, yes, but nobody has a monorepo, everyone has multiple repos. That is true. It’s very rare to see a company that has everything inside the same repository. That’s not the case at all. When we talk about monorepos, we actually are not talking about putting everything inside the same repo. I’m going to try to explain the way I understand that and how you can benefit from it. Conversely, using the definition that I just gave, you can also say that everyone has a monorepo, because if you think about a repository that has many libraries in it that get published, you have one single repository that publishes multiple artifacts that are interesting after the build. If you have a multi-module Go repo, or if you have a multi-module Maven repo, for me, you have a monorepo.

This session will try to answer this question, should you have a monorepo? The way I’ll try to do that is talking a little bit about code-based structure, and how teams operate together, how people do software, and how code reuse is understood and implemented in the industry. In the end, I hope that by looking at the examples and my personal experience that I’m going to share with you, you’ll be able to make a more informed decision of whether or not you can benefit from a monorepo or if you stick to a more traditional poly-repo setting.

Example 1 – Microservice E-commerce

Let’s see an example where our code base is composed of these repos. We have five microservices at the top, followed by two Backend for Frontends repos. Then we have two frontend applications. Then we have five repos that contain common code. In the end, we have the end-to-end tests repository. Everyone can relate in some way to this example, although it is oversimplified. The experience I had multiple times is very closely related to this, where you present something very simple, and all of a sudden everyone has an opinion of what should be done, of what’s a better way to do that, or even like personal taste: I like it, I don’t like it, whatever.

First thing I wanted to talk is, if you’re in this situation, chances are you’re not ready to have a conversation about whether or not to pursue a poly-repo or a monorepo setting. Because you have very fragmented opinions, and you don’t have people going to the same direction, even if they agree or not. My advice would be trying to get everyone moving in the same way so that everyone can be exposed to the same problems, understand what comes after them, in case of library building, or in case of the ops team deploying heavy Docker images or something. That is the way that this conversation starts. This conversation is very slow, very lengthy, and very complex.

It’s not something you can decide in a one-hour meeting when you put all the stakeholders together, or the architects, or whatever, and then you say, ok, voting, monorepo or not? It’s not going to go like that. Very important, people, teams, and your uniqueness, the uniqueness of your operating dynamics, come before choosing what to do. Then, even if you do decide what to do, having everyone align and understanding why you’re doing something. Put that in writing. Document it. Make presentations. Make people watch them. Whatever works for you comes before actually pursuing that change, implementing monorepo or poly-repo.

Let’s continue with our little e-commerce setup with the repos I showed before. Let’s break it down in teams. We can look at the repos, and we can see some affinity between them and the way that I as the architect or CTO or whatever, understand how people should be working together. This is valid. It’s something that someone is doing somewhere. Also, you could have something like this, where we have different teams, we have different names, we have different operating dynamics. This is very unique to your organization. Nobody is doing software exactly the same way. Sure, we’re borrowing from colleagues and people from other companies and tech presentations, but in reality, operations are very distinct wherever you look. This is also valid.

Interesting, we have this now, where three teams are collaborating on the same repo. Also, here, we have all teams collaborating on the end-to-end tests. That’s pretty normal. Everyone owns the tests, so nobody does. Also, this is a very valid possibility where you have a big code base, you have your repos there. You have two main teams doing stuff everywhere, helping everybody out. Then you have a very focused team here doing only security, so they only care about security. You have another very focused team that only cares about tests. This is also valid. Why am I saying all this? Because you as individuals, as decision makers, as multipliers, you know how your team operates much better than me or anyone else that gives advice on the internet, or whatever.

By looking at your particular case and evaluating both techniques, both monorepo and poly-repo, you can see where you could benefit from each one of them. You don’t have to choose. You can pick monorepo for a part of your org, or poly-repo for another part.

So far, list of repos. Everybody had opinions. We sorted that out. Now we have our little team separation here. We know that this is important. We didn’t talk about the dependency relationship between the repos. It’s somewhat obvious from the names, but even so, people can understand them differently. This is a possibility. I know there’s a lot of arrows, but yes. I have the microservices there depending on a lot of common library repos here. Then they get used by the end-to-end tests.

Then you have the frontends, which is like separate, and they have their own code sharing with the design system, or very nasty selects that you implemented. This can be something that someone draws in your company by looking at the repos. Or, you can see someone understand it like this, where the BFFs actually depend on the user service as a hard dependency, and not just like API dependency or something. This is a valid possibility too. More than that, these are only the internal arrows that you would have if you would draw this diagram for your company in this example. There’s a lot of other arrows, which are third-party dependencies that every repo is depending on. You cannot build software without depending on them.

To make things more complicated, this is also something that many of us are doing. We’re versioning our own libraries and publishing them so that they can be consumed by our services. You can see this becomes very complicated very quickly, because it’s hard to tell what libraries are being used and what are not, and what versions of services are using what versions of libraries. It can be that you see this and you solve it like this.

Every time a library publishes a release, we have to update all the services to have everyone on the same page using the same version, no dependency conflicts. Everything works. Or, you can go one step further and you say, every time we make a change on a repo, this gets automatically published to the services. They rebuild, redeploy, do everything. In this case, why do we have repos then? We could put everything inside the same repo and call them modules. It’s a possibility.

I wanted to go through this example to highlight that making software inherently has these problems. If you haven’t, go watch this talk, “Dependency Hell, Monorepos and beyond”. It’s 7 years old, but it’s still current. It’s very educational to understand how software gets built and published. Very great material. We can all identify with these problems here. At some point, we had multiple releases just to consume a tiny library, or we had one person depending on a version of a library, and then this version has a security vulnerability, so you have to run and update everything.

That’s one thing I wanted to say, like making software is hard, even if you were in complete isolation, just you and your text editor, pushing characters there is already difficult. Then, if you start pointing to third-party dependencies, it gets more difficult, because now we have to manage the complexity of upgrading them and tracing them and see if they are reliable or not. Then, if you want to make your software be reusable by others, then that’s even harder, because now you have to care about who’s using your software. If you’re doing all at the same time, which is everyone, then this is the hardest thing ever. It’s really complicated.

Example 2 – Upstream vs. Downstream

Taking a step back, back to the monorepo and poly-repo conversation. How can we define them, after all this? Poly-repo is usually understood as very small repos that contain software for a very specific purpose, and they produce a single artifact, or like very few. Every time you publish something from a repo on a poly-repo setting, you don’t care about who’s using it. It’s their responsibility to get your new version and upgrade their code.

On a monorepo, you have multiple modules that can or cannot be related. The way you reuse code internally is by just pointing to a local artifact. On purpose I put that line there saying that builds can be fast on a monorepo, because if you build it right you can always filter the monorepo and build exactly the part that you want without having to build everything else that’s in there. That’s how you scale a monorepo.
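With pnpm workspaces, for example, such filtered builds look roughly like this; the package name is hypothetical, but the filter syntax is standard pnpm:

```sh
# Build one package plus everything it depends on (hypothetical package name).
pnpm --filter "@my-org/online-editor..." run build

# Build only packages changed since main, plus everything downstream of them.
pnpm --filter "...[origin/main]" run build
```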

I think we can move to the second example where we have a better understanding of the way I’m presenting poly-repo versus monorepo. This next example has to do with the concepts of upstream and downstream. Many of you may be familiar with it already, especially if you’re in a poly-repo setting. That’s repos in purple, artifacts in blue. Let’s imagine you’re there all the way to the right. What you call upstream is whatever comes before you in the build pipeline. Libraries you use, services you consume, that’s upstream. Everything that depends on you, your libraries, your Docker images, your services, that’s downstream.

In this example, we have one repo that’s upstream and one repo that’s downstream. Pretty easy. If you’re someone making changes to this repo, now your downstream looks like this, which is a lot of stuff. In this cut of the code base structure, that’s the entirety of your publishable artifacts. Changing something in that particular repository has implications in all artifacts you produce. Let’s go through the most common setup to share code, which is publishing libraries using semantic versioning. You can have this setup and you make a change of the very first one there where the red arrow is. What happens is you make a change, then you publish a new version.

Then, people have to upgrade their dependencies to use that new version. If you want to continue that and release the new artifacts using that new version of that library, then you have to continue that chaining, and you have a new version of this repo, and you publish a new version of this one. Then they update the version that they’re consuming. Then you publish a new version of those repos that represent the artifacts, and then you push the artifacts somewhere. This is how many of us are doing things. Depending on who’s doing what, this may be very wasteful. If there’s the same team managing all these repos, why are they making all those version publications, if their only purpose was to update the artifacts in the first place?

We can argue that, let’s use a monorepo then and put every component there in a module inside the same repo under the same build system, and then I make a change, and then I release. That’s not all good things, because on the last setup, the team doing changes here did their change, published their versions, continue to the next task. Then the other teams started upgrading their code. Now you can have something like this where you have a very big change to make, and you have to make this change all at once, which can take weeks especially if you’re doing a major version bump or something. You make the changes and then you can release.

This is the part where I take the responsibility out of me a little bit and hand it over to you, because you know what you’re doing, what your code looks like, what your teams prefer, and everything related to the operating dynamics of your code base. If you look at all this and you say, who’s responsible for updating downstream code? Each one of us will have a different answer. That answer has impact on whether or not you should be using a monorepo. Another question would be, how do you prefer to introduce changes? Where on a poly-repo, you do it incrementally. A team updates a library. This library gets published, and slowly everyone adopts it. Whereas on a monorepo, the team that’s changing the library may have to ask for help updating downstream code because they may not have the knowledge, so it may take longer.

Once you do it, you do it for everyone, so you know that the new artifacts are deployed aligned. This is my take. I don’t think poly-repo and monorepo are either/or. I think they’re complementary. It depends on where you are and what you’re doing and who you’re doing it with. They’re both techniques that can coexist in the same code base, and the way you structure it has to be very tightly coupled with the way you operate it, and the skill set that people have, and everything related to your day-to-day. Some parts of your organization are better understood and operated as a detached unit.

For example, if you have a framework team, or you’re doing a CLI team, where you have a very distinct workflow, they need to be independent to innovate and publish new versions, and you don’t need to force everyone to use the latest version of this tool. That happens. That exists. It’s a valid possibility. The other part is true as well. Some parts of your organization are better understood and operated as a unified conglomerate, where alignment and synchronicity are mandatory, even for success. For me, monorepos or poly-repos is not a question of one or the other. It’s a question of where and when you should be using each one of those.

Apache KIE Tools

I wanted to talk a little bit more about my experience working on a monorepo. We have a fairly big one. That’s the project I work on every day. It’s called Apache KIE Tools. It’s specialized tools for business automation, authoring and running and monitoring and everything. Some stats, our build system is pnpm. We are almost 5 years old. We have 200 packages, give or take. Each package is understood like a little repo. Each package has a package.json file, borrowed from the JavaScript ecosystem, that contains a script that defines how to build it. This package.json file also defines the relationship between the modules themselves.

It’s really easy for me to say, with a simple command, I want to build this section of the monorepo, and select exactly what part of the tree I want to build. We have almost 50 artifacts coming from that monorepo, ranging from Docker images to VS Code extensions and Maven modules and Maven applications, and examples that get published, and everything. The way we put everyone under the same build system was by using standardized script names. Each package that has a build step has two scripts called build:dev and build:prod, and packages that can be developed standalone have a start command.
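As a sketch, a package in a setup like this might wire the convention up as follows; the names, versions, and webpack commands are hypothetical, while "workspace:*" is pnpm’s standard protocol for in-repo dependencies:

```json
{
  "name": "@my-org/some-library",
  "version": "0.0.0",
  "scripts": {
    "build:dev": "webpack --env dev",
    "build:prod": "webpack --env prod",
    "start": "webpack serve --env dev"
  },
  "dependencies": {
    "@my-org/another-library": "workspace:*"
  }
}
```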

Then, all the configuration is done through environment variables, borrowing from the Twelve-Factor App manifesto. We have built an internal tool that manages this very large set of environment variables, to configure things like logo paths, whether or not to turn on the optimizer or minifier, and whether or not to run tests. Everything that we do is through environment variables. Every time we need to make a reference to another package, for example when I’m building a binary that puts together a bunch of libraries, we do that through node_modules, also borrowed from the JavaScript ecosystem.

Through symbolic links and the definitions we have in package.json, we can safely reference only things that we declare as a dependency. We don’t have the problem where someone just walks back up the directory tree and reaches into another package without declaring it as a dependency. This is something that we did to prevent ourselves from making mistakes, and from forgetting to build something during builds that select only a part of the monorepo.

Then, one thing that we have that’s very helpful is the ability to partially build the monorepo in PR checks. Depending on the files you changed, we have scripts that will figure out what packages need to be rebuilt, what packages need to be retested, and things like that. The stats there are, for a run that changed only a few files in a particular module, the slowest partition built in 16 minutes, and all the other ones were very fast. For a full build, you can compare the times. Having the ability to split your builds into partitions and sections of the tree is very important if you want speed on a monorepo that can scale well.

Also, to make things more complicated, we have many languages in there. We are a polyglot monorepo. We have Java libs and apps building with Maven. We have TypeScript, same thing. We have Golang. We have a lot of container images as well. Another thing, and that’s also an optimization, is that we have sparse checkout ability, so you can clone the repo and select only the portion of it that you want. Even if it gets really big, you can say on the git clone that you only want these packages, and they will be downloaded, and everything else is going to be ignored. The build system will continue working normally for you.
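With plain git, a partial clone plus sparse checkout of a monorepo looks roughly like this sketch; the repository URL and package paths are placeholders:

```sh
# Clone without blob contents and without checking anything out.
git clone --filter=blob:none --no-checkout https://github.com/my-org/my-monorepo.git
cd my-monorepo

# Materialize only the packages you care about; everything else stays ignored.
git sparse-checkout set packages/some-library packages/another-library
git checkout main
```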

Apache KIE Tools (Challenges)

Of course, not everything is good. We have some challenges and some things that we are doing right now. One of those things is that we’re missing a user manual. A lot of the knowledge we have is in people’s heads and private messages on Slack, or Zulip chat for open source, and that’s not very good. We’re writing a user manual with all the conventions that we have, the reasoning behind the architecture of the repo, and everything. Then, we’re also improving the development experience for Maven-based packages, especially with importing them in IDEs and making sure that all the references are picked up, and things become red when they’re wrong, and things like that.

Then we have a very annoying problem, which is, if you change the lockfile, the top-level one, our partitioning system doesn’t understand which modules are affected. We have a fix for it. We’re researching how to roll that out. I’m glad that we found a solution there. If that works the way it should, we’re never going to have a full build whenever our code changes, unless it’s a very root-level file, like the top-level package.json or something. Then, still pending, we’re also trying a merge queue. A merge queue is when you press the merge button and the code doesn’t go instantly to the target branch. It goes to a queue where it simulates merges, and when a check passes, it can merge automatically. You can take things out of the queue if they’re going to break your main branch.

That’s a very cool thing to prevent semantic conflicts from happening, especially when they break tests or something. We’re pending trying that. Also pending is having multiple cores available for each package to build. We can do parallel builds, but we don’t have a way to say, during your build, you can use this many cores. We don’t have that. We’re probably going to use an environment variable for that. The next one is related to this. You saw we have two commands to build, build:dev and build:prod, and sometimes there’s duplication in these commands, and it’s very annoying to maintain. One thing we’re researching is how we can use the environment variables to configure parameters that will distinguish a prod build from a dev build.

For example, on webpack, you can set the mode, and it will optimize your build or not. The last one, which I think is the most exciting, is taking advantage of turborepo, a task runner that also understands package.json files. It has a very nice ability, which is caching, so you can, in theory, download our monorepo and start an app without building anything. You can see how powerful that is for development and for welcoming new people into a code base. Of course, if you look at a poly-repo, you don’t have the caching problem, because you’re publishing everything, so you don’t have to build it again: tradeoffs.

Using a Monorepo (Yourself)

I wanted to close with some advice of, how can you build a monorepo yourself, or even improve existing monorepos that you might have? The first one is, if you’re starting a monorepo now, if you think this is for you, if you’re doing research, if you’re doing a POC or something, then don’t start big. Don’t try to hug every part of the code base. Start small. Pick a few languages or one. Choose one build tool, and go from there. You’re going to make mistakes. You’re going to learn from them. You’re going to incorporate the way your organization works. You’re going to have feedback. Start small. Don’t plan for the whole thing.

Then, the third bullet point there is, choose some defaults. Conventionalize from the beginning, like this is the way we do it. It doesn’t matter if it’s good or bad, you don’t know. You’re just starting. The important thing is that everyone is doing the same, and everyone is exposed to the exact same environment so they can feel the same struggles, if they happen. Chances are, there will be some. Then, fourth is, make the relationship between the modules easy to visualize. It’s really easy to get lost when you have a monorepo, because you have very small modules, and if you don’t plan them accordingly, you’ll have everyone depending on everything. This is not good. That also happens with a poly-repo.

Number five, be prepared to write some custom tools. Your build necessities are very unique too. Maybe your company has a weird setup because of network issues, or maybe you’re doing code in a very old platform that needs a very special tool to build, and you need to fetch it from somewhere and use an API key. Be prepared to write custom scripts that look like they are already made for you, tailored for your needs. This is something that’s really valuable. Sixth, be prepared to talk about the monorepo a lot, because this is a controversial topic, and people will have a lot of opinions, and you will have to explain why you’re doing this all over again multiple times.

Then, number seven is, optimize for development. That comes from my personal experience. It is much nicer for people to clone and start working right away, rather than having a massive configuration step. Our monorepo has everything turned off by default, everything targeting development, localhost, all the way, no production names, no production references, nothing. Everything is just made to be run locally, without dependencies on anything. Then some don’ts, which are equally important, in my opinion. Don’t group by technology.

Don’t look at your code base and think, yes, let’s get everything that is using Rust and put on the same monorepo. Because, yes, everything’s Rust. We have a build system built for it. No. Group by operating dynamics, by affinity of teams. Talk to people, see how they feel about interacting with other teams more often. Maybe they don’t like it, and maybe this is not for them, or maybe you’re putting together two parts of your code base that use the same technology but have nothing to do with each other. Don’t do that. Don’t group by technology, group by team affinity, by the things that you want to build and the way people are already operating.

Then, number nine, don’t compromise on quality. Be thorough about the decisions you make and why you’re making them, because otherwise you can become a big ball of mud very quickly. Say no. If some people want to put a bunch of code in your monorepo just to solve an immediate problem that they might be having, think about it. Structure it. Plan. Make POCs. Simulate what’s going to be like your day-to-day if the code was there. Have patience. Number 10 is, don’t do too much right away. I mentioned many things like partial build, sparse checkouts, caching, and unified configuration mechanism. You don’t have to have all these things to have a monorepo. Maybe your monorepo is small in the beginning, and it’s fine if you have to build everything every time, maybe.

Eleventh, I think, is the most important one, don’t be afraid if the monorepo doesn’t work out for you. Reevaluate, incorporate feedback. Learn from your mistakes, and hear people out. Because the goal of all this is to extract the most out of the people’s time. There’s no reason for you to put code in a monorepo if this is going to make people’s life harder. That’s true for the opposite direction too. There’s no reason to split just because it’s more beautiful.

Questions and Answers

Participant 1: Do you have any preference on the build tools? Because we have suffered a lot by choosing one build tool and then we quickly get lost from there.

Bento: You mean like choosing Maven over Bazel or Gradle?

Participant 1: You mentioned a lot of tools, so I was wondering which one is your preferred choice, like Gradle or Maven?

Bento: This is very closely related to your team’s preference and skill set. On my team, it was and it still is, very hard to get people to move from the Maven state of mind to a less structured approach that we have with like package.json and JavaScript all over the place. I don’t have a preference. They all are equally good depending on what you’re doing. It will depend.

Participant 2: Do you have any first-hand opinion on coding styles between the various modules of a monorepo? I’m more interested in a polyglot monorepo, if possible, like how the code is organized, various service layers, and all that.

Bento: I’m a very big sponsor of flat structures on the monorepo. I don’t like too much nesting, because it’s easier to visualize the relationship between all the internal modules if you don’t have a folder that hides 20, 100 modules. I always tell people, give the package name a very nice prefix, and don’t be afraid of creating as many as you want. Like the monorepo that we built is built for thousands of packages. We are prepared to grow that far. IDE support might suffer a little bit, depending on what IDE you’re using, too. This is the thing that I talk about the most. Like flat structures at the top, easy visualization. If you go to the specifics of each language, then you go to the user manual and you see the code style there.

Participant 3: I’m from one of those rare companies that does pretty much use a monorepo for the entire code. There’s lots of pros and cons there, of course. We’re actually in the process of trying to unwind that. I just wanted to add maybe one don’t, actually, to your list, which is, try to avoid polyglot repos. Because you can easily get into a situation where you have dependency hell resulting from that, if you have downstream dependencies from that and upstream dependencies. You have a Java service that’s depending on a Python library being built, or something like that.

Bento: I don’t necessarily agree with that don’t, but it is a concern. Polyglot monorepos are really fragile and really hard to build. You can look at stories that people who use Bazel will tell you, and it can become very messy very quickly. For our use case, for example, we really do have the need of having this cross-language dependency because we’re having like a VS Code extension that is TypeScript depending on JARs, and these JARs usually depend on other TypeScript modules. The structure that we created lets us navigate this crazy dependency tree that we have in a way that allows us to stay in synchrony.

Participant 3: I was talking about individual repos having multiple languages in them. Maybe it’s specific to our company. We have pretty coarse-grained dependencies, so it’s like code base.

Bento: This is a good example, like your organization, it didn’t work out for you, so you’re moving out of it. That’s completely fine. Maybe a big part of your modules are staying inside a monorepo, which is fine too. You’re deciding where and when.

Participant 4: When you were talking about your own company’s problems, one of the bullets said that the package lockfile would change, or the pnpm-lock file would change, and it was causing downstream things to build, and you fixed that. Why is it a bad thing for when dependencies change, for everything downstream to get rebuilt?

Bento: It’s not a bad thing. It’s actually what we’re aiming for. We want to build only downstream things when a dependency changes. Our problem is that when the lockfile which is in the root folder changes, the scripting system that we have to decide what to build will understand that a root file changed so everything gets built. Now we have a solution where we leverage the turborepo diffing algorithm to understand which packages are affected by the dependencies that changed inside the lockfile. You’re right, like building only downstream when a dependency changed is a good thing.


JENNISON ASSOCIATES LLC’s Strategic Acquisition of MongoDB Inc Shares – GuruFocus

Overview of the Recent Trade by JENNISON ASSOCIATES LLC (Trades, Portfolio)

On September 30, 2024, JENNISON ASSOCIATES LLC (Trades, Portfolio), a prominent investment firm, expanded its portfolio by acquiring an additional 592,038 shares of MongoDB Inc (MDB, Financial) at a price of $270.35 per share. This transaction increased the firm’s total holdings in MongoDB to 3,102,024 shares, marking a significant endorsement of the tech company’s potential. The trade not only reflects a substantial investment but also contributes 0.1% to JENNISON ASSOCIATES LLC (Trades, Portfolio)’s portfolio, emphasizing the strategic importance of this acquisition.

Insight into JENNISON ASSOCIATES LLC (Trades, Portfolio)

Founded in 1969, JENNISON ASSOCIATES LLC (Trades, Portfolio) has evolved into a growth equity manager primarily focused on institutional assets. As a subsidiary of Prudential Financial, the firm has developed a robust investment philosophy centered on rigorous fundamental research and a dynamic investment process. With a diverse strategy portfolio that includes opportunistic, market neutral, and blend strategies, JENNISON ASSOCIATES LLC (Trades, Portfolio) manages a significant $161.08 billion in assets. Their top holdings include major tech companies such as Apple Inc (AAPL, Financial) and NVIDIA Corp (NVDA, Financial), highlighting their inclination towards technology and consumer cyclical sectors.

Understanding MongoDB Inc’s Business Model

MongoDB Inc, established in 2007, operates a leading document-oriented database, which serves nearly 33,000 paying customers. The company offers both licenses and subscriptions for its NoSQL database, supporting a wide range of programming languages and deployment scenarios. Despite being currently unprofitable, as indicated by a P/E ratio of 0.00, MongoDB is significantly undervalued relative to its GF Value of $456.20, suggesting strong potential for future growth.

Impact of the Trade on JENNISON ASSOCIATES LLC (Trades, Portfolio)’s Portfolio

The recent acquisition of MongoDB shares has bolstered JENNISON ASSOCIATES LLC (Trades, Portfolio)’s position in the technology sector, with MongoDB now constituting 0.54% of the firm’s total portfolio and representing 4.20% of all MongoDB shares held. This strategic move aligns with the firm’s focus on investing in high-growth potential markets, reinforcing its commitment to technology innovations.

Market Context and Strategic Timing

The timing of JENNISON ASSOCIATES LLC (Trades, Portfolio)’s investment coincides with MongoDB’s stock price increase of 7.91% since the transaction, reflecting a positive market response. MongoDB’s stock currently trades at $291.74, which is significantly below its GF Value, indicating a potential undervaluation and an attractive entry point for the firm.

Comparative Industry Analysis

In the competitive software industry, MongoDB stands out with a GF Score of 82/100, suggesting good potential for outperformance. The company’s strong Growth Rank and GF Value Rank further bolster its competitive edge against industry peers.

Future Prospects for MongoDB Inc

Looking ahead, MongoDB’s strategic initiatives, including expanding its MongoDB Atlas segment, are expected to drive further growth. The company’s robust growth metrics and innovative product offerings position it well to capitalize on the expanding demand for database solutions.

Conclusion

The acquisition of MongoDB shares by JENNISON ASSOCIATES LLC (Trades, Portfolio) is a calculated move that aligns with the firm’s investment strategy and growth outlook. This transaction not only enhances the firm’s portfolio but also positions it to benefit from MongoDB’s potential market success. As MongoDB continues to innovate and expand, JENNISON ASSOCIATES LLC (Trades, Portfolio)’s stakeholders can anticipate favorable outcomes from this strategic investment.

This article, generated by GuruFocus, is designed to provide general insights and is not tailored financial advice. Our commentary is rooted in historical data and analyst projections, utilizing an impartial methodology, and is not intended to serve as specific investment guidance. It does not formulate a recommendation to purchase or divest any stock and does not consider individual investment objectives or financial circumstances. Our objective is to deliver long-term, fundamental data-driven analysis. Be aware that our analysis might not incorporate the most recent, price-sensitive company announcements or qualitative information. GuruFocus holds no position in the stocks mentioned herein.

Article originally posted on mongodb google news. Visit mongodb google news

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.


3 AI Stocks to Buy Before Their Breakthroughs Take Off – TradingView

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

AI is rapidly evolving, with its applications expanding across sectors like customer service, cybersecurity, and automation. This transformative potential signals significant future demand, making early investments highly attractive. Against this dynamic backdrop, it could be wise to buy top AI stocks such as Cloudflare, Inc. (NET), MongoDB, Inc. (MDB), and Pegasystems Inc. (PEGA).

AI has revolutionized customer service and engagement across the globe, leveraging tools such as chatbots, predictive models, and sentiment analysis to improve interactions and enhance support, leading to better business outcomes. As adoption continues to rise, global AI spending is expected to exceed $632 billion by 2028, fueled by a 29% CAGR driven by advancements in AI and generative technologies.

Additionally, AI technology now enhances areas such as cybersecurity, personalized marketing, inventory management, and recruitment. Notably, AI-powered robotics are transforming manufacturing and customer service, driving advances in automation and production efficiency. As a result, the AI market is expected to reach $184 billion in 2024 and, growing at a 28.46% CAGR, to hit $826.7 billion by 2030.
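
As a quick sanity check (a minimal sketch; the source’s exact inputs are unknown), the 2024 base and the stated CAGR are consistent with the 2030 projection:

# $184B in 2024 compounding at a 28.46% CAGR over six years:
base_2024 = 184.0                 # $ billions
cagr = 0.2846
projection_2030 = base_2024 * (1 + cagr) ** 6
print(round(projection_2030, 1))  # ~826.8 -- consistent with the article's $826.7 billion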

Considering these conducive trends, let’s analyze the fundamentals of the three AI picks mentioned above.

Cloudflare, Inc. (NET)

NET operates as a cloud services provider, delivering a range of services to businesses worldwide. The company offers an integrated cloud-based security solution to secure various platforms, along with website and application security products. Additionally, it provides website and application performance solutions, a SASE platform, and network services.

On October 18, 2024, NET announced the opening of a new headquarters in Lisbon, Portugal, to strengthen its EMEA operations and support growing customer demand. This expansion highlights NET’s commitment to the region and aims to leverage Lisbon’s tech talent and strategic location.

On October 8, 2024, NET announced its acquisition of Kivera to enhance proactive cloud security within its Cloudflare One platform. This integration aims to prevent cloud security risks by implementing preventive controls, simplifying secure cloud operations for businesses.

In terms of the trailing-12-month gross profit margin, NET’s 77.53% is 54.4% higher than the 50.23% industry average. Likewise, its 9.08% trailing-12-month capex-to-sales ratio is 345.8% higher than the 2.04% industry average. Its 22.82% trailing-12-month levered FCF margin is 103.7% higher than the 11.20% industry average.
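
These “X% higher” figures appear to be simple relative differences against the industry average; a quick check with the levered-FCF numbers (a sketch assuming that convention) reproduces the stated value to within rounding:

# Relative difference between a company metric and its industry average.
company = 22.82     # NET's trailing-12-month levered FCF margin, %
industry = 11.20    # industry average, %
relative_diff = (company - industry) / industry * 100
print(round(relative_diff, 1))  # ~103.8; the article's 103.7% likely reflects unrounded inputs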

During the third quarter that ended September 30, 2024, NET’s revenues increased 28.2% year-over-year to $430.08 million. The company’s non-GAAP gross profit rose 28.3% from the year-ago value to $339.11 million. Moreover, its non-GAAP net income was $72.58 million and $0.20 per share, up 31.3% and 25% from the previous year’s quarter, respectively.

Street expects NET’s EPS and revenue for the quarter ending December 31, 2024, to increase 19.7% and 24.8% year-over-year to $0.18 and $452.19 million, respectively. It surpassed Street EPS estimates in each of the trailing four quarters. Over the past year, the stock has gained 45.5% to close the last trading session at $92.05.

NET’s POWR Ratings reflect strong prospects. The POWR Ratings assess stocks by 118 different factors, each with its own weighting.

It is ranked #15 out of 20 stocks in the B-rated Software – Security industry. It has a B grade for Growth. Click here to see NET’s ratings for Value, Momentum, Stability, Sentiment, and Quality.

MongoDB, Inc. (MDB)

MDB and its subsidiaries provide a general-purpose database platform worldwide. The company offers MongoDB Atlas, MongoDB Enterprise Advanced, and Community Server.

On August 20, 2024, MDB announced that its Atlas for Government now supports Google Cloud’s Assured Workloads, enhancing flexibility and resilience for public sector customers. This makes Atlas for Government the first multi-cloud data platform authorized at the FedRAMP Moderate level, expanding secure cloud options for government agencies.

In terms of the trailing-12-month levered FCF margin, MDB’s 15.99% is 42.7% higher than the 11.20% industry average. Its 74.02% trailing-12-month gross profit margin is 47.4% higher than the 50.23% industry average. Similarly, the stock’s 0.63x trailing-12-month asset turnover ratio is 2.5% higher than the 0.61x industry average.

MDB’s total revenue for the second quarter, which ended on July 31, 2024, increased 12.8% year-over-year to $478.11 million. Similarly, the company’s non-GAAP gross profit grew 9.7% over the prior-year quarter to $360.79 million. Additionally, its non-GAAP net income was $59.04 million, or $0.70 per share.

For the quarter ended October 31, 2024, MDB’s revenue is expected to increase 14.5% year-over-year to $495.65 million. Its EPS for the quarter ending April 30, 2025, is expected to grow 17.9% year-over-year to $0.60. It surpassed the consensus EPS estimates in each of the trailing four quarters. Over the past three months, the stock has gained 24.3% to close the last trading session at $292.

MDB’s positive outlook is reflected in its POWR Ratings. It has a B grade for Growth. It is ranked #94 out of 129 stocks in the Software – Application industry. To access additional grades for MDB’s Value, Momentum, Stability, Sentiment, and Quality ratings, click here.

Pegasystems Inc. (PEGA)

PEGA develops, markets, licenses, hosts, and supports enterprise software in the United States, the rest of the Americas, the United Kingdom, Europe, the Middle East, Africa, and the Asia-Pacific.

On October 29, 2024, PEGA announced Pega Infinity 24.2, introducing enhanced generative AI features to boost enterprise innovation, productivity, and customer engagement. The update includes expanded AI model support, streamlined workflows, and advanced automation tools across customer service, decisioning, and sales platforms.

In terms of the trailing-12-month net income margin, PEGA’s 8.29% is 141.1% higher than the 3.44% industry average. Similarly, its 38.32% trailing-12-month Return on Common Equity is 857.5% higher than the 4% industry average. Its 1.06x trailing-12-month asset turnover ratio is 72% higher than the 0.61x industry average.

PEGA’s total revenue for the nine months ended September 30, 2024, rose 5% year-over-year to $1.01 billion. For the same period, the company’s gross profit stood at $718.04 million, up 7.1% year-over-year. The company’s non-GAAP net income and EPS were $122.59 million and $1.38, up 111.3% and 100%, respectively, from the prior year’s period.

Analysts expect PEGA’s revenue for the quarter ending December 31, 2024, to increase marginally year-over-year to $474.51 million. Likewise, its EPS for the quarter ending March 31, 2025, is expected to rise 14.5% year-over-year to $0.55. It surpassed the Street EPS estimates in each of the trailing four quarters. Over the past year, the stock has gained 96.2% to close the last trading session at $88.65.

PEGA’s robust fundamentals are reflected in its POWR Ratings. It has an overall rating of B, which translates to a Buy in our proprietary rating system.

It has a B grade for Sentiment and Quality. Within the B-rated Software – Business industry, it is ranked #13 out of 39 stocks. To see PEGA’s Growth, Value, Momentum, and Stability ratings, click here.

What To Do Next?

Get your hands on this special report with 3 low-priced companies with tremendous upside potential even in today’s volatile markets:

3 Stocks to DOUBLE This Year >

NET shares were trading at $90.91 per share on Tuesday afternoon, down $2.32 (-2.49%). Year-to-date, NET has gained 9.19%, versus a 26.77% rise in the benchmark S&P 500 index during the same period.

Article originally posted on mongodb google news. Visit mongodb google news

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.


NET: 3 AI Stocks to Buy Before Their Breakthroughs Take Off | StockNews.com

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


Article originally posted on mongodb google news. Visit mongodb google news

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.


Hugging Face Launches SmolTools: Practical AI Apps Powered by SmolLM2 Model

MMS Founder
MMS Robert Krzaczynski

Article originally posted on InfoQ. Visit InfoQ

Hugging Face has introduced SmolTools, a set of applications built on the recently launched SmolLM2 model, a compact 1.7-billion-parameter language model. SmolTools includes specialized tools for summarization, rewriting, and task automation, bringing efficient AI functionality to a broader range of users.

The SmolTools suite includes several applications designed to streamline common tasks:

  1. SmolSummarizer: Enables quick summarization for texts up to 20 pages, retaining key points and supporting follow-up questions for deeper understanding.
  2. SmolRewriter: Refines initial drafts to sound professional and approachable while preserving original intent, ideal for email and messaging needs.
  3. SmolAgent: Acts as a tool-integrated AI agent capable of executing tasks like random number generation or time checks. Its extensible tool system also allows users to add new capabilities as needed, as sketched after this list.
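
The extensible tool system mentioned above can be pictured as a name-to-function registry; the following is a minimal, hypothetical sketch of that pattern, not the actual smol_tools API, and every name in it is invented for illustration:

import random
from datetime import datetime

# Hypothetical tool registry -- NOT the smol_tools API; names are invented.
TOOLS = {}

def tool(name):
    """Register a function under a name the agent can dispatch to."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("random_number")
def random_number(low: int = 0, high: int = 100) -> int:
    return random.randint(low, high)

@tool("current_time")
def current_time() -> str:
    return datetime.now().isoformat(timespec="seconds")

def dispatch(call: dict):
    """Execute a tool call like {'name': ..., 'arguments': {...}},
    e.g. as parsed from a model's structured output."""
    return TOOLS[call["name"]](**call.get("arguments", {}))

print(dispatch({"name": "random_number", "arguments": {"low": 1, "high": 6}}))
print(dispatch({"name": "current_time"}))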

To install SmolTools, users can follow these setup steps:

1. Clone the repository:

git clone https://github.com/huggingface/smollm.git
cd smollm/smol_tools

2. Install dependencies:

uv venv --python 3.11                 # create a Python 3.11 virtual environment with uv
source .venv/bin/activate             # activate the environment
uv pip install -r requirements.txt    # install the pinned dependencies

These tools are powered by SmolLM2 and its lighter variants (360M and 135M parameters), which are optimized for devices with limited resources. This development brings AI-powered functions to a wider range of platforms, with implications for small businesses, developers, and edge devices.
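
As a rough illustration of what running one of the lighter variants locally might look like, the sketch below uses the standard transformers API; the checkpoint name is assumed from the SmolLM2 release and may differ.

# A minimal sketch, assuming the SmolLM2-360M-Instruct checkpoint name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM2-360M-Instruct"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Summarize: SmolTools runs small-model AI apps on-device."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=80)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))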

Drasko Draskovic noted the potential impact: 

For small businesses, individual developers, and even edge devices like smartphones, this is game-changing. Imagine running sophisticated summarization or rewriting tasks directly on-device, empowering users everywhere with AI that’s accessible, efficient, and practical.
By pushing forward with innovations like SmolTools, Hugging Face is not just developing technology. They are helping democratize AI. They are proving that efficiency and accessibility are as important as power, opening doors to a future where AI is integrated into everyday workflows, making an impact on all levels of business and society.

SmolLM2’s on-device performance is enhanced with support for tool calling and structured outputs, features critical for building advanced workflows and agentic AI applications. Gaurav Dhiman highlighted the importance of these functions:

Without that, it is practically not possible to build useful AI apps other than general chatting summarization apps. For building something serious like Agentic workflows, both tool calling and structured outputs are crucial capabilities.

Andrés Marafioti, a machine learning researcher at Hugging Face, confirmed SmolTools’ support for these features, referencing a repository example that includes an agent for function calling and structured outputs.
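
For readers curious how such a flow might be wired up, the sketch below follows the generic transformers pattern of passing tool definitions to the chat template; whether SmolLM2’s template accepts tools exactly this way is an assumption based on the article’s claims, and the helper function is illustrative.

from datetime import datetime
from transformers import AutoModelForCausalLM, AutoTokenizer

def get_current_time() -> str:
    """Return the current time as an ISO-8601 string."""
    return datetime.now().isoformat(timespec="seconds")

model_id = "HuggingFaceTB/SmolLM2-1.7B-Instruct"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "What time is it?"}]
# Passing tools= lets the template describe get_current_time to the model,
# which should then emit a structured call for the application to parse
# and dispatch to the Python function above.
inputs = tokenizer.apply_chat_template(
    messages, tools=[get_current_time], add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))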

SmolTools offers accessible, practical tools that simplify text processing tasks on-device, with potential applications across various fields.


Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.