
MMS • RSS
Posted on mongodb google news. Visit mongodb google news
PNC Financial Services Group Inc. decreased its holdings in shares of MongoDB, Inc. (NASDAQ:MDB – Free Report) by 11.7% during the fourth quarter, according to its most recent filing with the SEC. The firm owned 1,932 shares of the company’s stock after selling 255 shares during the period. PNC Financial Services Group Inc.’s holdings in MongoDB were worth $450,000 at the end of the most recent reporting period.
Other institutional investors have also added to or reduced their stakes in the company. B.O.S.S. Retirement Advisors LLC purchased a new position in MongoDB during the 4th quarter valued at about $606,000. Geode Capital Management LLC grew its position in shares of MongoDB by 2.9% in the 3rd quarter. Geode Capital Management LLC now owns 1,230,036 shares of the company’s stock worth $331,776,000 after buying an additional 34,814 shares during the last quarter. B. Metzler seel. Sohn & Co. Holding AG purchased a new position in shares of MongoDB in the 3rd quarter worth approximately $4,366,000. Charles Schwab Investment Management Inc. grew its position in shares of MongoDB by 2.8% in the 3rd quarter. Charles Schwab Investment Management Inc. now owns 278,419 shares of the company’s stock worth $75,271,000 after buying an additional 7,575 shares during the last quarter. Finally, Union Bancaire Privee UBP SA purchased a new position in shares of MongoDB in the 4th quarter worth approximately $3,515,000. Hedge funds and other institutional investors own 89.29% of the company’s stock.
Insider Activity
In related news, Director Dwight A. Merriman sold 3,000 shares of the firm’s stock in a transaction dated Monday, March 3rd. The shares were sold at an average price of $270.63, for a total transaction of $811,890.00. Following the completion of the sale, the director now directly owns 1,109,006 shares in the company, valued at approximately $300,130,293.78. This trade represents a 0.27% decrease in their ownership of the stock. The transaction was disclosed in a document filed with the Securities & Exchange Commission, which is accessible through this link. Also, insider Cedric Pech sold 287 shares of the firm’s stock in a transaction dated Thursday, January 2nd. The stock was sold at an average price of $234.09, for a total transaction of $67,183.83. Following the sale, the insider now owns 24,390 shares of the company’s stock, valued at approximately $5,709,455.10. The trade was a 1.16% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders sold 43,139 shares of company stock worth $11,328,869 over the last three months. Insiders own 3.60% of the company’s stock.
Analysts Set New Price Targets
A number of research analysts have recently issued reports on the stock. Stifel Nicolaus cut their price objective on shares of MongoDB from $425.00 to $340.00 and set a “buy” rating on the stock in a research report on Thursday, March 6th. Tigress Financial lifted their price objective on shares of MongoDB from $400.00 to $430.00 and gave the stock a “buy” rating in a research report on Wednesday, December 18th. Guggenheim raised shares of MongoDB from a “neutral” rating to a “buy” rating and set a $300.00 price objective on the stock in a research report on Monday, January 6th. Rosenblatt Securities reissued a “buy” rating and set a $350.00 target price on shares of MongoDB in a research report on Tuesday, March 4th. Finally, Robert W. Baird dropped their target price on shares of MongoDB from $390.00 to $300.00 and set an “outperform” rating on the stock in a research report on Thursday, March 6th. Seven analysts have rated the stock with a hold rating and twenty-three have assigned a buy rating to the company. According to data from MarketBeat, the stock presently has an average rating of “Moderate Buy” and an average target price of $320.70.
Check Out Our Latest Analysis on MDB
MongoDB Price Performance
MDB opened at $190.06 on Thursday. The firm has a fifty day moving average price of $253.21 and a 200 day moving average price of $270.89. MongoDB, Inc. has a 1 year low of $173.13 and a 1 year high of $387.19. The stock has a market cap of $14.15 billion, a P/E ratio of -69.36 and a beta of 1.30.
MongoDB (NASDAQ:MDB – Get Free Report) last posted its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing the consensus estimate of $0.64 by ($0.45). The company had revenue of $548.40 million for the quarter, compared to analysts’ expectations of $519.65 million. MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. During the same period in the previous year, the business earned $0.86 earnings per share. Equities analysts expect that MongoDB, Inc. will post -1.78 EPS for the current year.
MongoDB Company Profile
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
Featured Stories
Receive News & Ratings for MongoDB Daily – Enter your email address below to receive a concise daily summary of the latest news and analysts’ ratings for MongoDB and related companies with MarketBeat.com’s FREE daily email newsletter.
Article originally posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB – Get Free Report) saw some unusual options trading on Wednesday. Stock investors bought 36,130 call options on the company. This represents an increase of 2,077% compared to the typical daily volume of 1,660 call options.
MongoDB Stock Up 0.7%
Shares of MongoDB stock opened at $190.06 on Thursday. MongoDB has a 12-month low of $173.13 and a 12-month high of $387.19. The company has a market capitalization of $14.15 billion, a price-to-earnings ratio of -69.36 and a beta of 1.30. The firm’s 50-day moving average price is $253.21 and its two-hundred day moving average price is $270.89.
MongoDB (NASDAQ:MDB – Get Free Report) last announced its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The firm had revenue of $548.40 million during the quarter, compared to the consensus estimate of $519.65 million. During the same period last year, the business earned $0.86 earnings per share. On average, sell-side analysts expect that MongoDB will post -1.78 earnings per share for the current year.
Wall Street Analysts Weigh In
Several equities analysts have recently weighed in on MDB shares. Mizuho raised their price objective on MongoDB from $275.00 to $320.00 and gave the company a “neutral” rating in a report on Tuesday, December 10th. Macquarie decreased their price target on MongoDB from $300.00 to $215.00 and set a “neutral” rating for the company in a research note on Friday, March 7th. Monness Crespi & Hardt upgraded MongoDB from a “sell” rating to a “neutral” rating in a research note on Monday, March 3rd. KeyCorp cut shares of MongoDB from a “strong-buy” rating to a “hold” rating in a research note on Wednesday, March 5th. Finally, Cantor Fitzgerald started coverage on shares of MongoDB in a research report on Wednesday, March 5th. They set an “overweight” rating and a $344.00 price target on the stock. Seven investment analysts have rated the stock with a hold rating and twenty-three have issued a buy rating to the company. Based on data from MarketBeat, the stock has an average rating of “Moderate Buy” and an average target price of $320.70.
View Our Latest Analysis on MongoDB
Insider Activity
In other news, Director Dwight A. Merriman sold 3,000 shares of the firm’s stock in a transaction that occurred on Monday, March 3rd. The shares were sold at an average price of $270.63, for a total transaction of $811,890.00. Following the completion of the transaction, the director now directly owns 1,109,006 shares of the company’s stock, valued at $300,130,293.78. This trade represents a 0.27% decrease in their position. The sale was disclosed in a legal filing with the Securities & Exchange Commission, which is available through the SEC website. Also, insider Cedric Pech sold 287 shares of the stock in a transaction on Thursday, January 2nd. The stock was sold at an average price of $234.09, for a total transaction of $67,183.83. Following the transaction, the insider now owns 24,390 shares of the company’s stock, valued at $5,709,455.10. The trade was a 1.16% decrease in their position. The disclosure for this sale can be found here. Insiders have sold a total of 43,139 shares of company stock worth $11,328,869 over the last three months. Corporate insiders own 3.60% of the company’s stock.
Institutional Trading of MongoDB
Large investors have recently bought and sold shares of the business. Strategic Investment Solutions Inc. IL acquired a new stake in shares of MongoDB in the 4th quarter valued at approximately $29,000. Hilltop National Bank increased its holdings in shares of MongoDB by 47.2% during the fourth quarter. Hilltop National Bank now owns 131 shares of the company’s stock worth $30,000 after buying an additional 42 shares in the last quarter. NCP Inc. bought a new position in shares of MongoDB in the 4th quarter valued at $35,000. Brooklyn Investment Group acquired a new stake in shares of MongoDB during the 3rd quarter valued at $36,000. Finally, Continuum Advisory LLC boosted its holdings in shares of MongoDB by 621.1% during the 3rd quarter. Continuum Advisory LLC now owns 137 shares of the company’s stock valued at $40,000 after acquiring an additional 118 shares in the last quarter. 89.29% of the stock is owned by institutional investors and hedge funds.
See Also
This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.
Before you consider MongoDB, you’ll want to hear this.
MarketBeat keeps track of Wall Street’s top-rated and best performing research analysts and the stocks they recommend to their clients on a daily basis. MarketBeat has identified the five stocks that top analysts are quietly whispering to their clients to buy now before the broader market catches on… and MongoDB wasn’t on the list.
While MongoDB currently has a Moderate Buy rating among analysts, top-rated analysts believe these five stocks are better buys.

MarketBeat just released its list of 10 cheap stocks that have been overlooked by the market and may be seriously undervalued. Enter your email address below to see which companies made the list.

MongoDB, Inc. (NASDAQ:MDB – Get Free Report) was the target of some unusual options trading on Wednesday. Investors bought 23,831 put options on the stock. This is an increase of approximately 2,157% compared to the typical volume of 1,056 put options.
Insider Activity at MongoDB
In other MongoDB news, CFO Michael Lawrence Gordon sold 1,245 shares of the company’s stock in a transaction that occurred on Thursday, January 2nd. The stock was sold at an average price of $234.09, for a total transaction of $291,442.05. Following the completion of the sale, the chief financial officer now owns 79,062 shares of the company’s stock, valued at $18,507,623.58. This represents a 1.55% decrease in their ownership of the stock. The sale was disclosed in a document filed with the Securities & Exchange Commission, which is available through the SEC website. Also, insider Cedric Pech sold 287 shares of the company’s stock in a transaction that occurred on Thursday, January 2nd. The stock was sold at an average price of $234.09, for a total value of $67,183.83. Following the sale, the insider now directly owns 24,390 shares of the company’s stock, valued at $5,709,455.10. The trade was a 1.16% decrease in their position. The disclosure for this sale can be found here. In the last quarter, insiders sold 43,139 shares of company stock valued at $11,328,869. 3.60% of the stock is owned by corporate insiders.
Institutional Inflows and Outflows
Institutional investors have recently made changes to their positions in the business. Strategic Investment Solutions Inc. IL acquired a new position in shares of MongoDB during the 4th quarter worth approximately $29,000. Hilltop National Bank grew its holdings in MongoDB by 47.2% in the 4th quarter. Hilltop National Bank now owns 131 shares of the company’s stock valued at $30,000 after buying an additional 42 shares in the last quarter. Brooklyn Investment Group acquired a new position in MongoDB in the 3rd quarter valued at $36,000. Continuum Advisory LLC grew its holdings in MongoDB by 621.1% in the 3rd quarter. Continuum Advisory LLC now owns 137 shares of the company’s stock valued at $40,000 after buying an additional 118 shares in the last quarter. Finally, NCP Inc. acquired a new stake in shares of MongoDB during the 4th quarter worth $35,000. 89.29% of the stock is owned by institutional investors and hedge funds.
Wall Street Analysts Forecast Growth
Several research firms have recently weighed in on MDB. UBS Group set a $350.00 price objective on MongoDB in a research report on Tuesday, March 4th. Stifel Nicolaus reduced their price objective on MongoDB from $425.00 to $340.00 and set a “buy” rating for the company in a research report on Thursday, March 6th. Wells Fargo & Company cut MongoDB from an “overweight” rating to an “equal weight” rating and reduced their price objective for the company from $365.00 to $225.00 in a research report on Thursday, March 6th. Rosenblatt Securities reissued a “buy” rating and issued a $350.00 price objective on shares of MongoDB in a research report on Tuesday, March 4th. Finally, Bank of America cut their target price on MongoDB from $420.00 to $286.00 and set a “buy” rating for the company in a research note on Thursday, March 6th. Seven analysts have rated the stock with a hold rating and twenty-three have issued a buy rating to the stock. According to data from MarketBeat.com, the stock presently has a consensus rating of “Moderate Buy” and a consensus target price of $320.70.
MongoDB Stock Up 0.7%
Shares of MongoDB stock opened at $190.06 on Thursday. MongoDB has a 12-month low of $173.13 and a 12-month high of $387.19. The stock has a market cap of $14.15 billion, a P/E ratio of -69.36 and a beta of 1.30. The business’s 50 day moving average is $253.21 and its 200-day moving average is $270.89.
MongoDB (NASDAQ:MDB – Get Free Report) last announced its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The company had revenue of $548.40 million for the quarter, compared to the consensus estimate of $519.65 million. During the same quarter in the prior year, the company earned $0.86 EPS. On average, sell-side analysts expect that MongoDB will post -1.78 EPS for the current year.
Further Reading

Discover the next wave of investment opportunities with our report, 7 Stocks That Will Be Magnificent in 2025. Explore companies poised to replicate the growth, innovation, and value creation of the tech giants dominating today’s markets.

MMS • Bruno Couriol
Article originally posted on InfoQ. Visit InfoQ

The Inertia team recently released Inertia 2.0. New features include asynchronous requests, deferred props, prefetching, and polling. Asynchronous requests enable concurrency, lazy loading, and more.
In previous versions, Inertia requests were synchronous. Asynchronous requests now offer full support for asynchronous operations and concurrency. This in turn enables features such as lazy loading data on scroll, infinite scrolling, prefetching, polling, and more. Those features make the single-page application appear more interactive, responsive, and fast.
Link prefetching, for instance, improves the perceived performance of an application by fetching data in the background before the user requests it. By default, Inertia prefetches data for a page when the user hovers over a link for more than 75 ms, and caches the result for 30 seconds before evicting it. Developers can customize the cache duration with the cacheFor property. Using Svelte, this would look as follows:
import { inertia } from '@inertiajs/svelte'
<a href="/users" use:inertia={{ prefetch: true, cacheFor: '1m' }}>Users</a>
<a href="/users" use:inertia={{ prefetch: true, cacheFor: '10s' }}>Users</a>
<a href="/users" use:inertia={{ prefetch: true, cacheFor: 5000 }}>Users</a>
Prefetching can also happen on mousedown, that is, when the user has pressed the mouse button on a link but has not yet released it. Lastly, prefetching can also occur when a component is mounted.
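These alternative triggers are selected through the prefetch option. A minimal sketch, assuming the 'click' (mousedown) and 'mount' strategy names from the Inertia prefetching documentation:

```svelte
<script>
  import { inertia } from '@inertiajs/svelte'
</script>

<!-- Prefetch on mousedown, before the click completes -->
<a href="/users" use:inertia={{ prefetch: 'click' }}>Users</a>

<!-- Prefetch as soon as the link is rendered -->
<a href="/users" use:inertia={{ prefetch: 'mount' }}>Users</a>
```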
Inertia 2.0 enables lazy loading data on scroll with the WhenVisible component, which under the hood uses the Intersection Observer API. The following code showcases a component that shows a fallback message while it is loading (examples written with Svelte 4; the data prop names the props to load once the element becomes visible):
<script>
import { WhenVisible } from '@inertiajs/svelte'
export let teams
export let users
</script>
<WhenVisible data={['teams', 'users']}>
<svelte:fragment slot="fallback">
<div>Loading...</div>
</svelte:fragment>
</WhenVisible>
The full list of configuration options for lazy loading and prefetching is available in the documentation for review. Inertia 2.0 also features polling, deferred props, and infinite scrolling. Developers are encouraged to review the upgrade guide for more details.
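Polling, for instance, is exposed through the usePoll helper. A minimal sketch, assuming the v2 helper API with a millisecond interval:

```svelte
<script>
  import { usePoll } from '@inertiajs/svelte'

  // Re-request this page's props from the server every 2 seconds
  // for as long as the component stays mounted
  usePoll(2000)
</script>
```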
Inertia targets back-end developers who want to create single-page React, Vue, and Svelte apps using classic server-side routing and controllers, that is, without the complexity that comes with modern single-page applications. Developers using Inertia do not need to implement client-side routing or build a separate API.
Inertia returns a full HTML response on the first page load. On subsequent requests, server-side Inertia returns a JSON response with the JavaScript component (represented by its name and props) that implements the view. Client-side Inertia then replaces the currently displayed page with the new page returned by the new component and updates the history state.
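The JSON payload returned on subsequent visits is a page object carrying the component name and its props. A sketch of its shape, with illustrative values (the component name, props, and asset version below are hypothetical):

```json
{
  "component": "Users/Index",
  "props": { "users": [{ "id": 1, "name": "Ada" }] },
  "url": "/users",
  "version": "a1b2c3"
}
```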
Inertia requests use specific HTTP headers to discriminate between full page visits and partial refreshes. If the X-Inertia header is unset or false, the request is a standard full-page visit and receives a full HTML response; if it is set to true, the request comes from the Inertia client and receives the JSON page object instead.
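The server-side branching on that header can be sketched as follows (illustrative only; real server-side adapters perform this check for you, and the helper name is made up):

```javascript
// Decide whether a request came from the Inertia client by
// inspecting the X-Inertia header (header names lowercased here).
function isInertiaRequest(headers) {
  return headers['x-inertia'] === 'true'
}

// A plain browser visit carries no X-Inertia header → serve full HTML
console.log(isInertiaRequest({}))                      // false
// An Inertia client visit sends X-Inertia: true → serve the JSON page object
console.log(isInertiaRequest({ 'x-inertia': 'true' })) // true
```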
Developers can upgrade to Inertia v2.0 by installing the client-side adapter of their choice (e.g., Vue, React, Svelte):
npm install @inertiajs/vue3@^2.0
Then, it is necessary to upgrade the inertiajs/inertia-laravel package to the 2.x release:
composer require inertiajs/inertia-laravel:^2.0
Inertia is open-source software distributed under the MIT license. Feedback and contributions are welcome and should follow Inertia’s contribution guidelines.


Artificial intelligence is the greatest investment opportunity of our lifetime. The time to invest in groundbreaking AI is now, and this stock is a steal!
The whispers are turning into roars.
Artificial intelligence isn’t science fiction anymore.
It’s the revolution reshaping every industry on the planet.
From driverless cars to medical breakthroughs, AI is on the cusp of a global explosion, and savvy investors stand to reap the rewards.
Here’s why this is the prime moment to jump on the AI bandwagon:
Exponential Growth on the Horizon: Forget linear growth – AI is poised for a hockey stick trajectory.
Imagine every sector, from healthcare to finance, infused with superhuman intelligence.
We’re talking disease prediction, hyper-personalized marketing, and automated logistics that streamline everything.
This isn’t a maybe – it’s an inevitability.
Early investors will be the ones positioned to ride the wave of this technological tsunami.
Ground Floor Opportunity: Remember the early days of the internet?
Those who saw the potential of tech giants back then are sitting pretty today.
AI is at a similar inflection point.
We’re not talking about established players – we’re talking about nimble startups with groundbreaking ideas and the potential to become the next Google or Amazon.
This is your chance to get in before the rockets take off!
Disruption is the New Name of the Game: Let’s face it, complacency breeds stagnation.
AI is the ultimate disruptor, and it’s shaking the foundations of traditional industries.
The companies that embrace AI will thrive, while the dinosaurs clinging to outdated methods will be left in the dust.
As an investor, you want to be on the side of the winners, and AI is the winning ticket.
The Talent Pool is Overflowing: The world’s brightest minds are flocking to AI.
From computer scientists to mathematicians, the next generation of innovators is pouring its energy into this field.
This influx of talent guarantees a constant stream of groundbreaking ideas and rapid advancements.
By investing in AI, you’re essentially backing the future.
The future is powered by artificial intelligence, and the time to invest is NOW.
Don’t be a spectator in this technological revolution.
Dive into the AI gold rush and watch your portfolio soar alongside the brightest minds of our generation.
This isn’t just about making money – it’s about being part of the future.
So, buckle up and get ready for the ride of your investment life!
Act Now and Unlock a Potential 10,000% Return: This AI Stock is a Diamond in the Rough (But Our Help is Key!)
The AI revolution is upon us, and savvy investors stand to make a fortune.
But with so many choices, how do you find the hidden gem – the company poised for explosive growth?
That’s where our expertise comes in.
We’ve got the answer, but there’s a twist…
Imagine an AI company so groundbreaking, so far ahead of the curve, that even if its stock price quadrupled today, it would still be considered ridiculously cheap.
That’s the potential you’re looking at. This isn’t just about a decent return – we’re talking about a 10,000% gain over the next decade!
Our research team has identified a hidden gem – an AI company with cutting-edge technology, massive potential, and a current stock price that screams opportunity.
This company boasts the most advanced technology in the AI sector, putting them leagues ahead of competitors.
It’s like having a race car on a go-kart track.
They have a strong possibility of cornering entire markets, becoming the undisputed leader in their field.
Here’s the catch (it’s a good one): To uncover this sleeping giant, you’ll need our exclusive intel.
We want to make sure none of our valued readers miss out on this groundbreaking opportunity!
That’s why we’re slashing the price of our Premium Readership Newsletter by a whopping 70%.
For a ridiculously low price of just $29.99, you can unlock a year’s worth of in-depth investment research and exclusive insights – that’s less than a single restaurant meal!
Here’s why this is a deal you can’t afford to pass up:
• Access to our Detailed Report on this Game-Changing AI Stock: Our in-depth report dives deep into our #1 AI stock’s groundbreaking technology and massive growth potential.
• 11 New Issues of Our Premium Readership Newsletter: You will also receive 11 new issues and at least one new stock pick per month from our monthly newsletter’s portfolio over the next 12 months. These stocks are handpicked by our research director, Dr. Inan Dogan.
• One free upcoming issue of our 70+ page Quarterly Newsletter: A value of $149
• Bonus Reports: Premium access to members-only fund manager video interviews
• Ad-Free Browsing: Enjoy a year of investment research free from distracting banner and pop-up ads, allowing you to focus on uncovering the next big opportunity.
• 30-Day Money-Back Guarantee: If you’re not absolutely satisfied with our service, we’ll provide a full refund within 30 days, no questions asked.
Space is Limited! Only 1000 spots are available for this exclusive offer. Don’t let this chance slip away – subscribe to our Premium Readership Newsletter today and unlock the potential for a life-changing investment.
Here’s what to do next:
1. Head over to our website and subscribe to our Premium Readership Newsletter for just $29.99.
2. Enjoy a year of ad-free browsing, exclusive access to our in-depth report on the revolutionary AI company, and the upcoming issues of our Premium Readership Newsletter over the next 12 months.
3. Sit back, relax, and know that you’re backed by our ironclad 30-day money-back guarantee.
Don’t miss out on this incredible opportunity! Subscribe now and take control of your AI investment future!
No worries about auto-renewals! Our 30-Day Money-Back Guarantee applies whether you’re joining us for the first time or renewing your subscription a year later!

MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts
USA, New Jersey: According to Market Research Intellect, the global NoSQL Software market in the Internet, Communication and Technology category is projected to witness significant growth from 2025 to 2032. Market dynamics, technological advancements, and evolving consumer demand are expected to drive expansion during this period.
The NoSQL software market is experiencing robust growth as organizations increasingly adopt non-relational databases to handle the complexities of big data and real-time applications. Traditional relational databases often struggle with scalability, flexibility, and the need for high-performance processing, which has led to the growing popularity of NoSQL solutions. These databases are particularly favored for their ability to store and manage vast amounts of unstructured data across distributed systems, making them ideal for cloud environments and modern applications. As businesses continue to embrace digital transformation, the demand for scalable, high-performance databases that can handle a variety of data types is expected to drive the growth of the NoSQL software market. Innovations in AI and machine learning, along with the expanding adoption of IoT devices, are further fueling the demand for NoSQL solutions.
The growth of the NoSQL software market is driven by several key factors. The explosion of big data and the increasing volume of unstructured data is one of the primary catalysts, as traditional relational databases struggle to effectively store, manage, and process such data. NoSQL databases offer the scalability, flexibility, and high-performance capabilities needed to handle these challenges, making them ideal for businesses dealing with large, diverse datasets. Additionally, the rise of cloud computing has further accelerated the adoption of NoSQL solutions, as they are better suited for cloud environments due to their ability to scale horizontally across distributed networks. The demand for real-time applications in areas like IoT, social media, and e-commerce is another significant driver, as NoSQL databases can provide faster data processing and support real-time analytics. Furthermore, the growing interest in machine learning and artificial intelligence, which require large datasets for training models, is fueling the market’s expansion.
Request PDF Sample Copy of Report: (Including Full TOC, List of Tables & Figures, Chart) @ https://www.marketresearchintellect.com/download-sample/?rid=1065792&utm_source=OpenPr&utm_medium=042
Market Growth Drivers: NoSQL Software Market
The growth of the NoSQL Software market is driven by several key factors, including technological advancements, increasing consumer demand, and supportive regulatory policies. Innovations in product development and manufacturing processes are enhancing efficiency, improving performance, and reducing costs, making NoSQL Software more accessible to a wider range of industries. Rising awareness about the benefits of NoSQL Software, coupled with expanding applications across sectors such as healthcare, automotive, and electronics, is further accelerating market expansion. Additionally, the integration of digital technologies, such as AI and IoT, is optimizing operational workflows and enhancing product capabilities. Government initiatives promoting sustainable solutions and industry-standard regulations are also playing a crucial role in market growth. The increasing investment in research and development by key market players is fostering new product innovations and expanding market opportunities. Overall, these factors collectively contribute to the steady rise of the NoSQL Software market, making it a lucrative industry for future investments.
Challenges and Restraints: NoSQL Software Market
The NoSQL Software market faces several challenges and restraints that could impact its growth trajectory. High initial investment costs pose a significant barrier, particularly for small and medium-sized enterprises looking to enter the industry. Regulatory complexities and stringent compliance requirements add another layer of difficulty, as companies must navigate evolving policies and standards. Additionally, supply chain disruptions, including raw material shortages and logistical constraints, can hinder market expansion and lead to increased operational costs.
Market saturation in developed regions also presents a challenge, forcing businesses to explore emerging markets where infrastructure and consumer awareness may be lacking. Intense competition among key players further pressures profit margins, making it crucial for companies to differentiate through innovation and strategic partnerships. Economic fluctuations, geopolitical instability, and changing consumer preferences add to the uncertainty, requiring businesses to adopt agile strategies to sustain long-term growth in the evolving NoSQL Software market.
Emerging Trends-NoSQL Software Market:
The NoSQL Software market is evolving rapidly, driven by emerging trends that are reshaping industry dynamics. One key trend is the integration of advanced digital technologies such as artificial intelligence, automation, and IoT, which enhance efficiency, performance, and user experience. Sustainability is another major focus, with companies shifting toward eco-friendly materials and processes to meet growing environmental regulations and consumer demand for greener solutions. Additionally, the rise of personalized and customized offerings is gaining momentum, as businesses strive to cater to specific consumer preferences and industry requirements. Investments in research and development are accelerating, leading to continuous innovation and the introduction of high-performance products. The market is also witnessing a surge in strategic collaborations, partnerships, and acquisitions, as companies aim to expand their geographical footprint and technological capabilities. As these trends continue to evolve, they are expected to drive the market’s long-term growth and competitiveness in a dynamic global landscape.
Competitive Landscape-NoSQL Software Market:
The competitive landscape of the NoSQL Software market is characterized by intense rivalry among key players striving for market dominance. Leading companies focus on product innovation, strategic partnerships, and mergers and acquisitions to strengthen their market position. Continuous research and development investments are driving technological advancements, allowing businesses to enhance their offerings and gain a competitive edge.
Regional expansion strategies are also prominent, with companies targeting emerging markets to capitalize on growing demand. Additionally, sustainability and regulatory compliance have become crucial factors influencing competition, as businesses aim to align with evolving industry standards.
Startups and new entrants are introducing disruptive solutions, intensifying competition and prompting established players to adopt agile strategies. Digital transformation, AI-driven analytics, and automation are further reshaping the competitive dynamics, enabling companies to streamline operations and improve efficiency. As the market continues to evolve, businesses must adapt to changing consumer demands and technological advancements to maintain their market position.
Get a Discount On The Purchase Of This Report @ https://www.marketresearchintellect.com/ask-for-discount/?rid=1065792&utm_source=OpenPr&utm_medium=042
The following Key Segments Are Covered in Our Report
Global NoSQL Software Market by Type
Cloud Based
Web Based
Global NoSQL Software Market by Application
E-Commerce
Social Networking
Data Analytics
Data Storage
Others
Major companies in NoSQL Software Market are:
MongoDB, Amazon, ArangoDB, Azure Cosmos DB, Couchbase, MarkLogic, RethinkDB, CouchDB, SQL-RD, OrientDB, RavenDB, Redis, Microsoft
NoSQL Software Market - Regional Analysis
The NoSQL Software market exhibits significant regional variations, driven by economic conditions, technological advancements, and industry-specific demand. North America remains a dominant force, supported by strong investments in research and development, a well-established industrial base, and increasing adoption of advanced solutions. The presence of key market players further enhances regional growth.
Europe follows closely, benefiting from stringent regulations, sustainability initiatives, and a focus on innovation. Countries such as Germany, France, and the UK are major contributors due to their robust industrial frameworks and technological expertise.
Asia-Pacific is witnessing the fastest growth, fueled by rapid industrialization, urbanization, and increasing consumer demand. China, Japan, and India play a crucial role in market expansion, with government initiatives and foreign investments accelerating development.
Latin America and the Middle East and Africa are emerging markets with growing potential, driven by infrastructure development and expanding industrial sectors. However, challenges such as economic instability and regulatory barriers may impact growth trajectories.
Frequently Asked Questions (FAQ) – NoSQL Software Market (2025-2032)
1. What is the projected growth rate of the NoSQL Software market from 2025 to 2032?
The NoSQL Software market is expected to experience steady growth from 2025 to 2032, driven by technological advancements, increasing consumer demand, and expanding industry applications. The market is projected to witness a robust compound annual growth rate (CAGR), supported by rising investments in research and development. Additionally, factors such as digital transformation, automation, and regulatory support will further boost market expansion across various regions.
2. What are the key drivers fueling the growth of the NoSQL Software market?
Several factors are contributing to the growth of the NoSQL Software market. The increasing adoption of advanced technologies, a rise in industry-specific applications, and growing consumer awareness are some of the primary drivers. Additionally, government initiatives and favorable regulations are encouraging market expansion. Sustainability trends, digitalization, and the integration of artificial intelligence (AI) and Internet of Things (IoT) solutions are also playing a vital role in accelerating market development.
3. Which region is expected to dominate the NoSQL Software market by 2032?
The NoSQL Software market is witnessing regional variations in growth, with North America and Asia-Pacific emerging as dominant regions. North America benefits from a well-established industrial infrastructure, extensive research and development activities, and the presence of leading market players. Meanwhile, Asia-Pacific, particularly China, Japan, and India, is experiencing rapid industrialization and urbanization, driving increased adoption of NoSQL Software solutions. Europe also holds a significant market share, particularly in sectors focused on sustainability and regulatory compliance. Emerging markets in Latin America and the Middle East & Africa are showing potential but may face challenges such as economic instability and regulatory constraints.
4. What challenges are currently impacting the NoSQL Software market?
Despite promising growth, the NoSQL Software market faces several challenges. High initial investments, regulatory hurdles, and supply chain disruptions are some of the primary obstacles. Additionally, market saturation in certain regions and intense competition among key players may lead to pricing pressures. Companies must focus on innovation, cost efficiency, and strategic partnerships to navigate these challenges successfully. Geopolitical factors, economic fluctuations, and trade restrictions can also impact market stability and growth prospects.
5. Who are the key players in the NoSQL Software market?
The NoSQL Software market is highly competitive, with several leading global and regional players striving for market dominance. Major companies are investing in research and development to introduce innovative solutions and expand their market presence. Key players are also engaging in mergers, acquisitions, and strategic collaborations to strengthen their positions. Emerging startups are bringing disruptive innovations, further intensifying market competition. Companies that prioritize sustainability, digital transformation, and customer-centric solutions are expected to gain a competitive edge in the industry.
6. How is technology shaping the future of the NoSQL Software market?
Technology plays a pivotal role in the evolution of the NoSQL Software market. The adoption of artificial intelligence (AI), big data analytics, automation, and IoT is transforming industry operations, improving efficiency, and enhancing product offerings. Digitalization is streamlining supply chains, optimizing resource utilization, and enabling predictive maintenance strategies. Companies investing in cutting-edge technologies are likely to gain a competitive advantage, improve customer experience, and drive market expansion.
7. What impact does sustainability have on the NoSQL Software market?
Sustainability is becoming a key focus area for companies operating in the NoSQL Software market. With increasing environmental concerns and stringent regulatory policies, businesses are prioritizing eco-friendly solutions, energy efficiency, and sustainable manufacturing processes. The shift toward circular economy models, renewable energy sources, and waste reduction strategies is influencing market trends. Companies that adopt sustainable practices are likely to enhance their brand reputation, attract environmentally conscious consumers, and comply with global regulatory standards.
8. What are the emerging trends in the NoSQL Software market from 2025 to 2032?
Several emerging trends are expected to shape the NoSQL Software market during the forecast period. The rise of personalization, customization, and user-centric innovations is driving product development. Additionally, advancements in 5G technology, cloud computing, and blockchain are influencing market dynamics. The growing emphasis on remote operations, automation, and smart solutions is reshaping industry landscapes. Furthermore, increased investments in biotechnology, nanotechnology, and advanced materials are opening new opportunities for market growth.
9. How will economic conditions affect the NoSQL Software market?
Economic fluctuations, inflation rates, and geopolitical tensions can impact the NoSQL Software market’s growth trajectory. The availability of raw materials, supply chain stability, and changes in consumer spending patterns may influence market demand. However, industries that prioritize innovation, agility, and strategic planning are better positioned to withstand economic uncertainties. Diversification of revenue streams, expansion into emerging markets, and adaptation to changing economic conditions will be key strategies for market sustainability.
10. Why should businesses invest in the NoSQL Software market from 2025 to 2032?
Investing in the NoSQL Software market presents numerous opportunities for businesses. The industry is poised for substantial growth, with advancements in technology, evolving consumer preferences, and increasing regulatory support driving demand. Companies that embrace innovation, digital transformation, and sustainability can gain a competitive advantage. Additionally, expanding into emerging markets, forming strategic alliances, and focusing on customer-centric solutions will be crucial for long-term success. As the market evolves, businesses that stay ahead of industry trends and invest in R&D will benefit from sustained growth and profitability.
For More Information or Query, Visit @ https://www.marketresearchintellect.com/product/nosql-software-market/?utm_source=OpenPR&utm_medium=042
Our Top Trending Reports
Data Loss Prevention DLP Solutions Market Size By Type: https://www.marketresearchintellect.com/ko/product/data-loss-prevention-dlp-solutions-market/
Financial Data Warehouse Solution Market Size By Applications: https://www.marketresearchintellect.com/zh/product/financial-data-warehouse-solution-market/
Digital Turbidity Meter Market Size By Type: https://www.marketresearchintellect.com/de/product/global-digital-turbidity-meter-market-size-and-forecast-2/
Intelligent Excavator Market Size By Applications: https://www.marketresearchintellect.com/es/product/global-intelligent-excavator-market-size-and-forecast/
Cloud Automation Market Size By Type: https://www.marketresearchintellect.com/ja/product/global-cloud-automation-market-size-and-forecast/
Spectroscopy Reagent Sp Market Size By Type: https://www.marketresearchintellect.com/pt/product/spectroscopy-reagent-sp-market-size-and-forecast/
Flexible Foam Sales Market Size By Applications: https://www.marketresearchintellect.com/it/product/global-flexible-foam-sales-market/
Network Traffic Analysis Tool Market Size By Type: https://www.marketresearchintellect.com/nl/product/network-traffic-analysis-tool-market/
Push To Talk Telemedicine And M-Health Convergence Market Size By Applications: https://www.marketresearchintellect.com/ko/product/global-push-to-talk-telemedicine-and-m-health-convergence-market/
Payroll And Bookkeeping Services Market Size By Type: https://www.marketresearchintellect.com/zh/product/payroll-and-bookkeeping-services-market/
As Interface Market Size By Applications: https://www.marketresearchintellect.com/de/product/as-interface-market-size-and-forecast/
About Us: Market Research Intellect
Market Research Intellect is a leading Global Research and Consulting firm servicing over 5000+ global clients. We provide advanced analytical research solutions while offering information-enriched research studies. We also offer insights into strategic and growth analyses and data necessary to achieve corporate goals and critical revenue decisions.
Our 250 Analysts and SMEs offer a high level of expertise in data collection and governance using industrial techniques to collect and analyze data on more than 25,000 high-impact and niche markets. Our analysts are trained to combine modern data collection techniques, superior research methodology, expertise, and years of collective experience to produce informative and accurate research.
Our research spans a multitude of industries including Energy, Technology, Manufacturing and Construction, Chemicals and Materials, Food and Beverages, etc. Having serviced many Fortune 2000 organizations, we bring a rich and reliable experience that covers all kinds of research needs.
For inquiries, Contact Us at:
Mr. Edwyne Fernandes
Market Research Intellect
APAC: +61 485 860 968
EU: +44 788 886 6344
US: +1 743 222 5439
This release was published on openPR.


LEESBURG, Va., March 19, 2025 (Newswire.com) –
Vertosoft is thrilled to announce that it has been named MongoDB’s newest public sector distributor. With this partnership, MongoDB and its intelligent data platform will be available to Vertosoft’s channel partners as well as government agencies through Vertosoft’s trusted and secure supply chain. This addition significantly enhances Vertosoft’s Big Data & Analytics technology portfolio, showcasing its commitment to providing innovative software solutions that drive operational efficiency and improve decision-making within the public sector.
MongoDB is the world’s leading modern document database provider, and the MongoDB for Public Sector program offers flexible, highly-secure data infrastructure that is optimized for the public sector, enabling federal, state, and local governments to accelerate and streamline their digital transformation efforts. MongoDB for Public Sector is specifically designed to help public sector organizations balance the unique set of compliance requirements they face with the need to innovate in order to keep up with technological progress in the private sector.
MongoDB Atlas for Government is the FedRAMP Moderate Authorized environment of MongoDB’s cloud-native data platform, Atlas. Atlas for Government facilitates the modernization of legacy applications to the cloud while meeting the unique requirements and missions of the U.S. government in a secure, fully managed environment. With real-time data visibility and robust security features, it ensures easy adoption within the public sector community. Additionally, MongoDB Enterprise Advanced provides similar capabilities in an on-premises operational model, making it the only NoSQL database with a STIG reviewed and approved by DISA.
“Public sector organizations must balance some of the strictest compliance requirements with the need to keep up with the breakneck pace of private sector technological innovation,” said Joe Perrino, Vice President of Public Sector at MongoDB. “MongoDB gives them the flexibility and intuitive developer experience they need to move fast, while its exceptional levels of security, durability, availability, and performance enable them to build and deploy cutting-edge applications with confidence. More than 1,000 public sector customers in the U.S. rely on MongoDB to power mission-critical workloads, and now, it’s even easier for them to do so.”
“We are excited to partner with MongoDB and add the world’s most versatile data platform to our Big Data & Analytics portfolio. This collaboration emphasizes our commitment to supporting public sector missions by delivering cutting-edge solutions to the Government,” said Jay Colavita, President of Vertosoft.
About Vertosoft
At Vertosoft, we are a trusted, value-driven distributor of innovative technology solutions. Our experienced team and tailored services equip our channel partners and suppliers with the tools, contracts, and secure systems needed to succeed in the public sector market.
Source: Vertosoft
Article originally posted on mongodb google news. Visit mongodb google news

MMS • Sergio De Simone
Article originally posted on InfoQ. Visit InfoQ

Born as an enterprise-focused AI-based code generation tool, Gemini Code Assist now provides a free tier to individual developers with a limit of 6,000 code completions and 240 chat requests daily.
Google emphasizes that Gemini Code Assist offers the highest free usage limits. Indeed, one of the strongest competitors to Code Assist, GitHub Copilot, gives only up to 2,000 free code completions per month. AWS CodeWhisperer also provides a free tier for individuals, apparently with no limits on code completions, but it does not include a chat feature.
Another feature of Code Assist that Google underlines is its 128,000-token context size. This is significantly less than the 2 million tokens provided in the Standard and Enterprise editions, but it is still a compelling offer for a free tier. Among the advantages of a larger token context are the ability to handle larger codebases, better code completions, and improved multi-file understanding.
However, it is important to understand that Gemini Code Assist’s free tier has several limitations compared to the Standard and Enterprise tiers. For example, the Enterprise tier includes customized code suggestions based on an organization’s private repositories, support for BigQuery, Apigee, and more. The free tier also does not include any form of IP indemnification, which aims to protect Code Assist users from certain IP-related claims.
Another important factor to keep in mind is that while Google explicitly says that Code Assist Standard and Enterprise do not use prompts or responses for training, this is not the case with the free tier, where Google will collect prompts, related code, and generated responses in accordance with its privacy policy.
Powered by Gemini 2.0, Code Assist uses a version of the model customized with a large number of real-world, public-domain coding samples, Google says. While this makes it capable of understanding and generating code in many programming languages, Google has defined a subset of languages that it ensures work best with the model, including C/C++, C#, Go, JavaScript, Python, Kotlin, Swift, and many more.
Code Assist is integrated by default in Google’s Cloud-based IDEs, including the Cloud Shell Editor and Cloud Workstations, and is supported through extensions in Visual Studio Code and JetBrains IDEs.

MMS • Santosh Yadav
Article originally posted on InfoQ. Visit InfoQ

Transcript
Yadav: When I was asked to come here and give a talk, I was thinking about how to give a talk about what we have been through, because at Celonis, we went through a lot of trouble to get to where we are right now. It’s a journey, and the tools which we used in the process helped us deliver faster. Nx is one of the most important tools which we use in our ecosystem. That’s why I mentioned it in the talk title as well. We’ll talk about Nx as well. Let’s see what we are going to talk about.
First, I want to show you what our application looks like. This is our application. If you see the nav bar, each nav bar item is actually an application. It’s a separate application which we load inside our shell. Even inside the shell, there can be multiple components which we can combine together and create a new dashboard for our end users. It means there are different teams which are building these applications. This is where we are right now.
Problem Statement
How did we start? I just want to show you what the problem statement was: what was the problem or the issue which we were trying to resolve before we ended up here. This was our old approach. I was speaking to some of my friends, and we were talking about the same issue, where we have multiple repositories and we are thinking about moving all the code to a single monorepo, but we are not able to do it, or we are struggling because we know that there might be challenges. This is where we were three years ago. We had separate apps with separate repositories. We used to load each app using URL routing. It was not an SPA with module federation or micro-frontends as we know them today, because in the past few years, tools have added more capabilities.
For example, webpack came with support for module federation, which was not there earlier. Everyone was solving module federation in different ways, just not in the right way. This is another issue which we had. We had close to 40 different repositories, and then we used to build that code. We are using GitHub Actions. We used to build that code and push it to the artifact store or database, because that was the one way to load the application. We used to push the entire build into the database and then load it on the frontend. The only problem is we were doing it X times: the same process, same thing, just 30 times. Of course, it costs a lot of money. The other issue which we had was, of course, we have a design system.
Which company doesn’t have a design system? The first thing which a company decides to do is, let’s have a design system. We don’t have a product, but we should have a design system. This was another issue which we had. Now this became a problem. Of course, we had a design system, but now different applications started using different versions of design system, because sometimes they had time. Some teams started pushing back, we don’t have frontend developers or we don’t have time to upgrade it right now. This was, of course, a big pain. How should we do it? This caused another issue.
Some of our customers were actually seeing the same app, but as soon as they moved to a different application or part of the application, they saw a different design system. There’s probably a dark theme and a light theme, just as an example. Or think about a text box: someone is seeing one style of text box, and someone else is seeing a different one.
What were the issues we went through? Page reloads, for example. Of course, now with HTML5, everyone knows the experience should be smooth. As soon as I click on a URL, there should not be a page refresh. That’s the expectation of today’s users. This is not the early ’90s or 2000s, where we could just click on a URL and wait an hour for the page to download. That is a thing of the past. Our users were facing this issue: every page, every app reloads the entire thing. Bundle size: of course, we could not actually tree shake anything, and there was no lazy loading. There was a huge bundle size which we used to download.
Of course, when we have to upgrade Angular, or any other framework which you are using in your enterprise (we are using Angular), it took too much effort because we had to do it 30 times. Plus, our reusables and design system: maintaining multiple versions of shared libraries and the design system became a pain, because we cannot move ahead and adopt the new things which are available in Angular or any other ecosystem, because it’s always about backward compatibility.
Everyone knows, backward compatibility is not a thing. It’s just a compromise. It’s a compromise we make: ok, we have to support this, and that’s why we are still here. Now, as I said, we had 30-plus apps and we used to deploy them separately. We had to sync our design system, which we saw in the previous slide. That was, again, very difficult, because if your releases are not synchronized even for a few seconds or a few minutes, you will see different UIs.
What Is Nx?
Then came Nx. We started adopting Nx almost three years back. Let’s see what Nx is. It’s an open-source build tool, which is available for everyone. You can just start using it for free; there’s no cost needed. It also supports monorepos, but monorepo support is just an extra thing which you get: the main thing is that it’s a build tool. It provides a build cache for tasks like build and test. As of today, one thing which we all are doing is building the same code again and again. Nx takes another approach. The founders actually are from Google. Everyone knows Google has different tools.
If you have a colleague from Google, you keep hearing about, we had this tool and we had that tool, and how crazy it was. These people used to work in the Angular team. They took the idea of Bazel; Bazel was the inspiration, because Google uses it a lot. They built the entire Nx based on it. They launched it for Angular first, and now it’s platform- and technology-independent. As I said, it’s framework and technology agnostic; you can use it for anything. It’s plugin based, so you can bring your own framework. If there is no support for an existing technology, you can just add it. Or if you have a homegrown framework, you can build a plugin for it on your own, bring it into Nx, and start getting all the features which Nx offers.
For example, the build cache. It supports all the major frameworks out of the box, such as Angular, React, and Vue. On top of that, it supports micro-frontends: if you want to do micro-frontends with React or Angular, it’s just easy. I’ll show you the commands. It also supports backend technologies: there is support for .NET, Java, Spring, and Python, and they also added support for Gradle recently. As I said, the support is wide.
Celonis Codebase
This is our codebase as of today. We have 2 million lines of code. We have close to 40-plus applications and 200 projects. Why are there more projects than applications? Because we also have libraries. We try to split our code into smaller chunks using libraries, so that’s why we have close to 200 projects. Then, more than 40-plus teams are contributing to this codebase. We get close to 100 PRs per day on average; there are times when we get more. With module federation, this is what we do today. We are not loading those applications via URL routing; the Angular application loads natively. We have multiple applications here. The shell app is something which just renders your nav bar.
Then you can access any of the apps; it just feels like a single-page application. There is no reload. We can do tree shaking, we can do code splitting, and we can reduce the need to share our design system across applications, because now we have to do it only once. These are the tasks which we run for each and every PR. Of course, we do a build: once you write your code, the first thing you do is build your project. Then we run unit tests; we use Jest. We also have Cypress component tests, and we run them on CI as well. Before we merge a PR, we also run end-to-end tests; we are using Playwright for writing our end-to-end tests, or user journeys.
Then, let’s see how to start using module federation with Angular. You can just use this command. You can generate nx generate. For any framework, you will find nx generate. Then you will say nx, and the framework name. You can just here, for example, replace Angular with React, and you get your module federated app or micro-frontend app for your React application. These remotes are actually applications which will be loaded when you route through your URLs. For example, home, about, blogs, this can be different URLs which we have. They are actually different applications. It means your three teams can work on three different applications but, at the end, they will be loaded together.
Feature Flags
We use feature flags a lot, because when we started migrating all of the codebase, it became a mess. A lot of teams started pushing their code into a single codebase, and we were coming from different teams; each team had its own way of writing code. We had feature flags for the backend; that was something which was taken care of. At the frontend, we were seeing a lot of errors, so we thought of creating a feature flag framework for our frontend application. This is what it feels like without feature flags. I’ve seen this meme many times; it always says, this is fine. We believe this is not fine. If your organization is always on fire, this is not fine for anyone. You should not be monitoring your systems 24/7 just because you did a release. This is where we started. Of course, we had a lot of fires.
Then we decided, of course, we will have our own feature flag framework for frontend applications. This is what we used to think before we had feature flags: backend, frontend, we will merge it, then everything goes fine, we’ll do a release, and everyone is happy. This is not the reality. It looks good on paper but, in reality, everything just collapses once you merge your code. So we started creating our frontend feature flags. We now have the ability to ship a feature based on a user or based on a cluster. We can also define what percentage of users or customers we want to ship a feature to. Or we can ship a specific build; we generally try to avoid this, as it is something which we use for our POCs.
Let’s say if you want to do a POC for a particular customer, we can say, just use this build. That customer will do its POC, and if they’re fine or they’re happy with this, we can go ahead and write for the code. For example, of course, we have to still write tests. We have to write user journey test. This is just for POC. We can also combine all of the above. We ended up with this. We started seeing, now there are less bugs, because now the bugs are isolated, because they are behind a feature flag. We also have the ability to roll back a feature flag if anything goes wrong. We don’t have to roll back the entire release, which was the case earlier. Now we are shipping features with more confidence, which we need.
Before you ask me which feature flag solution we are using: I’m not here to sell anything. We decided to build our own. How? Again, Nx comes into the picture, because Nx, as I said, is plugin based. You can build anything and create it as a plugin, and you get everything out of the box. It feels native; it feels like you are still working with Nx. This is the command: you can just say nx add and a new plugin, and you can define where you want to put that plugin. For our feature flag solution, we use a lot of YAML files, and we added all the code to read those YAML files as part of our plugin. It’s available for everyone.
One thing you have to focus on, in case you are creating a custom solution, is developer experience. Otherwise, no one will use it. We also added the ability to enable and disable flags: developers can just raise a PR to flip a feature flag. And we added checks so that no one can disable a flag that is already in use without anyone knowing about it. For example, your release manager or your team lead has to approve it.
Otherwise, someone might just do it by mistake. We also have a dashboard where our developers can see which features are enabled in which environment. And we send a weekly alert: in case a feature flag is GA now and available for everyone, developers can go ahead and remove that feature flag. All of this is fine, because we know where the fire is, and we can just roll it back.
Proof of Concepts
Of course, when you have a monorepo, the other problem we have seen is that a lot of teams are not fans of monorepos, because they feel restricted from doing anything. This is where we came up with an idea: what if teams want to do a proof of concept? Recently, a few teams said, we want to come into the monorepo, but our code is a POC, and we don't want to write tests — because we also have checks. I think most of you might have checks for test coverage: you should have 80%, or 90%, or whatever. I don't know why we keep it, but it's useful, just to see the numbers.
Then we said, let's give you a way to start creating POCs, and we will not be a blocker for you anymore. In Angular, you can just say, I'll define a new remote, and that's it: a new application is created. They can just do it. Another issue is that most enterprises have their own way of creating applications and may need some customization: I want to create an application, but I need some extra files to be generated along with it. Nx offers you that. Nx gives you a way to customize how your projects are created. For example, in our use case, whenever we create an Angular application, we also add the ability to write component tests. What we did is take that functionality from Nx, put all of this into a single bundle, a single plugin, and give it to our developers.
So whenever you create a new application, you also get component tests out of the box. That can be Cypress, or Playwright, or anything you like. Or let's say you want to create some extra files — for example, a Dockerfile, or something related to your deployment which is mandatory for each and every app. You can customize the way your applications are created by using generators. This is called an Nx generator. As I said, you can also create files, and you can define the files wherever you want to. Generally, we use a folder called files and put all the files there.
For example, as I said, a Dockerfile, or any other configuration files you need. You can pass values to them as parameters. It uses a syntax called EJS — I'm not sure how many people are aware of EJS — to replace variables in the actual file. Here, I'm talking about the actual files that will be written to disk, not temporary files. You can do all of this with the help of an Nx generator. This is what we do whenever someone creates a new application: we just add some things out of the box.
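To make the EJS idea concrete: real Nx generators delegate to the ejs library (via the devkit's file-generation helpers), but the variable-substitution core can be sketched in a few lines. Everything below — the helper name, the Dockerfile template — is purely illustrative:

```typescript
// Minimal sketch of EJS-style variable substitution (<%= name %>).
// Real generators use the ejs library; this only illustrates what
// happens to a template file before it is written to disk.

function renderTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/<%=\s*(\w+)\s*%>/g, (_match, key: string) => {
    if (!(key in vars)) throw new Error(`Missing template variable: ${key}`);
    return vars[key];
  });
}

// e.g. a Dockerfile template shipped in the generator's `files` folder:
const dockerfileTemplate = [
  "FROM node:20-alpine",
  "LABEL app=<%= projectName %>",
  "COPY dist/apps/<%= projectName %> /app",
].join("\n");

const rendered = renderTemplate(dockerfileTemplate, { projectName: "home" });
```

The generator writes the rendered output as a real file next to the new application, so every app starts with the mandatory deployment files already in place.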
Maintaining a Large Codebase
When it comes to maintaining a large codebase — because now we have 2 million lines of code in a single repository — there are a few things we have to take care of. For example, refactoring. We do a lot of refactoring because we got the legacy code. I'm sure everyone loves legacy code, because you love to hate it. Then, deprecations. This is one thing I think we are doing well: as soon as we see some old code that is not used, we start deprecating it. Then, migration. Of course, over time, we have migrated multiple apps into our monorepo.
We still support it, in case anyone wants to migrate their code into our monorepo. It took us time — close to two years. Now we are at the stage where I think we have only one app left outside our monorepo. This is not going to happen in a day, but you have to start someday. Then, adding linters and tools. This is very important for any project. You need linters today; you may need to add tools tomorrow. Especially with the JavaScript ecosystem, there is a new tool every hour, I think. Then, helping team members. This is very important in case you are responsible for managing your monorepo — initially, you will end up doing this a lot.
Most of the time, you'll be helping your new developers onboard into the monorepo. Documentation is critical here, because if you don't write it, more developers will rely on you, which you don't want — it takes your time away. Then, the ability to upgrade the Angular framework for everyone: whatever framework you use — we use Angular, but you may use React or Vue. All of this comes under maintaining our monorepo. How do we do it? Nx offers something called nx graph. If I run nx graph, I get a view where I can see all the applications, all the projects.
I can figure out which library depends on which app. If I want to refactor something, I can check whether it is being used by looking at the nx graph. Or if some refactoring is required, I can look at the graph and say, probably this UI should not be used in home, it should be used in blogs, and then refactor the code. It helps a lot during refactoring, and during deprecations as well.
Now, talking about migrations. As I said, you may have to migrate a lot of code into your monorepo once you start, because the code lives in different repositories. Nx offers a command called nx import, where you define your source repository and your destination, and it migrates your code along with its Git history. This command just came in the last release. For the past years, we had been doing it manually — we did it for more than 30 repositories by hand. The same thing is now available as part of Nx; you can just run this command and everything happens automatically. We deploy our documentation on Backstage.
This is what we do, so everyone knows where the documentation is. We use Slack for all communications, and for announcing any new initiatives or deprecations. We have a dedicated Slack channel, so in case developers have any questions, they can ask there. It actually improves knowledge sharing as well, because if someone already knows something, we don't have to jump in and say, this is how you should do it. It removed a lot of dependency on us, the core team. Education is important.
We started doing a lot of workshops initially when we moved to a monorepo, just to give the confidence to the developers that we are not taking anything from you. We are actually giving you more control over your codebase, and we are just here to support. We started educating. We did multiple workshops. Whenever we add a new tool, we do a workshop. That’s very important.
Tools
As I said, every other hour you are getting a new tool. What should you use? Which tool should you add? It's true that introducing a tool into a codebase is very time consuming; you may spend two or three days just figuring out how to make it work. At the same time, sometimes adding a tool is easy, but maintaining it is hard. Because as soon as you add it, a new tool appears the next hour which is much more powerful. Now you are stuck maintaining the old one, because there are no upgrades, and most of your code is already using it, so you cannot move away from it.
At the end of the day, you just have to maintain that tool. Nx makes it easy to introduce a new tool and to maintain it. Let's see how. Nx offers out-of-the-box support for popular tools, for example, Cypress and Playwright. Playwright is now a go-to tool for writing end-to-end tests — I'm not sure about other ecosystems, but it's widely used in the JavaScript ecosystem. Anyone who starts a new project now probably goes for Playwright, but there was a time when many people were going with Cypress. With Nx, it's just a command, and you can start using the tool. You don't even have to invest time configuring it — you just start using it. That's what I'm talking about.
For unit tests, it gives you Jest and Vitest out of the box. You can just add them and start using them; no time needed for configuration. What about upgrades? Nx offers something called migrate. With the migrate command, you can migrate everything to the latest version. For example, if you're using React and want to move to the new React version, you can just say nx migrate latest, and it will migrate your React version. Same for Angular. This is what we do now; we don't invest a lot of time in manual upgrades. We just use nx migrate, and our code gets migrated to the new version. It works for all the frameworks and technologies supported by Nx, and you can also do it for your own plugins.
For example, let's say you end up writing a new plugin for your own company and you want to push some updates. You can write a migration, and this migration tooling will automate the change across your codebase, so your developers don't even have to worry about what's happening. Of course, you have to make sure you test it properly before shipping.
Demo
I'll show you a small demo, because everything we saw so far was a picture. Only believe something when you see it running; otherwise, don't. This is how your nx graph looks when you run nx graph and click on Show all projects. You can hover over any project and see how it is connected — how it's being used, which application depends on which. For example, for shell you see dotted lines. Dotted lines mean lazy loading: the projects are not directly related, but they are related.
For Home and UI, it shows a direct dependency. You can figure all of this out from nx graph. It also lets you see tasks, like build or lint. If you make a code change, you can figure out which tasks will run afterwards: which builds will run, which applications will be affected — everything from nx graph. And this is free, you don't have to pay. I'm just saying, this is one of the best features I have seen that is available for free. Let me show you the build. I talked about caching. Let's run a build: nx run home:build, a production build. It's running, and this line is important — it says, 1 read from cache. One misconception about monorepos is: I have 40 projects, so whenever I make changes, all 40 projects will be built. Monorepos have a bad name for this. I have done .NET, so I know — we used to have so many projects and build the same code again and again. Not with Nx. Nx knows your dependency graph, so it can figure out what needs to be built again and what can be read from the cache, and it does this really well. Here we see one read from cache, because I already built it before; it just retrieved the build from the cache. Now say 40 teams are working on 40 different apps, and one team makes changes to its own app: the other 39 apps are not built again, because Nx knows from the dependency graph that they are not affected, so it doesn't have to build anything.
If I try to build it again, it will just retrieve everything from the cache. Now it's faster than before: it took 3 seconds, where earlier it was 10 seconds. This is what Nx offers you out of the box. Caching is available for your builds, your tests, your component tests, your end-to-end tests — all tasks can be cached. This is caching.
CI/CD
Of course, CI/CD — there is always one person in your team asking for faster builds. I was one of them. We use GitHub Actions with Nx, which gives us superpowers. How do we do it? We use larger runners on GitHub Actions — our own machines. We used to use GitHub-provided machines, but that was too expensive for us, so we moved to our own. We use Merge Queue to run end-to-end tests. I'll talk about Merge Queue, because it's an amazing feature from GitHub; this is only available for enterprises. And we cache builds for faster builds and tests, which we saw locally — I'll show you how we do it on CI. Let's talk about Merge Queue and user journey tests first.
One thing about user journey tests: they are an excellent way to avoid bugs. Everyone knows this, because you are testing a real scenario — you actually log in and click a button to process something. We also know that running user journeys on every PR would be very expensive, because we are interacting with a real database, and it can take a lot of time to complete your build. And when multiple branches are in flight, there is another issue: a branch soon goes out of sync with the main branch, because main already has the latest changes.
Running the user journey tests on an old branch is then pointless, because you don't have the latest changes, which means there is a chance you introduce errors. This is why GitHub introduced Merge Queue. Let's see how it works. Say there are four PRs (pull requests) in your pipeline, and PR 4 fails, so it's removed from the queue. The other three — PR 1, PR 2, PR 3 — are sent to the Merge Queue. Merge Queue is a feature provided by GitHub which you can enable in your settings. You can define how many PRs to consider for the Merge Queue at once; we do 10. Ten PRs are pushed to the Merge Queue at once. You can change that — with about 100 PRs per day, we found 10 is our average.
In your case, if you get more PRs, you can increase the number pushed into the Merge Queue. Once PRs are in the Merge Queue, this is how it works. GitHub creates a new branch for the first PR, with main as the base, and rebases PR 1's changes onto this new branch — nothing else; the branch is just created. Then it creates another branch for PR 1 plus PR 2: now the PR 1 branch is the base, and PR 2's changes are merged into it, so it has the latest code. Same with PR 3: it creates a branch with the PR 1 plus PR 2 branch as the base, and PR 3's changes merged in.
After this, it runs all the tasks in your CI/CD: build, test, component tests, plus user journey tests. So whenever you run the user journey tests, you are running them on the latest code, not old code that is out of sync. Yes, it reduces the number of errors you get.
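The branch stacking described above can be sketched as a small function: each queued PR gets a branch built on main plus every PR ahead of it, which is why CI always runs against combined, up-to-date code. The branch naming below is purely illustrative — GitHub's internal naming differs:

```typescript
// Illustrative model of merge-queue branch stacking: each queued PR is
// tested on top of main plus all PRs ahead of it in the queue.

function mergeQueueBranches(main: string, prs: string[]): string[] {
  const branches: string[] = [];
  let base = main;
  for (const pr of prs) {
    const branch = `${base}+${pr}`; // base branch with this PR merged in
    branches.push(branch);          // CI runs build/test/user-journey here
    base = branch;                  // next PR stacks on top of this one
  }
  return branches;
}

const queue = mergeQueueBranches("main", ["PR1", "PR2", "PR3"]);
// The last entry contains main plus PR1, PR2, and PR3 merged in order.
```

If a stacked branch fails CI, the failing PR is ejected and the branches behind it are rebuilt without it — which is the behavior the talk describes for PR 4.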
Before I get to affected, I want to give some stats on how we are doing today. With 2 million lines of code and 200 projects, as of today, our average CI time per PR is 12 minutes; a full rebuild takes 30 minutes. It's all possible because we make use of affected builds. Nx knows what has been affected, and this is what it does internally. For example, Lib1 and Lib2 feed five different applications. You push new code which affects library 1, which in turn affects App1 and App3. We just run the affected tasks: run affected with build, lint, test. That's it. We retrieve the cache from an S3 bucket.
As of today, we use an S3 bucket to push our cache and retrieve it back whenever there is a change. You can do this yourself if you have the money — or there is a paid solution by Nx called Nx Cloud, and then you can remove all of this; you don't have to do it on your own. Nx Cloud can take care of everything for you, including cache distribution, both on your CI pipeline and on your developers' machines. Your developers can get the latest build from the cache without building a single thing. It's very powerful, especially when onboarding new developers: they can join your team on day one and, within an hour, be running your code without building anything, because everything is already built.
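Pulling these pieces together, a CI job built around affected tasks might look roughly like this. The nx affected command with -t and --base is real Nx CLI syntax; the runner labels, step layout, and omitted cache steps are assumptions for illustration, not the team's actual pipeline:

```yaml
# Hypothetical GitHub Actions job sketch — layout is illustrative.
jobs:
  pr-checks:
    runs-on: [self-hosted, large]   # the talk mentions self-hosted larger runners
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0            # nx affected needs history to diff against base
      - run: npm ci
      # (restore/save of the Nx cache against S3 omitted for brevity)
      - run: npx nx affected -t build lint test --base=origin/main
```

Only the projects reachable from the changed files are rebuilt; everything else is a cache hit, which is what keeps the average PR at 12 minutes.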
As soon as they make changes, they build only their own code, not everything. If you want to explore Nx Cloud, go to nx.dev and you will find a link for it. As of today, we are not using Nx Cloud because it was too expensive for us and not a good fit, but if you have a big organization, it may be worth it. As I said, Nx Cloud works for everyone — not only frontend or backend: any technology, any framework. Here's an example from our live code, our design system. When I ran it for the first time, it took 48 seconds. The next run took 0.72 seconds — not even a second. That's a crazy amount of time saved every time we build something. Our developers are saving a lot of time. They are drinking less coffee.
Release Strategy
The last thing is our release strategy. One thing about Celonis is that we love our weekends. I'm sure everyone loves their weekend, but we really care about it, and our release strategy is built around not having to work on weekends. This is what we do. We have 40-plus apps, so we know releases are risky, and we don't do Friday releases — it's not fun going home and then working Saturday and Sunday to fix bugs. What we do today: we create a new release candidate every Monday morning, then ask teams to run their tests. It's a journey. Some teams have automated tests; some don't, and they test manually, or they just say, ok, it's fine. You should not do that, but, yes, it might happen. They execute their tests, automated or manual.
If everything goes fine, we deploy by Wednesday or Thursday. Wednesday is the deadline we give every team to finish their tests — or worst case, Thursday. If something goes wrong, we say: no release this week. Because if we release when we're already on Thursday, our weekend is destroyed, and we don't like that. We really care about our weekends, so we cancel the release and come back on Monday to see if we can go ahead and deploy. If everything is green, we deploy, go home, and monitor it until Monday — releasing either Thursday or Friday, depending on timing. Everyone is happy, and we do it all again the next week.
Of course, some manual intervention is still required here, and this is where we want to be. Every company has a vision, every person has a vision, and we have one too. We want to create a release candidate every day. If CI/CD is green, we deploy to production. That's it. If something goes wrong, we cancel the deployment and do it the next day. Renato accidentally mentioned 40 releases per week; we at least want to do five releases a week. That's our goal. We will probably get there one day — we are very close, but it will take some time.
Questions and Answers
Participant 1: I have a question about end-to-end tests — as I understand, you call them user journey tests. How do you debug them in this huge setup with 40 teams? Let's say a test is red: how do I understand the root cause? A red test can be problematic.
Yadav: Playwright actually has a very good way to debug tests. We use Playwright, and it comes with a debug command: you can just pass --debug and debug whichever application is giving you an error. You don't have to debug 40 applications. We also have insights: whenever we run tests, we push the success and failure data to Datadog and display it in our GitHub summary. So the developer knows which test is failing. They don't have to stare into the void wondering what's going wrong — they know, this is the application, and this is what I have to debug.
Participant 2: I was wondering if you also integrate backend systems into this monorepo, or if it was a conscious decision not to do so.
Yadav: It does support that. As I said, you can use your backend, like .NET Core; I think it supports Spring as well as Maven, and now they added support for Gradle too. You can bring whatever framework or technology you want. We are not using it because that's not a good use case for us. I think more teams are happy with the current setup, where they own the backend and the frontend is owned by a single team.
Participant 3: How do you handle major framework updates or, for example, design system updates? In the diagram, you showed that you aim for a release every day. I can imagine that with many breaking changes, that can't work — you need more time to test and make sure everything is still working.
Yadav: We actually recommend every developer write their own tests — it's not another team writing the tests. That's one thing. About upgrades, this is what we do: we have the ability to ship a specific build. For example, the Angular 14 upgrade was a really big one for us, because after Angular 13 we were doing a major upgrade for the first time, and there were breaking changes. We realized very early that there were breaking changes, and we wanted to play it safe. Using a feature flag, we started loading the Angular 14 build only for some customers to see how it went. We rolled it out initially for our internal users.
Then we ran it for a week and saw, ok, everything is fine, everything is good. Then we rolled it out to 20% of users and monitored it for another week. Then 50%, and now we will go to 100%. This is very safe; we don't get unexpected issues. With the design system, we do it weekly. The design system is owned by another team, so they make all their changes, also on Monday. They get four or five days to test their changes and make them stable before the next release goes out.
Participant 4: You explained the weekly release. How do you handle hotfixes with so many teams?
Yadav: Of course, there will be hotfixes; we cannot avoid them. Some code will slip into a release by mistake. We try to catch hotfixes or any issues on the release candidate before it goes to production. In case anything does need to be hotfixed, we generally create a PR against the last release, then create a new hotfix. It's all automated: you just create a new release candidate from the last build and push a new build again. The good thing is, with this setup, we don't have to roll back the entire release.

MMS • Steef-Jan Wiggers
Article originally posted on InfoQ.
To streamline video optimization for the explosion of short-form content, Cloudflare has launched Media Transformations, a new service that extends its Image Transformations capabilities to short-form video files, regardless of their storage location, eliminating the need for complex video pipelines.
With the service, the company aims to simplify video optimization for users with large volumes of short video content, such as AI-generated videos, e-commerce product videos, and social media clips.
Traditionally, Cloudflare Stream offered a managed video pipeline, but Media Transformations addresses the challenge of migrating existing video files. By allowing users to optimize videos directly from their existing storage, like Cloudflare R2 or S3, Cloudflare aims to reduce friction and streamline workflows.
(Source: Cloudflare blog post)
Media Transformations lets users apply optimizations through URL-based parameters. Because everything is driven by the URL, transformations can be automated and integrated without complex code changes, simplifying workflows and ensuring optimized video delivery across platforms and devices.
The key features of the service include:
- Format Conversion: Outputting videos as optimized MP4 files.
- Frame Extraction: Generating still images from video frames.
- Video Clipping: Trimming videos with specified start times and durations.
- Resizing and Cropping: Adjusting video dimensions with “fit,” “height,” and “width” parameters.
- Audio Removal: Stripping audio from video outputs.
- Spritesheet Generation: Creating images with multiple frames.
The service is accessible to any website already using Image Transformations, and new zones can be enabled through the Cloudflare dashboard. The URL structure for Media Transformations mirrors Image Transformations, using the /cdn-cgi/media/ endpoint.
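To illustrate the URL-based API, here are two hypothetical transformation URLs. The /cdn-cgi/media/ prefix comes from the announcement; the specific option names (mode, time, duration, width) follow Cloudflare's published examples but should be verified against the current documentation before use:

```text
# Clip: a 5-second excerpt starting at 10s, resized to 480px wide
https://example.com/cdn-cgi/media/mode=video,time=10s,duration=5s,width=480/https://example.com/videos/demo.mp4

# Frame extraction: a still image taken from the 3-second mark
https://example.com/cdn-cgi/media/mode=frame,time=3s,width=640/https://example.com/videos/demo.mp4
```

The transformation options sit between the endpoint and the source video URL, so the same origin file can serve many variants without re-encoding pipelines.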
Initial limitations include a 40MB file size cap and support for MP4 files with h.264 encoding. Users like Philipp Tsipman, founder of CamcorderAI, quickly pointed out the initial limitations, tweeting:
I really wish the media transforms were much more generous. The example you gave would actually fail right now because QuickTime records .mov files. And they are BIG!
Cloudflare plans to adjust input limits based on user feedback and introduce origin caching (Cloudflare stores frequently accessed original videos closer to its servers, reducing the need to fetch them repeatedly from the source).
Internally, Media Transformations leverages the same On-the-Fly Encoder (OTFE) platform Stream Live uses, ensuring efficient video processing. Cloudflare aims to unify Images and Media Transformations to simplify the developer experience further.
In addition to Cloudflare's offering, alternatives are available for video optimization, such as Cloudinary, ImageKit, and Gumlet, which provide comprehensive features for format conversion, resizing, and compression. Other cloud providers, such as Google Cloud Platform, offer various services, including video processing and delivery. While not solely focused on video transformation, they provide the building blocks for creating custom solutions.
Lastly, Cloudflare highlights use cases such as optimizing product videos for e-commerce, creating social media snippets, and generating thumbnails. The service is currently in beta and free for all users until Q3 2025, after which it will adopt a pricing model similar to Image Transformations.