Investors Purchase High Volume of Call Options on MongoDB (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) was the target of some unusual options trading activity on Wednesday. Investors acquired 36,130 call options on the company, an increase of approximately 2,077% compared to the average daily volume of 1,660 call options.
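As a quick sanity check, the quoted increase follows from a simple percentage calculation. A minimal sketch using the volumes reported above:

```python
call_volume = 36_130      # call options traded on Wednesday
avg_daily_volume = 1_660  # average daily call option volume

# Percentage increase over the daily average
pct_increase = (call_volume - avg_daily_volume) / avg_daily_volume * 100
print(f"{pct_increase:.0f}%")  # -> 2077%
```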

Wall Street Analysts Weigh In

A number of research analysts have recently issued reports on the company. Wells Fargo & Company upped their price target on MongoDB from $300.00 to $350.00 and gave the company an “overweight” rating in a report on Friday, August 30th. Stifel Nicolaus upped their target price on shares of MongoDB from $300.00 to $325.00 and gave the company a “buy” rating in a research note on Friday, August 30th. Piper Sandler raised their price target on shares of MongoDB from $300.00 to $335.00 and gave the company an “overweight” rating in a report on Friday, August 30th. DA Davidson boosted their price objective on shares of MongoDB from $330.00 to $340.00 and gave the company a “buy” rating in a report on Friday, October 11th. Finally, Royal Bank of Canada restated an “outperform” rating and issued a $350.00 target price on shares of MongoDB in a report on Friday, August 30th. One analyst has rated the stock with a sell rating, five have issued a hold rating and twenty have given a buy rating to the stock. Based on data from MarketBeat.com, the stock has an average rating of “Moderate Buy” and an average target price of $337.96.


Insider Buying and Selling at MongoDB

In other MongoDB news, Director Dwight A. Merriman sold 2,000 shares of MongoDB stock in a transaction on Friday, August 2nd. The stock was sold at an average price of $231.00, for a total transaction of $462,000.00. Following the completion of the transaction, the director now owns 1,140,006 shares in the company, valued at approximately $263,341,386. The trade was a 0.00% decrease in their ownership of the stock. The sale was disclosed in a filing with the Securities & Exchange Commission. Also, CFO Michael Lawrence Gordon sold 5,000 shares of MongoDB stock in a transaction dated Monday, October 14th. The stock was sold at an average price of $290.31, for a total transaction of $1,451,550.00. Following the completion of the transaction, the chief financial officer now directly owns 80,307 shares of the company’s stock, valued at $23,313,925.17. This trade represents a 0.00% decrease in their ownership of the stock. Over the last quarter, insiders sold 23,281 shares of company stock valued at $6,310,411. Corporate insiders own 3.60% of the company’s stock.

Institutional Trading of MongoDB

Institutional investors and hedge funds have recently bought and sold shares of the stock. MFA Wealth Advisors LLC bought a new stake in MongoDB in the 2nd quarter worth approximately $25,000. Sunbelt Securities Inc. increased its position in MongoDB by 155.1% in the first quarter. Sunbelt Securities Inc. now owns 125 shares of the company’s stock worth $45,000 after purchasing an additional 76 shares during the last quarter. J.Safra Asset Management Corp raised its stake in MongoDB by 682.4% during the second quarter. J.Safra Asset Management Corp now owns 133 shares of the company’s stock valued at $33,000 after purchasing an additional 116 shares in the last quarter. Quarry LP lifted its position in MongoDB by 2,580.0% during the second quarter. Quarry LP now owns 134 shares of the company’s stock valued at $33,000 after purchasing an additional 129 shares during the last quarter. Finally, Hantz Financial Services Inc. bought a new position in MongoDB during the second quarter valued at $35,000. 89.29% of the stock is currently owned by institutional investors.

MongoDB Trading Down 2.2%

Shares of MDB stock opened at $278.39 on Thursday. The stock has a 50-day simple moving average of $268.79 and a 200-day simple moving average of $286.09. MongoDB has a 52-week low of $212.74 and a 52-week high of $509.62. The firm has a market capitalization of $20.42 billion, a PE ratio of -99.07 and a beta of 1.15. The company has a debt-to-equity ratio of 0.84, a quick ratio of 5.03 and a current ratio of 5.03.
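For readers unfamiliar with the metric, a simple moving average (SMA) is just the arithmetic mean of the last N closing prices. A minimal sketch, using hypothetical closing prices rather than actual MDB data:

```python
def sma(closes, window):
    """Simple moving average: mean of the last `window` closing prices."""
    if len(closes) < window:
        raise ValueError("not enough data points for the requested window")
    return sum(closes[-window:]) / window

# Hypothetical closing prices for illustration (not actual MDB history)
closes = [260.0, 265.0, 270.0, 275.0, 280.0]
print(sma(closes, 5))  # -> 270.0
```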

MongoDB (NASDAQ:MDB) last posted its quarterly earnings results on Thursday, August 29th. The company reported $0.70 EPS for the quarter, topping analysts’ consensus estimates of $0.49 by $0.21. MongoDB had a negative return on equity of 15.06% and a negative net margin of 12.08%. The company had revenue of $478.11 million for the quarter, compared to analysts’ expectations of $465.03 million. During the same period in the previous year, the firm earned ($0.63) EPS. The firm’s revenue was up 12.8% compared to the same quarter last year. On average, analysts predict that MongoDB will post -2.44 EPS for the current fiscal year.

About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Article originally posted on mongodb google news. Visit mongodb google news

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



This Stock Is Crushing Salesforce, MongoDB And Snowflake In AI Revenue – Forbes


Palantir has been one of the top-performing AI software stocks this year with a 156% YTD return, thanks to accelerating revenue growth and strong business momentum from its Artificial Intelligence Platform (AIP) released last year.

AIP sets Palantir apart from the rest of the SaaS universe, driving visible AI-related growth and acceleration across multiple metrics – at this time, other leading AI favorites such as Snowflake or MongoDB can’t say the same. Outside of the cloud hyperscalers, Palantir is one of the rare few that sees AI drive both real returns for its business and real value for its customers due to AIP.

Below, I break down how Palantir’s AIP puts it a step above peers Salesforce, MongoDB and Snowflake with visible AI growth, and examine its undeniable ‘secret sauce’.

Palantir’s AI Growth is Visible

AIP has driven tremendous growth for Palantir’s business since its release, with primary impacts arising in the commercial segment. A clear inflection point in Palantir’s growth is visible following AIP’s release, while other ‘AI’ cloud peers can’t say the same about AI-driven growth.

Palantir said that “US commercial continues to accelerate in Q2 2024 alongside [the] AIP revolution” with “unprecedented demand”, and it has the numbers to back this up:

· 55% YoY revenue growth in US commercial to $159 million, accelerating from 40% YoY in Q1.

· 83% YoY growth in US commercial customers to 295 and 98% YoY growth in US commercial deals closed to 123.

· 103% YoY growth in US commercial remaining deal value and 152% YoY growth in US commercial total contract value to $262 million. Chief Revenue Officer Ryan Taylor explained that “one of the most notable indicators of our delivery is the volume of existing customers who are signing expansion deals, many of which are a direct result of AIP.”

Here’s what the growth in US commercial customers looks like:

US commercial customer growth began to stagnate through late 2022 and early 2023, but following AIP’s release in Q2 2023, customer count re-accelerated. There is a clear inflection point from where QoQ customer additions were decelerating – from 12 net adds in Q1 2023 to six net adds in Q2 2023. Following the AIP-driven acceleration, net adds rose to 20 QoQ in Q3 2023, then 40 QoQ in Q4 2023.

This matches a similar acceleration in commercial customer growth as Palantir quickly became a market darling following its IPO, which was seen as a way to drive growth in the commercial sector. From Q4 2020 to Q4 2021, commercial customers grew nearly 5X. Now, as a stock market darling once more with a unique and unbeatable AI offering, Palantir is seeing commercial growth resume.

Palantir is King of AI Among Cloud SaaS Stocks

Other leading cloud ‘AI’ stocks are struggling to put up AI-driven growth numbers like Palantir.

Salesforce reported 8% YoY revenue growth in Q2, decelerating from 11% YoY in Q1, as subscription revenue growth decelerated to 9% YoY, down from 12% YoY in Q1. Salesforce sees Q3 revenue growth of 7%, another deceleration. The full-year revenue growth of just 8% to 9% translates to the SaaS giant struggling to realize AI gains. Furthermore, Salesforce’s more AI-aligned offerings, MuleSoft and Tableau, decelerated sharply in Q2, from 27% YoY to 13% YoY for MuleSoft and 21% YoY to 11% YoY for Tableau.

MongoDB witnessed a much steeper deceleration in Q2, as Atlas and new workload wins struggled at the start of the year. In Q1, MongoDB reported 22% YoY growth with Atlas growth of 32% YoY; this decelerated to 13% YoY revenue growth in Q2 as Atlas growth declined 5 percentage points QoQ to 27% YoY. In Q2, MongoDB guided to about 14.6% YoY growth for the full year as it slightly boosted its outlook, a steep deceleration from 31% YoY growth in fiscal 2024.

Snowflake’s product revenue growth decelerated from 34% YoY in Q1 to 30% YoY in Q2, and while this was ahead of its guidance by 3 percentage points, growth is set to decelerate further in Q3. Management guided for 22% YoY growth in product revenue for the third quarter, a steeper QoQ deceleration rate, with the full year product revenue guide of 26% YoY. Despite management saying that they see “great traction” in early stages of AI products, there’s no visible inflection or acceleration in growth.

In sharp contrast, AIP has helped Palantir drive a significant topline acceleration over the past four quarters.

Palantir reported 27.2% YoY revenue growth in Q2, aided by strength in US commercial stemming from AIP as well as government revenue accelerating significantly. Palantir’s YoY revenue growth bottomed in Q2 2023 at 12.7%, the same quarter as AIP’s release, with revenue growth now 15 points higher. Despite guiding for a slight 2 percentage point deceleration in Q3 to 25.2% YoY growth, Palantir would only need to beat its guide by 1.5% to keep this revenue acceleration intact.

Fundamentally, what’s most critical for shares is maintaining a revenue growth rate above 20% for the foreseeable future – analysts currently estimate fiscal Q2 2025 to be the one quarter of the next eight with revenue growth just below that threshold. Given AIP’s strength just one year following its launch, with clear inflections in customer and revenue growth, it will be the telling sign of Palantir’s AI status if it can maintain these revenue growth rates as it scales.

Palantir’s AIP Separates it From the SaaS Universe

Palantir’s standout performance so far in 2024 against SaaS peers can be attributed to the success of AIP, which, at its core, is a comprehensive AI platform that lets enterprises leverage Palantir’s AI and machine learning tools and harness the power of the latest large language models (LLMs) within Foundry and Gotham.

Gotham was the company’s first product and is built for government operatives in the defense and intelligence sectors. The platform enables users to identify patterns hidden deep within datasets using semantic, temporal, geospatial and full-text analysis, with mixed reality capabilities that allow operations to be run in a virtual environment as well. Graph allows data objects to be seen as nodes and edges, while Map tracks geo-located objects, runs searches and displays key data.

Foundry was built for the commercial sector, and is centered around the three-layer Ontology Core, integrating semantic, kinetic, and dynamic layers for real-time data analytics and AI-powered decision making capabilities:

· The Semantic layer brings volumes of data into one place, and lets users generate detailed object properties

· The Kinetic layer brings operations and business behaviors into a real-time graph linked back to the Semantic layer, creating the basis for AI-driven analytics, real-time monitoring, identification of inefficiencies, and ability to optimize workflows

· The Dynamic layer connects models to objects and actions, reasoning across both the Semantic and Kinetic layers for AI-powered automation and AI-driven decision making, alongside multi-step simulations with AI predictive analytics to explore possibilities of changing actions or events

AIP combines with Foundry’s data operations suite and Apollo’s autonomous software deployment capabilities as part of Palantir’s ‘AI Mesh’, providing enterprise and government customers with a full suite of AI products from the web to mobile to the edge. With the Ontology, linking data and logic into an AI-accessible environment, Palantir brings generative AI directly to an enterprise’s operations, delivering real-time AI-driven operational decision-making abilities.

Palantir describes Gotham and Foundry as the “ability to construct a model of the real world from countless data points.” AIP links this all together, and this is what separates Palantir as a standout in the SaaS space — outside of the cloud hyperscalers, Palantir is one of the rare few that sees AI drive both real returns for its business and real value for its customers due to AIP.

What further sets AIP apart is its scalability, interoperability and versatility. With AI Mesh, organizations can integrate AI across different operations and applications, while its design facilitates interoperability with existing enterprise software and systems. AIP is also extremely versatile, having been successfully and seamlessly integrated into enterprises spanning a wide range of industries from tech to healthcare to aerospace, while still driving value to customers.

The uniqueness of Palantir’s AIP and value that it can quickly provide has driven growth for the company. CEO Alex Karp said in Q2 that “growth across the commercial and government markets has been driven by an unrelenting wave of demand from customers for artificial intelligence systems that go beyond the merely performative and academic.”

Essentially, there is constant strong demand for an applicable, scalable, versatile AI platform that can drive real-time results with an instant value-add for an organization. Chief Revenue Officer Ryan Taylor added that Q2’s “exceptional results are a reflection of a market that is quickly awakening to a reality that our customers have already known, we stand alone in our ability to deliver enterprise AI production impact at scale.”

Government is Palantir’s Secret Sauce

While Palantir is undoubtedly seeing strong business momentum in the commercial sector, the government sector remains Palantir’s bread and butter: government work has funded the company and allowed it to invest aggressively in AIP while expanding margins, and it has recently seen a growth acceleration.

In Q2, US government revenue accelerated to 24% YoY growth, up 12 percentage points from 12% YoY growth in Q1. Overall, government revenue growth was 23% YoY, up 7 percentage points from 16% YoY in Q1. Management noted that Palantir was “selected for several notable awards in Q2, which led to the strongest US government bookings quarter since 2022, reflecting the growing demand for our government software offerings.”

This included a production contract from the DoD’s Chief Digital and Artificial Intelligence Office (CDAO) for an AI-enabled operating system for the DoD, with an initial $153 million order and additional awards of up to $480 million over a five-year period.

The acceleration in the government segment aided overall revenue growth in the quarter, as the government continues to remain Palantir’s primary revenue source, accounting for nearly 55% of total revenue. This is why the government segment is vital for Palantir, and is its ‘secret sauce’ – these long-term, high-value government contracts provide consistent and recurring revenue and financial stability, allowing the company to venture and invest to scale AIP while expanding margins and increasing its profitability.

Conclusion

Palantir has been on a tear this year, and is outperforming major cloud competitors, thanks to the strength and uniqueness of its AIP offering. Palantir has the best of both worlds in government contracts and AI exposure, as well as accelerating enterprise AI adoption and strong customer and revenue growth.

The one caveat is Palantir’s valuation: at 34x FY24 revenue and 29x FY25 revenue, it is increasingly challenging to sustain. Over the last few years, even the industry’s leading SaaS names have struggled to break past the low-20x revenue multiple range.

Given the outsized valuation, the I/O Fund is looking for a lower entry in Palantir before adding the stock to our portfolio. Join the I/O Fund’s next webinar on Thursday, October 24th where Knox Ridley, Technical Analyst, will discuss the firm’s buy zones and targets for AI leaders. Learn more here.

If you would like notifications when my new articles are published, please hit the button below to “Follow” me.



MongoDB (NASDAQ:MDB) Sees Strong Trading Volume – Still a Buy? – MarketBeat


Shares of MongoDB, Inc. (NASDAQ:MDB) saw unusually strong trading volume on Thursday. Approximately 1,003,819 shares traded hands during the session, a decline of 30% from the previous session’s volume of 1,442,082 shares. The stock last traded at $271.09, after previously closing at $278.39.

Analyst Upgrades and Downgrades

Several equities analysts have recently issued reports on MDB shares. Scotiabank lifted their price target on MongoDB from $250.00 to $295.00 and gave the company a “sector perform” rating in a research report on Friday, August 30th. Mizuho boosted their price target on shares of MongoDB from $250.00 to $275.00 and gave the stock a “neutral” rating in a research report on Friday, August 30th. DA Davidson upped their price target on shares of MongoDB from $330.00 to $340.00 and gave the company a “buy” rating in a report on Friday, October 11th. Needham & Company LLC boosted their target price on shares of MongoDB from $290.00 to $335.00 and gave the stock a “buy” rating in a report on Friday, August 30th. Finally, Truist Financial increased their target price on shares of MongoDB from $300.00 to $320.00 and gave the company a “buy” rating in a research note on Friday, August 30th. One investment analyst has rated the stock with a sell rating, five have assigned a hold rating and twenty have issued a buy rating to the company. Based on data from MarketBeat.com, the company currently has an average rating of “Moderate Buy” and a consensus target price of $337.96.


MongoDB Stock Performance

The firm has a 50-day simple moving average of $268.79 and a 200-day simple moving average of $286.09. The company has a current ratio of 5.03, a quick ratio of 5.03 and a debt-to-equity ratio of 0.84. The company has a market capitalization of $19.95 billion, a price-to-earnings ratio of -99.07 and a beta of 1.15.

MongoDB (NASDAQ:MDB) last announced its quarterly earnings results on Thursday, August 29th. The company reported $0.70 EPS for the quarter, beating the consensus estimate of $0.49 by $0.21. The business had revenue of $478.11 million during the quarter, compared to analysts’ expectations of $465.03 million. MongoDB had a negative return on equity of 15.06% and a negative net margin of 12.08%. The firm’s quarterly revenue was up 12.8% compared to the same quarter last year. During the same quarter in the previous year, the business posted ($0.63) EPS. As a group, sell-side analysts forecast that MongoDB, Inc. will post -2.44 earnings per share for the current fiscal year.

Insider Buying and Selling

In other news, CAO Thomas Bull sold 154 shares of the stock in a transaction dated Wednesday, October 2nd. The stock was sold at an average price of $256.25, for a total transaction of $39,462.50. Following the transaction, the chief accounting officer now owns 16,068 shares of the company’s stock, valued at approximately $4,117,425. This represents a 0.00% decrease in their position. The sale was disclosed in a legal filing with the SEC. Also, Director Dwight A. Merriman sold 1,385 shares of the company’s stock in a transaction dated Tuesday, October 15th. The shares were sold at an average price of $287.82, for a total transaction of $398,630.70. Following the completion of the sale, the director now directly owns 89,063 shares in the company, valued at $25,634,112.66. This represents a 0.00% decrease in their ownership of the stock. The transaction was disclosed in a legal filing with the Securities & Exchange Commission, which is available at the SEC website. Over the last three months, insiders sold 23,281 shares of company stock worth $6,310,411. Corporate insiders own 3.60% of the company’s stock.

Institutional Inflows and Outflows

Institutional investors have recently added to or reduced their stakes in the business. Bleakley Financial Group LLC raised its position in shares of MongoDB by 10.5% during the third quarter. Bleakley Financial Group LLC now owns 939 shares of the company’s stock valued at $254,000 after buying an additional 89 shares during the last quarter. MN Wealth Advisors LLC raised its holdings in MongoDB by 11.9% in the 3rd quarter. MN Wealth Advisors LLC now owns 2,576 shares of the company’s stock valued at $696,000 after acquiring an additional 273 shares in the last quarter. Creative Planning lifted its position in shares of MongoDB by 16.2% in the 3rd quarter. Creative Planning now owns 17,418 shares of the company’s stock worth $4,709,000 after acquiring an additional 2,427 shares during the period. Sapient Capital LLC acquired a new stake in shares of MongoDB during the 3rd quarter worth approximately $736,000. Finally, CHICAGO TRUST Co NA acquired a new position in shares of MongoDB in the third quarter valued at $255,000. 89.29% of the stock is owned by institutional investors.

About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.




DataStax Announces DataStax AI Platform Built with NVIDIA AI – Analytics India Magazine


DataStax, a one-stop generative AI stack platform, announced on October 17 that the DataStax AI Platform, built with NVIDIA AI, would reduce AI development time by 60%.

Kari Briski, Vice President of AI Software at NVIDIA, states: “Enterprises are harnessing AI to drive digital transformation across industries. The DataStax AI Platform, built with NVIDIA AI, enables companies to create AI-ready databases and rapidly deploy tailored AI applications, unlocking new levels of customer value.”

The DataStax AI Platform, built with NVIDIA AI, gives enterprises a holistic solution for all parts of the AI development and production life cycle, including data ingestion and retrieval, application development, deployment, and ongoing AI training.

The platform integrates the DataStax AI platform with NVIDIA AI Enterprise software, helping enterprises build AI applications that use companies’ enterprise data and context. As a result, it is easier for enterprises to improve their models through self-learning and to become more accurate with customer use.

NVIDIA NeMo Customizer and NeMo Evaluator simplify training or fine-tuning LLMs, SLMs, embedding models, and reranking models. Meanwhile, DataStax’s AI application platform gives developers the dynamic control of search and retrieval that is necessary to tailor GenAI to individual customers.

“PhysicsWallah is democratizing education through GenAI-driven learning experiences for over 20 million students in India. The DataStax AI Platform, built with NVIDIA AI, provides a real-time solution for PhysicsWallah to offer personalized, high-quality learning and accessibility at scale. This partnership enables the company to manage a 50x surge in traffic with zero downtime, serving millions of students,” adds Sandeep Varma, Head of AI at PhysicsWallah.

DataStax and Data Management

DataStax delivers a RAG-first developer experience, with first-class integrations into leading AI ecosystem partners, working with developers’ existing stacks of choice. With DataStax, anyone can quickly build smart, high-growth AI applications at unlimited scale, on any cloud. Hundreds of the world’s leading enterprises, including Audi, Bud Financial, Capital One, Skypoint, and many more, rely on DataStax.

The company delivers industry-leading vector search, flexible hybrid search, knowledge graph and graph RAG, real-time AI analytics, streaming, pub/sub, and a linearly scalable NoSQL store, available in the cloud (DataStax Astra) or as cloud-native, self-managed software (DataStax Hyper-Converged Database).
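For context on the vector search capability mentioned above, here is a minimal, library-agnostic sketch of what a vector store does under the hood: it ranks stored document embeddings by cosine similarity to a query embedding. The documents and vectors are invented for illustration, and no DataStax API is used:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def vector_search(query_vec, docs, top_k=2):
    """Return the ids of the top_k documents most similar to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]

# Hypothetical (doc_id, embedding) pairs for illustration
docs = [
    ("faq",     [0.9, 0.1, 0.0]),
    ("pricing", [0.1, 0.9, 0.1]),
    ("intro",   [0.8, 0.2, 0.1]),
]
print(vector_search([1.0, 0.0, 0.0], docs))  # -> ['faq', 'intro']
```

Production vector stores replace this linear scan with approximate nearest-neighbor indexes so search stays fast across millions of embeddings.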

The DataStax AI Platform, built with NVIDIA AI, is available for both cloud and self-managed environments, giving enterprises the flexibility to deploy as they prefer. Cloud deployments can leverage Amazon Web Services (AWS), Microsoft Azure, or Google Cloud environments.

Many large enterprises need to run their AI applications in cloud-native self-managed data centers to fully control their technology stack. This holds value for heavily regulated industries like banks, insurance companies, and healthcare companies, which have often had issues with other AI tools that weren’t built for enterprise scale or compliance needs.

Chet Kapoor, Chairman and CEO of DataStax, says: “As companies strive to leverage AI, we’re laser-focused on simplifying and accelerating the path to production to unlock innovation at scale.”

Kapoor also mentions that DataStax AI Platform will change the trajectory of enterprise AI and redefine customer experiences.



MongoDB Announces Redemption of All of Its Outstanding Convertible Senior Notes due 2026



NEW YORK, Oct. 16, 2024 /PRNewswire/ — MongoDB, Inc. (“MongoDB”) (Nasdaq: MDB), the leading, modern general purpose database platform, today announced that it issued a notice of redemption for all $1,149,972,000 aggregate principal amount outstanding of its 0.25% convertible senior notes due 2026 (the “Notes”).  The redemption date will be December 16, 2024.  The redemption price with respect to any redeemed note will equal 100% of the principal amount thereof, plus accrued and unpaid interest, from July 15, 2024, to, but excluding the redemption date.  On the redemption date, the redemption price will become due and payable upon each note to be redeemed and interest thereon will cease to accrue on and after the redemption date.

The notes may be converted by holders at any time before 5:00 p.m. (New York City time) on December 13, 2024 (the “conversion deadline”). The conversion rate for notes converted after today and through the conversion deadline is equal to 4.9260 shares of common stock of MongoDB, par value $0.001 per share (the “Common Stock”), per $1,000 principal amount of the notes, which includes an increase to the conversion rate of 0.1911 shares of Common Stock per $1,000 principal amount of the notes as a result of the notes being called for redemption. MongoDB has elected to settle any conversions of the notes during the redemption period by delivering shares of its Common Stock, together with cash, if applicable, in lieu of delivering any fractional share of Common Stock (physical settlement).
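The conversion economics can be checked with simple arithmetic. A sketch using the figures above, taking the $278.39 opening price quoted earlier in this digest as an assumed share price (the actual value received depends on the share price at the time of conversion):

```python
principal_outstanding = 1_149_972_000  # aggregate principal of the 2026 notes
conversion_rate = 4.9260               # shares per $1,000 principal, incl. redemption uplift
share_price = 278.39                   # assumed share price; price at conversion may differ

# Value of the stock received per $1,000 note if converted
conversion_value = conversion_rate * share_price
print(round(conversion_value, 2))  # -> 1371.35, above the $1,000 par redemption

# Total shares issued if every outstanding note were converted
total_shares = principal_outstanding / 1_000 * conversion_rate
print(f"{total_shares:,.0f}")  # roughly 5.66 million shares
```

Because the conversion value comfortably exceeds the $1,000 redemption price at recent share prices, holders would generally be expected to convert rather than accept the cash redemption.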

About MongoDB

Headquartered in New York, MongoDB’s mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. Built by developers, for developers, MongoDB’s developer data platform is a database with an integrated set of related services that allow development teams to address the growing requirements for today’s wide variety of modern applications, all in a unified and consistent user experience. MongoDB has tens of thousands of customers in over 100 countries. The MongoDB database platform has been downloaded hundreds of millions of times since 2007, and there have been millions of builders trained through MongoDB University courses.

Forward Looking Statements

This press release includes certain “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933, as amended, or the Securities Act, and Section 21E of the Securities Exchange Act of 1934, as amended, including statements concerning the planned redemption of the notes. These forward-looking statements include, but are not limited to, plans, objectives, expectations and intentions and other statements contained in this press release that are not historical facts and statements identified by words such as “anticipate,” “believe,” “continue,” “could,” “estimate,” “expect,” “intend,” “may,” “plan,” “project,” “will,” “would” or the negative or plural of these words or similar expressions or variations. These forward-looking statements reflect our current views about our plans, intentions, expectations, strategies and prospects, which are based on the information currently available to us and on assumptions we have made. Although we believe that our plans, intentions, expectations, strategies and prospects as reflected in or suggested by those forward-looking statements are reasonable, we can give no assurance that the plans, intentions, expectations or strategies will be attained or achieved. Furthermore, actual results may differ materially from those described in the forward-looking statements and are subject to a variety of assumptions, uncertainties, risks and factors that are beyond our control including, without limitation: risks associated with executing the redemption of the notes and events that could impact the terms of the redemption, as well as those described in MongoDB’s filings with the United States Securities and Exchange Commission (“SEC”), including under the caption “Risk Factors” in our Quarterly Report on Form 10-Q for the quarter ended July 31, 2024, filed with the SEC on August 30, 2024, and other filings and reports that we may file from time to time with the SEC. 
Except as required by law, we undertake no duty or obligation to update any forward-looking statements contained in this press release as a result of new information, future events, changes in expectations or otherwise.   

Investor Relations
Brian Denyeau
ICR for MongoDB
646-277-1251
[email protected]

Media Relations
MongoDB
[email protected]

SOURCE MongoDB, Inc.




Powersync Partners with MongoDB to Offer Enterprise-Grade Sync Engine – MarketScreener

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

PowerSync introduced the ability to connect to MongoDB as a source database. The new MongoDB Atlas-PowerSync integration enables customers to securely and efficiently sync data between MongoDB and on-device SQLite databases, empowering developers to deliver robust, data-rich applications with seamless offline capabilities and minimal synchronization overhead. PowerSync is a product of JourneyApps, founded in 2009 and headquartered in Colorado.

JourneyApps originally created and released the core PowerSync engine within its industrial app development platform, where it has been in large-scale production use by a range of Fortune 500 companies for more than a decade, including GE, Halliburton, ExxonMobil and Emerson. Based on customer demand, PowerSync was subsequently made available as a standalone product: a versatile, stack-agnostic sync engine that allows developers to create instantly responsive apps that work seamlessly whether users are online or offline.

MongoDB Atlas is the leading multi-cloud developer data platform that accelerates and simplifies building modern applications, with a highly flexible, performant, and globally distributed operational database at its core. PowerSync’s architecture was designed from the ground up for high scalability and high performance, and provides key functionality such as fine-grained control over which data syncs with which users through the use of sync rules.

Client SDKs are available for a range of environments including web apps (JavaScript), React Native, Flutter, Kotlin Multiplatform and Swift. The PowerSync Service, which acts as a bridge between the backend database, MongoDB, and apps embedding the PowerSync Client SDK, is available both as a cloud-hosted offering as well as self-hosted.



DataStax & NVIDIA launch AI platform to boost accuracy – IT Brief Australia

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

DataStax has announced the launch of the DataStax AI Platform, built in collaboration with NVIDIA AI, designed to expedite AI development and improve accuracy.

The newly introduced platform claims to reduce AI development time by 60% and process AI workloads 19 times faster. It integrates DataStax’s offerings with NVIDIA AI Enterprise software, aiming to simplify the building of AI applications by utilising enterprise data. It also enhances model accuracy through continuous learning as customer interaction increases.

Key components of the platform include the DataStax Langflow platform and several NVIDIA AI Enterprise tools. The DataStax Langflow platform offers an application development environment with a visual interface to simplify complex logic flows.

NVIDIA’s contributions include the NeMo Retriever, which connects custom models to business data for more accurate responses, and blueprints for multimodal PDF data extraction, aiding in the ingestion of unstructured data. The NeMo Curator tool helps create high-quality datasets for pretraining, while the NeMo Customizer and Evaluator simplify fine-tuning and model evaluation. The NeMo Guardrails feature provides programmable safety measures for conversational applications, and the NIM Agent Blueprints offer AI workflows for generative applications.

Sandeep Varma, Head of AI at PhysicsWallah, remarked, “PhysicsWallah is democratising education through GenAI-driven learning experiences for over 20 million students in India. The DataStax AI Platform, built with NVIDIA AI provides a real-time solution for PhysicsWallah to offer personalised, high-quality learning and accessibility at scale. This partnership enables the company to manage a 50x surge in traffic with zero downtime, serving millions of students.”

Chet Kapoor, Chairman & CEO of DataStax, commented, “As companies strive to leverage AI, we’re laser-focused on simplifying and accelerating the path to production to unlock innovation at scale. The DataStax AI Platform, built with NVIDIA AI, provides an end-to-end solution that not only reduces cost but unlocks unmatched speed of development — it makes applications smarter and more accurate as customers use it. We’re excited to deliver a platform that will change the trajectory of enterprise AI and redefine customer experiences.”

Kari Briski, Vice President AI Software at NVIDIA, stated, “Enterprises are harnessing AI to drive digital transformation across industries. The DataStax AI Platform, built with NVIDIA AI, enables companies to create AI-ready databases and rapidly deploy tailored AI applications, unlocking new levels of customer value.”

The platform seeks to resolve challenges in AI application development and accuracy, offering a unified solution to simplify tool complexity and support extensive team workflows. This is designed to address the issue of AI projects often encountering failure or delays in large organisations.

DataStax’s platform also prioritises data diversity, crucial for AI applications, offering a range of solutions including vector search, hybrid search, and NoSQL stores. This versatility is available in either cloud-based or self-managed environments, providing flexibility for enterprises to host their AI solutions as needed.

The platform supports deployment on Amazon Web Services, Microsoft Azure, or Google Cloud, while also catering to businesses requiring more controlled environments, such as those in heavily regulated sectors like finance and healthcare.



Data Storage Stocks Q2 Recap: Benchmarking MongoDB (NASDAQ:MDB) – Barchart.com

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news



Presentation: Stateful Cloud Services at Neon: Navigating Design Decisions and Trade-Offs

MMS Founder
MMS John Spray

Article originally posted on InfoQ. Visit InfoQ

Transcript

Spray: I want to talk about stateful services, because they’re different. The considerations that we apply to deploying a usual application on Kubernetes don’t apply in the same way when you’re deploying something stateful. For me, stateful usually means involving storage. You can have a stateful application which is an in-memory cache or something like that. Typically, when we say stateful, that’s a euphemism for a database, or a file system, or a queue: something that stores data. You need to make a choice of storage tech.

I’ll talk about how to choose what should go in object storage, what shouldn’t. Deploying is different. I’ll talk about how deploying in Kubernetes works when you have to deal with a stateful service. You also have to think more about throughput with a stateful service. A typical storage service has a user expectation that you saturate the disk or saturate the network, and that plays into your choices as well.

Background: Neon

My background is in file systems like Lustre and Ceph. Streaming systems like Redpanda, which is a Kafka equivalent message queue. Most recently, Neon, where we build a serverless database product. This talk isn’t about Neon, but I will use it as an example. This is a very high-level block diagram of what our service is. On the left-hand side, there’s a Postgres database, the open-source database that you know and love. Its writes and reads go through our bespoke storage layer, which is also open source, but a separate piece of software written in Rust.

In fact, two separate pieces of software. We have something called a Safekeeper, which is a distributed, replicated log of incoming writes, and pageserver, which lays out your data and makes it available for read at a per page granularity, from the pageserver to Postgres. Behind all of this is S3. You can think of our storage layer as a mapping between very fine-grained, small, latency sensitive online transaction processing IO, and bigger, high throughput, but higher latency storage in S3.

Physical Trends

I’ll talk a little bit about background to our technology choices. The Economist had a few articles about cloud computing, and I thought this was an interesting juxtaposition of headlines, that users don’t need to think about physical underpinnings of the cloud, but advances in physical storage are what made the cloud possible. The takeaway is that while your users don’t need to think about hardware, you as a developer probably do. Everything we do as software developers sits on top of a hardware development. We’re used to things getting faster every year. Drives get bigger. Drives get faster. What’s interesting is tipping points. That’s what affects our choices of architecture.

The tipping points I’ve seen recently in my world are that the density of servers has become so great that the need to scale out storage systems is much less than it used to be. We used to have to lash together thousands of spinning disks to get a good, high performance file system. Now you can have a 1U server with a petabyte of flash in it, and a couple of 400 gigabit network interfaces on the back. That’s a change to your architecture. The rate of improvement in storage is explosive, and most businesses’ data growth hasn’t kept up with it: even if your data is big, the hardware is probably growing faster. The other tipping point is drive performance itself, in terms of the number of IO operations, that’s 4-kilobyte reads or writes, we can do per terabyte of storage. It’s got to the point where we have spare IOPS.

The little M.2 gumstick in your PlayStation probably has hundreds of thousands of IOPS, which you don’t really need. Even big enterprises with a lot of work to do will often struggle to find true needs for millions of IOPS on a storage product. They might use millions of IOPS due to some inefficiency in an application, but it’s unusual for there to be a true business need for that many IOPS. What do we do with the spare IOPS? If you work for a storage company or a database company, you might do one of several things.

Something like Pure Storage or VAST Data selling hardware appliances, they use this surplus IOPS to do compression, to do dedupe, to get more data onto a given piece of storage. We at Neon use those extra IOPS to give versioning, so we let users store a continuous history of all of their transactions and roll back to an arbitrary point in time. That means that the products built on top of the hardware are also having qualitative changes, that we’re not just getting bigger and faster, we’re getting smarter as well.

Durability

What is durability? What is persistence? It means different things to different people. The obvious example is that RAM is not durable, but a drive is. A drive isn’t really durable either. If I write something to my laptop, I might leave it on the bus, the drive might fail. Usually, we’re talking about multiple drives. Historically, that might have been a RAID array. More often in the cloud, we’re talking about multiple drives, perhaps in the same availability zone with some replication on top of them. For me as a database person, I don’t really think of something as persistent until it goes to multiple AZs, so, ideally, multiple physical buildings, to the point that if you have a fire in your data center, which is a real thing that does happen, that your data would survive that incident.

Finally, cross-region replication, which is the picture of the planet Earth at the bottom, where you go across regions, either for regulatory compliance reasons, or because you want to be closer to your users, or because you are just super paranoid about loss of a region. There’s an economic consequence to that choice of durability. Physical drives at the top of the table have a super low cost per terabyte today. That $30 per terabyte is over a 3-year lifetime of a drive. Those are retail prices. That’s an enterprise grade NVMe drive.

The number is so low that you almost don’t think about it, compared with what you pay your developers or what you pay for your infrastructure. The numbers for cloud services are higher. As you jump from a physical drive to S3, you get more for it. You get region level durability because you’re replicating across AZs, or rather, Amazon’s doing that on your behalf. If you want a cloud instance that has a fast drive in it, the number jumps up even more.

The cost per terabyte of an i3en instance is substantially higher than the cost per terabyte of S3. If you’re going to use that as the basis for a storage system, you had better have a good reason. S3 Express is a new product, but I think it shows the direction things are going in. That’s a version of S3 which is cheaper to use for large numbers of tiny operations. It’s much faster. You can almost use it like it was a disk. The interesting thing I think about the pricing is that it’s very similar to what it costs to just buy a storage instance. You’re not really getting S3 pricing. You’re getting pricing as if it was a drive, but you’re getting an S3 interface on top of it. EBS isn’t that interesting for future architectures, just because it’s so expensive per terabyte and because it’s not durable across AZs.

If you have a fire or an incident in an AZ and it kills your EBS block device, you wonder why you were paying so much money to have it replicated behind the scenes. This isn’t everything. Of course, there are other cloud providers, there are other services within each one. There are things like Glacier. Those, of course, are for colder storage, longer-term storage. What I’m interested in is the storage for online applications and for platform services that run in a cloud native environment. This isn’t just relevant to people like me building these services. This is relevant to you if you’re buying a service like this, if you’re trying to choose between products. It’s relevant to you if you’re operating a service like this, if you’re an SRE. If you want to look critically at a product like the product that I make, you should do it within this cost context.
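To make the comparison concrete, here is a rough 3-year cost-per-terabyte calculation. Every number in it is an assumed list price used for illustration: the $30/TB drive figure from above, an assumed ~$0.023 per GB-month for S3 standard storage, and an assumed i3en-class instance rate. The point is the orders of magnitude, not the exact figures.

```python
# Rough 3-year cost per terabyte under assumed list prices.
# All prices are assumptions for illustration, not quotes.

USD_PER_TB_3YR = {
    "raw NVMe drive": 30.0,                    # retail enterprise drive, 3-year lifetime
    "S3 standard": 0.023 * 1024 * 36,          # assumed ~$0.023/GB-month, 36 months
    "i3en instance storage": 0.45 * 24 * 365 * 3 / 2.5,  # assumed ~$0.45/hr instance with ~2.5 TB NVMe
}

for name, cost in sorted(USD_PER_TB_3YR.items(), key=lambda kv: kv[1]):
    print(f"{name:24s} ${cost:>8,.0f} per TB over 3 years")
```

Under these assumptions the raw drive is cheapest by more than an order of magnitude, S3 sits in the middle, and the storage-optimized instance is the most expensive per terabyte, which is why building primary storage on instance-local NVMe needs a good justification.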

There’s one more aspect of cost that often gets overlooked, and that’s replication egress costs. If you have copies of data in three different data centers, and you’re paying a per gigabyte cost for the traffic between them, it sometimes comes as a surprise that if you saturate a 10-gig network link, the amount you pay for egress can be more than you paid for the servers that you’re renting. That doesn’t mean you shouldn’t ever do it, but it means that when you’re thinking about an architecture or maybe somebody is trying to sell you a product that answers the durability question by replicating everything between AZs, you should always ask, who’s going to pay the egress fees?

A couple of other things to watch out for if you’re building or buying cloud products that use a lot of storage. First, many tiny object writes can add up to AWS bill shock quite easily. The example I use from my streaming days is if you had tens of thousands of streams and you wanted a relatively low recovery point objective for getting that data securely into S3, and you’re writing each one every second. Not every millisecond, every second. It’s modest. S3 will let you do that many writes per second. You would end up with a million dollar plus bill just for your PUTs, not even for the capacity, not for the instances you are running on, just for the PUTs.

You can’t optimize your way out of that. The architecture just has to avoid doing it. The other thing to avoid is a service that just transplants an on-prem product and runs it in a VM in the cloud with an EBS volume attached to it. Unless that’s some super irreplaceable piece of software, the cost of a dedicated instance and attaching a large EBS volume to it is likely to wipe out any profit that you might make building and selling a cloud service, if you’re doing that. That’s great news for your cloud provider. They’ll make money, you might not.
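The PUT-cost arithmetic from the streaming example can be sketched the same way, assuming a rate of $0.005 per 1,000 PUT requests (a typical S3 standard list price; check current pricing for your region):

```python
# Rough monthly S3 request bill for many tiny, frequent writes:
# tens of thousands of streams, each flushed once per second.
# $0.005 per 1,000 PUTs is an assumed list price.

def monthly_put_cost_usd(streams: int, puts_per_stream_per_s: float,
                         usd_per_1000_puts: float = 0.005, days: int = 30) -> float:
    puts = streams * puts_per_stream_per_s * days * 24 * 3600
    return puts / 1000 * usd_per_1000_puts

cost = monthly_put_cost_usd(10_000, 1.0)
print(f"${cost:,.0f}/month")   # about $129,600/month, i.e. $1.5M+ per year
```

At 10,000 streams flushing once a second that is roughly 26 billion PUTs a month, which is how a seemingly modest write pattern turns into a seven-figure annual request bill before you have paid for a single byte of capacity.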

Architecture Trend

This builds into a trend in architectures for stateful systems. I’m starting from before cloud native here, but building up to where we are today. In the 2000s, what I would call a Gen X storage system, we had lots of hard disk drives. We were building huge clusters, things like Hadoop FS, things like Lustre, which I used to work on. It’s not that you needed thousands of drives, it’s just that they were so slow that you had to use that many to get the performance that you wanted. In the 2010s, we had SSDs. They’re much faster. We also started to see people build hybrid systems that could run on-prem or in the cloud, and they would typically have integration with object storage as an option. You could tier to object storage for higher capacity, but your local drives were still your primary storage.

Today, the Gen Z storage systems are using object storage as their primary storage. Yes, they still use SSDs sometimes, but those are like a cache or a buffer, and your primary storage is S3, or your choice of object store. That’s necessary to have a cost optimal design, which, of course, we all care about cost, but especially people building Software as a Service products, who are trying to make a profit on top of their infrastructure costs, care about cost even more. Here’s Neon as an example of these architectural trends. On the left-hand side, we get SQL statements from a user. They’re using us just like a normal Postgres database.

Those are getting translated into streaming writes when the user makes a change, which we write to a triplet of nodes with local NVMe storage. We are taking that egress cost, but only for the initial ingest of the data. It gets written onwards to our pageservers, which use NVMe drives as a cache. These are the two worthwhile uses of local drives in a modern architecture, as a write buffer, where you’re willing to eat the cost in order to get lower latency, and as a cache where you’re translating between small IOs that your application wants, and bigger IOs which you can make to S3. Of course, when I say IO, I mean a PUT or a GET.

Kubernetes (cloud native computing)

What does all this have to do with cloud native computing? I would use the more general definition of cloud native computing, which is, anytime you’re building something to run in the cloud. It doesn’t have to be on Kubernetes. The question will always come up like, do I run these services as part of my Kubernetes deployment? Should they only be platform services that live outside of it? Is it safe to run these in Kubernetes rather than bare metal? I am going to question whether one should use Kubernetes for services like these. Kubernetes has a default bias towards stateless workloads, so you have to make an extra effort to persuade it to do things like pinning a pod to a particular machine with a local drive. You have to make an extra effort to understand what a persistent volume claim is. That kind of thing. Your pods don’t have drives by default.

Some of the operational aspects are also biased toward a stateless workload. Kubernetes is trigger happy when it comes to killing pods. Typical application, if something fails a status check, it’s a good idea to kill it. It’s a stateless pod, it’ll come right back up again. That’s a good thing to do. For a storage system, like a database, a clustered database, or a file system, it is not smart to go around killing unresponsive nodes. You can have a cascading failure. What you ideally want to do if a node is unresponsive, is to talk to your clustered storage system, and say, would you please move some workload away from it? Because your workload isn’t pods, and that’s where the real impedance mismatch comes in between a typical stateful system, which is a distributed system, usually, which has some concept of workload of its own, whether that’s a database or a Kafka partition, or whatever it is, and Kubernetes which thinks in terms of pods.

Where, for a storage system, you’ll often just have one process or one pod, per physical node or per drive. You’re not scheduling huge numbers of pods. You’re scheduling some other workload on top of a smaller number of pods. Kubernetes also has a runtime overhead, which I think doesn’t get talked about enough. It’s not that it’s slow. It’s a good, efficient system that makes good tradeoffs to provide the level of abstraction that it provides. There is an overhead. If you have an application that would otherwise saturate the network link or an application that would otherwise have very low tail latency, and you run it inside GKE, and outside GKE, you can measure the difference. In some cases, I’ve seen it go as high as 10%. That might not be a literal 10% loss of bandwidth, but it might be, you can only run it up to 90% of the capacity before your tail latency starts degrading because you’ve got other things running on the box that are competing for your CPUs.

Again, this is me coming from a background in lower-level systems, but this is absolutely something you should consider when running some stateful service on top of Kubernetes, because this costs you money. If you’ve got a million-dollar AWS bill, 10% is $100,000. That’s potentially an extra member of staff. Measure it, and if it is going to cost you more to run inside Kubernetes, make an informed decision about whether that money is buying you something. Are you getting enough advantage from Kubernetes to justify the overhead?

Kubernetes has a system for running stateful services, it’s called StatefulSet. It works. It does what it says on the tin. It lets you have the equivalent of a ReplicaSet, but with a tighter binding to some persistent volumes, where those persistent volumes might be the drives on physical nodes. It’s not quite as smooth as you might hope. This is just a little screen grab from the official Kubernetes docs for StatefulSet. I think it’s great that they have this level of detail in the documentation. I’m not trying to criticize this.

This is a legitimately hard problem to solve that they have. When using rolling updates with the default policy, it’s possible to get into a broken state that requires manual intervention to repair. Why is that part of the docs rather than being some bug that was fixed years ago in Kubernetes? It’s because it’s difficult to map a system that cannot be torn down and put back up again, because you would lose data, into a pure declarative way of managing a system.

If you’re doing rolling upgrades, if you have some newer nodes, some older nodes, maybe they have different formats of data, you can get into corner cases where the naive version of a rolling upgrade is hard to recover from. Upgrades are the hard part. A system running in a steady state is easy to operate, but a rolling upgrade is harder. Here’s an example with three nodes. Let’s imagine this is a replicated log across three nodes. Maybe it’s running Paxos. Maybe it’s running Raft: well-known consensus algorithms, and I’m restarting the top node. It’s going to go offline for a bit. When it comes back, it’s going to be behind. Assuming there have been some writes in the meantime, it’s got a less than up to date copy of the data. Fine. My system is still online. I restart the node in the bottom left, and now I’ve got one node that has the latest data and another node that doesn’t quite have the latest data.

Depending on the underlying consensus algorithm, this might put your system into an unwritable state, because you might need to have the latest data on two nodes in order to append to that latest data. Not the case for all systems, but the case for some systems, and not an unreasonable requirement for the underlying storage system to have. Why did we shoot forward to restart the second node before the first one had fully replicated? The restart of nodes in a StatefulSet is driven by readiness checks.
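The failure mode above can be modeled in a few lines. This is a toy, not any real consensus implementation; it only captures the rule that an append needs a majority of replicas already holding the latest entry:

```python
# Toy model of the rolling-restart hazard: three replicas, and appending
# requires a majority that holds the longest log. Restarting a second node
# before the first has caught up leaves only one up-to-date replica.

def can_append(log_lengths: list[int]) -> bool:
    """Append is possible only if a majority holds the longest log."""
    latest = max(log_lengths)
    majority = len(log_lengths) // 2 + 1
    return sum(1 for n in log_lengths if n == latest) >= majority

logs = [100, 100, 100]     # healthy cluster
logs[0] = 90               # node A restarts and comes back behind
print(can_append(logs))    # True: B and C still hold the latest entries
logs[1] = 95               # node B restarted before A caught up
print(can_append(logs))    # False: only C has the latest data
```

The generic StatefulSet rolling update has no idea this invariant exists, which is exactly the impedance mismatch being described.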

The readiness checks that you might write are usually, can I serve an IO? Am I up? Can I accept user traffic? It’s not, do I have the latest copy of data in a way that satisfies the requirements of the distributed system of which I am a part? Kubernetes doesn’t have a hook for, “Hello. I’m a member of a distributed system. Please don’t restart me until my cluster says it’s ok.” It can cause an availability blip if you haven’t very carefully written a readiness check that does exactly the right thing.
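A sketch of what a cluster-aware readiness check might look like. The names and threshold here are hypothetical; wire them to whatever replication metrics your system actually exposes:

```python
# Sketch of a readiness check that is aware of the distributed system,
# not just local liveness: report ready only once this replica has caught
# up and the cluster agrees it is safe to proceed with the rollout.

MAX_LAG_BYTES = 1024 * 1024  # assumed threshold: 1 MiB of replication lag

def is_ready(local_lsn: int, cluster_commit_lsn: int,
             cluster_says_safe: bool) -> bool:
    """Ready means more than 'can serve IO': we must also be caught up."""
    caught_up = cluster_commit_lsn - local_lsn <= MAX_LAG_BYTES
    return caught_up and cluster_says_safe

# A plain "am I up?" probe would return True here and let the StatefulSet
# restart the next node too early:
print(is_ready(local_lsn=0, cluster_commit_lsn=10_000_000,
               cluster_says_safe=True))   # False
```

The `cluster_says_safe` input is the hook Kubernetes doesn't give you out of the box: some control-plane component that knows the replication topology has to answer it.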

In the case of node replacement, it can be worse. I’ve seen a real-life incident in which there was data loss on a storage system that was running inside Kubernetes, where the cloud provider required a node replacement to take place. In order to stay on a supported version of Kubernetes, you had to replace the nodes. That included dropping whatever was on local storage on those nodes, which was the customer’s data. It didn’t do it all in one, it did a graceful rolling restart.

In this case, imagine if, when restarting the top node, we didn’t come back with 80% of the data, we came back with an empty disk. Then we proceeded to restart the second node to an empty disk. We restart the third node to an empty disk. Great, you’ve just blown away all your data. That’s not Kubernetes’ fault. It’s possible to configure it in a way that it doesn’t do that, but you must be very careful. You must know exactly what you’re doing.

None of the engineers involved in the incident that I’m thinking of were foolish people. Another thing to look for when building a solution that involves stateful services is make sure the people who understand the distributed system within it are talking to the people who are deploying into Kubernetes. For folks who live and breathe Kubernetes, there can be a perception that the system you’re deploying should just do the right thing. You should be allowed to kill nodes any time, because it’s cattle, not pets. Yes, until you blow away all three copies of your data because you had to do a node replacement, because GKE forced you to.

One option to deal with these problems is use an operator. It’s a good idea. There are operators like Strimzi for Kafka, Rook for Ceph, and other file systems. They, more or less, solve this problem, if someone else has gone out and figured out how do we safely do a rolling restart of these systems in a way that is smarter and more tailored than a generic StatefulSet. If you’re building your own system, you probably don’t have an operator. You might have to build one. That’s extra work. Kubernetes isn’t serving you out of the box. Even if you’re adopting a product that comes with an operator, that’s one more thing for your SREs to learn. Their generic Kubernetes knowledge isn’t going to teach them exactly how to use the operator for a particular stateful service. None of these concerns mean you can’t use Kubernetes, but they should feed into your decision.

This is a summarized matrix of how I make the decision for my team whether we should use Kubernetes for storage. I say, we want multi-cloud portability. Kubernetes offers that. Great. Definite source of value. We want to go run a binary on a node somewhere. Kubernetes has a convenient way of doing that. Great. We want to schedule our tenants, which are our end user databases, onto our pageservers, which are our storage nodes. Kubernetes knows how to schedule pods onto physical nodes, which we don’t need at all because we just have one pod per node.

Kubernetes knows how to run a StatefulSet, but we want to do our upgrades and our operations in a way which is aware of our workload, and does things like migrating things between nodes during an upgrade, which you can do with Kubernetes, but it takes extra effort. The bottom line for us, for our team, is that we don’t have a strong motivation to adopt Kubernetes. We might do so eventually, if it provides sufficient operational convenience for our team, because we have a lot of other services that do run in Kubernetes. It’s not something we would rush, because customer data is the most precious thing we have. It’s something that we should think carefully about before doing. Neon loves Kubernetes. We use it for the vast majority of our services, but our storage services, not right now.

Object Storage in Practice

I talked about the trend toward object storage, and this actually really helps with running stateful services in a cloud native environment. If you are not using replication between local disks, you don’t have that fraught moment of restarting the nodes in a replica group and making sure you don’t lose any data. It’s better if you can just write data to object storage. What does that look like in practice? That’s what our pageserver does. Our incoming writes have a concurrency of about 10,000, so we have 10,000 streams of writes coming into one server. The entries in those streams are about 100 bytes each. We translate those into S3 PUTs, which are up to about 256 megabytes each, and we write perhaps between 1 and 10 objects per second.
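To make the shape of that write path concrete, here is a minimal, hypothetical sketch of the batching idea in Python. The `UploadBuffer` class is an invented name, not Neon’s actual code, and the 1 KiB threshold is a toy stand-in for the ~256 MB production figure; the point is only how thousands of tiny entries become a handful of large PUTs.

```python
class UploadBuffer:
    """Coalesce many tiny log entries into a few large objects.

    Illustrative only: a real pageserver flushes on size *and* time,
    and writes structured layer files, not raw concatenations.
    """

    def __init__(self, flush_threshold, put_object):
        self.flush_threshold = flush_threshold  # bytes per object (~256 MB in production)
        self.put_object = put_object            # callable standing in for an S3 PUT
        self.pending = bytearray()
        self.objects_written = 0

    def append(self, entry: bytes):
        self.pending.extend(entry)
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        if self.pending:
            self.put_object(bytes(self.pending))
            self.objects_written += 1
            self.pending.clear()


# Demo with toy numbers: 100-byte entries, 1 KiB objects.
objects = []
buf = UploadBuffer(flush_threshold=1024, put_object=objects.append)
for _ in range(100):          # 100 entries * 100 bytes = 10,000 bytes total
    buf.append(b"x" * 100)
buf.flush()                   # flush the tail
print(len(objects))           # 10 objects instead of 100 PUTs
```

The design choice being illustrated: the per-request cost of S3 makes many small PUTs expensive, so the middle layer trades a little buffering latency for far fewer, far larger objects.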

On the read side, we’re serving sub-millisecond reads based on what’s on local storage, and promoting layers from S3 to local storage, on-demand. There is a whole discipline around caching that I’m glossing over here, and figuring out how to decide what sits on local disk and what’s safe to offload to S3.
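The on-demand promotion described above behaves roughly like a bounded LRU cache over S3. This is a deliberately simplified sketch, with an invented `LayerCache` class and a `fetch` callable standing in for an S3 GET; real eviction policies weigh much more than recency.

```python
from collections import OrderedDict

class LayerCache:
    """Local NVMe as a bounded cache over S3 (sketch, not Neon's real code)."""

    def __init__(self, capacity, fetch):
        self.capacity = capacity      # finite local disk, in cached layers
        self.fetch = fetch            # stand-in for an S3 GET
        self.cache = OrderedDict()    # key -> bytes, coldest first
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)        # mark as recently used
            return self.cache[key]
        self.misses += 1
        value = self.fetch(key)                # promote from S3 on demand
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict the coldest layer
        return value


s3_gets = []
def fetch(key):
    s3_gets.append(key)
    return f"layer-{key}".encode()

cache = LayerCache(capacity=2, fetch=fetch)
cache.get("A"); cache.get("B"); cache.get("A")   # A and B cached, A is hotter
cache.get("C")                                    # disk full: evicts B, the coldest
print(sorted(cache.cache))                        # ['A', 'C']
```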

Fundamentally, that’s what it means to write a service like this. You can have an application that just consumes S3 directly, that’s fine, especially for analytics workloads. If you’re using something like DuckDB, which is a fantastic piece of software, you don’t have to have an online transaction processing database in front of your S3. If your application does require that, and almost all businesses have at least some applications that do require a fast OLTP database like Postgres, and you want the cost efficiency of S3 storage, you end up wanting some component like this. My claim is that, this isn’t just what we happen to do. This is a reusable pattern that I see more products using in order to deliver cost effective stateful services in the cloud.

In this model, local disk is just a cache, and that avoids all of the sleepless nights that come with orchestrating a restart of a bunch of nodes. Of course, it also keeps you on your cloud native train of treating nodes as cattle rather than pets. It’s not as simple as that. The story doesn’t end there. If you’re building a system like this, or again, buying a system like this, you have to bear in mind that while S3 might be practically infinite, the nodes you’re using as caches aren’t.

In between the user and S3, I have these physical pageservers, and they are real-life computers with real-life finite sizes, so you need some mechanism to distribute that caching work across multiple physical nodes in order to let the user take advantage of the capacity they expect. This isn’t just for systems that are explicitly built for very large capacities. This is really a table stakes thing. Your users are used to S3. They’re used to being able to store as much data as they want.

If you have a size limit, it’s perceived as a bug, so you have to solve that problem. The other problem you have to solve to meet expectations is that while your node’s disk might be disposable and ephemeral, because it’s just a cache, a sufficiently cold start is perceived by users as an outage, and rightly so. It is a de facto outage if they have tens of seconds or a minute or something like that where they can’t read their data. Avoiding a cold start on a system that keeps an 8-terabyte disk full of cached data requires more than just writing reasonably fast code. You have to design it into your system. You have to have some concept of keeping a secondary location warm so that if something happens to one of your nodes, you can point your users at the other one.

Case Study: Neon Pageserver

Again, to use us as a case study, we started out with a prototype pageserver, which I would very much call a pet. It was principally a local storage system that knew how to store this Postgres-aware data structure, which is a little bit like a modified LSM tree with history. It tiered to S3 but the S3 was an afterthought. That was my 2010s storage system example. One tenant lived on exactly one pageserver, which meant that large tenants were out of luck if they wanted to scale beyond a certain point. What we had to do to get this ready for production, for a generally available product, is turn it into cattle, which means S3 is the primary storage. Local disk is just a cache. Node death is handled by cutting over to a warm secondary cache. This is not as hard as building a true replicated distributed system, because you don’t have any strict consistency guarantee requirements.

You just have to have another node that has approximately the same set of objects from S3 cached on local disk. We share the data from one tenant across many of these cache nodes. That’s not just necessary in the case of super large tenants, which are bigger than a single physical disk, it’s also necessary to balance load across machines. Users expect to get more than their fair share of hardware. They expect bursting. If I actually divided the number of IOPS of all my disks by my users, they would have a very disappointing level of performance. It’s essential that they can burst. In order to enable that, we have to spread the users out in a somewhat statistically uniform way, and that means sharing out the data. While S3 is one big monolithic store, the way we provide a gateway to it isn’t.
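The sharing-out of one tenant’s data across cache nodes can be illustrated with a toy hash-based placement function. This is invented for the example; a real scheduler like Neon’s also considers load and affinity, not just a hash.

```python
import hashlib

def place_shard(tenant_id: str, shard: int, pageservers: list) -> str:
    # Hypothetical placement: hash (tenant, shard) onto a pageserver.
    # Deterministic, so every component agrees where a shard lives.
    digest = hashlib.sha256(f"{tenant_id}:{shard}".encode()).digest()
    return pageservers[int.from_bytes(digest[:8], "big") % len(pageservers)]

nodes = ["ps-1", "ps-2", "ps-3", "ps-4"]
# One tenant's 16 shards end up spread across several nodes,
# which is what lets a single tenant burst beyond one disk's IOPS.
placements = {place_shard("tenant-a", s, nodes) for s in range(16)}
print(placements)
```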

The same thing really has to be true of any system which implements this model. It’s the right model to get the right level of storage efficiency from a cost point of view, but you have to look for answers to the questions, how do you handle a node failure? What happens when a node goes offline? Do I have a cold cache? If I have a cold cache, how long does it take to refill it? Is it minutes? Is it hours? If you’re trying to refill a modern drive, you’re probably talking about hours, because although drives are very fast, it still takes a long time to refill them. You should also ask, what happens as I scale? In theory, my storage is infinite because it goes in S3, but is it really scalable once you factor in whatever frontend service you’re using on top of S3?

S3 is also shared storage, which is a term that has almost fallen out of use. In the days of storage area networks and physical data centers, we had what we called dual-ported hard drives, where you really could have two servers writing to the same disk at the same time. You had all kinds of interesting mechanisms for solving that problem and avoiding corruption if two servers wrote to the same piece of storage at the same time. We forgot about this for a while, because in cloud native environments, we typically had local disk or we were using some higher-level managed storage service. The problem comes back when you build a scalable system on top of S3.

Briefly, the multi-writer problem typically occurs where you have a node fail, but only fail from the point of view of the rest of your system. Let’s say it’s failing at status checks. You’ve scheduled a replacement node for it, but this zombie pod, I’m saying node, but it’s synonymous with pod, if you imagine one pod per node in a storage context. This zombie server is still physically running. Nothing has cut it off from the network. Again, in traditional infrastructure, you might have what’s called a fencing mechanism to cut off a node from the network or cut its power, something like that. You don’t have that in Kubernetes. You often don’t have that in the cloud in general.

This failed node can rise from the dead. Let’s say it comes out of some pathological driver bug that made it pause for 20 seconds, long enough to get a replacement, comes back and does a write to S3. If these two nodes were responsible for keeping the same index object up to date in S3, this could corrupt your data, or at the very least make it unavailable while you unpacked all this and figured it out. Again, real example. I’ve seen this happen in a production system where somebody had made an over-optimistic assumption that if a node wasn’t responding to heartbeats, that meant it was definitely dead.

Fortunately, there’s a pretty general solution, which is to cheat and rely on there being some other component in your system that can hand out a monotonic number, which you might call a term or a generation number, and you include it in your object names. In Neon, for instance, you might have a index.23 and an index.24, and when our software comes up and figures out which one’s true, we just take them on with the highest number.

I’m glossing over a lot of detail in the arguments for why this is correct, but the key points are, it’s necessary. If somebody is presenting you a design that has failover in nodes that write to S3 and they’re not giving you an answer to how they solve this problem, you should probably probe that. Secondly, you don’t have to be a full-on distributed system to do this. I’ve seen this done in systems that had like Raft, or etcd equivalent built into them, and they would use that for generating numbers.

I’ve also seen it done, and actually the way we do it at Neon is just with a tiny relational database sitting off to the side, and the ACID semantics of that database are sufficient to hand out these generation numbers that we use to make multi-writers safe. We don’t usually aim to have multiple nodes writing to the same object at the same time. At scale, we have to solve that problem, because a one in a million race is not rare. We have hundreds of thousands of tenants, so we have to solve that problem.

What’s Inside the Objects?

I’ve talked about objects in a very general sense, it’s worth talking about what’s actually in those objects. You might start out with systems that write JSON or some other format. Parquet, Arrow are columnar data formats which do columnar compression. They’re very efficient. They’re very good. They’re primarily used in OLAP, analytics processing at the moment, but that doesn’t mean they don’t have applications elsewhere. If you’re building a storage system that has a lot of objects containing your data, and you have indices that point to that, that index could probably be a Parquet file. We’re increasingly seeing use of Parquet similar formats outside of the analytics space, outside of the data science space.

You don’t have to be writing a data lake for this to be the right choice of format for your data. Iceberg is a newer technology that is typically used on top of parquet or another similar format, and it basically provides an indexing layer for multiple files that you can query using traditional SQL. Again, it’s an OLAP technology. It’s not a replacement for a traditional OLTP database, but it’s something that you should think about. If you have a developer on your team that’s saying, I’ve got this great idea, if we’re going to put our data in S3 we’re going to have an index. It’s worth saying, have you thought about using Iceberg? Because it’s a pretty neat, packaged, generic way of doing that stuff.

Which Storage Technology?

For your technology choice, you should be looking at using object storage as your primary storage. Or if you’re analyzing a product that you’re thinking about using, you should be asking how it uses object storage. It’s generally the right thing for cost. We talked about egress fees across AZs, and remember, Amazon doesn’t charge itself those egress fees. Building something yourself that does replication across AZs is generally going to be less cost effective than using a service that your cloud provider provides.

In the runtime environment, think carefully before adopting StatefulSets. Be careful how you write a readiness check. Test it. I’m not claiming that there is a really compelling alternative to Kubernetes other than bare metal. Running on bare metal to begin with is a good conservative starting point. I would take the guidelines about places to look for risk in Kubernetes as steps on your journey to moving your stateful services into Kubernetes, rather than as a reason not to do it.

Questions and Answers

Participant 1: If I understood the product of Neon correctly, you’re running Postgres with an S3 backend. Why would I do that? What’s the reason? What problem does it solve?

Spray: The reason for using a disaggregated storage backend, whether it’s S3 or something else, is so that you don’t have to decide at the time you provision your database how big it’s going to be. When somebody creates a database on our platform, which takes under a second, we’re spinning up a tiny VM which has access to a big shared pool of storage. As they write more data into that, there is no step at which they have to say, I want to switch instance types or I want to attach a different disk to it to accommodate more storage. It’s about making it dynamically scale and providing a serverless experience to the user. We have a pretty fast cold start on the compute side as well.

By default, we’ll shut off the Postgres database after about 5 minutes. You can disable that if you don’t want it. Then, again, we’ll do a sub-second cold start to Postgres later. A big part of enabling that is that you don’t have a local disk cache that has to be warmed up on Postgres. It’s all sat there as part of our storage backend. It also enables us to do things that Postgres can’t. We have a full history of all the transactions that somebody writes within a set time window, such as 30 days. If you drop a table that you regret dropping, you don’t have to roll back to last night’s snapshot. You can go back to precisely the point before you did that drop and recover to that point. That could be built inside of Postgres as a feature.

The reason for building it in our backend is so that we can couple it with the serverless experience and the autoscaling that comes with that. Again, our backend services are open source as well.

Participant 2: Is the cost saving primarily from the fact that you have the middle layer, like the NVMe disk batching up the large amounts of tiny writes?

Spray: That’s the necessary element that you’ll see in most modern products like this. They act as some type of conversion between the low latency that the user needs and the fine granularity that the user needs, and the much coarser granularity that we need to send into S3 to have a cost-effective storage story. It’s a little bit like, to use a more traditional example, you used to have systems that would buffer things up before issuing a big write to a hard disk drive. You might have a disk controller that has a battery backed cache and builds up a load of stuff and writes it to a slower but cheaper piece of storage.

Participant 3: When you write to Neon, is the operation complete when the data is in the Safekeeper, which means on a read we need to merge data from the Safekeeper and from the pageserver, or is the write complete when it reaches the pageserver, or basically S3? If the latter, it means that the latency is basically no less than the latency of S3.

Spray: That’s exactly right. If we waited for the object to hit S3 then we wouldn’t be providing an improvement to the user. The ACK to the user, the completion of the transaction happens when the data is on two out of three of these servers. That’s a sub-millisecond latency. It’s the time it takes to go to one Safekeeper, hop to another AZ, and hit two NVMe writes, and then for the ACK to make it back to the Postgres.

How does a read work if we’re ACKing as soon as it’s hit here, but we’re not waiting for it to hit the part where we serve reads from?

The answer to that lies in Postgres, which has its own page cache. Even if you’re running vanilla Postgres on a local hard drive, it has the concept of a cache of recently written pages, and we benefit from that as well. The latency for something to be ingested from the Safekeeper to the pageserver is something like low seconds usually.

In principle, it’s sub-second, but we don’t monitor for that. It’s like a few seconds. Within that few seconds, it’s highly likely that that page that was just written is still in memory on Postgres. It’s very rare for our storage backend to see a read of a page which was just written a few milliseconds ago. That’s just not how Postgres behaves. That’s something I’m very grateful for as a storage person. It provides a great deal of smoothing to the user experience that we have the excellent caching that’s built into standard open-source Postgres.

Participant 4: Do you think there’s a gap in the market for a stateful based orchestration model or bare metal is just fine?

Spray: I’ve asked myself, what would good look like? What tool would I want? I don’t have a great answer for that. I think you absolutely could write something that would be perfect for systems like this. I think it would struggle commercially. I’m not sure I see a market for that, given that most products like these have already built what they need. I think it would be super interesting to have an opinionated Kubernetes derivative that knew how to provision onto bare metal and had an even more conservative version of StatefulSet in a canned way. A lot of the issues that I see with teams adopting Kubernetes for stateful services are not because it fundamentally can’t do it, but because you have to know how to configure it just the right way.

It could be that just an opinionated version of Kubernetes combined with a performance-oriented way of provisioning would be a super useful tool. The other thing that I would love is a more generic scheduler. Within our product, we have a scheduler which is somewhat similar to what is going on inside Kubernetes, but we’re not scheduling pods onto nodes. We’re scheduling tenants onto pageservers. I think that maybe there is scope for a library-ized version of a high-quality scheduler with things like soft constraints. Clearly, the code exists. I’m not saying no one’s ever written that, but making it valuable across all of these different projects doing similar things is the challenge.

Participant 5: Regarding the problems with the StatefulSet, could this be solved by using other storage engines outside of local storage? Because, for example, with NVMe over Fabrics, or the older iSCSI, like, it can be very fast even with latency with NVMe over Fabrics, so storage will not be local storage, and it removes the problem if you change the node.

Spray: The reason that I don’t recommend that is cost. If you’re building your own infrastructure, if you’re on bare metal in your own data center, then that can be a good way of doing it. If you’re paying list price on a mainstream cloud provider, then you hit this EBS number. If you’re using a replicated system, you potentially end up paying twice, because there’s built in replication in your shared storage backend, but also you’re running multiple replicas.

Participant 5: Also, there are some projects for storage on Kubernetes, such as Longhorn, that are planned to be used by SUSE and Red Hat solutions, so OpenShift and currently Harvester. I think it can also be very interesting to look at, because it’s about sharing local storage and making replication inside a Kubernetes node. If a node goes out, the operator, I think, replicates the data into other local storage. Also, for Postgres, there is a tool like CloudNativePG, which creates its own resource. It’s not a StatefulSet. It’s custom resources for the purpose of handling database problems.

Spray: You’ve just called out a couple of really good projects. The buzzword for sharing local storage and doing that under the hood is hyperconverged infrastructure. Back in 10 and a bit years ago, that’s the problem I was working on solving with Ceph. It’s a super hard problem to get right, to figure out how to share the resources on a node, between what’s running on that node and the work you want to do for your peers. I don’t want to labor this too much because it’s a very opinionated take.

There are financial aspects to it as well, that you can buy storage more cheaply per terabyte from somebody who sells you a storage solution. EMC doesn’t pay the same for drives as we pay for drives. Amazon doesn’t pay the same for drives as we pay for drives. If you build this type of storage infrastructure out of retail servers, it’s hard to make it cost competitive. That’s not a disadvantage to the software project you’re pointing out. It’s more just an observation that I see limited adoption of that at scale for financial reasons.




Data Storage Stocks Q2 Recap: Benchmarking MongoDB (NASDAQ:MDB) – Yahoo Finance



Wrapping up Q2 earnings, we look at the numbers and key takeaways for the data storage stocks, including MongoDB (NASDAQ:MDB) and its peers.

Data is the lifeblood of the internet and software in general, and the amount of data created is accelerating. As a result, the importance of storing the data in scalable and efficient formats continues to rise, especially as its diversity and associated use cases expand from analyzing simple, structured datasets to high-scale processing of unstructured data such as images, audio, and video.

The 5 data storage stocks we track reported a strong Q2. As a group, revenues beat analysts’ consensus estimates by 2.5% while next quarter’s revenue guidance was 1.3% above.

Inflation progressed towards the Fed’s 2% goal recently, leading the Fed to reduce its policy rate by 50bps (0.5%) in September 2024. This is the first cut in four years. While CPI (inflation) readings have been supportive lately, employment measures have bordered on worrisome. The markets will be debating whether this rate cut’s timing (and more potential ones in 2024 and 2025) is ideal for supporting the economy or a bit too late for a macro environment that has already cooled too much.

Luckily, data storage stocks have performed well with share prices up 11% on average since the latest earnings results.

MongoDB (NASDAQ:MDB)

Started in 2007 by the team behind Google’s ad platform, DoubleClick, MongoDB offers database-as-a-service that helps companies store large volumes of semi-structured data.

MongoDB reported revenues of $478.1 million, up 12.8% year on year. This print exceeded analysts’ expectations by 3%. Overall, it was a very strong quarter for the company with an impressive beat of analysts’ billings estimates and full-year revenue guidance exceeding analysts’ expectations.

“MongoDB delivered healthy second quarter results, highlighted by strong new workload acquisition and better-than-expected Atlas consumption trends. Our continued success in winning new workloads demonstrates the critical role MongoDB’s platform plays in modern application development,” said Dev Ittycheria, President and Chief Executive Officer of MongoDB.

MongoDB Total Revenue

MongoDB delivered the slowest revenue growth of the whole group. The company added 52 enterprise customers paying more than $100,000 annually to reach a total of 2,189. Interestingly, the stock is up 12.7% since reporting and currently trades at $277.

Is now the time to buy MongoDB? Access our full analysis of the earnings results here, it’s free.

Best Q2: Commvault Systems (NASDAQ:CVLT)

Originally formed in 1988 as part of Bell Labs, Commvault (NASDAQ: CVLT) provides enterprise software used for data backup and recovery, cloud and infrastructure management, retention, and compliance.

Commvault Systems reported revenues of $224.7 million, up 13.4% year on year, outperforming analysts’ expectations by 4.2%. The business had an exceptional quarter with an impressive beat of analysts’ billings estimates and full-year revenue guidance exceeding analysts’ expectations.

Commvault Systems Total Revenue

Commvault Systems achieved the biggest analyst estimates beat among its peers. The market seems happy with the results as the stock is up 18.6% since reporting. It currently trades at $146.23.

Is now the time to buy Commvault Systems? Access our full analysis of the earnings results here, it’s free.

Weakest Q2: Snowflake (NYSE:SNOW)

Founded in 2013 by three French engineers who spent decades working for Oracle, Snowflake (NYSE:SNOW) provides a data warehouse-as-a-service in the cloud that allows companies to store large amounts of data and analyze it in real time.

Snowflake reported revenues of $868.8 million, up 28.9% year on year, exceeding analysts’ expectations by 2.1%. Still, it was a slower quarter as it posted a miss of analysts’ billings estimates.

As expected, the stock is down 11.6% since the results and currently trades at $119.45.

Read our full analysis of Snowflake’s results here.

Couchbase (NASDAQ:BASE)

Formed in 2011 with the merger of Membase and CouchOne, Couchbase (NASDAQ:BASE) is a database-as-a-service platform that allows enterprises to store large volumes of semi-structured data.

Couchbase reported revenues of $51.59 million, up 19.6% year on year. This number was in line with analysts’ expectations. Zooming out, it was a mixed quarter as it also produced full-year revenue guidance exceeding analysts’ expectations but a miss of analysts’ billings estimates.

Couchbase had the weakest performance against analyst estimates and weakest full-year guidance update among its peers. The stock is down 14.4% since reporting and currently trades at $16.25.

Read our full, actionable report on Couchbase here, it’s free.

DigitalOcean (NYSE:DOCN)

Started by brothers Ben and Moisey Uretsky, DigitalOcean (NYSE: DOCN) provides a simple, low-cost platform that allows developers and small and medium-sized businesses to host applications and data in the cloud.

DigitalOcean reported revenues of $192.5 million, up 13.3% year on year. This number surpassed analysts’ expectations by 2%. Overall, it was a very strong quarter as it also recorded full-year revenue guidance exceeding analysts’ expectations and a solid beat of analysts’ ARR (annual recurring revenue) estimates.

DigitalOcean achieved the highest full-year guidance raise among its peers. The stock is up 49.7% since reporting and currently trades at $43.55.

Read our full, actionable report on DigitalOcean here, it’s free.


