JDK 24 and JDK 25: What We Know So Far

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

JDK 24, the third non-LTS release since JDK 21, has reached its first release candidate phase, as declared by Mark Reinhold, Chief Architect, Java Platform Group at Oracle. The main-line source repository, forked to the JDK stabilization repository in early December 2024 (Rampdown Phase One), defines the feature set for JDK 24. Critical bugs, such as regressions or serious functionality issues, may still be addressed, but must be approved via the Fix-Request process. As per the release schedule, JDK 24 will be formally released on March 18, 2025.

The final set of 24 new features, in the form of JEPs, can be separated into five (5) categories: Core Java Library, Java Language Specification, Security Library, HotSpot and Java Tools.

Seven (7) of these new features are categorized under Core Java Library:

Four (4) of these new features are categorized under Java Language Specification:

Four (4) of these new features are categorized under Security Library:

Eight (8) of these new features are categorized under HotSpot:

And finally, one (1) of these new features is categorized under Java Tools:

We examine some of these new features and include where they fall under the auspices of the major Java projects – Amber, Loom, Panama, Valhalla and Leyden – designed to incubate a series of components for eventual inclusion in the JDK through a curated merge.

Project Amber

JEP 495, Simple Source Files and Instance Main Methods (Fourth Preview), proposes a fourth preview without change (except for a second name change), after three previous rounds of preview, namely: JEP 477, Implicitly Declared Classes and Instance Main Methods (Third Preview), delivered in JDK 23; JEP 463, Implicitly Declared Classes and Instance Main Methods (Second Preview), delivered in JDK 22; and JEP 445, Unnamed Classes and Instance Main Methods (Preview), delivered in JDK 21. This feature aims to “evolve the Java language so that students can write their first programs without needing to understand language features designed for large programs.” This JEP builds on the September 2022 blog post, Paving the on-ramp, by Brian Goetz, Java language architect at Oracle. Gavin Bierman, consulting member of technical staff at Oracle, has published the first draft of the specification document for review by the Java community. More details on JEP 445 may be found in this InfoQ news story.
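
To give a flavor of the feature, here is a minimal sketch of a simple source file as it might look under this preview (the file name HelloWorld.java is arbitrary, and the feature must be enabled with --enable-preview):

```java
// HelloWorld.java - no class declaration, no String[] args, no static modifier
void main() {
    // println is one of the static methods implicitly imported from java.io.IO
    // in the earlier previews of this feature
    println("Hello, World!");
}
```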

Project Loom

Formerly known as Extent-Local Variables (Incubator), JEP 487, Scoped Values (Fourth Preview), proposes a fourth preview, with one change, in order to gain additional experience and feedback from one round of incubation and three rounds of preview, namely: JEP 481, Scoped Values (Third Preview), delivered in JDK 23; JEP 464, Scoped Values (Second Preview), delivered in JDK 22; JEP 446, Scoped Values (Preview), delivered in JDK 21; and JEP 429, Scoped Values (Incubator), delivered in JDK 20. This feature enables the sharing of immutable data within and across threads. This is preferred to thread-local variables, especially when using large numbers of virtual threads. The only change from the previous preview is the removal of the callWhere() and runWhere() methods from the ScopedValue class, leaving the API fluent. The ability to use one or more bound scoped values is accomplished via the call() and run() methods defined in the ScopedValue.Carrier class.
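
A minimal sketch of how the fluent API reads after this change (the RequestHandler class and REQUEST_ID value below are illustrative, and the feature still requires --enable-preview):

```java
public class RequestHandler {
    // A scoped value shares immutable data within a bounded execution scope
    private static final ScopedValue<String> REQUEST_ID = ScopedValue.newInstance();

    void handle(String requestId) {
        // where(...) returns a ScopedValue.Carrier; run(...) executes the task with
        // the binding in place (call(...) would be used to return a result instead)
        ScopedValue.where(REQUEST_ID, requestId).run(this::process);
    }

    void process() {
        // Any code invoked within the scope can read the bound value
        System.out.println("Processing request " + REQUEST_ID.get());
    }
}
```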

Project Panama

After its review had concluded, JEP 489, Vector API (Ninth Incubator), proposes to incorporate enhancements in response to feedback from the previous eight rounds of incubation, namely: JEP 469, Vector API (Eighth Incubator), delivered in JDK 23; JEP 460, Vector API (Seventh Incubator), delivered in JDK 22; JEP 448, Vector API (Sixth Incubator), delivered in JDK 21; JEP 438, Vector API (Fifth Incubator), delivered in JDK 20; JEP 426, Vector API (Fourth Incubator), delivered in JDK 19; JEP 417, Vector API (Third Incubator), delivered in JDK 18; JEP 414, Vector API (Second Incubator), delivered in JDK 17; and JEP 338, Vector API (Incubator), delivered as an incubator module in JDK 16. Originally slated to be a re-incubation that reused the original Incubator status, it was ultimately decided to continue enumerating each round. The Vector API will continue to incubate until the necessary features of Project Valhalla become available as preview features. At that time, the Vector API team will adapt the Vector API and its implementation to use them, and will promote the Vector API from Incubation to Preview.
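
For readers unfamiliar with the API, a typical vectorized loop looks roughly like this (unchanged in spirit across the incubations; it requires the --add-modules jdk.incubator.vector flag):

```java
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class VectorExample {
    private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    // Computes result[i] = a[i] * a[i] + b[i] * b[i], several lanes at a time
    static void sumOfSquares(float[] a, float[] b, float[] result) {
        int i = 0;
        int upperBound = SPECIES.loopBound(a.length);
        for (; i < upperBound; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            va.mul(va).add(vb.mul(vb)).intoArray(result, i);
        }
        for (; i < a.length; i++) {   // scalar tail loop for the remaining elements
            result[i] = a[i] * a[i] + b[i] * b[i];
        }
    }
}
```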

Project Leyden

JEP 483, Ahead-of-Time Class Loading & Linking, proposes to “improve startup time by making the classes of an application instantly available, in a loaded and linked state, when the HotSpot Java Virtual Machine starts.” This may be achieved by monitoring the application during one run and storing the loaded and linked forms of all classes in a cache for use in subsequent runs. This feature will lay a foundation for future improvements to both startup and warmup time.
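
The JEP describes a three-step workflow: a training run that records which classes are loaded and linked, a step that assembles the cache, and production runs that consume it. Roughly (the application and file names below are illustrative):

```
# 1. Training run: record the AOT configuration
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf -cp app.jar com.example.App

# 2. Assemble the AOT cache from the recorded configuration
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf -XX:AOTCache=app.aot -cp app.jar

# 3. Production run: start with classes already loaded and linked from the cache
java -XX:AOTCache=app.aot -cp app.jar com.example.App
```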

Security Library

JEP 497, Quantum-Resistant Module-Lattice-Based Digital Signature Algorithm, proposes to “enhance the security of Java applications by providing an implementation of the quantum-resistant Module-Lattice-Based Digital Signature Algorithm (ML-DSA)” as standardized by FIPS 204. This will be accomplished by providing ML-DSA implementations of the standard KeyPairGenerator, Signature and KeyFactory APIs.
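
A hedged sketch of how this could look through the standard JCA APIs once ML-DSA is available (the algorithm name follows the JEP; parameter-set-specific names such as "ML-DSA-65" may also be supported):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class MlDsaSketch {
    public static void main(String[] args) throws Exception {
        // Generate an ML-DSA key pair (FIPS 204)
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("ML-DSA");
        KeyPair keyPair = kpg.generateKeyPair();

        byte[] message = "quantum-resistant signatures".getBytes(StandardCharsets.UTF_8);

        // Sign with the private key
        Signature signer = Signature.getInstance("ML-DSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(message);
        byte[] signature = signer.sign();

        // Verify with the public key
        Signature verifier = Signature.getInstance("ML-DSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(message);
        System.out.println("Signature valid: " + verifier.verify(signature));
    }
}
```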

HotSpot

JEP 450, Compact Object Headers (Experimental), inspired by Project Lilliput, proposes to “reduce the size of object headers in the HotSpot JVM from between 96 and 128 bits down to 64 bits on 64-bit architectures.” This feature is disabled by default as it is considered experimental and may cause unintended consequences if enabled by a developer. More details on JEP 450 may be found in this InfoQ news story.
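
For those who want to experiment, the JEP gates the feature behind an experimental flag, so enabling it looks roughly like this:

```
java -XX:+UnlockExperimentalVMOptions -XX:+UseCompactObjectHeaders -jar app.jar
```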

JEP 404, Generational Shenandoah (Experimental), proposes to provide an experimental generational mode, without breaking the non-generational Shenandoah garbage collector, with the intent to make the generational mode the default in a future JDK release. Originally targeted for JDK 21, JEP 404 was officially removed from the final feature set due to the “risks identified during the review process and the lack of time available to perform the thorough review that such a large contribution of code requires.” At the time, the Shenandoah team decided to target a future JDK release to “deliver the best Generational Shenandoah that they can.”
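
As with other experimental collector modes, generational Shenandoah is opt-in; per the JEP, enabling it looks roughly like this:

```
java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC -XX:ShenandoahGCMode=generational -jar app.jar
```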

JDK 25

JDK 25 is scheduled for a GA release in September 2025, and there are no JEPs targeted for it at this time. However, based on a number of JEP candidates and drafts, especially those that have been submitted, we can surmise which additional JEPs have the potential to be included in JDK 25.

In an email to the Java community, Gavin Bierman, Consulting Member of Technical Staff at Oracle, has announced his intention to finalize the so-called “on-ramp” feature from JEP 495, Simple Source Files and Instance Main Methods (Fourth Preview), with the release of JDK 25 in September 2025. As of yet, no drafts have been created, but we expect this to change soon.

JEP 502, Stable Values (Preview), formerly known as Computed Constants (Preview), introduces the concept of stable values: immutable value holders that are initialized at most once. This offers the performance and safety benefits of final fields, while offering greater flexibility as to the timing of initialization.
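
A minimal sketch, assuming the API shape described in the JEP, where StableValue.of() declares an unset holder and orElseSet() initializes it lazily (the Logger usage is purely illustrative):

```java
import java.util.logging.Logger;

public class OrderService {
    // Declared like a final field, but its content is set lazily, at most once
    private final StableValue<Logger> logger = StableValue.of();

    Logger logger() {
        // First call initializes the stable value; later calls return the same instance
        return logger.orElseSet(() -> Logger.getLogger(OrderService.class.getName()));
    }
}
```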

JEP Draft 8340343, Structured Concurrency (Fifth Preview), proposes a fifth preview, with several API changes, to gain more feedback from the previous four rounds of preview, namely: JEP 499, Structured Concurrency (Fourth Preview), to be delivered in the upcoming GA release of JDK 24; JEP 480, Structured Concurrency (Third Preview), delivered in JDK 23; JEP 462, Structured Concurrency (Second Preview), delivered in JDK 22; and JEP 453, Structured Concurrency (Preview), delivered in JDK 21. This feature simplifies concurrent programming by introducing structured concurrency to “treat groups of related tasks running in different threads as a single unit of work, thereby streamlining error handling and cancellation, improving reliability, and enhancing observability.” One of the proposed API changes is that a StructuredTaskScope will now be opened via static factory methods rather than public constructors.
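
A sketch of what that change might look like, assuming the draft's static factory method StructuredTaskScope.open() and the fork/join flow of the earlier previews (details may well change before the preview ships):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.StructuredTaskScope;

public class StructuredSketch {
    String handle() throws Exception {
        Callable<String> findUser = () -> "user-42";     // illustrative subtasks
        Callable<String> fetchOrder = () -> "order-7";

        // Opened via a static factory method rather than a public constructor
        try (var scope = StructuredTaskScope.open()) {
            var user = scope.fork(findUser);
            var order = scope.fork(fetchOrder);
            scope.join();   // wait for both subtasks as a single unit of work
            return user.get() + " / " + order.get();
        }
    }
}
```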

JEP Draft 8326035, CDS Object Streaming, proposes to add an object archiving mechanism for Class-Data Sharing (CDS) in the Z Garbage Collector (ZGC) with a unified CDS object archiving format and loader. This feature will keep GC implementation details and policies separate from the CDS archived object streaming mechanism.

JEP Draft 8300911, PEM API (Preview), introduces an easy and intuitive API for encoding and decoding the Privacy-Enhanced Mail (PEM) format, a textual format used for storing and sending cryptographic keys and certificates.

JEP Draft 8291976, Support HTTP/3 in the HttpClient, proposes to update JEP 321, HTTP Client, delivered in JDK 11, to support the HTTP/3 protocol. This will allow applications and libraries to interact with HTTP/3 servers and get the benefits of HTTP/3 with minimal code changes.
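
For context, today's HttpClient already selects the protocol version through its builder; the draft would extend the same API so applications can opt into HTTP/3, with the exact constant and negotiation defaults still to be settled. Current usage looks like this:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HttpVersionExample {
    public static void main(String[] args) throws Exception {
        // HTTP/2 is the highest version selectable today; the draft proposes adding HTTP/3
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();

        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/"))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.version() + " " + response.statusCode());
    }
}
```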

We anticipate that Oracle will start targeting JEPs for JDK 25 very soon.



Dwight A. Merriman Sells 1,000 Shares of MongoDB, Inc. (NASDAQ:MDB) Stock

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) Director Dwight A. Merriman sold 1,000 shares of the firm’s stock in a transaction dated Monday, February 10th. The shares were sold at an average price of $281.62, for a total transaction of $281,620.00. Following the transaction, the director now directly owns 1,112,006 shares in the company, valued at $313,163,129.72. The trade was a 0.09 % decrease in their ownership of the stock. The transaction was disclosed in a filing with the Securities & Exchange Commission, which can be accessed through this hyperlink.

MongoDB Trading Down 0.1 %

Shares of MDB traded down $0.24 during mid-day trading on Tuesday, reaching $286.12. 1,238,148 shares of the stock traded hands, compared to its average volume of 1,549,118. MongoDB, Inc. has a 1 year low of $212.74 and a 1 year high of $509.62. The company has a market cap of $21.31 billion, a price-to-earnings ratio of -104.42 and a beta of 1.28. The firm has a fifty day moving average price of $266.30 and a 200 day moving average price of $271.04.

MongoDB (NASDAQ:MDB) last released its quarterly earnings results on Monday, December 9th. The company reported $1.16 EPS for the quarter, topping analysts’ consensus estimates of $0.68 by $0.48. The business had revenue of $529.40 million during the quarter, compared to analysts’ expectations of $497.39 million. MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The business’s quarterly revenue was up 22.3% compared to the same quarter last year. During the same quarter in the previous year, the company earned $0.96 EPS. On average, sell-side analysts anticipate that MongoDB, Inc. will post -1.78 EPS for the current year.

Analyst Upgrades and Downgrades

Several equities analysts have commented on MDB shares. Stifel Nicolaus lifted their target price on shares of MongoDB from $325.00 to $360.00 and gave the stock a “buy” rating in a research report on Monday, December 9th. Canaccord Genuity Group boosted their price objective on MongoDB from $325.00 to $385.00 and gave the stock a “buy” rating in a research report on Wednesday, December 11th. Mizuho raised their target price on MongoDB from $275.00 to $320.00 and gave the company a “neutral” rating in a research report on Tuesday, December 10th. Wells Fargo & Company boosted their price target on shares of MongoDB from $350.00 to $425.00 and gave the stock an “overweight” rating in a report on Tuesday, December 10th. Finally, JMP Securities restated a “market outperform” rating and set a $380.00 price objective on shares of MongoDB in a report on Wednesday, December 11th. Two investment analysts have rated the stock with a sell rating, four have assigned a hold rating, twenty-three have assigned a buy rating and two have issued a strong buy rating to the company. According to data from MarketBeat.com, the company has a consensus rating of “Moderate Buy” and a consensus price target of $361.00.

Get Our Latest Stock Analysis on MDB

Institutional Investors Weigh In On MongoDB

Several hedge funds have recently added to or reduced their stakes in the company. Vanguard Group Inc. increased its stake in MongoDB by 0.3% in the 4th quarter. Vanguard Group Inc. now owns 7,328,745 shares of the company’s stock worth $1,706,205,000 after purchasing an additional 23,942 shares during the period. Jennison Associates LLC boosted its position in MongoDB by 23.6% during the 3rd quarter. Jennison Associates LLC now owns 3,102,024 shares of the company’s stock valued at $838,632,000 after buying an additional 592,038 shares during the period. Geode Capital Management LLC grew its holdings in MongoDB by 2.9% in the 3rd quarter. Geode Capital Management LLC now owns 1,230,036 shares of the company’s stock worth $331,776,000 after acquiring an additional 34,814 shares during the last quarter. Amundi increased its position in shares of MongoDB by 86.2% in the fourth quarter. Amundi now owns 693,740 shares of the company’s stock valued at $172,519,000 after acquiring an additional 321,186 shares during the period. Finally, Westfield Capital Management Co. LP raised its stake in shares of MongoDB by 1.5% during the third quarter. Westfield Capital Management Co. LP now owns 496,248 shares of the company’s stock valued at $134,161,000 after acquiring an additional 7,526 shares during the last quarter. Institutional investors own 89.29% of the company’s stock.

MongoDB Company Profile


MongoDB, Inc, together with its subsidiaries, provides general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Read More

Insider Buying and Selling by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.


Article originally posted on mongodb google news. Visit mongodb google news



Dwight A. Merriman Sells 922 Shares of MongoDB, Inc. (NASDAQ:MDB) Stock – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) Director Dwight A. Merriman sold 922 shares of the company’s stock in a transaction that occurred on Friday, February 7th. The shares were sold at an average price of $279.09, for a total transaction of $257,320.98. Following the completion of the transaction, the director now owns 84,730 shares of the company’s stock, valued at $23,647,295.70. This trade represents a 1.08 % decrease in their position. The transaction was disclosed in a document filed with the Securities & Exchange Commission, which is accessible through this link.

MongoDB Stock Down 0.1 %

Shares of MDB stock traded down $0.24 during trading hours on Tuesday, reaching $286.12. The company’s stock had a trading volume of 1,238,148 shares, compared to its average volume of 1,549,118. MongoDB, Inc. has a fifty-two week low of $212.74 and a fifty-two week high of $509.62. The firm’s 50 day simple moving average is $266.30 and its 200-day simple moving average is $271.04. The company has a market capitalization of $21.31 billion, a price-to-earnings ratio of -104.42 and a beta of 1.28.

MongoDB (NASDAQ:MDB) last announced its quarterly earnings results on Monday, December 9th. The company reported $1.16 earnings per share (EPS) for the quarter, beating the consensus estimate of $0.68 by $0.48. The company had revenue of $529.40 million for the quarter, compared to analyst estimates of $497.39 million. MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The firm’s revenue was up 22.3% on a year-over-year basis. During the same period in the previous year, the firm posted $0.96 EPS. Sell-side analysts predict that MongoDB, Inc. will post -1.78 earnings per share for the current year.

Analyst Upgrades and Downgrades

MDB has been the subject of a number of research analyst reports. Tigress Financial upped their target price on shares of MongoDB from $400.00 to $430.00 and gave the stock a “buy” rating in a research note on Wednesday, December 18th. Scotiabank reduced their price objective on MongoDB from $350.00 to $275.00 and set a “sector perform” rating for the company in a research report on Tuesday, January 21st. Piper Sandler reissued an “overweight” rating and issued a $425.00 target price on shares of MongoDB in a research report on Tuesday, December 10th. Stifel Nicolaus upped their price target on MongoDB from $325.00 to $360.00 and gave the stock a “buy” rating in a research report on Monday, December 9th. Finally, JMP Securities reiterated a “market outperform” rating and issued a $380.00 price objective on shares of MongoDB in a report on Wednesday, December 11th. Two analysts have rated the stock with a sell rating, four have given a hold rating, twenty-three have given a buy rating and two have issued a strong buy rating to the stock. According to MarketBeat.com, the company currently has an average rating of “Moderate Buy” and a consensus target price of $361.00.

View Our Latest Analysis on MDB

Institutional Inflows and Outflows

Large investors have recently bought and sold shares of the stock. B.O.S.S. Retirement Advisors LLC bought a new position in MongoDB in the 4th quarter valued at about $606,000. Aigen Investment Management LP purchased a new stake in shares of MongoDB in the third quarter worth approximately $1,045,000. Geode Capital Management LLC lifted its holdings in shares of MongoDB by 2.9% in the third quarter. Geode Capital Management LLC now owns 1,230,036 shares of the company’s stock valued at $331,776,000 after purchasing an additional 34,814 shares in the last quarter. B. Metzler seel. Sohn & Co. Holding AG purchased a new position in shares of MongoDB during the third quarter valued at approximately $4,366,000. Finally, Charles Schwab Investment Management Inc. grew its holdings in MongoDB by 2.8% during the third quarter. Charles Schwab Investment Management Inc. now owns 278,419 shares of the company’s stock worth $75,271,000 after buying an additional 7,575 shares in the last quarter. 89.29% of the stock is owned by institutional investors.

MongoDB Company Profile


MongoDB, Inc, together with its subsidiaries, provides general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Stories

Insider Buying and Selling by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.


Article originally posted on mongodb google news. Visit mongodb google news



MongoDB, Inc. Announces Date of Fourth Quarter and Full Year Fiscal 2025 Earnings Call

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

NEW YORK, Feb. 12, 2025 /PRNewswire/ — MongoDB, Inc. (NASDAQ: MDB) today announced it will report its fourth quarter and full year fiscal year 2025 financial results for the three months ended January 31, 2025, after the U.S. financial markets close on Wednesday, March 5, 2025.

In conjunction with this announcement, MongoDB will host a conference call on Wednesday, March 5, 2025, at 5:00 p.m. (Eastern Time) to discuss the Company’s financial results and business outlook. A live webcast of the call will be available on the “Investor Relations” page of the Company’s website at http://investors.mongodb.com. To access the call by phone, please go to this link (registration link), and you will be provided with dial in details. To avoid delays, we encourage participants to dial into the conference call fifteen minutes ahead of the scheduled start time. A replay of the webcast will also be available for a limited time at http://investors.mongodb.com.

About MongoDB
Headquartered in New York, MongoDB’s mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. Built by developers, for developers, MongoDB’s developer data platform is a database with an integrated set of related services that allow development teams to address the growing requirements for today’s wide variety of modern applications, all in a unified and consistent user experience. MongoDB has tens of thousands of customers in over 100 countries. The MongoDB database platform has been downloaded hundreds of millions of times since 2007, and there have been millions of builders trained through MongoDB University courses. To learn more, visit mongodb.com.

Investor Relations
Brian Denyeau
ICR for MongoDB
646-277-1251
ir@mongodb.com

Media Relations
MongoDB
press@mongodb.com

Cision View original content to download multimedia:https://www.prnewswire.com/news-releases/mongodb-inc-announces-date-of-fourth-quarter-and-full-year-fiscal-2025-earnings-call-302375268.html

SOURCE MongoDB, Inc.

Article originally posted on mongodb google news. Visit mongodb google news



Sumitomo Mitsui Trust Group Inc. Grows Stock Holdings in MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Sumitomo Mitsui Trust Group Inc. boosted its stake in MongoDB, Inc. (NASDAQ:MDB) by 12.2% during the 4th quarter, according to the company in its most recent filing with the Securities and Exchange Commission (SEC). The institutional investor owned 195,443 shares of the company’s stock after buying an additional 21,308 shares during the quarter. Sumitomo Mitsui Trust Group Inc. owned approximately 0.26% of MongoDB worth $45,501,000 at the end of the most recent quarter.

Several other institutional investors have also bought and sold shares of MDB. Nisa Investment Advisors LLC lifted its stake in shares of MongoDB by 3.8% during the third quarter. Nisa Investment Advisors LLC now owns 1,090 shares of the company’s stock valued at $295,000 after acquiring an additional 40 shares during the period. Hilltop National Bank lifted its stake in MongoDB by 47.2% in the fourth quarter. Hilltop National Bank now owns 131 shares of the company’s stock valued at $30,000 after buying an additional 42 shares during the period. Tanager Wealth Management LLP lifted its stake in MongoDB by 4.7% in the third quarter. Tanager Wealth Management LLP now owns 957 shares of the company’s stock valued at $259,000 after buying an additional 43 shares during the period. Rakuten Securities Inc. lifted its stake in MongoDB by 16.5% in the third quarter. Rakuten Securities Inc. now owns 332 shares of the company’s stock valued at $90,000 after buying an additional 47 shares during the period. Finally, Prime Capital Investment Advisors LLC lifted its stake in MongoDB by 5.2% in the third quarter. Prime Capital Investment Advisors LLC now owns 1,190 shares of the company’s stock valued at $322,000 after buying an additional 59 shares during the period. Institutional investors and hedge funds own 89.29% of the company’s stock.

Insider Buying and Selling

In related news, CFO Michael Lawrence Gordon sold 5,000 shares of the stock in a transaction on Monday, December 16th. The stock was sold at an average price of $267.85, for a total value of $1,339,250.00. Following the transaction, the chief financial officer now owns 80,307 shares of the company’s stock, valued at $21,510,229.95. This represents a 5.86 % decrease in their ownership of the stock. The transaction was disclosed in a legal filing with the Securities & Exchange Commission, which is available through the SEC website. Also, CAO Thomas Bull sold 169 shares of the stock in a transaction on Thursday, January 2nd. The shares were sold at an average price of $234.09, for a total value of $39,561.21. Following the completion of the transaction, the chief accounting officer now directly owns 14,899 shares in the company, valued at approximately $3,487,706.91. This represents a 1.12 % decrease in their position. The disclosure for this sale can be found here. Over the last quarter, insiders have sold 44,413 shares of company stock worth $12,082,421. Corporate insiders own 3.60% of the company’s stock.

Wall Street Analyst Weigh In

Several research firms have weighed in on MDB. Rosenblatt Securities began coverage on MongoDB in a research note on Tuesday, December 17th. They issued a “buy” rating and a $350.00 target price on the stock. Guggenheim raised MongoDB from a “neutral” rating to a “buy” rating and set a $300.00 target price on the stock in a research note on Monday, January 6th. Needham & Company LLC increased their price target on MongoDB from $335.00 to $415.00 and gave the stock a “buy” rating in a report on Tuesday, December 10th. Robert W. Baird increased their price target on MongoDB from $380.00 to $390.00 and gave the stock an “outperform” rating in a report on Tuesday, December 10th. Finally, Wedbush upgraded MongoDB to a “strong-buy” rating in a report on Thursday, October 17th. Two analysts have rated the stock with a sell rating, four have given a hold rating, twenty-three have given a buy rating and two have given a strong buy rating to the company. Based on data from MarketBeat, MongoDB has an average rating of “Moderate Buy” and an average price target of $361.00.

View Our Latest Stock Report on MDB

MongoDB Price Performance

Shares of NASDAQ:MDB traded up $2.06 during trading on Wednesday, hitting $288.18. 409,405 shares of the company were exchanged, compared to its average volume of 1,507,438. The company has a market cap of $21.46 billion, a PE ratio of -104.97 and a beta of 1.28. The stock has a 50 day moving average of $265.54 and a 200 day moving average of $271.27. MongoDB, Inc. has a 52-week low of $212.74 and a 52-week high of $488.00.

MongoDB (NASDAQ:MDB) last posted its earnings results on Monday, December 9th. The company reported $1.16 earnings per share (EPS) for the quarter, topping the consensus estimate of $0.68 by $0.48. The firm had revenue of $529.40 million for the quarter, compared to the consensus estimate of $497.39 million. MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The business’s quarterly revenue was up 22.3% on a year-over-year basis. During the same quarter in the prior year, the business posted $0.96 earnings per share. Sell-side analysts forecast that MongoDB, Inc. will post -1.78 EPS for the current fiscal year.

About MongoDB


MongoDB, Inc, together with its subsidiaries, provides general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Stories

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)


Article originally posted on mongodb google news. Visit mongodb google news



Senior Backend Developer – C# Javascript MongoDB Playfab – GamesIndustry.biz Jobs

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Senior Backend Developer – C# Javascript MongoDB Playfab

Opportunity to join this global entertainment company dedicated to delivering magic to gamers. With over 300 titles in their catalogue, they partner with top creators and franchises worldwide.

Their mission is to be the global leader in indie to AA video game creation and transmedia, expanding the universe of video games beyond the platform. They have offices around the world and employ more than 200 professionals.

Senior Game Backend Developer

Are you passionate about building cutting-edge, scalable backend systems that power immersive gaming experiences? Do you have a knack for creating high-performance APIs and optimizing complex game services? If so, we want you to bring your expertise to our innovative team! As a Senior Game Backend Developer, you’ll play a key role in designing and maintaining the systems that support players worldwide. You’ll have the opportunity to work with Microsoft Azure, MongoDB, and PlayFab, creating the foundation for a seamless and exciting gaming experience.

Requirements

3+ years experience in game backend development

Proficient professional communication skills in English

A minimum of one shipped title

Advanced expertise in working with Microsoft Azure, MongoDB, and Playfab backend and services

Soft Skills

Leadership and Mentorship

Clear Communication

Problem-Solving and Critical Thinking

Adaptability and Flexibility

Teamwork and Collaboration

Your tasks will include:

Design and implement high-performance game services and APIs that provide seamless player experiences, while ensuring top-tier security and scalability.

Develop and manage cloud-based services using Microsoft Azure, leveraging powerful features for efficient database management and cloud storage integration with MongoDB.

Utilize PlayFab to streamline player data management, leaderboards, in-game economies, and analytics, ensuring a personalized and immersive player experience.

Optimize backend systems to handle high concurrency and massive volumes of data, delivering smooth, lag-free gameplay across all platforms.

Continuously monitor and analyze system performance, identifying potential bottlenecks and implementing innovative solutions to enhance backend efficiency and speed.

Architect and maintain databases for game data, player profiles, and in-game transactions, ensuring consistency and reliability across all systems.

Lead the development of comprehensive testing strategies, ensuring the quality and reliability of backend services and APIs.

Salary to $65k plus –

Meal tickets

Medical coverage

Hybrid work program (remote and in-office flexibility).  

CVs to simon.pittam@amiqus.com 

Article originally posted on mongodb google news. Visit mongodb google news



Choreo LLC Sells 69 Shares of MongoDB, Inc. (NASDAQ:MDB) – Defense World

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Choreo LLC decreased its stake in shares of MongoDB, Inc. (NASDAQ:MDB) by 2.7% in the fourth quarter, according to its most recent 13F filing with the Securities and Exchange Commission (SEC). The institutional investor owned 2,485 shares of the company’s stock after selling 69 shares during the quarter. Choreo LLC’s holdings in MongoDB were worth $581,000 as of its most recent SEC filing.

Several other large investors also recently added to or reduced their stakes in the stock. GAMMA Investing LLC grew its position in shares of MongoDB by 178.8% during the 3rd quarter. GAMMA Investing LLC now owns 145 shares of the company’s stock worth $39,000 after buying an additional 93 shares in the last quarter. CWM LLC lifted its stake in MongoDB by 5.0% in the third quarter. CWM LLC now owns 3,269 shares of the company’s stock worth $884,000 after acquiring an additional 156 shares during the period. Livforsakringsbolaget Skandia Omsesidigt grew its holdings in MongoDB by 253.2% during the 3rd quarter. Livforsakringsbolaget Skandia Omsesidigt now owns 558 shares of the company’s stock worth $151,000 after acquiring an additional 400 shares in the last quarter. Sapient Capital LLC purchased a new position in MongoDB during the 3rd quarter valued at about $736,000. Finally, Creative Planning raised its holdings in shares of MongoDB by 16.2% in the 3rd quarter. Creative Planning now owns 17,418 shares of the company’s stock valued at $4,709,000 after purchasing an additional 2,427 shares in the last quarter. 89.29% of the stock is owned by hedge funds and other institutional investors.

Wall Street Analyst Weigh In

A number of research firms have recently issued reports on MDB. Needham & Company LLC increased their price objective on shares of MongoDB from $335.00 to $415.00 and gave the company a “buy” rating in a research note on Tuesday, December 10th. Royal Bank of Canada lifted their price target on MongoDB from $350.00 to $400.00 and gave the company an “outperform” rating in a report on Tuesday, December 10th. JMP Securities reissued a “market outperform” rating and issued a $380.00 price target on shares of MongoDB in a research report on Wednesday, December 11th. Oppenheimer raised their price objective on MongoDB from $350.00 to $400.00 and gave the company an “outperform” rating in a research report on Tuesday, December 10th. Finally, Morgan Stanley boosted their target price on shares of MongoDB from $340.00 to $350.00 and gave the stock an “overweight” rating in a report on Tuesday, December 10th. Two research analysts have rated the stock with a sell rating, four have assigned a hold rating, twenty-three have issued a buy rating and two have given a strong buy rating to the stock. Based on data from MarketBeat, MongoDB currently has a consensus rating of “Moderate Buy” and an average target price of $361.00.


Get Our Latest Analysis on MDB

MongoDB Price Performance

Shares of MDB stock opened at $286.12 on Wednesday. MongoDB, Inc. has a 1-year low of $212.74 and a 1-year high of $509.62. The company has a market cap of $21.31 billion, a price-to-earnings ratio of -104.42 and a beta of 1.28. The stock has a 50 day simple moving average of $266.30 and a 200-day simple moving average of $271.04.

MongoDB (NASDAQ:MDB) last posted its quarterly earnings data on Monday, December 9th. The company reported $1.16 earnings per share for the quarter, topping analysts’ consensus estimates of $0.68 by $0.48. MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The company had revenue of $529.40 million during the quarter, compared to analysts’ expectations of $497.39 million. During the same period in the prior year, the business posted $0.96 earnings per share. The firm’s revenue for the quarter was up 22.3% on a year-over-year basis. Analysts anticipate that MongoDB, Inc. will post -1.78 earnings per share for the current fiscal year.

Insider Buying and Selling

In other news, CAO Thomas Bull sold 169 shares of the company’s stock in a transaction on Thursday, January 2nd. The shares were sold at an average price of $234.09, for a total transaction of $39,561.21. Following the completion of the sale, the chief accounting officer now owns 14,899 shares of the company’s stock, valued at $3,487,706.91. This represents a 1.12 % decrease in their ownership of the stock. The transaction was disclosed in a filing with the Securities & Exchange Commission, which is available through this hyperlink. Also, Director Dwight A. Merriman sold 1,000 shares of the firm’s stock in a transaction dated Monday, February 10th. The stock was sold at an average price of $281.62, for a total transaction of $281,620.00. Following the completion of the transaction, the director now directly owns 1,112,006 shares in the company, valued at approximately $313,163,129.72. This represents a 0.09 % decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders sold a total of 44,413 shares of company stock worth $12,082,421 in the last 90 days. Insiders own 3.60% of the company’s stock.

MongoDB Profile


MongoDB, Inc, together with its subsidiaries, provides general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Articles

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)




Article originally posted on mongodb google news. Visit mongodb google news



MongoDB expands Sydney operations to support local growth – Computer Weekly

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB is expanding its operations in Sydney in a move that comes on the heels of a growing workforce of more than 220 employees in Australia and New Zealand (ANZ), a key part of its Asia-Pacific (APAC) business that rakes in revenues of over $220m annually.

Its expanded Sydney office will serve as a hub for MongoDB’s growing engineering, support and go-to-market teams, with a focus on research and development. More than 160 technical roles are currently based in ANZ, working on projects that are shaping the future of data management.

For instance, MongoDB’s technical services engineering team, one of the largest and most tenured teams in Australia, use their deep expertise and experience to help solve complex challenges for some of MongoDB’s largest and most sophisticated global customers.

There are also product teams working on artificial intelligence (AI)-enabled tooling and services that are transforming the speed and cost of modernising legacy technologies, a challenge faced by many Australian enterprises.

According to IDC, the ANZ database management systems market will be worth $1.9bn and will grow 17.3% in 2025, but a large portion of software spending in Australia still goes towards maintaining outdated systems that are expensive, risky and impede the development of modern use cases such as AI, a challenge which MongoDB is addressing.

For example, tools built in Sydney, such as Relational Migrator, had helped Bendigo and Adelaide Bank modernise a core banking app with 90% less development time and one-tenth the cost of traditional legacy systems migration.

The company’s core engineering team, which has a presence in Australia, also develops and maintains the MongoDB server storage layer that’s crucial to the experience of the millions of global users of MongoDB.

“Legacy technology is holding back global innovation, which is why we are on a mission to eliminate roadblocks to groundbreaking technology like AI by bringing simplicity, speed and ease to the modernisation process for Australia’s leading companies,” said Simon Eid, senior vice-president for APAC at MongoDB. “Local pilot projects and the fantastic AI tooling built by our Australian team are part of the template that is being rolled out globally.”

The new MongoDB office in Sydney, on 201 Elizabeth Street, is designed to enable MongoDB’s flexible working model that encourages employees to work how and where best suits them and the business. The office has a range of fixed and flexible desks, which allows employees to adapt the tasks and teams they’re working with on a given day. The office also has a number of multi-use spaces that can be used for events.

“Databases are the backbone of modern applications – managing, storing and protecting massive volumes and varieties of data. But it’s a giant technical challenge to do that at scale while offering the great developer experience MongoDB is famous for,” said Mick Graham, vice-president of engineering at MongoDB. 

“The outstanding technical talent we’ve been able to find and develop in Australia has been absolutely crucial to the company’s success and continued growth,” Graham added. “Working on foundational technology like databases is an incredibly exciting area of development. It’s so gratifying to see how the teams’ work is impacting thousands of customers across the world and particularly here in Australia.”

Article originally posted on mongodb google news. Visit mongodb google news



Presentation: Leveraging Open-source LLMs for Production

MMS Founder
MMS Andrey Cheptsov

Article originally posted on InfoQ. Visit InfoQ

Transcript

Cheptsov: My name is Andrey. I’m the founder of dstack, where we are building an alternative to Kubernetes, a container orchestrator for managing AI infrastructure. Since it’s used by companies that develop AI models, especially large language models, we’re quite passionate about open-source LLMs, and of course want to help more companies understand how to adopt them without feeling overwhelmed, and to see more value in this. The talk isn’t really about covering all their best practices or dirty hacks for deploying open-source LLMs in production. This would be hardly possible. Instead, the goal is to give you a rough sense of what to expect from using open-source LLMs, when and why they might be the right choice, and what the development process might feel like.

Predictions (Closed Source vs. Open Source)

The maturity of AI remains a debated topic, with annual predictions from experts ranging from skepticism about LLMs’ utility beyond chatbots, to concerns about AGI and existential risks from it. Open-source AI is also subject to frequent speculation, with ongoing questions about whether it can ever rival the quality of proprietary models such as those from OpenAI and Anthropic. Clem, the founder of Hugging Face, the leading platform for publishing open-source models, actually predicted last year that open-source models would match the quality of closed source ones by 2024. Now that we are in the second half of 2024, do you think that open-source will catch up? You’ll be surprised to learn that in just six months, Meta, the company behind Facebook, released Llama 3.1, an open-source model that for the first time matched the quality of closed source models.

This chart, carefully compiled by Maxime Labonne, an ML engineer at Liquid AI, shows the release timelines of both closed source and open-source models over the last two years. The chart tracks each model’s score on the MMLU benchmark, the Massive Multitask Language Understanding benchmark, which is one of the most comprehensive measures of a model’s performance across a wide range of tasks. For open-source models, you’ll notice that the model names here include the number of parameters.

Generally, the higher the benchmark score, the better the model is; and the larger the model, the more parameters it has. The model that finally achieved parity with closed source models has 405 billion parameters, and we’ll talk more about what that really means. In the upper right corner of the chart, near Llama 3.1, 405 billion, you can also spot Qwen 2.5, 72 billion, which was released soon after. Despite its significantly smaller size, it nearly matches the performance of Llama 3.1, 405 billion, the largest one by Meta. This Qwen model was released by Alibaba Cloud, by the team there that trains foundational models.

This is basically the current best open-source model, competing directly with the closed source ones. As you see, despite early doubts, open-source models are keeping pace with closed source ones, going head-to-head in quality. Meta also announced Llama 3.2, which, as some of you probably already know, is a multimodal model that surpassed the best closed source multimodal models in performance. A multimodal model means that it not only generates text, but can also understand images, and in some cases generate them as well.

Benchmarks – Llama 3.1 Instruct

Let’s take a closer look at Llama 3.1, why it’s significant, and how it performs on various benchmarks. Llama 3.1 is available in three sizes: 8 billion parameters, 70 billion parameters, and 405 billion parameters. It supports eight languages and offers a 128k token context window, which covers the length of the text that it can accept plus the length of the text that it can generate. The longer the context window, the larger the text the LLM can understand and generate. The license allows you to use the model commercially.

Basically, the license allows you to use Llama 3.1 in commercial projects. It’s not only used for inference, but also for fine-tuning. It is capable of also generating synthetic data and doing all sorts of knowledge distillation as well, which we’ll probably talk about in further slides. On the left hand you can see several benchmarks that assess the performance of each model on different tasks. This benchmark, I already mentioned, MMLU, this is one of the most common benchmarks. It measures model performance across a range of language tasks. There are two types of this benchmark. One is known as MMLU. It focuses on general knowledge understanding. There’s another one which is called MMLU-PRO, also known as 5-shot. It assesses, in addition to general knowledge, also reasoning capabilities.

Another key benchmark here is this HumanEval. This benchmark evaluates model code generation skills. Code generation is a valuable use case for LLMs, and not only to be used in code completion in your IDE, which you probably heard of, but also it enables the use of tools and helps in building automated agents with LLMs. As shown, the largest version of Llama 3.1 scores highly on both of these benchmarks, if we compare to other models, including those proprietary ones like GPT-4 and Claude 3.5. It’s worth noting that Llama 3.1 is more than just a model. It’s basically a developer platform. Meta actually refers to it as Llama Stack, basically a developer stack, which includes not only the model itself, in different sizes, but also numerous tools and integrations. Tools that help you make the model more safe. Tools for building agents, doing evals and fine-tuning as well. It’s a lot of different tools. It’s not just a model.

Qwen 2.5 Instruct

Here we see Qwen 2.5, which I mentioned when we looked at this chart of models. This is the newest version of Qwen. We had Qwen 1.0, 2.0, and now this is the most recent one released a couple of months ago. As I said, it’s created by the team behind Alibaba Cloud that specialize in foundational models. Besides demonstrating strong capabilities in fundamental knowledge like reasoning and code generation, it also speaks 29 languages, compared to 8 languages supported by Llama 3.1. It knows more languages, which is great. Notably, it delivers impressive performance while being five times smaller. As you see, the name of the model is 72 billion parameters, which is more than 5 times smaller than the largest Llama 3.1 model. Still, it comes really close to the quality of Llama 3.1, 405 billion parameters. Qwen models come in different sizes, and the majority of models are available under Apache 2.0 license, which is the most permissive license. This is great.

There are basically no conditions, except for probably the largest one, 72 billion parameters, which comes with a proprietary license. However, it’s still an open-weight model, and it’s allowed to be used commercially if your number of monthly active users is less than a specific number. It’s a pretty big number. If you are not Google, you certainly don’t have to be concerned about this. You actually can use this Qwen model for commercial purposes. Later, we’ll of course examine how model size greatly influences its practical applications, because, as you should understand, the smaller the model is, the easier it is to use.

When to Use Open-Source Models

We’ve finally arrived to the key question, when should we use open-source models and what are the reasons behind it? Based on my numerous conversations with people, what I see is that most companies tend to strategically underestimate the importance of open-source models and fail actually to recognize why relying on closed source models is not a viable long-term option for the industry. Pretty strong statement. I’d love to elaborate on this, because I think this is important. As we’re witnessing today, much like the internet or computers before, GenAI is becoming integral nearly to every service, product, or human-computer interaction. It basically transforms how we work, how we communicate, how we interact. As we enter the GenAI era, it significantly impacts both economics and the distribution of competitive advantage.

Before GenAI, companies gained their competitive edge through technical talent, of course, and through the proprietary technologies they own. What we see is that in the GenAI era, a company’s competitive advantage will increasingly stem from how AI is leveraged and applied to its specific use cases and data. We are yet to see this transition from the tech talent and technology a company has to how AI is applied to very specific use cases. If a company aims to maintain its competitive edge, in my personal view, it must certainly take GenAI seriously and avoid outsourcing this to external companies, just as they previously did with software development.

Hardware Requirements

Now that we have talked about theory a bit, let’s look under the hood at what it actually feels like to use open-source models. Do not expect me to go into specific applications or frameworks; I’d rather spend more time talking about the most fundamental aspects of using LLMs. If you are already pretty deep into LLMs, you might not find significant new insight here. For those who are curious about the adoption of open-source LLMs, not necessarily the actual research but the use of the results of this research within companies, this might be super helpful, because then you understand what the main constraints are. Here’s a slide with the hardware requirements.

You’ve probably heard many times that in order to use LLMs, you basically need GPUs. They are very expensive, and you need a lot of them. We’ll talk about that. If we look at Llama 3.1, which comes in three different sizes, we see that the larger the model is, the more parameters it has, and the bigger this GB number, which is the amount of GPU memory you need in order to run inference. The column is called FP16, meaning floating point 16, the precision in which the model’s weights, its tensors, are stored. This is also known as half precision, half of 32 bits. This is mostly how LLMs are stored once they are trained. We can see that in order to run inference of the smallest model, we would need at least 16 gigabytes. This is pretty close to one of the smallest GPUs, if we talk about, for example, NVIDIA GPUs, the most used GPUs today, like the A100, which comes in 40 gigabytes or 80 gigabytes, or the H100, which is 80 gigabytes as well.

If you are not into this yet, all you need to know right now is that the more GPUs you have, the more memory you have in total. This is your constraint, and it determines which models you can actually run for inference. However, it's not as simple as that. If you are, for example, using your model in production, and you have multiple users concurrently accessing it, and you need to scale, it means you need to run more instances of your model concurrently, which means more GPUs. For example, if you look at the largest model here, 405B, you see that it requires 810 gigabytes, which won't even fit into one GPU. You've probably already heard that these GPUs are pretty expensive. In order to use this model, you would need eight of the most expensive GPUs, and then you would need two machines like that, just to run this large Llama 3.1 model.
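
To make the arithmetic concrete, here is a minimal sketch of the back-of-the-envelope estimate behind these numbers: weights-only memory is roughly the parameter count times the bytes per parameter for the chosen precision. The overhead for activations and the KV cache is ignored here, so real deployments need somewhat more.

```python
# Rough, weights-only GPU memory estimate for inference.
# 1 billion parameters at FP16 (2 bytes each) is about 2 GB of weights;
# activations and the KV cache add more on top, which this sketch ignores.
BYTES_PER_PARAM = {"FP16": 2, "FP8": 1, "INT4": 0.5}

def weights_memory_gb(params_billions: float, precision: str = "FP16") -> float:
    return params_billions * BYTES_PER_PARAM[precision]

for size in (8, 70, 405):  # the three Llama 3.1 sizes
    print(f"Llama 3.1 {size}B @ FP16 ≈ {weights_memory_gb(size):.0f} GB")
# 8B ≈ 16 GB, 70B ≈ 140 GB, 405B ≈ 810 GB, the numbers discussed above
```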

The situation with fine-tuning is a little more complex, because when you are running inference, you are only using memory for the forward pass, basically for generating predictions. When you are doing fine-tuning, you also need memory for backpropagation and for storing the entire batch, because you are actually training in batches. Then some memory is also used by the optimizer state and other utilitarian purposes.

Basically, all of that simply means you need a lot more memory for fine-tuning. For example, if we look at the amount of memory needed for full fine-tuning of the largest Llama model, it comes out to about six nodes like that, which brings us to the famous meme about how anyone can ever do this. On the left-hand side you see the non-experts, on the right-hand side the experts, and in the middle you have the majority. The majority of companies and teams do all kinds of optimizations in order to reduce this memory; we'll talk about that. If you look at the experts and the non-experts, they will tell you that you simply have to buy more GPUs. That's the hard truth that you need to know.
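
To give a rough feel for where the extra memory goes, here is a small sketch using common rule-of-thumb byte counts per parameter (FP16 weights and gradients plus FP32 Adam optimizer state). These per-parameter numbers are illustrative assumptions, not measurements, and activation memory depends heavily on batch size and sequence length, so it is left as a separate parameter.

```python
# Very rough sketch of full fine-tuning memory per parameter, before activations.
def full_finetune_memory_gb(params_billions: float, activations_gb: float = 0.0) -> float:
    weights = 2.0     # FP16 weights
    gradients = 2.0   # FP16 gradients from backpropagation
    optimizer = 8.0   # Adam: FP32 momentum + variance (4 + 4 bytes), an assumed rule of thumb
    per_param_bytes = weights + gradients + optimizer
    return params_billions * per_param_bytes + activations_gb

print(f"70B full fine-tune ≈ {full_finetune_memory_gb(70):.0f} GB before activations")
```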

Optimization Techniques – Quantization

However, you don't have to be on that end of the spectrum. There are a lot of optimizations, but we're going to talk about the two most important ones. The first one is quantization. A model basically consists of layers, and you can think of each layer as a tensor, a multi-dimensional matrix. Those are numbers stored in this FP16 half-precision floating-point format.

You can understand that this takes memory, and that is exactly what the memory is used for. In order to run inference or fine-tuning, you have to load your model into GPU memory; that's how it works. The bigger the model is, the more memory it takes. There are certain tricks that can significantly reduce the amount of memory you need, and one of them is quantization. Instead of storing the full precision, you convert the floating-point values to integers, and by that you're basically lowering the precision. With this lower precision, it takes less memory for inference and for fine-tuning as well. This is a research topic, because as you lower the precision, there is some loss in the quality of the predictions.

However, this loss is not that significant. In most cases, you can just dismiss it. Of course, there are cases when you cannot do that, but if you look at the Llama 3.1 release, for example, you'll see that they actually recommend using FP8, a quantized version, which doesn't have much loss at all. Now, if you apply that to these numbers, you can see that if you cut the precision in half and go from FP16 to FP8, you linearly reduce the amount of memory needed. If you go further and, for example, switch the model to INT4 precision, there is a significant drop in the memory required. Now, if you look at the 70-billion-parameter model, you would need just one GPU and it will just fit.
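
As a minimal sketch of what this looks like in practice, the snippet below loads a model in 4-bit precision with Hugging Face Transformers and bitsandbytes. The model id, prompt, and settings are illustrative assumptions, not the speaker's exact setup.

```python
# Load a causal LM with 4-bit (NF4) quantized weights to cut GPU memory roughly 4x vs FP16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # assumption: a gated repo you have access to

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 to limit quality loss
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",                      # spread layers across available GPUs
)

inputs = tokenizer("Quantization reduces memory because", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```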

Optimization Techniques – Low Rank Adaptation (LoRA)

What about fine-tuning? There's another technique which is pretty useful, for both inference and fine-tuning, called Low-Rank Adaptation. This is how it works. Think of the model weights as a set of tensors. You divide these weights into two parts: the pre-trained weights, which you freeze and say you're not going to train, and a much smaller set of adapter weights, which is the only part you use for fine-tuning and the only part you need to load for training. This is how you reduce the amount of memory required for fine-tuning.

Instead of training all the weights, you train only what are called adapter weights. Once you've trained them, you can merge them back together, and this is how you get a fine-tuned version of a model without updating all the weights. For short, it's called LoRA. This technique is pretty notable, not only because you can use it for fine-tuning, but also for inference. Imagine that you'd like to use multiple fine-tuned models on the same GPU. If you are not using LoRA, you would have to load each model every time, with all of its weights.

However, when you are using LoRA, you can load the pre-trained weights once and then switch adapters from different models. This way you can actually run inference for multiple models. If we look at how LoRA is applied to fine-tuning, we see that it significantly reduces the amount of memory needed for tuning. There's also a big body of research on how this affects quality. Now, if we combine both techniques, quantization and LoRA, we can go even further. This is basically how you train and run inference without having a lot of GPUs.
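
Here is a minimal sketch of attaching LoRA adapters with Hugging Face's PEFT library: the base weights stay frozen and only the small adapter matrices are trained. The model id, target modules, and rank are illustrative assumptions.

```python
# Wrap a frozen base model with LoRA adapters; only the adapters receive gradients.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank adapter matrices
    lora_alpha=32,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (assumed choice)
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full model
# After training, model.merge_and_unload() folds the adapters back into the base weights.
```

Combining this with the 4-bit loading shown earlier is the usual recipe for fitting fine-tuning onto a single GPU.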

Development Process: Pre-Training and Post-Training

Let's go into the actual development process, from start to end, just to get a rough sense of what it takes to actually get a model. We can split the whole process into several parts. The first one is pre-processing: this is where you collect all the data, process it, and prepare it for training. Then there is pre-training, where you take this bulk data and train your model without any task-specific supervision. Once the model learns the basic knowledge from this data, you can go to the post-training phase, where you educate your model, or, another way to say it, align your model with specific tasks. This is how you make the model work for very specific tasks and make it follow instructions at all. This is a very complex area, and there are many different approaches; if you want to learn more, you would probably read some technical reports.

When somebody releases a new model, there is typically a technical report which goes into the specific process: how the data was prepared, how the model was pre-trained, and then how it was aligned. One example that might be interesting to know is supervised fine-tuning. When you train a large model, you give it bulk data, internet-scale data, Reddit data, and then you want to switch from plain pre-training to supervised training, where you give it very specific, curated datasets, so it starts to learn from high-quality data. This is called supervised fine-tuning, and it requires preparing these additional datasets. It is typically done after the base model is trained.

Another interesting thing here, which can sometimes be used either in addition to or instead of supervised fine-tuning, is that now that we have those large models, we can actually use very high-quality models to generate training data; this is called synthetic data. For example, there are proprietary models like Claude which allow you to generate datasets that you can later use to train your own model. If you look at the technical report of Llama 3.1 or some other models, you'll see that the team actually generated a lot of synthetic data to train on. It's not only low-quality internet data; it's also high-quality data generated by the most expensive LLMs.
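
As a minimal illustration of the idea, the sketch below asks a stronger hosted model to produce instruction-response pairs that could later be used for supervised fine-tuning. The Anthropic client calls are standard, but the model name, prompt, and output filtering are illustrative assumptions.

```python
# Generate a few synthetic instruction/response pairs with a stronger model.
# Requires the `anthropic` package and an ANTHROPIC_API_KEY in the environment.
import json
import anthropic

client = anthropic.Anthropic()

prompt = (
    "Produce 3 JSON objects, one per line, each with the fields "
    "'instruction' and 'response', about basic SQL usage."
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model name
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)

synthetic_rows = []
for line in message.content[0].text.splitlines():
    try:
        synthetic_rows.append(json.loads(line))  # keep only well-formed rows
    except json.JSONDecodeError:
        continue
print(f"Collected {len(synthetic_rows)} synthetic training examples")
```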

Of course, it would be strange not to mention RLHF, Reinforcement Learning from Human Feedback. This is the main technique for making the model not only generate text but actually follow instructions. The main trick here is that instead of just giving it text to learn to generate, it also learns whether the generated text is good or bad. You come up with labels like good generation or bad generation, or a number from 0 to 10 rating how good the generated text is, so the model can learn from this feedback to avoid bad results and generate good ones. In general, this works through reinforcement learning, where you first train a reward model that learns how to score what is a good and what is a bad result. Once that reward model is trained, it is used when you actually post-train the main model.

This is a complicated process, because you first have to train the reward model, and only then do you use it in the actual training. Because this process is a bit complex, there is an alternative known as DPO, Direct Preference Optimization, where instead of training an intermediate reward model, you directly provide the labeled preference data and the trainer uses it without the intermediate model. Of course, I'm just giving you an overview; if you are interested, you can go and read in more detail about how this works.
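
For a rough idea of what DPO looks like in code, here is a minimal sketch using Hugging Face TRL. The dataset, model id, and hyperparameters are illustrative assumptions, and the exact trainer argument names vary between TRL releases.

```python
# Minimal DPO sketch with TRL: the dataset supplies "prompt", "chosen" and
# "rejected" columns, so no separate reward model has to be trained.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative base model
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# An assumed preference dataset with chosen/rejected completions per prompt.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(output_dir="llama-dpo", per_device_train_batch_size=1)
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # called `tokenizer=` in older TRL versions
)
trainer.train()
```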

Development Process: Frameworks and Tools

I would also like to mention frameworks and tools, which are very important. As I said before, I'm not here to provide all the hacks for leveraging open-source models in production, but rather to give you an intuition of what it feels like and what tools you can use. Typically, when you are into open-source models, there are different approaches. One approach is when you actually want to go deeper, and that's when you need researchers. Those researchers go into the architecture of the model and into a very specific process. In order to understand how it is done, you need to go and read how other models were trained, which is best done by reading those technical reports.

However, in most cases we don't have the resources to be deeply involved in the research or pre-training. We would probably decide to focus on the less expensive parts of the development process, rely on base pre-trained models, and rely on all kinds of tools. If we go back a little, we see that after post-training there is a stage called optimization. You have already trained the model, so now you want to use it in production, for inference or further fine-tuning, for example. Today, there are enough tools that help you with that.

First of all, let me mention CUDA, ROCm, and XLA. When you want to use open-source models, you use accelerators, basically GPUs. However, there are different kinds of them. There is NVIDIA. Then there is also AMD, which has started to offer some very good accelerators that compete with NVIDIA. Then, of course, there are other alternative accelerators; Google, for example, offers the TPU, which can also be an alternative. It's good to know about this choice. Even though there are a lot of NVIDIA GPUs, sometimes, for example when you want to run on-prem, there might be cases when you can consider using AMD. Or if you're using Google, there are enough cases when it's very good to use TPUs. CUDA, ROCm, and XLA are the corresponding driver and compiler stacks. For NVIDIA, you would use CUDA.

For AMD, you would use ROCm. For TPUs, you would use XLA. Simply because there's a team behind each of those stacks, they've done an enormous amount of optimization which you don't have to care about. For example, if you just take XLA on TPUs, there's a dedicated team that tries to optimize inference and fine-tuning using PyTorch. You don't even have to think about it. You just stand on the shoulders of these giants without worrying about low-level work, things like kernel optimization, even though, of course, you can do that if you want.
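
As a small sketch of how little this choice leaks into everyday code, the snippet below picks whichever accelerator stack is available. Note that ROCm builds of PyTorch expose themselves through the same torch.cuda API, and the torch_xla import only succeeds when the XLA/TPU package is installed.

```python
# Pick an accelerator: CUDA (NVIDIA), ROCm (AMD, via the same torch.cuda API),
# XLA (TPU, via the optional torch_xla package), or fall back to CPU.
import torch

def pick_device():
    if torch.cuda.is_available():  # true for both CUDA and ROCm builds of PyTorch
        return torch.device("cuda")
    try:
        import torch_xla.core.xla_model as xm  # only present when torch_xla is installed
        return xm.xla_device()                 # a TPU (or other XLA) device
    except ImportError:
        return torch.device("cpu")

device = pick_device()
x = torch.randn(2, 2, device=device)
print(device, x @ x)
```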

Then there are frameworks for inference. The most known are vLLM, TGI, and NIM. What you really need to know is that they are slightly different, but they have a lot in common, and they offer pretty much everything you would need for inference. You might have heard of many different optimizations, speculative decoding, batching, and so on; these are already built in, so you don't have to worry about them at all. vLLM and TGI are cross-platform; NIM is NVIDIA only. When it comes to training, TRL is the most known framework. It's by Hugging Face, and it helps you do RLHF, Reinforcement Learning from Human Feedback, supervised fine-tuning, and DPO.
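
Stepping back to the inference frameworks for a moment, here is a minimal sketch of offline batched generation with vLLM; the model id and sampling parameters are illustrative assumptions, and scheduling and batching happen behind the scenes.

```python
# Minimal offline inference with vLLM; continuous batching is handled internally.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # illustrative model id
params = SamplingParams(temperature=0.7, max_tokens=64)

outputs = llm.generate(
    ["What is low-rank adaptation?", "Why quantize a model before deployment?"],
    params,
)
for out in outputs:
    print(out.outputs[0].text.strip())
```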

TRL has all sorts of optimizations for fine-tuning, which you also don't have to worry about. It's a library with a very good developer experience, so I totally recommend it. If you want to fine-tune one of the most recent LLMs, just go to TRL. There are a lot of tutorials and examples showing how to use it, and it's pretty easy to use. Finally, there's Axolotl. It's a wrapper around tools like TRL which makes fine-tuning much simpler; it's basically a framework for fine-tuning. Most of the time, if you want to do very classical or typical fine-tuning, you will just go with Axolotl.
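
For supervised fine-tuning specifically, here is a minimal TRL sketch; the dataset and model id are illustrative assumptions, and a wrapper like Axolotl drives essentially the same machinery from a declarative config.

```python
# Minimal supervised fine-tuning (SFT) sketch with TRL.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# An assumed instruction dataset in a format SFTTrainer understands.
dataset = load_dataset("trl-lib/Capybara", split="train")

args = SFTConfig(output_dir="llama-sft", per_device_train_batch_size=1)
trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",  # TRL also accepts a model id string
    args=args,
    train_dataset=dataset,
)
trainer.train()
```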

dstack is a project which my team is working on, so let me provide a few insights about it. dstack is a container orchestrator which is vendor-agnostic, meaning it can run on any accelerator, on any cloud, or on-prem. Think of it as Docker, except that its interface is designed for AI researchers to simplify development, training, and deployment. You define what you want as a simple YAML file, and then you don't have to worry about which cloud provider you use, or whether you are on-prem; you can run any AI workload without managing the containers yourself.

Questions and Answers

Participant 1: Regarding the first slide about the necessity of fine-tuned models. If we're talking about the big LLMs, and setting aside the high cost in terms of consumption of the models, does it make sense to go down the rabbit hole of fine-tuning, rather than just selecting the more expensive model, spending a little more on tokens, and not doing that at all? In your opinion, what are the criteria for doing the fine-tuning, rather than using a bigger model, maybe proprietary, maybe open source, hosted somewhere, grok or something, and paying for it?

Cheptsov: When do we need to fine-tune a model? When should we simply use a bigger model? And instead of an open-source model, when should we use a proprietary model?
If you ever face that situation in reality, you would quickly understand that it highly depends on the resources you have at hand and the necessity to reduce costs. Most of the time, a team starts with something that gives you a quality baseline, and you see: ok, this model actually does exactly what I want. Now let me think about how to optimize it. This is also a chance to bring up the premature optimization topic.

Basically, the idea is that once you know this model works for you, and sometimes it might even be a proprietary model, you take OpenAI, you use it, and you see that it works. Now you're thinking: how do I make it work given my resources? Then you may realize that the only way to do it is to fine-tune. This is going to be experimentation anyway. You would probably try several different approaches and compare the options: you try to fine-tune, you see where you get better performance, and then you compare and choose between what you have. If a smaller model that you fine-tune yourself gets you where you want, then of course you would use it, because it will reduce the cost. As for fine-tuning, the fine-tuning costs are a lot less than the inference costs.

Especially if you are into fine-tuning: fine-tuning is done from time to time, but inference runs every second. It depends on the scale, of course, but you would always optimize for inference. That means that sometimes you actually have to fine-tune, and that's the best way. The better the fine-tuning step is, the lower the cost of inference is going to be.

How do I choose between a proprietary model and an open-source model? It's not even about the quality of the model; it's mostly about whether you are allowed to use the proprietary one or not. That's one thing. It's a separate question, and there are probably other concerns. If you can get where you want by using the proprietary one, you should go there.

Participant 2: In which fields, and what are the most common use cases today in AI, according to your observations? Also, what observability tools can we use to measure how performant a model is? Are there any tools known on the market that measure the performance of the responses, for example, so that we can also think about autoscaling?

Cheptsov: What are the use cases? Everybody is now trying to figure this out. Based on what I've seen, there's no single cluster of use cases; it's basically everywhere. There are companies that use LLMs to generate clothing designs. There are companies that actually use it for food design. If we made a list, at the top we would of course have the chatbots which everybody is talking about, and then all kinds of copilots.

Then, if you take every industry, financial, healthcare, whatnot, there will always be those chatbots and copilots. What I can personally speculate is that it's going to be a rabbit hole: we're going to see more use cases, and wherever we look, we're going to see them basically everywhere. That's why, when I was talking about the impact of AI and why companies should really consider investing more into making GenAI a part of their competitive advantage, I said it's going to affect all the use cases in some way. It's much easier to answer which use cases are not going to be affected. That gives us more freedom to think about it: we can brainstorm ten use cases which GenAI is not going to affect, which will be much easier than listing all the use cases it will.

The question is not really which use case is going to be affected or not. You probably want to know which use cases you can already use an LLM for now, which is a totally different topic. This is why we need R&D, research and development. You need to take your use case, take those LLMs, and then do some research and experiments. Without actually doing that, you never know whether a particular use case is going to work.

Getting back to evaluations. Everybody asks about observability and how to solve it, as if somebody will finally tell them, and then they'll know and tell everybody else: nobody knows, but now this person will tell us, and everything will be clear. Everybody keeps thinking about evaluations because it's a hard problem. It is a hard problem, and we have certain tools for evaluation, but wherever you look, whether at AI researchers or AI developers, you're going to hear the same story: we don't have enough good evaluation tools, and that's why we need to improve them. Benchmarks are one way. It depends on the use case, but sometimes you can also leverage an LLM as a judge for evaluating your LLM.

For example, whenever you have an expensive LLM and a less expensive LLM, you can always ask the more expensive LLM to judge the other one. In the ideal situation you would involve a human, but you can save cost; as it turns out, LLMs are cheaper than humans, so you can use LLMs for this as well. Finally, there are many observability tools on the market right now that help you track metrics. In the end, those are just metrics.
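
As a minimal sketch of the LLM-as-a-judge idea, the snippet below asks a stronger hosted model to score a cheaper model's answer on a 0-10 scale. The OpenAI client calls are standard, but the model name, rubric, and answer are illustrative assumptions.

```python
# Score a candidate answer with a stronger "judge" model (0-10 scale).
# Requires the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

question = "Explain what LoRA is in one paragraph."
candidate_answer = "LoRA trains small low-rank adapter matrices on top of frozen weights."

judge_prompt = (
    "You are grading an answer to a question.\n"
    f"Question: {question}\n"
    f"Answer: {candidate_answer}\n"
    "Reply with a single integer from 0 (useless) to 10 (excellent)."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative judge model
    messages=[{"role": "user", "content": judge_prompt}],
)
score = int(response.choices[0].message.content.strip())
print(f"Judge score: {score}/10")
```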




MongoDB (NASDAQ:MDB) delivers shareholders respectable 10% CAGR over 5 years …

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

When you buy and hold a stock for the long term, you definitely want it to provide a positive return. Better yet, you’d like to see the share price move up more than the market average. Unfortunately for shareholders, while the MongoDB, Inc. (NASDAQ:MDB) share price is up 62% in the last five years, that’s less than the market return. The last year has been disappointing, with the stock price down 39% in that time.

On the back of a solid 7-day performance, let’s check what role the company’s fundamentals have played in driving long term shareholder returns.

See our latest analysis for MongoDB

Since MongoDB wasn't profitable in the last twelve months, it is unlikely we'll see a strong correlation between its share price and its earnings per share (EPS). Arguably revenue is our next best option. When a company doesn't make profits, we'd generally hope to see good revenue growth. That's because it's hard to be confident a company will be sustainable if revenue growth is negligible and it never makes a profit.

For the last half decade, MongoDB can boast revenue growth at a rate of 31% per year. Even measured against other revenue-focused companies, that's a good result. It's nice to see shareholders have made a profit, but the gain of 10% per year over the period isn't that impressive compared to the overall market. That's surprising given the strong revenue growth. Arguably this falls in a potential sweet spot: modest share price gains but good top-line growth over the long term justify investigation, in our book.

The company’s revenue and earnings (over time) are depicted in the image below (click to see the exact numbers).

NasdaqGM:MDB Earnings and Revenue Growth, February 12th 2025

MongoDB is a well known stock, with plenty of analyst coverage, suggesting some visibility into future growth. You can see what analysts are predicting for MongoDB in this interactive graph of future profit estimates.

A Different Perspective

While the broader market gained around 25% in the last year, MongoDB shareholders lost 39%. Even the share prices of good stocks drop sometimes, but we want to see improvements in the fundamental metrics of a business before getting too interested. On the bright side, long-term shareholders have made money, with a gain of 10% per year over half a decade. If the fundamental data continues to indicate long-term sustainable growth, the current sell-off could be an opportunity worth considering. While it is well worth considering the different impacts that market conditions can have on the share price, there are other factors that are even more important. Even so, be aware that MongoDB is showing 2 warning signs in our investment analysis that you should know about…

If you are like me, then you will not want to miss this free list of undervalued small caps that insiders are buying.

Please note, the market returns quoted in this article reflect the market weighted average returns of stocks that currently trade on American exchanges.


Have feedback on this article? Concerned about the content? Get in touch with us directly. Alternatively, email editorial-team (at) simplywallst.com.

This article by Simply Wall St is general in nature. We provide commentary based on historical data and analyst forecasts only using an unbiased methodology and our articles are not intended to be financial advice. It does not constitute a recommendation to buy or sell any stock, and does not take account of your objectives, or your financial situation. We aim to bring you long-term focused analysis driven by fundamental data. Note that our analysis may not factor in the latest price-sensitive company announcements or qualitative material. Simply Wall St has no position in any stocks mentioned.

Article originally posted on mongodb google news. Visit mongodb google news
