Barclays Adjusts Price Target for MongoDB (MDB) Ahead of Earnings | MDB Stock News



Barclays has revised its price target for MongoDB (MDB, Financial), lowering it from $280 to $252 while maintaining an Overweight rating on the stock. The adjustment comes ahead of the company’s first-quarter earnings report. Despite anticipating strong Q1 results, Barclays suggests that guidance may be cautious, which could influence stock positioning. The overall market for off-cycle software is expected to reflect trends seen in on-cycle counterparts. Barclays continues to show interest in companies that are currently out of favor, such as Salesforce and Workday, while also noting the potential in Intuit. Investors are advised to consider positioning carefully as earnings season unfolds.

Wall Street Analysts Forecast


Based on the one-year price targets offered by 34 analysts, the average target price for MongoDB Inc (MDB, Financial) is $273.96, with a high estimate of $520.00 and a low estimate of $160.00. The average target implies an upside of 43.87% from the current price of $190.43. More detailed estimate data can be found on the MongoDB Inc (MDB) Forecast page.
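For readers who want to verify the implied-upside figures, the arithmetic is simply the percentage gap between a target price and the current price. The short sketch below plugs in the numbers quoted in this article (the same formula applies to the GF Value estimate cited further down); it only illustrates the calculation and is not GuruFocus code.

```java
// Quick sketch of the upside arithmetic: upside % = (target / current - 1) * 100.
public class ImpliedUpside {

    static double upsidePercent(double target, double current) {
        return (target / current - 1.0) * 100.0;
    }

    public static void main(String[] args) {
        // Figures quoted in this article.
        System.out.printf("Analyst target upside: %.2f%%%n", upsidePercent(273.96, 190.43));  // roughly 43.9%
        System.out.printf("GF Value upside:       %.2f%%%n", upsidePercent(438.57, 190.425)); // roughly 130.3%
    }
}
```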

Based on the consensus recommendation from 38 brokerage firms, MongoDB Inc’s (MDB, Financial) average brokerage recommendation is currently 2.0, indicating “Outperform” status. The rating scale ranges from 1 to 5, where 1 signifies Strong Buy, and 5 denotes Sell.

Based on GuruFocus estimates, the estimated GF Value for MongoDB Inc (MDB, Financial) in one year is $438.57, suggesting an upside of 130.31% from the current price of $190.425. GF Value is GuruFocus’ estimate of the fair value at which the stock should trade. It is calculated based on the historical multiples the stock has traded at, as well as past business growth and future estimates of the business’ performance. More detailed data can be found on the MongoDB Inc (MDB) Summary page.

MDB Key Business Developments

Release Date: March 05, 2025

  • Total Revenue: $548.4 million, a 20% year-over-year increase.
  • Atlas Revenue: Grew 24% year-over-year, representing 71% of total revenue.
  • Non-GAAP Operating Income: $112.5 million, with a 21% operating margin.
  • Net Income: $108.4 million or $1.28 per share.
  • Customer Count: Over 54,500 customers, with over 7,500 direct sales customers.
  • Gross Margin: 75%, down from 77% in the previous year.
  • Free Cash Flow: $22.9 million for the quarter.
  • Cash and Cash Equivalents: $2.3 billion, with a debt-free balance sheet.
  • Fiscal Year 2026 Revenue Guidance: $2.24 billion to $2.28 billion.
  • Fiscal Year 2026 Non-GAAP Operating Income Guidance: $210 million to $230 million.
  • Fiscal Year 2026 Non-GAAP Net Income Per Share Guidance: $2.44 to $2.62.

For the complete transcript of the earnings call, please refer to the full earnings call transcript.

Positive Points

  • MongoDB Inc (MDB, Financial) reported a 20% year-over-year revenue increase, surpassing the high end of their guidance.
  • Atlas revenue grew 24% year over year, now representing 71% of total revenue.
  • The company achieved a non-GAAP operating income of $112.5 million, resulting in a 21% non-GAAP operating margin.
  • MongoDB Inc (MDB) ended the quarter with over 54,500 customers, indicating strong customer growth.
  • The company is optimistic about the long-term opportunity in AI, particularly with the acquisition of Voyage AI to enhance AI application trustworthiness.

Negative Points

  • Non-Atlas business is expected to be a headwind in fiscal ’26 due to fewer multi-year deals and a shift of workloads to Atlas.
  • Operating margin guidance for fiscal ’26 is lower at 10%, down from 15% in fiscal ’25, due to reduced multi-year license revenue and increased R&D investments.
  • The company anticipates a high-single-digit decline in non-Atlas subscription revenue for the year.
  • MongoDB Inc (MDB) expects only modest incremental revenue growth from AI in fiscal ’26 as enterprises are still developing AI skills.
  • The company faces challenges in modernizing legacy applications, which is a complex and resource-intensive process.

Article originally posted on mongodb google news. Visit mongodb google news



2 Mid-Cap Stocks Worth Your Attention and 1 to Approach with Caution – Yahoo Finance



Mid-cap stocks often strike the right balance between having proven business models and market opportunities that can support $100 billion corporations. However, they face intense competition from scaled industry giants and can be disrupted by new innovative players vying for a slice of the pie.

This is precisely where StockStory comes in – we do the heavy lifting to identify companies with solid fundamentals so you can invest with confidence. Keeping that in mind, here are two mid-cap stocks with massive growth potential and one that may have trouble.

Market Cap: $11.05 billion

Headquartered in Ohio, Lincoln Electric (NASDAQ:LECO) manufactures and sells welding equipment for various industries.

Why Are We Cautious About LECO?

  1. Organic revenue growth fell short of our benchmarks over the past two years and implies it may need to improve its products, pricing, or go-to-market strategy

  2. Projected sales growth of 2.1% for the next 12 months suggests sluggish demand

  3. Earnings growth underperformed the sector average over the last two years as its EPS grew by just 5.4% annually

Lincoln Electric’s stock price of $197.92 implies a valuation ratio of 21x forward P/E. Dive into our free research report to see why there are better opportunities than LECO.

Market Cap: $15.47 billion

Started in 2007 by the team behind Google’s ad platform, DoubleClick, MongoDB offers database-as-a-service that helps companies store large volumes of semi-structured data.

Why Are We Fans of MDB?

  1. ARR trends over the last year show it’s maintaining a steady flow of long-term contracts that contribute positively to its revenue predictability

  2. High switching costs and customer loyalty are evident in its net revenue retention rate of 119%

  3. Free cash flow margin is anticipated to expand by 5.1 percentage points over the next year, providing additional flexibility for investments and share buybacks/dividends

At $190.20 per share, MongoDB trades at 7.1x forward price-to-sales. Is now the time to initiate a position? See for yourself in our in-depth research report, it’s free.

Market Cap: $27.32 billion

Operating under multiple brands like Orkin and HomeTeam Pest Defense, Rollins (NYSE:ROL) provides pest and wildlife control services to residential and commercial customers.

Why Will ROL Beat the Market?

  1. Impressive 11.9% annual revenue growth over the last two years indicates it’s winning market share this cycle

  2. Offerings are difficult to replicate at scale and result in a best-in-class gross margin of 52.1%

  3. ROL is a free cash flow machine with the flexibility to invest in growth initiatives or return capital to shareholders

Rollins is trading at $56.37 per share, or 49x forward P/E. Is now a good time to buy? Find out in our full research report, it’s free.

The market surged in 2024 and reached record highs after Donald Trump’s presidential victory in November, but questions about new economic policies are adding much uncertainty for 2025.

While the crowd speculates what might happen next, we’re homing in on the companies that can succeed regardless of the political or macroeconomic environment. Put yourself in the driver’s seat and build a durable portfolio by checking out our Top 5 Strong Momentum Stocks for this week. This is a curated list of our High Quality stocks that have generated a market-beating return of 176% over the last five years.

Stocks that made our list in 2020 include now familiar names such as Nvidia (+1,545% between March 2020 and March 2025) as well as under-the-radar businesses like the once-small-cap company Exlservice (+354% five-year return). Find your next big winner with StockStory today for free.

Article originally posted on mongodb google news. Visit mongodb google news



MongoDB, Inc. (NASDAQ:MDB) Stock Position Trimmed by Cresset Asset Management LLC



Cresset Asset Management LLC cut its position in MongoDB, Inc. (NASDAQ:MDB) by 47.2% during the 4th quarter, according to its most recent filing with the Securities & Exchange Commission. The fund owned 2,838 shares of the company’s stock after selling 2,533 shares during the quarter. Cresset Asset Management LLC’s holdings in MongoDB were worth $661,000 at the end of the most recent quarter.

Other hedge funds have also recently bought and sold shares of the company. Strategic Investment Solutions Inc. IL bought a new stake in shares of MongoDB during the 4th quarter worth about $29,000. NCP Inc. bought a new stake in shares of MongoDB during the 4th quarter worth about $35,000. Coppell Advisory Solutions LLC raised its position in shares of MongoDB by 364.0% during the 4th quarter. Coppell Advisory Solutions LLC now owns 232 shares of the company’s stock worth $54,000 after buying an additional 182 shares in the last quarter. Smartleaf Asset Management LLC raised its position in shares of MongoDB by 56.8% during the 4th quarter. Smartleaf Asset Management LLC now owns 370 shares of the company’s stock worth $87,000 after buying an additional 134 shares in the last quarter. Finally, Manchester Capital Management LLC raised its position in shares of MongoDB by 57.4% during the 4th quarter. Manchester Capital Management LLC now owns 384 shares of the company’s stock worth $89,000 after buying an additional 140 shares in the last quarter. 89.29% of the stock is currently owned by institutional investors.

MongoDB Stock Performance

Shares of MDB opened at $195.90 on Wednesday. The company’s 50 day moving average price is $174.62 and its 200 day moving average price is $241.04. MongoDB, Inc. has a 52 week low of $140.78 and a 52 week high of $379.06. The company has a market capitalization of $15.90 billion, a PE ratio of -71.50 and a beta of 1.49.

MongoDB (NASDAQ:MDB) last announced its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 EPS for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). The company had revenue of $548.40 million for the quarter, compared to analysts’ expectations of $519.65 million. MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. During the same period in the previous year, the firm posted $0.86 earnings per share. On average, equities analysts forecast that MongoDB, Inc. will post -1.78 earnings per share for the current fiscal year.

Analysts Set New Price Targets

A number of analysts have commented on MDB shares. Royal Bank of Canada dropped their price target on shares of MongoDB from $400.00 to $320.00 and set an “outperform” rating on the stock in a research report on Thursday, March 6th. Mizuho lowered their price objective on shares of MongoDB from $250.00 to $190.00 and set a “neutral” rating on the stock in a research note on Tuesday, April 15th. Cantor Fitzgerald assumed coverage on shares of MongoDB in a research note on Wednesday, March 5th. They issued an “overweight” rating and a $344.00 price objective on the stock. Wedbush lowered their price objective on shares of MongoDB from $360.00 to $300.00 and set an “outperform” rating on the stock in a research note on Thursday, March 6th. Finally, Daiwa America raised shares of MongoDB to a “strong-buy” rating in a research note on Tuesday, April 1st. Eight equities research analysts have rated the stock with a hold rating, twenty-four have assigned a buy rating and one has issued a strong buy rating to the stock. According to data from MarketBeat, the company has an average rating of “Moderate Buy” and an average price target of $294.78.


Insider Buying and Selling at MongoDB

In other MongoDB news, CEO Dev Ittycheria sold 18,512 shares of MongoDB stock in a transaction that occurred on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total value of $3,207,389.12. Following the completion of the sale, the chief executive officer now directly owns 268,948 shares in the company, valued at $46,597,930.48. The trade was a 6.44% decrease in their ownership of the stock. The transaction was disclosed in a legal filing with the Securities & Exchange Commission, which is available through the SEC website. Also, CFO Srdjan Tanjga sold 525 shares of MongoDB stock in a transaction that occurred on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total transaction of $90,961.50. Following the completion of the sale, the chief financial officer now owns 6,406 shares of the company’s stock, valued at $1,109,903.56. This trade represents a 7.57% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders sold 34,423 shares of company stock worth $7,148,369 over the last ninety days. 3.60% of the stock is currently owned by insiders.

About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.


Article originally posted on mongodb google news. Visit mongodb google news



MongoDB, Inc. (NASDAQ:MDB) Stock Holdings Lowered by Mercer Global Advisors Inc. ADV



Mercer Global Advisors Inc. ADV decreased its holdings in MongoDB, Inc. (NASDAQ:MDB) by 52.4% in the 4th quarter, according to its most recent disclosure with the SEC. The institutional investor owned 2,008 shares of the company’s stock after selling 2,213 shares during the quarter. Mercer Global Advisors Inc. ADV’s holdings in MongoDB were worth $467,000 as of its most recent filing with the SEC.

Several other institutional investors have also bought and sold shares of MDB. Caisse DE Depot ET Placement DU Quebec grew its holdings in shares of MongoDB by 196.0% in the 4th quarter. Caisse DE Depot ET Placement DU Quebec now owns 44,103 shares of the company’s stock worth $10,268,000 after acquiring an additional 29,203 shares during the last quarter. Utah Retirement Systems grew its stake in MongoDB by 1.7% in the fourth quarter. Utah Retirement Systems now owns 11,840 shares of the company’s stock valued at $2,756,000 after purchasing an additional 200 shares in the last quarter. AQR Capital Management LLC increased its position in MongoDB by 92.1% in the 4th quarter. AQR Capital Management LLC now owns 37,126 shares of the company’s stock worth $8,643,000 after purchasing an additional 17,802 shares during the last quarter. Lido Advisors LLC raised its position in MongoDB by 74.8% during the fourth quarter. Lido Advisors LLC now owns 1,208 shares of the company’s stock valued at $281,000 after acquiring an additional 517 shares in the last quarter. Finally, Northern Trust Corp increased its stake in shares of MongoDB by 6.4% during the 4th quarter. Northern Trust Corp now owns 468,010 shares of the company’s stock worth $108,957,000 after purchasing an additional 27,981 shares during the last quarter. Institutional investors and hedge funds own 89.29% of the company’s stock.

MongoDB Stock Up 2.2%

NASDAQ:MDB opened at $195.90 on Wednesday. The firm’s fifty day simple moving average is $174.62 and its 200 day simple moving average is $241.04. The stock has a market cap of $15.90 billion, a P/E ratio of -71.50 and a beta of 1.49. MongoDB, Inc. has a twelve month low of $140.78 and a twelve month high of $379.06.

MongoDB (NASDAQ:MDB) last announced its earnings results on Wednesday, March 5th. The company reported $0.19 EPS for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The business had revenue of $548.40 million for the quarter, compared to the consensus estimate of $519.65 million. During the same quarter last year, the firm posted $0.86 EPS. On average, analysts expect that MongoDB, Inc. will post -1.78 EPS for the current fiscal year.

Analysts Set New Price Targets

MDB has been the topic of a number of recent analyst reports. Citigroup dropped their price objective on MongoDB from $430.00 to $330.00 and set a “buy” rating on the stock in a research note on Tuesday, April 1st. Redburn Atlantic upgraded shares of MongoDB from a “sell” rating to a “neutral” rating and set a $170.00 target price on the stock in a research report on Thursday, April 17th. Morgan Stanley dropped their target price on shares of MongoDB from $315.00 to $235.00 and set an “overweight” rating for the company in a research report on Wednesday, April 16th. Oppenheimer dropped their price objective on MongoDB from $400.00 to $330.00 and set an “outperform” rating for the company in a report on Thursday, March 6th. Finally, Canaccord Genuity Group reduced their price target on MongoDB from $385.00 to $320.00 and set a “buy” rating on the stock in a research report on Thursday, March 6th. Eight analysts have rated the stock with a hold rating, twenty-four have issued a buy rating and one has given a strong buy rating to the company. According to MarketBeat, MongoDB currently has an average rating of “Moderate Buy” and a consensus target price of $294.78.


Insider Buying and Selling

In related news, CEO Dev Ittycheria sold 18,512 shares of the stock in a transaction that occurred on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total transaction of $3,207,389.12. Following the transaction, the chief executive officer now owns 268,948 shares in the company, valued at $46,597,930.48. This represents a 6.44% decrease in their ownership of the stock. The sale was disclosed in a document filed with the Securities & Exchange Commission, which is available at the SEC website. Also, Director Dwight A. Merriman sold 885 shares of MongoDB stock in a transaction on Tuesday, February 18th. The stock was sold at an average price of $292.05, for a total value of $258,464.25. Following the completion of the sale, the director now directly owns 83,845 shares of the company’s stock, valued at $24,486,932.25. The trade was a 1.04% decrease in their position. The disclosure for this sale can be found here. Insiders have sold a total of 34,423 shares of company stock worth $7,148,369 in the last quarter. Corporate insiders own 3.60% of the company’s stock.

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Read More

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.


Article originally posted on mongodb google news. Visit mongodb google news



Mistral Unveils Medium 3: Enterprise-Ready Language Model

MMS Founder
MMS Robert Krzaczynski

Article originally posted on InfoQ. Visit InfoQ

Mistral AI has unveiled Mistral Medium 3, a mid-sized language model aimed at enterprises seeking a balance between cost-efficiency, strong performance, and flexible deployment options. The model is now available through Mistral’s platform and Amazon SageMaker, with further releases planned for IBM WatsonX, Azure AI Foundry, Google Cloud Vertex AI, and NVIDIA NIM.

According to Mistral, Medium 3 delivers performance comparable to larger models such as Claude Sonnet 3.7, reaching over 90% of its scores on internal benchmark tests, while maintaining a lower cost, estimated at $0.40 per million input tokens and $2.00 per million output tokens. The company reports that the model surpasses open models like LLaMA 4 Maverick and outperforms commercial offerings, particularly in coding and STEM-related tasks.
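As a rough illustration of what that pricing means in practice, the sketch below multiplies the quoted per-million-token rates by a hypothetical monthly workload. The rates are the ones cited above; the workload figures are invented purely for the example.

```java
// Back-of-the-envelope cost estimate using the per-million-token prices quoted above.
// The workload figures below are hypothetical and only illustrate the arithmetic.
public class MediumCostEstimate {

    static final double INPUT_USD_PER_MILLION_TOKENS  = 0.40;
    static final double OUTPUT_USD_PER_MILLION_TOKENS = 2.00;

    static double costUsd(long inputTokens, long outputTokens) {
        return inputTokens  / 1_000_000.0 * INPUT_USD_PER_MILLION_TOKENS
             + outputTokens / 1_000_000.0 * OUTPUT_USD_PER_MILLION_TOKENS;
    }

    public static void main(String[] args) {
        long monthlyInput  = 200_000_000L; // 200M input tokens per month (hypothetical)
        long monthlyOutput =  50_000_000L; //  50M output tokens per month (hypothetical)
        // 200 * $0.40 + 50 * $2.00 = $80 + $100 = $180
        System.out.printf("Estimated monthly cost: $%.2f%n", costUsd(monthlyInput, monthlyOutput));
    }
}
```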


Source: Mistral AI Blog

The model supports deployment in a variety of environments, including hybrid and fully on-premises configurations using systems with as few as four GPUs. It also offers customization options, including post-training, fine-tuning, and integration into private enterprise data and tools.

In real-world use cases, Mistral Medium 3 has shown promise in coding, customer support automation, and technical data analysis. The company notes early adoption in the finance, energy, and healthcare sectors, emphasizing the model’s compatibility with domain-specific applications.

Still, not all community feedback has been positive. One Reddit user commented:

It performs worse than DeepSeek models, yet its API is more expensive. And since they did not release the weights, it is unclear why anyone would pay for this.

This sentiment reflects some ongoing debate about the value of proprietary models versus open-weight alternatives, particularly in developer and research communities that prioritize transparency and fine-tuned control.

On the other hand, the model has also received support from enterprise professionals. Arnaud Bories, Sales Director Emerging at Okta, remarked:

Huge congratulations to the entire Mistral AI team on this exciting launch. The focus on enterprise-grade customization and security really stands out. At Okta, we are always exploring how identity can be a catalyst for secure and seamless AI adoption—looking forward to seeing how we might support and enhance these innovations together.

As the enterprise AI market continues to expand, Mistral Medium 3 enters a competitive space, offering a model that prioritizes deployment flexibility, cost control, and integration readiness.




Expanding Continuous Improvement beyond Agile Practices

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

After being on an agile journey where practices have primarily been centered on IT, a company is now exploring ways to extend them beyond IT and scale their approach. At Agile Tampere, Ramya Sriram presented how they focus on continuous improvement through agile practices, feedback, and customized maturity assessments. Emphasizing flow metrics with a strong learning culture, they aim for efficiency and sustainable growth.

Sriram mentioned that their company has reached a strong position in its agile journey, and is reflecting on how they want to evolve and transform further:

This marks the beginning of what we’re calling Agile 2.0, where we are focusing on scaling and broadening our impact. Teams outside IT are also learning more about agile practices and we are also studying agile hardware to understand its full usage.

We’re proud of our current maturity level, but as part of our commitment to continuous improvement, we are always learning and adapting, Sriram said. Their next steps are guided by insights drawn from maturity assessments, interviews, and feedback, focusing on areas such as teamwork, agile practices, value delivery, learning and improvement, quality practices, customer satisfaction, organizational alignment, and delivery execution. Each of these areas presents both strengths and opportunities, which require ongoing evaluation and action, Sriram mentioned.

To support this evolution, Sriram mentioned that her company has initiated efforts to expand agile training and coaching, providing teams with the tools and knowledge they need to grow. They have also conducted bootcamps to collaboratively identify challenges, share best practices, and tackle impediments together. This collaborative approach is laying the foundation for a more integrated and impactful agile journey, Sriram said.

Sriram mentioned that they have been using flow metrics and maturity assessment tools to level up their agile practices. These tools have been game-changers for planning outcomes, boosting efficiency, and delivering products with more predictability and agility, she explained:

We started with the SAFe toolset, specifically the Facilitating SAFe Assessments framework, for our maturity assessments. But instead of a one-size-fits-all approach, we customize the templates to fit our unique needs. This helped us focus on what really mattered to our teams and stakeholders.

Their teams keep a close eye on their flow metrics. Sriram mentioned that they have been inspired by measure and grow from the Scaled Agile Framework. These metrics aren’t just numbers—they reveal bottlenecks, highlight challenges, and guide us toward better planning and smoother workflows, Sriram explained:

It’s all about finding balance. If teams only focus on technically completing requirements without considering real stakeholder needs, people aren’t happy. On the flip side, if teams take on too little work, stakeholders might feel like their priorities are constantly being pushed to the next quarter.

The sweet spot lies in balancing work intake with capacity while staying flexible enough to adapt to changing requirements—that’s what agility is all about, Sriram said.

In today’s era of digital transformation, with advancements like Artificial Intelligence, Machine Learning, and Generative AI, the question often arises—are these a boom or a bane? Regardless of the innovations, the cornerstone remains the same: continuous improvement and learning, Sriram said. These drive innovation and pave the way for sustainable progress, she concluded.

InfoQ interviewed Ramya Sriram about continuous improvement.

InfoQ: How do you use feedback from customers for improvement?

Ramya Sriram: Customer feedback plays a big role. Through surveys, retrospectives, and demos, we gather insights that help us continuously improve. Sometimes, it’s a big change, like tweaking release cycles. Other times, it’s small but impactful—like improving communication during a tribe demo, scheduling testers more effectively for UAT, or enhancing end-user documentation.

This continuous feedback loop keeps us aligned with what truly matters, helping us deliver better results while staying adaptable and grounded.

InfoQ: What’s your advice for sustainable improvement in software organizations?

Sriram: For me, the focus is on prioritizing quality over quantity while fostering strong feedback loops to continuously evolve, learn, and refine how we work. It’s crucial to nurture and even deepen our commitment to a culture of continuous learning and innovation. Breaking down silos and promoting cross-team collaboration is essential to this effort.

We must also keep a close eye on measuring and optimizing workflow by addressing bottlenecks and enhancing efficiency, all while emphasizing the importance of people and ensuring a culture of psychological safety.




Presentation: Renovate to Innovate: Fundamentals of Transforming Legacy Architecture

MMS Founder
MMS Rashmi Venugopal

Article originally posted on InfoQ. Visit InfoQ

Transcript

Venugopal: These are some typical growth trajectories of successful companies. The hockey stick being the most sought after and popular one. The software systems that worked well during the initial phases of a company, phase A, will not be sufficient as you prepare to scale your business in phase B. The exponential growth phase, phase C, requires drastically different software capabilities than A or B. Successful companies, like the ones that live to see exponential growth, outgrow their software systems one way or the other. I’m making the case that legacy systems are a byproduct of success. Despite being a byproduct of success, legacy systems have a bad rep. Just the word legacy evokes strong emotions, and for good reason. We associate legacy with technical debt, painful migrations, high maintenance costs, and poor developer experience.

For the long-term success of your company, build the muscle to renovate legacy systems. While legacy systems are a byproduct of success in the past, success in the future depends on your ability to not let legacy systems get in the way of the growth for your company. That brings me to my first takeaway, that legacy systems are inevitable. Don’t let them weigh you down. Make them work for you instead. In fact, this takeaway is the inspiration for my talk.

Background

Welcome to Renovate to Innovate: The Fundamentals of Transforming Legacy Architecture. I’m Rashmi Venugopal, a staff engineer at Netflix. I spent the last decade building and operating reliable distributed systems at scale. During that time, I’ve been fortunate to work with, learn from, and grow amongst some of the brightest minds at Microsoft, CMU, Uber, and most recently at Netflix. First, we’ll unpack what legacy systems are and why they exist. The focus of this talk is technical renovation. We’ll cover what it means, when a technical renovation is applicable, and discuss strategies for effective renovation.

Legacy Systems – What?

What is the first thing that comes to mind when you think of the word legacy? Old, unsupported, unchangeable, and no tech. Let’s see how you all match up with OpenAI’s word cloud for the term legacy. I see a few in here, not bad. As you can tell, the term legacy is quite overloaded. Let’s spend a couple minutes to get on the same page about what legacy means in the context of this talk. I define a system as legacy if it is incapable of keeping up with business requirements. After all, your software systems exist to serve your business goals. Let’s make this more concrete and talk through some symptoms of legacy systems. There are numerous dimensions of complexity in software engineering. Legacy systems usually have substantial complexity in one or more of these dimensions. Working across a large number of teams and people slows engineers down. This is because of coordination tasks. That is organizational complexity.

Operational complexity is when there's insufficient automation, testing, monitoring, and observability, leading to high operational costs. Cognitive complexity is when institutional knowledge builds up, documentation becomes outdated, or people turn over. Why is complexity such a bad thing after all? As complexity goes up, innovation velocity goes down. There's a direct correlation between complexity and innovation velocity. Product and project managers expect productivity to scale linearly with complexity. Engineers, we know better. Our past experience has primed us to be more pragmatic. It is a sign of a legacy system when the reality of how long it takes far exceeds expectations in a bad way.

Another sign of legacy system is degraded quality of experience. Quality of experience measures the overall satisfaction of end users when they interact with a system. I’m sure we’ve all experienced the very real frustration of waiting many seconds for a page to load. Amazon has an infamous study where they quantify the impact of latency on their business. They find that every 100-millisecond increase in latency impacts their sales by 1%. A dip in the quality of experience despite your best efforts to tune them is a symptom of a legacy system. To recap, I consider a system to be legacy if it is incapable of keeping up with business requirements.

Legacy Systems – Why?

Now that we’ve covered what legacy systems are, let’s talk about why software systems become legacy in the first place. The most obvious reason is the rapid pace at which technology advances today. Who here has used two or more of these devices? Systems that were once considered cutting edge struggle to keep up with modern industry standards just a few years down the line. Technology choices become outdated. In addition to this obvious reason, there are two schools of thought that explain software degradation. The first school of thought is the bit rot theory. It states that software gradually degrades over time due to incremental changes to itself or its surroundings. An unused code path is an example of bit rot, so is code duplication. A lack of documentation or a loss of knowledge is yet another example. In theory, bit rot can be kept in check with good software engineering practices.

In reality, bit rot accumulates over time. The second school of thought is the Law of Architectural Entropy. It states that software systems lose their integrity when features are added without much consideration of the original architecture. The primary driving factor for architectural entropy is the real and unintentional tradeoffs that engineers have to make in order to deliver results faster or meet deadlines. Imagine the growth of a successful e-commerce company. In the early stages, they're focused on establishing a thriving business. Evolving their architecture to be perfect is just not a priority. In fact, changes to the architecture are driven by business needs.

In this example, every new feature is added to the existing monolith, steadily increasing the architectural entropy. In the real world, software systems are affected by all of these phenomena. This explains why outdated and legacy systems are more commonplace than we’d like them to be. Now that we’ve agreed that legacy systems are commonplace, let’s ask ourselves, do we always proactively renovate legacy systems? I wish we did. The inevitability of software degradation on one hand, combined with the lack of renovation of legacy systems on the other, leaves us with systems that are difficult to maintain, understand, and extend. These are the systems that are very likely to get in the way of growth and success for your organization’s future.

Technical Renovation – What?

That brings us to technical renovation. What does technical renovation actually mean? I define technical renovation as the act of upgrading or replacing outdated systems and technology to improve the software’s state of affairs. Every time I bring up technical renovation, I get asked, how does refactoring fit in? Why is technical renovation different from refactoring? I’d like to address the elephant in the room with a closet analogy. Refactoring is like organizing your closet. Organizing involves moving things around. You make it easy to access all pieces of your clothing. You might even get rid of some stuff to make room for more things.

This whole process has a side effect of reminding you what you already have and it potentially influences your future wardrobe investments. Renovation is when you break down the walls of your closet to replace a regular one with a walk-in one. Renovation goes beyond just moving things around. Renovation is when you make a drastic change to shake things up and the end result gives you capabilities that you did not have before. Renovation is usually a much larger undertaking and therefore occurs less frequently than refactoring. While this talk is about technical renovation, I just wanted to pause to say that refactoring is valuable. It is a valid strategy to maintain a healthy codebase and there’s many benefits to maintaining a healthy codebase.

Technical Renovation – When?

Now that we’ve discussed what technical renovation is and how it’s different from refactoring, let’s review some scenarios for which technical renovation is applicable. In other words, if technical renovation were a hammer, what do the nails look like? As your business needs evolve, attempting to reuse existing systems to solve for something drastically different doesn’t typically end very well. Here’s an example of a business-driven renovation. Netflix evolved from a DVD distribution company to a streaming service. The capabilities required to deliver DVDs is drastically different from the capabilities required to stream video on-demand. The systems that served Netflix well in the DVD era isn’t going to be sufficient to run a successful streaming service.

The point being, drastic changes in business needs eventually call for a renovation. Technical renovation is also a valid strategy for an ecosystem-driven change. When the ecosystem changes, the underlying assumptions built into the existing systems are challenged. If you’re going from hosting REST APIs to now serving data behind a GraphQL gateway, a renovation is in order. Technical debt occurs when you borrow from the future to make a tradeoff for the present. Even with the most well-intentioned engineers, there are scenarios when technical debt accumulates, and accumulates to a point of no return. Unexpected longevity is one such example.

Sometimes software systems turn out to be more successful than anyone imagined they would be. While that specifically is a good problem to have, the unexpected longevity accumulates significant technical debt. That makes technical renovation a viable option to improve the state of affairs. These are some nails for the hammer that is technical renovation. Time for our next takeaway. Use the right tool for the right problem. Leverage renovation for the scenarios similar to the ones we just discussed. Refactor your code as often as it makes sense to do so.

Technical Renovation – Strategies

Let’s talk about how to approach a technical renovation next. I’d like to share four strategies to consider as you embark on your renovation journey. I found these strategies useful to do renovation right. My first strategy is evolutionary architecture. Historically, architecture is viewed as something that has to be developed ahead of time, even before a single line of code gets written. It’s also perceived as something that’s set in stone, never to change. In the world of modern technology, this pre-planned approach to architecture doesn’t keep up with the evolving needs of your business. Here’s an alternate approach to consider for your renovation initiative. Evolutionary architecture emphasizes incremental changes because complex systems cannot be fully designed upfront. It advocates for evolvability. When your priority changes, your tools should change with it. How do we make evolvable architecture a reality?

Step one, identify a set of fitness functions that represent the desired qualities of your end state, such as performance, scalability, security. Once you’ve picked the quality that matters the most for your business, use that to inform engineering decisions. In that process, ask yourself some hard questions. Does performance really matter? If yes, by how much? Will users actually notice the difference between a 1-second page load and an 800-millisecond page load? The point is, don’t optimize prematurely. As a rule of thumb, if you don’t regret any of your early decisions, chances are you overengineered. Excessive abstractions or overly generic solutions and premature scalability are some common pitfalls to watch out for.

Step two, invest in continuous delivery. Create an infrastructure that you can use to execute fast. Automate the steps between developing, testing, and releasing a feature. Step three, make small and incremental changes. Making changes in a Big Bang fashion is difficult to get right. Incremental changes make it easy to course correct as you go. The crux of evolutionary architecture is to make small changes, release them often, and use feedback loops to see how well you're doing against your fitness functions. As the business requirements evolve, your fitness functions are going to change. Lean into continuous delivery and incremental changes to keep up with your evolving needs.
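To make the idea of a fitness function concrete, here is a minimal sketch, not taken from the talk, of one encoded as an automated check: it times a stand-in operation, computes a p95 latency, and fails if the value drifts past an assumed 800-millisecond budget. The operation, sample size, and threshold are all assumptions chosen for illustration.

```java
import java.util.Arrays;

// Minimal sketch of a latency "fitness function": measure a stand-in operation,
// compute p95, and fail the check if it exceeds an assumed budget.
public class LatencyFitnessFunction {

    static final long P95_BUDGET_MILLIS = 800; // assumed budget, not from the talk
    static final int SAMPLES = 100;

    // Placeholder for the real operation (rendering a page, calling an endpoint, ...).
    static void operationUnderTest() throws InterruptedException {
        Thread.sleep(5);
    }

    public static void main(String[] args) throws InterruptedException {
        long[] elapsedMillis = new long[SAMPLES];
        for (int i = 0; i < SAMPLES; i++) {
            long start = System.nanoTime();
            operationUnderTest();
            elapsedMillis[i] = (System.nanoTime() - start) / 1_000_000;
        }
        Arrays.sort(elapsedMillis);
        long p95 = elapsedMillis[(int) Math.ceil(SAMPLES * 0.95) - 1];
        if (p95 > P95_BUDGET_MILLIS) {
            throw new AssertionError("Fitness function violated: p95 " + p95 + " ms > budget " + P95_BUDGET_MILLIS + " ms");
        }
        System.out.println("Fitness function satisfied: p95 = " + p95 + " ms");
    }
}
```

Wired into continuous delivery (step two), a check like this gives you the feedback loop the talk calls for: every small, incremental change is measured against the fitness function before it ships.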

Speaking of incremental changes, my next strategy breaks down an incremental approach to renovation: make it work, make it right, make it fast. I'm sure most of us have heard this quote from Kent Beck, which we apply to writing code. I'm making the case that this applies more broadly to software engineering, including your renovation initiatives. The first part, make it work, is all about getting the assurance that your problem can be solved one way or the other. From a coding perspective, make it work is all about giving yourself the permission to write ugly, unreadable code. Even if it means you have to hardcode inputs along the way, that's fine.

In the case of your renovation, use this time to validate your technology choices, handle the common use cases, and eliminate some bad solutions along the way. If your integration is just barely held together by duct tapes, so be it. Now is not the time for perfection. I consider a proof of concept as a good outcome of the make it work phase. It’s time to make it right once you’ve established the validity of your solution. From a coding perspective, you would prioritize readability, adding tests, or even refactoring. For your renovation initiative, make sure your edge cases are accounted for, your fitness functions are being met, and even test against some real-world users to see how your solution holds up. I view a minimum viable product as a good outcome of this phase. A working solution that’s a natural extension of your proof of concept. That brings us to make it fast.

From a coding perspective, make it fast feels like a performance thing. It gets interpreted very literally. How do I make this piece of code run faster? For your renovation initiative, however, make it fast is so much more than just performance. It’s adding documentation. It’s integrating with continuous delivery. It’s setting up observability and monitoring. All this speeds up your development process and is very much in the realm of make it fast. The output of this phase is production grade software that’s ready for prime time. This structured approach to tackling the different aspects of a technical renovation helps break down a daunting endeavor into trackable and manageable milestones. You’re set up to overcome analysis paralysis because you’ve given yourself the permission to just focus on making it work.

Then you iterate to make it right and ensure that your fitness functions are met. If you care about performance, and performance is one of your fitness functions, now is the time to get it right. Lastly, optimize for speed of execution. This structured approach also gives me the clarity I need to move fast without breaking things. My third takeaway is a combination of the two strategies that we just discussed. As you renovate your systems, build incremental and evolvable software that is capable of aligning with changing business needs.

On to the third strategy. Deprecation driven development focuses on what we gain from deprecating as opposed to what we lose. I’m making the case that removing code is as important as adding code. Systematically removing obsolete technology is a prerequisite for healthy software systems. Weigh the tradeoffs before you renovate a feature. Be honest about the return on investments especially when it doesn’t justify the effort required to migrate them, because not all features are equally important. When you encounter a feature that is not critical to the success or the growth of your organization, consider leaving them in the legacy system.

Better yet, deprecate them because the cost of maintaining is often higher than the cost of building them in the first place. Netflix winding down DVD.com is a good example of a product deprecation that was driven by a similar tradeoff, the tradeoff that’s between the cost to maintain and the benefits to business. As the number of DVD members continued to shrink, it became increasingly difficult to justify the cost of providing the best-in-class experience for DVD users. Once the decision to deprecate was made, this clarified engineering priorities. We didn’t invest in DVD related technology the year leading up to the deprecation. No points for guessing what my fourth takeaway is. Removing features is as important as adding new features. Be ruthless about deprecating features that don’t serve your business, because they either weigh you down with high maintenance cost or make your renovation journey endless and expensive.

My fourth strategy is intentional organization design. As your company grows from phase A to B to C, renovating your organization is just as important as renovating your software. Intentional organization design is all about identifying the optimal collaboration model to drive the best business outcomes. The goal is to make it easy for ideas to flow through the organization. In addition to the flow of ideas, organization design also has an impact on architecture. Conway’s Law explains the synergy between the two. It suggests that the way teams are organized influences the architecture of the systems they create.

For any organization that’s undertaking a technical renovation initiative, it’s a good idea to first take a step back, assess the org structure, identify any changes that might be worth making. It could be to streamline communication or minimize cross-team collaboration. Let’s look at an example of organizing teams to make this more concrete. Design A involves grouping engineers based on their function, like frontend, middle-tier, backend. Design B groups engineers based on a common product deliverable, but as full-stack teams. Each design has its pros and cons. A optimizes for engineers to ramp up quickly and provides space for them to become experts at their craft.

If every new feature requires making changes to all three parts of the stack, they will need somebody outside to coordinate and assign tasks and manage dependencies. If you happen to be optimizing for minimum cross-team collaboration, Design B might be more efficient for you. In summary, you have to choose to strengthen the communication paths that are most important for your organization, because every communication path can’t be the strongest.

The Growth Mindset

While this brings us to the end of the renovation strategies that are important to consider as you work to transform your legacy systems, I’d like to talk about an important piece of the puzzle that’s required to bring all the work for a technical renovation to actually come together, the growth mindset. The strategies I shared may seem ambitious. They’re intentionally aspirational in the spirit of shooting for the stars and landing on the moon. Also, because there are no silver bullets for technical renovation. The ideal approach is highly context dependent. Your strategy and decisions should be debated on a case-by-case basis and accounted for the unique circumstances and goals for your organization.

Recap

Start with the right perspective, the perspective that legacy systems are inevitable in successful companies because they are a byproduct of success. For continued success, don’t let them weigh you down. Invest in building the muscle to renovate legacy systems. Use the right tool for the right problem. Refactoring solves some problems, but not all. Invest in technical renovation when it makes sense to do so. When you do invest in renovating your systems, approach that with a focus on building incremental and evolvable systems that keep up with your business. Removing lines of code is as important as adding lines of code. Regardless of what the growth trajectory of your company looks like, you can rest assured that the path to successfully transforming legacy systems will be bumpy. Embrace the growth mindset, seek feedback, learn from your mistakes, and enjoy your renovation journey.

See more presentations with transcripts



CMU Researchers Introduce LegoGPT: Building Stable LEGO Structures from Text Prompts

MMS Founder
MMS Robert Krzaczynski

Article originally posted on InfoQ. Visit InfoQ

Researchers at Carnegie Mellon University have introduced LegoGPT, a system that generates physically stable and buildable LEGO structures from natural language descriptions. The project combines large language models with engineering constraints to produce designs that can be assembled manually or by robotic systems.

LegoGPT is trained on a new dataset called StableText2Lego, which includes over 47,000 LEGO models of more than 28,000 unique 3D objects, each paired with detailed captions. The models are derived by converting 3D meshes into voxelized LEGO representations, applying random brick layouts, and filtering unstable designs using physics simulations. Captions are generated using GPT-4o based on renderings from multiple viewpoints.

Source: https://avalovelace1.github.io/LegoGPT/

The model architecture is based on Meta’s LLaMA-3.2-1B-Instruct and fine-tuned using an instructional format that pairs LEGO brick sequences with descriptive text. At inference time, the system predicts one brick at a time in a bottom-to-top raster-scan order, applying several validation checks to ensure that each brick placement adheres to known constraints such as part existence, collision avoidance, and structural feasibility.

To handle instability during generation, LegoGPT includes a rollback mechanism. If a newly added brick leads to a physically unstable structure, the system reverts to the last stable state and continues to generate from that point. This approach is intended to produce final structures that are both prompt-aligned and mechanically sound.
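The authors' implementation is not reproduced here, but the loop described above can be sketched abstractly. In the sketch below, `predictNextBrick`, `isValidPlacement`, and `isPhysicallyStable` are hypothetical placeholders standing in for the model's next-token prediction, the placement validation checks, and the physics-based stability analysis.

```java
import java.util.ArrayList;
import java.util.List;

// Abstract sketch of brick-by-brick generation with rollback, as described above.
// The three helper methods are hypothetical placeholders, not the authors' code.
public class RollbackGenerationSketch {

    record Brick(int x, int y, int z, int width, int depth) {}

    List<Brick> generate(String prompt, int maxBricks) {
        List<Brick> structure = new ArrayList<>();
        List<Brick> lastStable = new ArrayList<>();     // an empty structure is trivially stable

        for (int attempts = 0; attempts < maxBricks * 4 && structure.size() < maxBricks; attempts++) {
            Brick candidate = predictNextBrick(prompt, structure);   // next "token" from the model
            if (candidate == null) break;                            // model signals completion

            if (!isValidPlacement(candidate, structure)) continue;   // part exists, no collisions, ...

            structure.add(candidate);
            if (isPhysicallyStable(structure)) {
                lastStable = new ArrayList<>(structure);             // remember the last stable state
            } else {
                structure = new ArrayList<>(lastStable);             // roll back, then keep generating
            }
        }
        return lastStable;                                           // the returned structure is always stable
    }

    // Hypothetical placeholders for the model call, validation checks, and stability analysis.
    Brick predictNextBrick(String prompt, List<Brick> soFar) { return null; }
    boolean isValidPlacement(Brick candidate, List<Brick> soFar) { return true; }
    boolean isPhysicallyStable(List<Brick> structure) { return true; }
}
```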

Reactions from the community have been mixed. One user on Hacker News noted:

This does not seem like a very impressive result. It is using such a small set of bricks, and the results do not really look much like the intended thing. It feels like a hand-crafted algorithm would get a much better result.

In contrast, another response emphasized the methodological contribution:

But I think the cool part here is not photorealism, it is the combo of language understanding and physical buildability.

The system includes tooling for visualization and texturing using external packages like ImportLDraw and FlashTex. The team also provides scripts for fine-tuning on custom datasets and supports interactive inference through a command-line interface.

LegoGPT, along with its dataset and associated tools, is released under the MIT License. Submodules used for rendering and texturing have separate licenses. Access to some components, such as the base language model and Gurobi solver for stability analysis, may require separate agreements.

The work aims to support future research in grounded text-to-3D generation, physical reasoning, and robotics, offering a reproducible benchmark for evaluating structural soundness and prompt alignment in generative models.




Presentation: Ultra-fast In-Memory Database Applications with Java

MMS Founder
MMS Markus Kett

Article originally posted on InfoQ. Visit InfoQ

Transcript

Kett: This talk is all about ultra-fast in-memory database processing with Java. Who of you is a Java developer? Who of you is a database developer? Who develops database applications or works on database stuff in general? Who has performance issues? This is a question for the database vendors. In this session, I will show you how you can build the fastest database applications on the planet. This depends on you, what you’re doing, which solution you choose. This is an approach that is not new. It’s used by gaming companies, online banking companies already for 20 years. Now we have a framework.

This talk is about how you can use this approach. Here we can see a lot of fancy new applications, applications of the future, so virtual reality, AI, everything is about AI these days, blockchain, and so on. For all of these modern applications, there are some factors, super important and critical. Everybody wants high performance, of course. Of course, we want low data storage costs in a cloud. Simplicity is very important for developers. Sustainability is very important for managers and organizations. Today, the reality is different. I will show you why.

My name is Markus. I've worked on Java for more than 20 years now. With my team, I work on several open-source projects. I'm also an organizer of a conference and try to give something back to the Java community. This is always a lot of fun. With my company, we are very active in the community. We are a member of the Eclipse Foundation. Most people know about the Eclipse Foundation because of the Eclipse development environment, but it's much more than that. We run more than 100 open-source projects under the roof of the Eclipse Foundation. Java Enterprise is now part of the Eclipse Foundation; it's now called Jakarta EE. We are also a member of the Micronaut Foundation. Who of you knows Micronaut or uses Micronaut? This is a microservice framework. We are also contributing to the Helidon project. Who of you knows what Helidon is? Helidon is also a microservice framework and runtime for building microservices in Java. It's driven by Oracle, but it's open source.

Data Processing, Today

Let me talk about database development today. The situation is a little bit different. In my previous project, we worked on a development environment based on Eclipse. It was supposed to become a Visual Basic for Java. We developed a GUI builder. Everything went fine. We developed a Swing GUI builder, then a JavaFX GUI builder, then a Vaadin GUI builder for creating HTML user interfaces. The problem we had: as soon as we wanted to show data on a screen, everything went bad, slow, complex, so we tried to improve this. We worked on the JBoss Hibernate tools for Eclipse for almost 10 years. We tried to simplify the Hibernate tools and accelerate them, and we were not successful. Why? It's because there are so many technical problems. This is my background.

When I talk about database programming, please keep this in mind. I worked on database stuff, traditional databases, for more than 10 years. It's great. What we have in Java is great. But we have a lot of challenges with this technology. Here's why. Today, database programming is mostly too slow. Performance is too slow. This is why you were laughing when we talked about performance issues. Database costs in the cloud are mostly too high. All managers complain that cloud costs are skyrocketing. The complexity is way too high, and the systems are mostly not sustainable. Now I want to show you why.

Let’s have a look at how the database server concept works, actually. We have an application. Here we have a JVM, and we have memory. We have an application or a microservice, and we have a relational database. Let’s have a look inside the relational database, because mostly it seems like a black box. We send an SQL to a database, and we get the result. Great. When we have a look inside the database, then we can see there is a lot of memory. We have a server, of course. Then we have storage. We have a database management system, and probably there is also business logic running inside a database, stored procedures, stored functions.

Please keep these components in mind. Storage, computing, a lot of memory, and maybe business logic. What’s the problem here? When I came to Java more than 20 years ago, I was stupid enough to ask one question, what’s the difference between Java and JavaScript? The Java developers, they told me, “Markus, you cannot compare Java with JavaScript, because Java is object-oriented. It’s type safe”. That’s great. That is super important. That is what we love. Now I got it. This is important for database programming. Everything is great when we do it in Java. Everything is object-oriented, type safe.

Clean code is super important. But as soon as we want to store data in a database, the horror begins, because all database systems on the market are incompatible with the programming language. It's the same in .NET. It's the same with all object-oriented programming languages: incompatible. It's because you cannot store native Java objects seamlessly in a relational database. This is impossible. We have some impedance mismatches here: a granularity mismatch, and subtypes, because inheritance is not supported by the relational model. Then we have different data types. In Java, we have a few primitive data types. In PostgreSQL, we have around 40 or even more data types supported by the database. This is always a challenge.

The question is, what about the NoSQL databases? Who of you uses NoSQL databases today? Are they better? The fact is, they are very different. What's the difference? The NoSQL databases introduce new data types and new data structures. This is the biggest difference. The functional principle is pretty much the same. They are also server databases, mostly. They introduce key-value stores. They introduce documents like JSON or XML, or a column store, or a graph database like Neo4j. We had the object-oriented databases in the 1990s because, initially, we wanted to store objects in a database, so it was obvious to invent object-oriented databases.

Obviously, it didn't work well. We have time series databases. Now with AI, we have the vector databases. What database should we choose? They are all also incompatible with the native object model of Java. That's a fact, and that's a challenge. In Java, we can do everything. We can handle all types. We can store and process all data structures and data types. We can deal with everything. That's great. This is different with databases. They are limited in terms of the use case. This leads to big challenges. You can read more about this on the internet; even on Wikipedia, you can find articles about object-relational mapping and impedance mismatches.

This is how it works. In our application, we need something additional to store data in a database. We use object-relational mapping. This is a well-known concept, and it’s worked great for decades. Who of you uses Hibernate, EclipseLink? Object-relational mapping is very common to store data in a database, or Java object in a relational database. There are drawbacks. This is super expensive because object-relational mapping is very time-consuming and it leads to high latencies. Suddenly, your queries become really slow. This is what we found out. Is this true? Yes, we agree. Not always? Mostly? Sometimes? We can fix this problem, of course. Let’s add a cache. This is what we did in our development environment. We introduced Hibernate. Then it was too slow. Then we added a cache. Then we have additional complexity.

Now we have to deal with cache configurations and so on. Now the results are stored in memory. This will be way faster. We were not satisfied with the performance, actually. Why? Have you ever measured how long it takes to read data from a cache? I grew up with assembly programming. In the keynote, we heard that assembly will hopefully become more popular in the future when we deal with quantum computing. When I had a Commodore 64, I was able to process data in memory in microseconds. When I read data from a local cache with Hibernate, it takes milliseconds. I was like, what's the problem here? Why does it take milliseconds when I fetch data from a local cache? The problem is object-relational mapping. Obviously, this is super expensive.

Then we talked about single-node applications. Who of you develops distributed applications? That’s a little bit more complex. Now, here we have an application that runs on multiple machines. What’s happening when you change data on one machine? Then the machine will be synchronized with the database. Everything is fine. The problem is all other nodes are not in sync with the database. This can be a problem. We are developers. We can solve this problem. There is another cache strategy. Let’s put the cache in between the database and the application layer. Because we are in the cloud, so we want to avoid a single point of failure, so we use a distributed cache. We use a cache that is executed on multiple machines. Who uses a distributed cache like Redis? Very common.

Then we have such an architecture. You can see the machines growing more and more. Does it make sense to run a cache without memory? No, it’s nonsense. Of course, we need a lot of memory. We use memory, and we need memory. What about the database? Do we run a database application on a single database node? Probably, yes. If the application is mission critical, maybe you will run a database cluster to share the load, data redundancy, and so on. We have a database running on multiple nodes means there are more machines running. Does it make sense to run a database without memory or low memory? It can do that, but it will be slow. You need a lot of memory in a database server as well.

Now, we talk about an application that runs on multiple machines. Now we deal with microservices. We split the application in multiple services, and it looks like that. We have a lot of machines running to maintain. This is very common. Then we have a great database, and databases are so fast today. Who of you uses Elasticsearch? Why? The database is fast enough. Obviously, sometimes it’s not fast enough, so you add another solution, and now you can explain to your managers why cloud computing is so expensive. This is really true. This is not the case in all applications. Sometimes you have only one solution or two or three solutions.

On top of that, we talked about data structure, data types. Let’s say you have your Oracle database, and then you need some sensor data, you will have a time series database, probably. Then you deal with vector. Then we have a vector database for AI, so you have, on top of that, multiple database systems running. This is the reason why database development is super effortful, expensive in the cloud, slow. It’s not sustainable, actually. It will produce a lot of CO2 emission and consume energy. Let’s wait for quantum computing. See you next year.

Alternative Java-Native Approach

What's the alternative? Is there an alternative, actually? Yes, there is, already. You don't have to wait for quantum computing if you change the software stack. This is not magic, it's actually obvious. Let's have a look at how it works. Here is a solution for cheap data storage. When we use a PostgreSQL database, for instance, it's a server database, and this is an example based on AWS. You use PostgreSQL as a service, with just 2 CPUs, 8 gigabytes of memory, and 1 terabyte of storage. Run it on one node. It will cost you around $4,000 per year. If you need multiple instances, of course, your price will double, triple, and so on.

If you need more nodes, six nodes will cost you around $30,000 per year. The cloud providers, they provide us Blob storage, or binary data storage like AWS S3. The cool thing here is it costs almost nothing. 1 terabyte S3 costs only $300 per year. That’s great. You can have the same on Azure or Google Cloud. There is a solution where we can save a lot of cost in the cloud, and look at the CO2 emission. It’s almost nothing. The energy consumption is 99% lower. You don’t have to maintain it, it’s managed by the cloud provider.

Here are some facts about Java. At all conferences, we say we love Java. If you attend a Java conference, you will hear this phrase: we love Java. Now let's have a look at why we love Java. It's so fast. Everything that's executed in memory in Java is executed in microseconds. This is similar to my Commodore 64. Sometimes it's even faster, even nanoseconds, because of our great JIT compiler. We have the best data model on the planet: objects, object graphs. We can deal with all data types. We can deal with all data structures: vectors, JSON, XML, relations, graphs like in a graph database. Everything is possible. It is a multi-model data structure from the beginning. No limitations in terms of the use case. What about searching and filtering? We have the Streams API. With Java Streams, you can search and filter in memory in microseconds. You can compare this with a JPA or SQL query. It's mostly 1,000x faster than a comparable JPA query.
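To make the searching-and-filtering point concrete, here is a minimal, self-contained sketch of an in-memory query with the Java Streams API. The Book class and the sample data are invented purely for illustration; only the core-Java Streams calls are the point.

```java
import java.util.List;
import java.util.stream.Collectors;

public class StreamQueryExample {

    // Hypothetical domain class, used only for illustration.
    static class Book {
        final String title;
        final double price;
        Book(String title, double price) { this.title = title; this.price = price; }
    }

    public static void main(String[] args) {
        List<Book> books = List.of(
                new Book("Effective Java", 45.0),
                new Book("Java Concurrency in Practice", 40.0),
                new Book("Clean Architecture", 35.0));

        // Search and filter entirely in memory with the Streams API:
        // no SQL and no round trip to a database server.
        List<String> result = books.stream()
                .filter(b -> b.title.contains("Java"))
                .filter(b -> b.price < 42.0)
                .map(b -> b.title)
                .sorted()
                .collect(Collectors.toList());

        result.forEach(System.out::println);
    }
}
```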

Now I will show you a brief demo. Here we have two applications running in parallel. One is built with the JPA stack, so with Hibernate. We have a PostgreSQL database, 250 gigabytes. This is a bookstore application. We use Ehcache, and it's a hot Ehcache. This is in memory, so we fetch data directly from memory. On the right, you can see the query code. Here we use Spring Data as a framework. The second application is built with EclipseStore. We use a Blob store like S3, and it is S3. We use Java Streams to search and filter. All queries are executed sometimes 10 times faster, sometimes 100 times faster, sometimes more than 1,000 times faster than the comparable JPA query. Keep in mind, we fetch data directly from a cache. With Java Streams, we are up to 1,000x faster than the Hibernate cache. This is the performance of Java. You can even improve it by changing the JVM. You can accelerate in-memory processing by using, for instance, the OpenJ9 JVM. It's also an Eclipse project. It can be 20% more efficient and faster than HotSpot. You can play around with the different JVMs. It's incredibly fast.

EclipseStore

What's the problem? The only thing missing in Java was persistence. How can we now store data on disk? This is what we have developed at the Eclipse Foundation. This project is not a prototype or just an idea. We have been developing this for more than 10 years. It's production-ready. It's in use. It's in production use by companies like Allianz and Fraport, here in Germany. More companies are using this framework. It's under the Eclipse Public License, which means you can use it for commercial purposes free of charge. There are four benefits. 1,000x faster data processing in memory. You save more than 90% of cloud database costs, and we are not even talking about license fees. It's Java-native, which means simple to use. It's fully object-oriented. It's type safe. It feels like a part of Java. This is very important. Because we don't need a database server anymore, just storage, we save 99% of energy and CO2 emissions, and you develop the fastest application on the planet, and at the same time, you save the planet. How great is this? How does it work? What actually is EclipseStore?

It is a micro-persistence engine, so it is a persistence framework similar in purpose to Hibernate, but for storing native objects. This is the difference to Hibernate: it stores your native Java objects seamlessly to disk and restores them when needed. That's the functional principle of the framework: without object-relational mapping, without any mappings, without any data conversion; there is no JSON conversion behind the scenes or anything like that. This is the biggest difference, very important, to all databases on the market: no mappings, no data conversion, the original Java object model is used. You use the original Java object model, and you can persist your POJOs seamlessly into any data storage.

It's just a Maven dependency. It's very easy to use. The whole framework has only one dependency, on the Eclipse Serializer that's used behind the scenes. The only thing you need is an EclipseStore instance. This is how it works at runtime. You need an instance of your data storage in memory, and in memory it works like a tree. Who of you was a Swing developer? What about JavaFX? It's the same here with EclipseStore: you need a node, an instance, a root object, and then you add objects, and all objects that are reachable from this root object can be persisted and stored on disk. This is the functional principle. I create a root object and add some objects. You can use all Java types. Only Java types that can be recreated can be used and stored. You cannot store a thread, obviously, but all other Java objects can be used.

Then you call a store method, and a binary representation of your object will be created and stored on disk. The information is stored in binary form; we use the Eclipse Serializer for creating the binary and store it on disk. This operation is transaction safe. We get a commit from the engine, and then it's guaranteed that the object is really stored on disk. Let's add some more objects. We call the store method, and another binary file is created. This is how it works: each store operation creates a new binary file in the storage. It's different from the relational model; it's an append-log strategy. The method call is very simple, just one method to call, and then you can store your objects. It is a blocking, transaction-safe, all-or-nothing atomic operation. Vice versa, what happens when you start the application? When you start an application, the framework will load your object graph into memory.
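To make the root-object-plus-store-method principle more tangible, here is a minimal sketch, assuming the EclipseStore embedded storage API. The package name org.eclipse.store.storage.embedded.types, the EmbeddedStorage.start(root, directory) factory, and the DataRoot class are assumptions drawn from the project documentation and may differ between versions.

```java
// Assumed EclipseStore package names; check the version you actually use.
import org.eclipse.store.storage.embedded.types.EmbeddedStorage;
import org.eclipse.store.storage.embedded.types.EmbeddedStorageManager;

import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class StoreExample {

    // Hypothetical root class: everything reachable from here can be persisted.
    static class DataRoot {
        final List<String> books = new ArrayList<>();
    }

    public static void main(String[] args) {
        DataRoot root = new DataRoot();

        // Start the storage manager with the root object and a target directory.
        EmbeddedStorageManager storage = EmbeddedStorage.start(root, Paths.get("data"));

        // Mutate the object graph like any plain Java objects ...
        root.books.add("Effective Java");
        root.books.add("Clean Architecture");

        // ... then persist the changed part of the graph. The call blocks until
        // the binary representation has been committed to disk.
        storage.store(root.books);

        storage.shutdown();
    }
}
```

On a later start, the engine reloads the stored object graph, matching the load-on-startup behavior described above.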

Handling Concurrency in Java

Kett: How does it work with multiple threads? You can use all Java concepts to handle concurrency, but you have to take care of concurrency yourself. You have to handle it in Java, or rather, you can handle it in Java. Then you have full control over which objects and which threads store objects, transaction safe, to disk. You will get a commit from the library.
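As one illustration of "handling it in Java", here is a minimal sketch using a plain ReentrantReadWriteLock around a shared collection. The pattern is ordinary core Java, not an API prescribed by EclipseStore, and the commented-out store call is a placeholder for whatever persistence call you actually use.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Plain core-Java concurrency handling around a shared, persisted collection.
public class ConcurrentBookShelf {

    private final List<String> books = new ArrayList<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public void addBook(String title) {
        lock.writeLock().lock();
        try {
            books.add(title);
            // storageManager.store(books);  // hypothetical: persist inside the write lock;
            //                               // the call blocks until the commit is on disk
        } finally {
            lock.writeLock().unlock();
        }
    }

    public List<String> findByKeyword(String keyword) {
        lock.readLock().lock();
        try {
            List<String> result = new ArrayList<>();
            for (String title : books) {
                if (title.contains(keyword)) {
                    result.add(title);
                }
            }
            return result;
        } finally {
            lock.readLock().unlock();
        }
    }
}
```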

EclipseStore

Kett: When we start an application, the engine will load the object graph into memory. At this point, it is very important to mention: only the object graph information is loaded, which means only object IDs are loaded into memory. We will not load the whole database into memory. Only the object IDs are loaded, so you've got an indexed object graph in memory. Then you can define which object references should be preloaded into memory and which should be loaded on demand by using lazy loading. You can have a terabyte, tons of objects, in your storage and only 2 gigabytes of memory, and it will work. It's super easy to define your references as lazy or eager; it is just a wrapper class. Then the engine will either preload object references into memory or load them when you access the object with a get method. This is how it basically works.
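Here is a minimal sketch of such a root class mixing eager and lazy references, assuming the Lazy wrapper type from the Eclipse Serializer. The package org.eclipse.serializer.reference and the Lazy.Reference factory are taken from the project documentation and may differ between versions; ShopRoot and its fields are purely illustrative.

```java
// Assumed Eclipse Serializer package; check the version you actually use.
import org.eclipse.serializer.reference.Lazy;

import java.util.ArrayList;
import java.util.List;

public class ShopRoot {

    // Eager: loaded together with the object graph at startup.
    private final List<String> categories = new ArrayList<>();

    // Lazy: only an object ID is kept in memory until get() is called;
    // the engine then loads the referenced data from storage on demand.
    private final Lazy<List<String>> archivedOrders = Lazy.Reference(new ArrayList<>());

    public List<String> categories() {
        return categories;
    }

    public List<String> archivedOrders() {
        return archivedOrders.get();   // triggers loading on first access
    }
}
```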

Queries are simple, because we use the Java Streams API for searching and filtering. This is very fast. You can check this out: each query will mostly take only microseconds, because of the speed of the Java Streams API in memory. The storage will keep growing, and this is the reason why there is also a garbage collector for your file storage. If you have older objects in the storage and you change the data model, you end up with legacy or orphaned objects, and a garbage collector process will constantly clean up the file storage and keep your storage small. This is the functional principle.

The Eclipse Serializer is the heart of this framework. On top of that, we provide an implementation for the JVM. EclipseStore is built for the JVM, but there is also an implementation for Android. Who of you is a mobile developer or develops mobile applications as well? What happens if your classes change? This can be challenging, but not with EclipseStore, because we have a concept called legacy-type mapping: the framework takes care of your changes automatically, or, for complex cases, you can define a so-called legacy-type mapping yourself.

The storage, or rather the legacy objects, will then be updated at runtime, so you never have to stop your application and refactor the whole storage; that is not how it works. We have a file system garbage collector, as mentioned, and a file system abstraction, which means you can store your data in a Blob store, but you can also store your data locally, just on disk. You can store your data almost everywhere, in any binary data storage. This can be confusing, because a relational database can deal with binaries too. You can even store your binaries in a relational database, but keep in mind, there is no object-relational mapping anymore; we just store binary data. There are database connectors, so you can use an Oracle database, you can use PostgreSQL.

Actually, it makes no sense, but in some business cases, it can make sense. We had a customer. They used Oracle. They told us, that’s a great approach, but we have to use Oracle. Now we store EclipseStore binaries in an Oracle database. It’s possible. The Oracle guys do pretty much the same with their graph layer, so they provide a graph database, but it’s not the graph database, it is actually a graph API layer on top of the relational database. They store graph information as a binary in a relational database.

Then we have a storage browser, where you can browse through your storage data, and a REST interface, so you can get access to your storage and search and query it directly via REST. There are backup functions and a converter to CSV, for instance, so that you can migrate easily to EclipseStore or from EclipseStore to any other database, if you like. It runs on the JVM from Java version 11, with any JVM language, and on Android. It runs in containers. It runs in Docker containers on Kubernetes, even with GraalVM Native Images.

We talked about single-node applications, and this functional principle also works in distributed systems. For this scenario, MicroStream provides a PaaS platform for deploying and running distributed EclipseStore applications. We also provide an additional version for even more performance, with indexing, for instance, so you get the most speed possible. How does it work? Now we can execute an EclipseStore application on multiple machines, and the MicroStream cluster provides data replication and data redundancy. The service is fully managed or available on-prem. It follows an eventual-consistency approach. This is how it looks in a distributed environment.

Back to our previous architecture, we have a Hibernate application running on multiple machines, a distributed cache, we have a database system. Now we replace the Hibernate applications with the EclipseStore applications. As we keep and query all data in memory, it is already working like a distributed cache. We store data in a Blob store, AWS, for instance, so we can skip the database cluster completely. As mentioned, we keep data in memory, we replicate data in memory through multiple JVMs. We don’t need a distributed cache anymore, so we can also skip the local cache. Then you can still use Elasticsearch if you like, but you can also use Lucene, and you don’t need a search cluster anymore. It depends on you. Then, the end result is a really small cluster architecture, low cost, super-fast, easy to implement and maintain because everything is Core Java. It feels like a part of the JVM, it feels like a part of the JDK.

Importing an EclipseStore Binary File into Lucene

Participant 2: You mentioned Lucene, so can you import an EclipseStore binary file into Lucene and then it will just work?

Kett: No, this is not how it works. You can use and combine Lucene with EclipseStore as you can use all Java libraries and combine it, that are available in the Java ecosystem. Lucene cannot parse the binaries. You include Lucene and you will search and filter in memory. The binary files are only used for storing the object persistently on disk. You never touch the binary file. It’s the same with your database server. Your database system will store the data in an internal format on disk. You never touch it, actually. It’s the same here.
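To sketch what combining Lucene with an in-memory object graph could look like in practice, here is a minimal, self-contained example that builds an in-memory Lucene index over plain Java objects and queries it. The titles list stands in for data that would come from the object graph, and version-specific details of the Lucene API (for example, the classic IndexSearcher.doc accessor used here) may differ between Lucene releases.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.ByteBuffersDirectory;

import java.util.List;

public class LuceneSideIndex {

    public static void main(String[] args) throws Exception {
        // Plain Java objects; in an EclipseStore application these would come
        // from the in-memory object graph, not from parsing binary files.
        List<String> titles = List.of(
                "Effective Java", "Java Concurrency in Practice", "Clean Architecture");

        StandardAnalyzer analyzer = new StandardAnalyzer();
        ByteBuffersDirectory index = new ByteBuffersDirectory();

        // Build an in-memory full-text index over the objects.
        try (IndexWriter writer = new IndexWriter(index, new IndexWriterConfig(analyzer))) {
            for (String title : titles) {
                Document doc = new Document();
                doc.add(new TextField("title", title, Field.Store.YES));
                writer.addDocument(doc);
            }
        }

        // Full-text search against the index.
        try (DirectoryReader reader = DirectoryReader.open(index)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            TopDocs hits = searcher.search(new QueryParser("title", analyzer).parse("java"), 10);
            for (ScoreDoc hit : hits.scoreDocs) {
                System.out.println(searcher.doc(hit.doc).get("title"));
            }
        }
    }
}
```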

Rules and Challenges (EclipseStore)

There are also some rules and challenges with EclipseStore, because every technology has pros and cons. Here's a comparison. Here is, again, the traditional database server paradigm: we have an application and we have a database server. Queries are executed on the database server. The persistent data are stored in the database server, obviously. With EclipseStore, this changes. Now, your database is in memory. You don't have to load the whole database into memory, but it works as if you did. It feels like the whole database is in memory, but it's not; it's managed by the engine through lazy loading.

Keep in mind, your database is in memory. We search and filter in memory in the application node. Only the storage data are stored in an S3 bucket or something like that. That's the main difference. You have to think a little bit differently. There is no more classic SELECT that you send to a server. You don't use SQL, you use Java Streams. There is no database server. There is no graphical user interface where you have to create a database model. You just have to create classes. That's it. There is no database model anymore.
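To illustrate "you just create classes", here is a hypothetical sketch of a domain model written the way Java works, with direct object references instead of foreign keys and join tables. Customer and Order are invented names, used only for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Instead of a CUSTOMER table and an ORDER table joined via a foreign key,
// a customer simply holds direct references to its orders.
public class Customer {

    private final String name;
    private final List<Order> orders = new ArrayList<>();

    public Customer(String name) {
        this.name = name;
    }

    public void addOrder(Order order) {
        orders.add(order);
    }

    public String name() { return name; }
    public List<Order> orders() { return orders; }
}

class Order {

    private final String article;
    private final int quantity;

    Order(String article, int quantity) {
        this.article = article;
        this.quantity = quantity;
    }

    public String article() { return article; }
    public int quantity() { return quantity; }
}
```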

Again, in-memory means everything is executed in memory, so you actually need a lot of memory. I showed you how you can save a lot of cost, because we don't need a database cluster and we don't need a distributed cache cluster, so we have a little bit more money left for buying a little bit more memory. If you don't have enough memory, if you have small-memory machines, it will still work. This is very important: the more memory you have, the faster your system will be. This is not standard, but I like to mention it: if you need a much faster approach for blocking, transaction-safe operations, with the speed of an asynchronous approach and really high write performance, then you can use, for instance, persistent memory. This is super interesting. You just have to add persistent memory to your server.

Then, write operations are not stored directly to disk; they are stored in a persistent memory area. It's transaction safe. It takes microseconds to store, not milliseconds, because there is no disk I/O involved. You can store in a high-performance way. It's like copying from one memory area to another, but this area is persistent, so it gives you persistence. That's why it's called persistent memory. Then, behind the scenes, you can synchronize the persistent memory with your disk asynchronously. This is extremely fast.

Challenges with EclipseStore. The biggest challenges are, you have to think like a Java developer. Java developers mostly don’t think like Java developers in terms of database programming. In terms of database programming, our brain works like a relational database. If I tell you, “Please create a database application, I need a shop system. I have customers. I have articles”.

Then, your brain will create a relational model in microseconds, sometimes milliseconds, because we have been used to relational modeling for 10, 20, or even 30 years. You have to stop with relational modeling. Create an object model that fits Java. Forget what you have ever heard about the relational model. Forget what you have ever heard about how a relational database system works. Focus on how Java works, on how you would implement it in Java. Trust that the framework will be able to store it. That's it. That's the biggest challenge: to create a proper object model. It's built for Java developers. There are no surprises for DevOps and for database admins.

This is the reason why, if you have colleagues who are database admins, they probably will not like it. This is not a drop-in replacement. Please stop dreaming that there is a magic button, that you can now replace your Hibernate stack and your relational database with EclipseStore and it will work seamlessly. This is not going to happen. There is a migration effort and a migration path, but it's doable. It's not complicated, but there is an effort. Keep this in mind. There is no SQL support, but the application can be queried by external services and applications by using GraphQL or REST. This is possible, but there is no native SQL support, obviously.

Conclusion

Compared to traditional database applications, this approach provides simplicity, because you can deal with all Java types. There is no more mapping, no more data conversion behind the scenes. It's Core Java. There are no dependencies. You can use POJOs, and everything can be stored. It can be replicated, so you can build distributed applications very easily. You will have high performance. Because of the speed of Java, all operations are executed in memory with Java Streams, in microseconds or even faster. It's suited for low-latency, real-time data processing, with really awesome throughput. It will save a lot of cloud costs because there is no database server anymore, there is just storage, and storage alone is more than 90% cheaper than any database server in the cloud. That's great. Because there is no server required anymore, and these numbers are from Amazon, you will save more than 99% of CPU power, energy, and CO2 emissions. Here is a comparison of what could be saved if we replaced all database servers with object storage. This would be amazing. It is not going to happen; this is only in theory. Between 20% and 30%, or probably even more, of the servers on the planet are database servers, and these numbers are growing because of AI, as more vector databases are required. We could save a lot of energy and CO2 emissions.

Resources

If you are interested in learning this approach, I have a free course for you. You can enroll for EclipseStore course for free. We provide advanced training and even fundamental training for free. If you’re interested, check it out, www.javapro.io/training. Build the fastest applications on the planet by using Java.

Questions and Answers

Losio: You say I never have to access the storage layer directly, so S3, and I don't care about how you store the data in S3. If you have 1 million records in your table, do you store 1 million binaries? Is it one file? How is it structured there?

Kett: Behind the scenes, the engine will constantly reconfigure and reorganize the storage. You don't have to care about what the structure looks like in your storage. That's done automatically by the engine. We have a garbage collector process which deletes the legacy objects. You can configure that. This is how it works.

Participant 3: As far as I understood, do you position the solution as a drop-in replacement for DBMSs, or for enterprise applications, or is it just something different, just for embedded applications?

Kett: It is a persistence framework for storing Java objects, this is what we had in mind, to replace Hibernate. You use Hibernate to store your objects in a database. You use it for almost all use cases. You can build complex enterprise applications, or you just store your tests, or anything that can be stored. You can use it for almost any purpose. It is great for low-latency applications, where you need real-time speed, where you really need high speed. That’s great to use it for that purpose.

Participant 3: I'm not a DBA, but I would like to defend the DBMSs. There are five points on the slide regarding the implementation that you will have to do on your application side. It seems only fair to me to mention that this is something on top of your business logic.

Basically, all this stuff: if you know about Postgres and MVCC, Multi-Version Concurrency Control, that is a very complicated thing that allows concurrent access to the data storage from multiple applications. Also, regarding the tooling: good luck with doing the updates of these Java applications as soon as your enterprise application requirements change. You have DML, DDL, and all these high-level abstractions. I'm talking about SQL-like things that allow you to do very complicated things with just a few lines of code, instead of implementing very challenging code on the Java side. Do I understand right that this very complicated layer, like the concurrency handling, is not comparable to what we just do on the enterprise side? It's very complicated. Does it mean that the enterprise application developers have to deal with that as well?

Kett: Obviously, the database takes care of concurrency and everything, and you don't have to care about anything. In practice, we see that we do have to care about concurrency. We do it anyway in Java, very often. With microservices, it changes completely, transaction safety and so on. With Java, we have great solutions for that. From our perspective, this is not more effort. It is pretty much the same effort, because mostly you have to do it anyway. You need experience with concurrency handling in Java, but actually, it's Core Java stuff. There are no new things to learn. This is not like a SQL database, where you have to learn a new data model and a new query language. It's Core Java stuff.

See more presentations with transcripts



MongoDB (MDB US): Fast-Exit from Nasdaq100 in May 2025 – Dimitris Ioannidis

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


Article originally posted on mongodb google news. Visit mongodb google news
