Ostrum Asset Management Acquires 1,774 Shares of MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS


Ostrum Asset Management grew its stake in MongoDB, Inc. (NASDAQ:MDB) by 460.8% during the fourth quarter, according to its most recent filing with the Securities and Exchange Commission (SEC). The institutional investor owned 2,159 shares of the company’s stock after purchasing an additional 1,774 shares during the period. Ostrum Asset Management’s holdings in MongoDB were worth $503,000 as of the filing.

Several other hedge funds and other institutional investors also recently made changes to their positions in the stock. Vanguard Group Inc. raised its position in MongoDB by 0.3% in the fourth quarter. Vanguard Group Inc. now owns 7,328,745 shares of the company’s stock valued at $1,706,205,000 after purchasing an additional 23,942 shares during the last quarter. Franklin Resources Inc. raised its holdings in MongoDB by 9.7% in the 4th quarter. Franklin Resources Inc. now owns 2,054,888 shares of the company’s stock valued at $478,398,000 after buying an additional 181,962 shares during the last quarter. Geode Capital Management LLC grew its holdings in MongoDB by 1.8% during the 4th quarter. Geode Capital Management LLC now owns 1,252,142 shares of the company’s stock worth $290,987,000 after acquiring an additional 22,106 shares during the last quarter. First Trust Advisors LP raised its stake in shares of MongoDB by 12.6% during the fourth quarter. First Trust Advisors LP now owns 854,906 shares of the company’s stock valued at $199,031,000 after acquiring an additional 95,893 shares during the last quarter. Finally, Norges Bank bought a new stake in shares of MongoDB in the fourth quarter worth $189,584,000. 89.29% of the stock is owned by institutional investors.

Analysts Set New Price Targets

A number of research firms have commented on MDB. KeyCorp lowered MongoDB from a “strong-buy” rating to a “hold” rating in a research report on Wednesday, March 5th. Mizuho decreased their target price on shares of MongoDB from $250.00 to $190.00 and set a “neutral” rating for the company in a research report on Tuesday, April 15th. Monness Crespi & Hardt upgraded shares of MongoDB from a “sell” rating to a “neutral” rating in a research report on Monday, March 3rd. UBS Group set a $350.00 target price on shares of MongoDB in a research note on Tuesday, March 4th. Finally, Truist Financial decreased their price target on MongoDB from $300.00 to $275.00 and set a “buy” rating for the company in a report on Monday, March 31st. Eight equities research analysts have rated the stock with a hold rating, twenty-four have given a buy rating and one has given a strong buy rating to the company’s stock. According to MarketBeat, MongoDB presently has an average rating of “Moderate Buy” and an average price target of $299.78.


Insider Buying and Selling at MongoDB

In other MongoDB news, CFO Srdjan Tanjga sold 525 shares of the business’s stock in a transaction on Wednesday, April 2nd. The stock was sold at an average price of $173.26, for a total value of $90,961.50. Following the sale, the chief financial officer now directly owns 6,406 shares in the company, valued at $1,109,903.56. The trade was a 7.57% decrease in their ownership of the stock. The sale was disclosed in a filing with the SEC, which is available through the SEC website. Also, CAO Thomas Bull sold 301 shares of the stock in a transaction on Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total value of $52,148.25. Following the transaction, the chief accounting officer now owns 14,598 shares of the company’s stock, valued at $2,529,103.50. The trade was a 2.02% decrease in their position. This sale was also disclosed in a filing with the SEC. Insiders sold 48,680 shares of company stock worth $11,084,027 in the last quarter. 3.60% of the stock is currently owned by company insiders.

MongoDB Trading Down 0.5%

Shares of NASDAQ:MDB opened at $159.26 on Monday. MongoDB, Inc. has a 1-year low of $140.78 and a 1-year high of $387.19. The stock has a market cap of $12.93 billion, a PE ratio of -58.12 and a beta of 1.49. The stock’s fifty day simple moving average is $208.15 and its two-hundred day simple moving average is $252.42.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The firm had revenue of $548.40 million during the quarter, compared to the consensus estimate of $519.65 million. During the same period in the previous year, the company posted $0.86 earnings per share. On average, sell-side analysts forecast that MongoDB, Inc. will post -1.78 earnings per share for the current fiscal year.

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database that includes the functionality developers need to get started with MongoDB.




Capital Research Global Investors Holds $128.64 Million Stake in MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS


Capital Research Global Investors reduced its holdings in MongoDB, Inc. (NASDAQ:MDB) by 61.5% during the 4th quarter, according to its most recent filing with the Securities and Exchange Commission (SEC). The institutional investor owned 552,540 shares of the company’s stock after selling 881,000 shares during the quarter. Capital Research Global Investors owned approximately 0.74% of MongoDB worth $128,638,000 at the end of the most recent reporting period.

Several other hedge funds and other institutional investors have also recently added to or reduced their stakes in the stock. Vanguard Group Inc. raised its position in MongoDB by 0.3% in the 4th quarter. Vanguard Group Inc. now owns 7,328,745 shares of the company’s stock worth $1,706,205,000 after purchasing an additional 23,942 shares during the last quarter. Franklin Resources Inc. boosted its stake in MongoDB by 9.7% during the 4th quarter. Franklin Resources Inc. now owns 2,054,888 shares of the company’s stock worth $478,398,000 after acquiring an additional 181,962 shares during the last quarter. Geode Capital Management LLC grew its holdings in MongoDB by 1.8% during the fourth quarter. Geode Capital Management LLC now owns 1,252,142 shares of the company’s stock valued at $290,987,000 after purchasing an additional 22,106 shares during the period. First Trust Advisors LP raised its holdings in MongoDB by 12.6% in the fourth quarter. First Trust Advisors LP now owns 854,906 shares of the company’s stock worth $199,031,000 after purchasing an additional 95,893 shares during the period. Finally, Norges Bank acquired a new position in shares of MongoDB in the 4th quarter valued at $189,584,000. Institutional investors and hedge funds own 89.29% of the company’s stock.

Insiders Place Their Bets

In related news, CAO Thomas Bull sold 301 shares of the stock in a transaction that occurred on Wednesday, April 2nd. The shares were sold at an average price of $173.25, for a total transaction of $52,148.25. Following the sale, the chief accounting officer now owns 14,598 shares in the company, valued at approximately $2,529,103.50. This trade represents a 2.02% decrease in their position. The transaction was disclosed in a legal filing with the SEC. Also, insider Cedric Pech sold 1,690 shares of the firm’s stock in a transaction on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total value of $292,809.40. Following the completion of the sale, the insider now directly owns 57,634 shares in the company, valued at approximately $9,985,666.84. This represents a 2.85% decrease in their position. This sale was also disclosed in an SEC filing. Insiders have sold 48,680 shares of company stock worth $11,084,027 in the last 90 days. Company insiders own 3.60% of the company’s stock.

MongoDB Trading Down 0.5%

MDB stock opened at $159.26 on Monday. The business’s 50 day simple moving average is $208.15 and its 200 day simple moving average is $252.42. MongoDB, Inc. has a 1-year low of $140.78 and a 1-year high of $387.19. The firm has a market capitalization of $12.93 billion, a PE ratio of -58.12 and a beta of 1.49.

MongoDB (NASDAQ:MDB) last released its quarterly earnings results on Wednesday, March 5th. The company reported $0.19 earnings per share for the quarter, missing the consensus estimate of $0.64 by ($0.45). MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The company had revenue of $548.40 million during the quarter, compared to the consensus estimate of $519.65 million. During the same period in the prior year, the company earned $0.86 earnings per share. As a group, sell-side analysts predict that MongoDB, Inc. will post -1.78 EPS for the current fiscal year.

Analysts Set New Price Targets

Several research analysts have weighed in on MDB shares. Barclays reduced their target price on MongoDB from $330.00 to $280.00 and set an “overweight” rating on the stock in a report on Thursday, March 6th. Wedbush decreased their price objective on MongoDB from $360.00 to $300.00 and set an “outperform” rating for the company in a research report on Thursday, March 6th. Rosenblatt Securities restated a “buy” rating and set a $350.00 target price on shares of MongoDB in a report on Tuesday, March 4th. Wells Fargo & Company cut shares of MongoDB from an “overweight” rating to an “equal weight” rating and reduced their price target for the company from $365.00 to $225.00 in a report on Thursday, March 6th. Finally, The Goldman Sachs Group dropped their price objective on shares of MongoDB from $390.00 to $335.00 and set a “buy” rating on the stock in a report on Thursday, March 6th. Eight research analysts have rated the stock with a hold rating, twenty-four have given a buy rating and one has issued a strong buy rating to the company’s stock. According to data from MarketBeat.com, MongoDB currently has a consensus rating of “Moderate Buy” and an average target price of $299.78.


MongoDB Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database that includes the functionality developers need to get started with MongoDB.



Appian: Serge Tanjga Named As Chief Financial Officer – Pulse 2.0

MMS Founder
MMS RSS


Appian announced the appointment of Serge Tanjga as Chief Financial Officer (CFO), effective May 27, 2025. Tanjga will report directly to Appian CEO Matt Calkins. Tanjga succeeds Mark Matheos, who became CFO of Dragos in November.

Tanjga brings over 20 years of financial experience to Appian. He was Senior Vice President of Finance at MongoDB, where he oversaw financial planning, strategic finance, business operations, and analytics, and most recently served as MongoDB’s interim CFO.

Before MongoDB, Tanjga was a Managing Director at Emerging Sovereign Group (a subsidiary of The Carlyle Group). He also held leadership positions at the Harvard Management Company and 40 North Industries.

Tanjga received a B.A. in Mathematics and Economics from Harvard College and an MBA from Harvard Business School, where he was a Baker Scholar.

Appian delivers a software platform that helps organizations run better processes to reduce costs, improve customer experiences, and gain a strategic edge. The company will release financial results for the first quarter ended March 31, 2025, before the U.S. financial markets open on Thursday, May 8, 2025, and will host a conference call and live webcast to review its financial results and business outlook.



Generative AI is reshaping the legacy modernization process for Indian enterprises

MMS Founder
MMS RSS


In an era where digital agility is no longer optional, Indian enterprises find themselves at a pivotal crossroads. With nearly 98% of enterprise applications still tethered to rigid legacy systems, the challenge of modernization looms large—entangled in a web of technical debt, resistance to change, and the pressing demand for compliance in regulated sectors.

As India intensifies its push toward a digital-first economy, a cloud-agnostic approach is critical in transforming legacy roadblocks into scalable, AI-ready infrastructure. Boris Bialek, Field CTO at MongoDB, who brings global insight and deep technological expertise to the conversation on legacy modernization, shares with us how organizations can turn legacy challenges into launchpads for digital excellence.

Some edited excerpts:

What are the top challenges Indian enterprises face when modernizing legacy systems—be it technical debt, skill gaps, or resistance to change?
Modernizing legacy systems in India, or anywhere in the world, has historically been challenging, expensive and prone to stalling or complete failure. But one of the things we’re most excited about in 2025 is that our new AI-driven modernization process has proven it can dramatically accelerate the speed and reduce the cost of these projects.

But first, let’s look at what the challenges really are.

One of the primary obstacles enterprises face is, of course, technical debt. Outdated systems are deeply embedded in business operations, making migration complex, costly, and time-consuming. These legacy systems often have hardcoded dependencies and intricate architectures, necessitating substantial investment in re-engineering efforts.

Beyond technical debt, introducing new development processes and technologies across engineering teams remains a critical challenge. Organizations must ensure seamless adoption of AI-ready architectures while overcoming resistance to change. Legacy systems have often been in place for decades, and decision-makers fear disruptions to core operations, which slows down modernization efforts. Striking a balance between innovation and operational stability is crucial for enterprises undergoing transformation.

Given that 98% of enterprise applications in India still rely on legacy systems, how should Indian enterprises overcome the limitations of rigid relational databases, particularly in terms of scalability and innovation?

One of the most effective ways to overcome these challenges is by adopting a modern, document-based database like MongoDB. Unlike traditional RDBMS, MongoDB offers a flexible schema that allows organizations to evolve, adapt and scale. This adaptability is critical in today’s fast-paced business environment, where rapid iteration and responsiveness to market needs are key to staying competitive.
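To make the flexible-schema point concrete, here is a minimal sketch (not drawn from the interview) using the pymongo driver against a hypothetical local instance; the collection and field names are illustrative:

```python
# Illustrative sketch only: documents with different shapes coexisting in one
# collection, with no schema migration required between them.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
customers = client["demo"]["customers"]

# An early record: just the fields the application needed at launch.
customers.insert_one({"name": "Asha", "city": "Pune"})

# A later record: nested fields added as the product evolved,
# without an ALTER TABLE or downtime.
customers.insert_one({
    "name": "Ravi",
    "city": "Bengaluru",
    "accounts": [{"type": "savings", "balance": 52000}],
    "preferences": {"language": "en-IN", "notifications": True},
})

# Queries work across both shapes; missing fields are simply absent.
for doc in customers.find({"city": "Bengaluru"}):
    print(doc["name"], doc.get("preferences", {}))
```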

From a scalability perspective, MongoDB’s distributed architecture enables enterprises to scale horizontally, ensuring systems can handle growing workloads seamlessly—whether on-premises, in the cloud, or across hybrid environments. This is especially relevant for Indian enterprises expanding into digital-first services and real-time operations.

Moreover, MongoDB’s Application Modernization Factory (AMF) provides structured advisory and migration services, helping enterprises replatform legacy monolithic apps, rewrite core systems with modern tech stacks, and rehost databases on the cloud with MongoDB Atlas.

To move from a legacy infrastructure to a modern solution like MongoDB, enterprises must go on a modernization journey. As I mentioned earlier, AI is massively changing the dynamic of what’s possible in this area.

How is Generative AI reshaping the legacy modernization process for Indian enterprises, and what specific capabilities does MongoDB bring to the table to integrate GenAI into these transitions?
Generative AI is reshaping the legacy modernization process for Indian enterprises by streamlining application migration, reducing technical debt, and accelerating innovation. MongoDB plays a crucial role in this transformation by offering a cloud-agnostic, developer-friendly platform that integrates seamlessly with AI-driven modernization strategies. With tools like the MongoDB Modernization Factory, enterprises can migrate legacy SQL databases, transition from outdated application servers, and automate regression testing using GenAI. This significantly reduces the time and effort required for code migration, freeing up IT resources for more strategic AI-driven initiatives.

For Indian enterprises navigating large-scale modernization, MongoDB’s scalable and AI-ready architecture ensures flexibility, improved developer productivity, and compliance with regulatory requirements.

With India’s digital transformation accelerating, what is MongoDB’s strategy to capture the growing market opportunity for legacy modernization, particularly among PSUs and traditional enterprises?
For the Indian market—particularly public sector undertakings and traditional enterprises—our approach is customer-focused. We want to make modernization faster, more cost-effective, and scalable—unlocking innovation and delivering better citizen and customer experiences.

Depending on the customer or the exact use case, we have a number of proven methods for modernization. More recently, we’ve combined these with the power of Generative AI to accelerate the modernization journey—intelligently assisting in rewriting legacy code, redesigning database schemas, and streamlining application migration.

As AI evolves, we foresee even more intuitive tools that will make application development and modernization easier than ever—turning India’s legacy burden into a leapfrog opportunity.

Beyond modernization, MongoDB is trusted by some of India’s most dynamic businesses and institutions. Our customer base includes names like Canara HSBC, Zepto, Zomato, and SonyLIV—reflecting the platform’s flexibility, scale, and performance across diverse use cases.



TencentDB, MongoDB Renew Strategic Partnership for AI Data Management

MMS Founder
MMS RSS


TencentDB and MongoDB announced the renewal of their strategic partnership agreement, focusing on delivering cutting-edge data management solutions tailored for the AI era. This collaboration aims to empower global users with advanced technological innovations.

MongoDB, a leading NoSQL database, is renowned for its flexible data schema, high performance, and native distributed scalability. It dominates the NoSQL category in the DB-Engines global rankings and is widely adopted across industries such as gaming, social media, e-commerce, finance, and IoT.

Since entering their initial five-year collaboration in 2021, TencentDB and MongoDB have jointly expanded in the Chinese market. Leveraging Tencent’s vast user scenarios and technical innovation, Tencent Cloud has enhanced MongoDB with enterprise-grade capabilities, including:

Backup and Restore: Intelligent O&M and key-based flashback for rapid recovery.

Elastic Scaling: Dynamic resource allocation to handle fluctuating workloads.

Cross-Region Disaster Recovery: Ensuring business continuity for global operations.

These enhancements have supported high-profile clients like Kuro Games’ Tides of Thunder (32 million pre-registered players), Xiaohongshu (小红书), and NIO (蔚来), optimizing stability, scalability, and cost efficiency.

The renewed partnership prioritizes AI integration, equipping Tencent Cloud with features such as full-text search and vector search to address modern application demands. These tools enable clients to build intelligent, future-proof digital solutions.

Beyond China, the collaboration will target the Asia-Pacific region and support domestic enterprises in overseas expansion. TencentDB for MongoDB offers:

Industry-leading backup/restore capabilities.

Robust security compliance frameworks.

Cross-region data synchronization for seamless global operations.

Over the past four years, Tencent Cloud contributed multiple optimizations to the MongoDB open-source community, improving user experience. Both parties emphasized their commitment to fostering a superior MongoDB ecosystem.

Li Qiang, Vice President of Tencent Group, said:

Our partnership has delivered world-class MongoDB services while contributing to the community. We aim to further elevate the ecosystem and provide industry-leading database solutions.

Simon Eid, MongoDB’s APAC SVP, said:

Combining Tencent’s cloud expertise with MongoDB’s robust technology accelerates innovation, particularly for gaming, automotive, and internet sectors. As AI adoption grows, our joint expertise becomes indispensable.



Presentation: GenAI for Productivity

MMS Founder
MMS Mandy Gu

Article originally posted on InfoQ. Visit InfoQ

Transcript

Gu: I’m very excited to share about some of the ways we’re leveraging generative AI for productivity at Wealthsimple, and the journey that got us to this place. My talk is going to be roughly structured and broken into four sections. I’ll start by sharing some context about what we do. We’ll dive deeply into our LLM journey. I’ll talk also about the learnings that came out of it. Then I’ll end with sharing a quick snapshot overview of generative AI today.

Wealthsimple is a Canadian FinTech company. Our mission is to help Canadians achieve their version of financial independence. We do this through our unified app, where investing, saving, and spending come together as one. At Wealthsimple, our generative AI efforts are primarily organized into three streams. The first is employee productivity. This was the original thesis of how we envisioned LLMs adding value, and it continues to be an area of investment today. As we started building up the foundations, the tools, and the guardrails for employee productivity, this also gave us the confidence to start extending the same technologies to our clients to actually optimize operations, which became our second stream of focus.

In optimizing these operations, our goal is to use LLMs and generative AI to provide a more delightful experience for our clients. Third, but certainly not least, there’s the underlying LLM platform, which powers both employee productivity and optimizing operations. Through the investments in our platform, we have a few wins to share in the past 1.5 years since we’ve embarked on our LLM journey. We developed and open sourced our LLM gateway, which, internally, is used by over half the company. We developed and shipped our in-house PII redaction model. We made it really simple to self-host open source LLMs within our own cloud environment. We provided the platform support for fine-tuning and model training with hardware accelerations. We have LLMs in production optimizing operations.

LLM Journey (2023)

How did we get here? Almost two years ago, on November 30, 2022, OpenAI released ChatGPT, and that changed the way the world understood and consumed generative AI. It took what used to be a niche and hard to understand technology and made it accessible to virtually anyone. This democratization of AI led to unprecedented improvements in both innovation and productivity. We were just one of the many companies swept up in this hype and in the potential of what generative AI could do for us. The first thing that we did in 2023 was launch our LLM gateway. When ChatGPT first became popular, the general public’s security awareness around third- and fourth-party data sharing was not as mature as it is today. There were cases where companies were inadvertently oversharing information with OpenAI, and this information was then being used to train new models that would become publicly available.

As a result, a lot of companies out there, Samsung being one of them, had to actually ban ChatGPT among employees to prevent this information from getting out. This wasn’t uncommon especially within the financial services industry. At Wealthsimple, we really did see GenAI for its potential, so we quickly got to work building a gateway that would address these concerns while also providing the freedom to explore. Our gateway, this is a screenshot of what it used to look like in the earlier days. In the first version of our gateway, all it did was maintain an audit trail. It would track what data was being sent externally, where was it being sent externally, and who sent it.

Our gateway was a tool that we made available for all employees behind a VPN, gated by Okta, and it would proxy the information from the conversation, send it to various LLM providers such as OpenAI, and track this information. Users can leverage a dropdown selection of the different models to initiate conversations. Our production systems could also interact with these models programmatically through an API endpoint from our LLM service, which also handles retry and fallback mechanisms. Another feature that we added fairly early on in our gateway was the ability to export and import conversations. Conversations can be exported to any of the other platforms we work with, and they can be imported as checkpoints to create a blended experience across different models.
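As a rough illustration of what such a gateway does, here is a minimal sketch; it is not Wealthsimple's open-sourced implementation, just the general shape of a proxy that keeps an audit trail, retries within a provider, and falls back across providers. The provider table and the audit helper are hypothetical:

```python
# Hypothetical gateway-style proxy: record who sent what where, then forward
# the request with simple retry and provider fallback.
import time
import requests

PROVIDERS = [
    {"name": "openai", "url": "https://api.openai.com/v1/chat/completions", "key": "sk-..."},
    {"name": "cohere", "url": "https://api.cohere.ai/v1/chat", "key": "..."},
]

def log_audit_event(user, provider, messages):
    # A real gateway would write this to a durable audit store rather than stdout.
    chars = sum(len(m["content"]) for m in messages)
    print(f"AUDIT user={user} provider={provider} characters={chars}")

def chat(user, messages, retries=2):
    """Proxy a chat request: audit it, retry per provider, then fall back."""
    last_error = None
    for provider in PROVIDERS:                 # fallback across providers
        for attempt in range(retries):         # retries within one provider
            try:
                log_audit_event(user, provider["name"], messages)
                # A real gateway would translate the payload to each provider's schema.
                resp = requests.post(
                    provider["url"],
                    headers={"Authorization": f"Bearer {provider['key']}"},
                    json={"messages": messages},
                    timeout=30,
                )
                resp.raise_for_status()
                return resp.json()
            except requests.RequestException as err:
                last_error = err
                time.sleep(2 ** attempt)       # simple backoff before retrying
    raise RuntimeError(f"all providers failed: {last_error}")
```

The audit trail sits in front of every outbound call, which is what makes the "paper trail" described above possible without changing how end users prompt the models.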

After we built the gateway, we ran into another problem, which was adoption. A lot of people saw our gateway as a bootleg version of ChatGPT, and there wasn’t that much incentive to use it. One of our philosophies at Wealthsimple is, whenever it comes to new technologies or new tools, we want to make the right way the simple way, or the right way the path of least resistance. We wanted to do something similar with our gateway as well. We wanted people to actually want to use it, and we want to make it really easy for people to use it. We emphasized and amplified a series of sticks and carrots to guide them towards that direction. There was a lot of emphasis on the carrots, and we let a lot of the user feedback drive future iterations of our gateway. Some of the benefits of our gateway is that, one, it’s free to use. We pay for all of the cost. Second, we want to provide optionality. We want to provide a centralized place to interact with all of the different LLM providers.

At the beginning, it was just OpenAI and Cohere, so not much to choose from. This list also expanded as time went on. We also wanted to make it a lot easier for developers. In the early days of interacting with OpenAI, their servers were not the most reliable, so we increased reliability, availability through a series of retry and fallback mechanisms. We actually worked with OpenAI to increase our rate limits as well.

Additionally, we provided an integrated API with both our staging and production environments, so that anyone can explore the interactions between our gateway and other business processes. Alongside those carrots, we also had some very soft sticks to nudge people into the right direction. The first is what we called these nudge mechanisms. Whenever anyone visited ChatGPT or another LLM provider directly, they would get a gentle nudge on Slack saying, have you heard about our LLM gateway? You should be using that instead. Alongside that, we provided guidelines on appropriate LLM use which directed people to leverage the gateway for all work-related purposes.

Although the first iteration of our LLM gateway had a really great paper trail, it offered very little guardrails and mechanisms to actually prevent data from being shared externally. We had a vision that we were working towards, and that drove a lot of the future roadmap and the improvements for our gateway. Our vision was centered around security, reliability, and optionality. Our vision for the gateway was to make the secure path the easy path, with the appropriate guardrails to prevent sharing sensitive information with third-party LLM providers. We wanted to make it highly available, and then again, provide the options of multiple LLM providers to choose from.

In building off of those enablement philosophies, the very next thing we shipped in June of 2023 was our own PII redaction model. We leveraged Microsoft’s Presidio framework along with an NER model we developed internally to detect and redact any potentially sensitive information prior to sending it to OpenAI or any external LLM providers. Here’s a screenshot of our PII redaction model in action. I provide a dummy phone number: I would like you to give me a call at this number. This number is recognized by our PII redaction model as being potentially sensitive PII, so it actually gets redacted prior to being sent to the external provider. What was interesting is that with the PII redaction model, while we closed a gap in security, we actually introduced a different gap in the user experience. One piece of feedback that we heard from a lot of people is that, one, the PII redaction model is not always accurate, so a lot of the time it interfered with the accuracy and relevance of the answers provided.

Two, for them to effectively leverage LLMs in their day-to-day work, the tools need to be able to accept some degree of PII, because that fundamentally was the data they worked with. For us, going back to our philosophy of making the right way the easy way, we started to look into self-hosting open source LLMs. The idea was that by hosting these LLMs within our own VPCs, we didn’t have to run the PII redaction model. We could encourage people to send any information to these models, because the data would stay within our cloud environments. We spent the next month building a simple framework using llama.cpp, a library for running quantized open source LLMs. The first three models that we started self-hosting were Llama (Llama 2 at the time), the Mistral models, and also Whisper, which OpenAI had open sourced. I know technically Whisper is not an LLM, it’s a voice transcription model. For simplicity, we included it under the umbrella of our LLM platform.
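Going back to the redaction step described above, here is a minimal sketch of pre-send PII redaction using Microsoft Presidio's analyzer and anonymizer. The in-house NER model mentioned in the talk is omitted, and the sample prompt is made up:

```python
# Minimal sketch of pre-send PII redaction with Microsoft Presidio.
# Wealthsimple paired a pipeline like this with an in-house NER model,
# which is not shown here.
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

def redact(prompt: str) -> str:
    # Detect entities such as phone numbers, names, and emails...
    findings = analyzer.analyze(text=prompt, language="en")
    # ...and replace each detected span with a placeholder before the prompt
    # leaves the trusted environment.
    return anonymizer.anonymize(text=prompt, analyzer_results=findings).text

print(redact("I would like you to give me a call at 416-555-0199."))
# roughly: "I would like you to give me a call at <PHONE_NUMBER>."
```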

After introducing these self-hosted LLMs, we made a fast follow by introducing retrieval augmented generation as an API, which also included a very deliberate choice of our vector database. We heard in a lot of the feedback, and we saw in both industry trends and our own use cases, that the most powerful applications of LLMs involved grounding them against context that was relevant to the company. In making these investments within our LLM platform, we first introduced Elasticsearch as our vector database.

We built pipelines and DAGs in Airflow, our orchestration framework, to update and index our common knowledge bases. We offered a very simple semantic search as our first RAG API. We encouraged our developers and our end users to build upon these APIs and building blocks that we provided in order to leverage LLMs grounded against our company context. What we found very interesting was that even though grounding was one of the things that a lot of our end users asked for, even though intuitively it made sense as a useful building block within our platform, the engagement and adoption was actually very low. People were not expanding our knowledge bases as we thought they would. They were not extending their APIs. There was very little exploration to be done. We realized that we probably didn’t make this easy enough. There was still a gap when it came to experimentation. There was still a gap when it came to exploration. It was hard for people to get feedback on the LLM and GenAI products that they were building.
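For reference, a semantic-search RAG call of the shape described above boils down to something like the following sketch: embed the question, retrieve the nearest chunks from Elasticsearch, and build a grounded prompt. The index name, field names, and the embed() helper are hypothetical stand-ins for whatever the Airflow DAGs actually index:

```python
# Condensed sketch of a semantic-search RAG endpoint; names are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def embed(text: str) -> list[float]:
    # Placeholder for whichever embedding model the indexing pipelines use.
    raise NotImplementedError

def semantic_search(question: str, k: int = 5) -> list[str]:
    hits = es.search(
        index="knowledge-base",
        knn={
            "field": "embedding",
            "query_vector": embed(question),
            "k": k,
            "num_candidates": 50,
        },
    )
    return [h["_source"]["chunk_text"] for h in hits["hits"]["hits"]]

def build_prompt(question: str) -> str:
    # Ground the model against retrieved company context rather than raw memory.
    context = "\n\n".join(semantic_search(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```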

In recognizing that, one of the next things that we invested in was what we called our data applications platform. We built an internal service. It runs on Python and Streamlit. We chose that stack because it’s easy to use and it’s something a lot of our data scientists were familiar with. Once again, we put this behind Okta, made it available behind our VPNs, and created what was essentially a platform that was very easy to build new applications and iterate on those applications. The idea was that data scientists and developers, or really anyone who was interested and willing to get a little bit technical, they were able to build their own applications, have it run on a data applications platform, and create this very fast feedback loop to share with stakeholders, get feedback.
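As a toy example of the kind of proof-of-concept app such a platform hosts, here is a single-file Streamlit page that calls a gateway-style API; the gateway URL and model name are placeholders, not the internal endpoints:

```python
# Toy Streamlit proof of concept: a page that summarizes a pasted ticket by
# calling an internal gateway API. Run with: streamlit run app.py
import requests
import streamlit as st

st.title("Prototype: ticket summarizer")

ticket = st.text_area("Paste a client ticket")
if st.button("Summarize") and ticket:
    resp = requests.post(
        "https://llm-gateway.internal/chat",  # placeholder gateway URL
        json={
            "model": "self-hosted-mistral",   # placeholder model name
            "messages": [{"role": "user", "content": f"Summarize this ticket:\n{ticket}"}],
        },
        timeout=60,
    )
    st.write(resp.json()["choices"][0]["message"]["content"])
```

The point is the feedback loop: a data scientist can get a page like this in front of stakeholders the same day, then iterate.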

In a lot of the cases, these proof-of-concept applications expanded into something much bigger. Within just the first two weeks of launching our data application platform, we had over seven applications running on it. Of those seven, two of them eventually made it into production where they’re adding value and optimizing operations and creating a more delightful client experience. With the introduction of our data applications platform, our LLM platform was also starting to come together. This is a very high-level diagram of what it looks like. In the first row, our contextual data and knowledge bases are ingested through our Airflow DAGs into our embedding models, and then updated and indexed in Elasticsearch. We also chose LangChain to orchestrate our data applications, which sits very closely with both our data applications platform and our LLM service. Then we have the API for our LLM gateway through our LLM service, tightly integrated within our production environments.

As our LLM platform came together, we started also building internal tools that we thought would be very powerful for employee productivity. At the end of 2023, we built a tool we called Boosterpack, which combines a lot of the reusable building blocks that I mentioned earlier. The idea of Boosterpack is we wanted to provide a personal assistant grounded against Wealthsimple context for all of our employees. We want to run this on our cloud infrastructure with three different types of knowledge bases, the first being public knowledge bases, which was accessible to everyone at the company, with source code, help articles, and financial newsletters. The second would be a private knowledge base for each employee where they can store and query their own personal documents.

The third is a limited knowledge base that can be shared with a limited set of coworkers, delineated by role and projects. This is what we call the Wealthsimple Boosterpack. I have a short video of what it looks like. Boosterpack was one of the applications we actually built on top of our data applications platform. In this recording, I’m uploading a file, a study of the economic benefits of productivity through AI, and adding this to a private knowledge base for myself. Once this knowledge base is created, I can leverage the chat functionality to ask questions about it. Alongside the question answering functionality, we also provided a source, and this was really effective, especially when it came to documents as part of our knowledge bases. You could actually see where the answer was sourced from, and the link would take you there if you wanted to do any fact checking or further reading.

LLM Journey (2024)

2023 ended with a lot of excitement. We rounded the year off by introducing our LLM gateway, introducing self-hosted models, providing a RAG API, and building a data applications platform. We ended the year by building what we thought would be one of our coolest internal tools ever. We were in for a bit of a shock when it came to 2024. This graph is Gartner’s hype cycle, which maps out the evolution of expectations for emerging technologies. This is very relevant for generative AI: in 2023, most of us were entering the peak of inflated expectations. We were so excited about what LLMs could do for us. We weren’t exactly sure in concrete ways where the business alignment came from, but we had the confidence, and we wanted to make big bets in this space.

On the other hand, as we were entering 2024, it was sobering for us as a company and for the industry as a whole too. We realized that not all of our bets had paid off. That in some cases, we may have indexed a little bit too much into investments for generative AI, or building tools for GenAI. What this meant for us, for Wealthsimple in particular was, our strategy evolved to be a lot more deliberate. We started focusing a lot more on the business alignment and on how we can get business alignment with our generative AI applications. There was less appetite for bets. There was less appetite for, let’s see what happens if we swap this out for one of the best performing models. We became a lot more deliberate and nuanced in our strategy as a whole. In 2024, we actually spent a big chunk of time at the beginning of the year just going back to our strategy, talking to end users, and thinking really deeply about the intersection between generative AI and the values our business cared about.

The first thing we actually did as a part of our LLM journey concretely in 2024 was we unshipped something we built in 2023. When we first launched our LLM gateway, we introduced the nudge mechanisms, which were the gentle Slack reminders for anyone not using our gateway. Long story short, it wasn’t working. We found very little evidence that the nudges were affecting and changing behavior. People who are getting nudged, it was the same people getting nudged over again, and they became conditioned to ignore it. Instead, what we found was that improvements to the platform itself was a much stronger indicator for behavioral changes. We got rid of these mechanisms because they weren’t working and they were just causing noise.

Following that, in May of this year, we started expanding the LLM providers that we wanted to offer. The catalyst for this was Gemini. Around that time, Gemini had launched their 1 million token context window models, and this was later replaced by the 2-plus million ones. We were really interested to see what this could do for us and how it could circumvent a lot of our previous challenges with the context window limitations. We spent a lot of time thinking about the providers we wanted to offer, and building the foundations and building blocks to first introduce Gemini, but eventually other providers as well. A big part of 2024 has also been about keeping up with the latest trends in the industry.

In 2023, a lot of our time and energy were spent on making sure we had the best state-of-the-art model available on our platform. We realized that this was quickly becoming a losing battle, because the state-of-the-art models were evolving. They were changing every week or every few weeks. That strategy shifted in 2024 where, instead of focusing on the models themselves, we took a step back and focused at a higher level on the trends. One of the emerging trends to come out this year was multi-modal inputs. Who knew you could have even lower-friction ways of interacting with generative AI? Forget about text: now we can send a file or a picture. This was something that caught on really quickly within our company. We started out by leveraging Gemini’s multi-modal capabilities. We added a feature within our gateway where our end users could upload either an image or a PDF, and the LLM would be able to drive the conversation with an understanding of what was being sent.

Within the first few weeks of launching this, close to a third of all of our end users started leveraging a multi-modal feature at least once a week. One of the most common use cases we found was when people were running into issues with our internal tools, when they were running into program errors, or even errors working with our BI tool. For humans, if you’re a developer and someone sends you a screenshot of their stack trace, that’s an antipattern. We would want to get the text copy of it. Where humans offered very little patience for that sort of thing, LLMs embraced it. Pretty soon, we were actually seeing behavioral changes in the way people communicate, because LLMs’ multi-modal inputs made it so easy to just throw a screenshot, throw a message. A lot of people were doing it fairly often. This may not necessarily be one of the good things to come out of it, but the silver lining is we did provide a very simple way for people to get the help they needed in the medium they needed.
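A rough sketch of that multi-modal flow, using the google-generativeai SDK directly; in practice the call would go through the gateway, and the model name and file path here are placeholders:

```python
# Sketch only: attach a screenshot to the prompt and let the model read the
# error directly, rather than asking the user to extract the text themselves.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="...")                      # in practice, routed via the gateway
model = genai.GenerativeModel("gemini-1.5-pro")     # placeholder model name

screenshot = Image.open("dashboard_error.png")      # placeholder file
response = model.generate_content([
    "I keep running into this error while refreshing my dashboard. What does it mean?",
    screenshot,
])
print(response.text)
```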

Here is an example of an error someone encountered when working with our BI tool. This is a fairly simple error. If you ask our gateway, I keep running into this error message while refreshing MySQL dashboard, what does this mean? It actually provides a fairly detailed list of how to diagnose the problem. Now, of course, you could get the same results by just copying and pasting it, but for a lot of our less technical users, it’s sometimes a little hard to distinguish the actual error message from the full result.

After supporting multi-modal inputs, the next thing we actually added to our platform was Bedrock. Bedrock was a very interesting addition, because this marked a shift in our build versus buy strategy. Bedrock is AWS’s managed service for interacting with foundational large language models, and it also provides the ability to deploy and fine-tune these models at scale. There was a very big overlap between everything we’ve been building internally and what Bedrock had to offer. We had actually considered Bedrock back in 2023 but said no to it, in favor of building up a lot of these capabilities ourselves. Our motivation at that time was so that we could build up the confidence, the knowhow internally, to deploy these technologies at scale. With 2024 being a very different year, this was also a good inflection point for us, as we shifted and reevaluated our build versus buy strategy.

The three points I have here on the slides are our top considerations when it comes to build versus buy. The first is that we have a baseline requirement for security and privacy; if we wanted to buy something, it needs to meet that. The second is the consideration of time to market and cost. The third, which changed a lot between 2023 and 2024, was considering and evaluating our unique points of leverage, otherwise known as the opportunity cost of building something as opposed to buying it. There were a lot of trends that drove the evolution of this thinking. The first was that the security awareness of vendors and LLM providers got a lot better over time. LLM providers were offering mechanisms for zero data retention. They were becoming a lot more integrated with cloud providers. They had learned a lot from the risks and the pitfalls of the previous year to know that consumers cared about these things.

The second trend that we’ve seen, and this was something that affected us a lot more internally, is that as we got a better understanding of generative AI, it also meant we had a better understanding of how to apply it in ways to add value, to increase business alignment. Oftentimes, getting the most value out of our work is not by building GenAI tools that exist on the marketplace. It’s by looking deeply into what we need as a business and understanding and evaluating the intersections with generative AI there. Both of these points actually shifted our strategy to what was initially very build focus, to being a lot more buy focused. The last point I’ll mention which makes this whole thing a lot more nuanced is that, over the past year to two years, a lot more vendors, both existing and new, are offering GenAI integrations. Almost every single SaaS product has an AI add-on now, and they all cost money.

One analogy we like to use internally is, this is really similar to the streaming versus cable paradigm, where, once upon a time, getting Netflix was a very economical decision when contrasted against the cost of cable. Today, with all of the streaming services, you can easily be paying a lot more for that than what you had initially been paying for cable. We found ourselves running into a similar predicament when evaluating all of these additional GenAI offerings provided by our vendors. All that is to say is the decision for build versus buy has gotten a lot more nuanced today than it was even a year ago. We’re certainly more open to buying, but there are a lot of considerations on making sure we’re buying the right tools that add value and not just providing duplicate value.

After adopting Bedrock, we turned our attention to the API that we offered for interacting with our LLM gateway. When we first shipped our gateway and offered this API, we didn’t think too deeply about what the structure would look like, and this ended up being a decision that we would regret. As OpenAI’s API specs became the gold standard, we ran into a lot of headaches with integrations. We had to monkey patch and rewrite a lot of code from LangChain and other libraries and frameworks because we didn’t offer a compatible API structure. We took some time in September of this year to ship v2 of our API, which mirrors OpenAI’s API specs. The lesson we learned here is that as the tools and frameworks within GenAI mature, it’s important to think about what the right standards and integrations are.
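This is why a compatible spec matters: once the gateway mirrors OpenAI's API, existing SDKs and frameworks work by changing only the base URL. A minimal sketch, with a hypothetical gateway URL and model name:

```python
# Pointing the standard OpenAI client at an OpenAI-compatible internal gateway.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm-gateway.internal/v2",  # hypothetical gateway endpoint
    api_key="internal-token",                    # gateway-issued credential
)

completion = client.chat.completions.create(
    model="self-hosted-llama",                   # placeholder model name
    messages=[{"role": "user", "content": "Draft release notes for last week's changes."}],
)
print(completion.choices[0].message.content)
```

Libraries such as LangChain that speak the OpenAI wire format can then be configured against the gateway instead of needing monkey patches.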

Learnings

This brings us a lot closer to where we are today. Over the past few years, our platform, our tools, and the landscape have changed a lot, and we’ve had a lot of learnings along the way. Alongside these learnings, we also gained a better understanding of how people use these tools and what they use them to do. I wanted to share some statistics that we’ve gathered internally on this usage. The first is that there is, at least within Wealthsimple, a very strong intersection between generative AI and productivity.

In the surveys and the client interviews we did, almost everyone who used LLMs found it to significantly increase or improve their productivity. This is more of a qualitative measure. We also found that LLM gateway adoption was fairly uniform across tenure and level. It’s a fairly even split between individual contributors and people leaders. This was great affirmation for us, because we had spent a lot of time in building a tool and a platform that was very much bottoms-up driven. This was good affirmation that we were offering these tools that were genuinely delightful and frictionless for our end users.

In terms of how we were leveraging LLMs internally, this data is a few months out of date, but we actually spent some time annotating a lot of the use cases. The top usage was programming support. Almost half of all usage was some variation of debugging, code generation, or general programming support. The second was content generation/augmentation, so, “Help me write something. Change the style of this message. Complete what I had written”.

Then the third category was information retrieval. A lot of this was focused around research or parsing documents. What’s interesting is that almost all the use cases we saw fell within these three buckets; there were very few use cases outside of them. We also found that about 80% of our LLM usage came through our LLM gateway. This is not going to be a perfectly accurate measure, because we don’t have a comprehensive list of all of the direct LLM accesses out there, but only about 20% of our LLM traffic hit the providers directly, and most of it came through the gateway. We thought this was pretty cool. We also learned a lot of lessons in behavior. One of our biggest takeaways this year was that, as our LLM tooling became more mature, we learned that our tools are the most valuable when injected in the places we do work, and that the movement of information between platforms is a huge detractor. We wanted to create a centralized place for people to do their work.

An antipattern to this would be if they needed seven different tabs open for all of their LLM or GenAI needs. Having to visit multiple places for generative AI is a confusing experience, and we learned that even as the number of tools grew, most people stuck to using a single tool. We wrapped up 2023 thinking that Boosterpack was going to fundamentally change the way people leverage this technology. That didn’t really happen. We had some good bursts in adoption, and there were some good use cases, but at the end of the day, we actually bifurcated our tools and created two different places for people to meet their GenAI needs. That was detrimental for both adoption and productivity. The learning from here is that we need to be a lot more deliberate about the tools we build, and we need to put investments into centralizing a lot of this tooling. Because even though this is what people said they wanted, even though intuitively this made sense, user behavior for these tools is a tricky thing, and it will often surprise us.

GenAI Today

Taking all of these learnings, I wanted to share a little bit more about generative AI today at Wealthsimple, how we’re using it, and how we’re thinking about it going into 2025. The first is that, in spite of the pitfalls along the way, overall, Wealthsimple really loves LLMs. Across all the different tools we offer, over 2,200 messages get sent daily. Close to a third of the entire company are weekly active users. Slightly over half of the company are monthly active users. Adoption and engagement for these tools are really great.

At the same time, the feedback that we’re hearing is that it is helping people be more productive. All of the foundations and guardrails that we learned and developed for employee productivity also pave the way to providing a more delightful client experience. These internal tools establish the building blocks to develop GenAI at scale, and they’re giving us the confidence to find opportunities to optimize operations for our clients. By providing the freedom for anyone at the company to freely and securely explore this technology, we had a lot of organic extensions and additions that involve generative AI, a lot of which we had never thought of before. As of today, we actually do have a lot of use cases, both in development and in production, that are optimizing operations. I wanted to share one of them. This is what our client experience triaging workflow used to look like. Every single day, we get a lot of tickets, both through text and through phone calls from our clients.

A few years ago, we actually had a team dedicated to reading all of these tickets and triaging them. Which team should these tickets be sent to so that the clients can get their issue resolved? Pretty quickly, we realized this is not a very effective workflow, and the people on this team, they didn’t enjoy what they were doing. We developed a transformer-based model to help with this triage. This is what we’re calling our original client experience triaging workflow. This model would only work for emails. It would take the ticket and then map it to a topic and subtopic. This classification would determine where this ticket gets sent to. This was one of the areas which very organically extended into generative AI, because the team working on it had experimented with the tools that we offered. With our LLM platform, there were two improvements that were made.

The first is that by leveraging Whisper, we could extend triaging to all tickets, not just emails. Whisper would transcribe any phone calls into text first, and then the text would be passed into the downstream system. Generations from our self-hosted LLMs were used to enrich the classification, so we were able to get huge performance boosts, which translated into many hours saved by both our client experience agents and our clients themselves.

Going back to the hype chart: in 2023 we were climbing up that peak of inflated expectations, and 2024 was a little bit sobering as we made our way down. Towards the end of this year, and as we’re headed into next year, I think we’re on a very good trajectory to ascend that slope of enlightenment. Even with the ups and downs over the past two years, there’s still a lot of optimism, and there’s still a lot of excitement for what next year could hold.
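To make the triage example concrete, here is a schematic sketch of that upgraded flow, assuming open source Whisper for transcription and an OpenAI-compatible gateway endpoint for classification; the routing table, URL, and model name are illustrative rather than the production system:

```python
# Schematic triage flow: transcribe voice tickets with Whisper, then ask an
# LLM (via a gateway-style endpoint) for a topic that decides the routing queue.
import whisper
from openai import OpenAI

ROUTES = {"funding": "payments-team", "tax documents": "tax-team", "login": "security-team"}

asr = whisper.load_model("base")                      # placeholder model size
llm = OpenAI(base_url="https://llm-gateway.internal/v2", api_key="internal-token")

def triage(ticket_text: str | None = None, audio_path: str | None = None) -> str:
    if audio_path:
        # Phone call: transcribe first so every ticket reaches the classifier as text.
        ticket_text = asr.transcribe(audio_path)["text"]
    prompt = (
        f"Classify this support ticket into one of {list(ROUTES)}. "
        f"Reply with the topic only.\n\nTicket: {ticket_text}"
    )
    topic = llm.chat.completions.create(
        model="self-hosted-llama",                    # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content.strip().lower()
    return ROUTES.get(topic, "general-queue")         # unknown topics go to a default queue
```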

Questions and Answers

Participant 1: When it comes to helping people realize that putting lots of personal information into an LLM is not necessarily a safe thing to do, how did you ensure, from a user education standpoint, that people weren’t sharing compromising data?

Gu: I think there’s two parts to this. One is that we found, over the years of introducing new tools, that good intentions are not necessarily enough. That we couldn’t just trust people. That we couldn’t just focus on education. That we actually needed the guardrails and the mechanisms within our system to guide them to ensure they make the right decision, outside of just informing them about what to do. That was one part to our philosophy. I think, to your point, definitely leveling up that understanding of the security risk was very important. Being a financial services company, we work with very sensitive information for our clients. As a part of our routine training, there’s a lot of education already about like, what is acceptable to share and what is not acceptable to share. The part that was very hard for people to wrap their heads around is what happens when this information is being shared directly with OpenAI, for instance, or in a lot of cases like fourth-party data sharing.

For instance, Slack has their AI integration. Notion has their AI integration. What does that mean? To an extent, it does mean all of this information will get sent to the providers directly. That was the part that was really hard for people to wrap their heads around. This is definitely not a problem that we’ve solved, but some of the ways that we’ve been trying to raise that awareness is through onboarding. We’ve actually added a component for all employee onboarding that includes guidelines for proper AI usage. We’ve added a lot more education for leaders and individuals in the company who may be involved in the procurement process for new vendors, and the implications that may have from a security point of view.

Participant 2: What did the data platform consist of, and how did you use it in your solution?

Gu: There’s definitely a very close intersection between our data platform and our machine learning platform. For instance, one of the bread-and-butter pieces of our data platform is our orchestration framework, Airflow. That’s what we used to update the embeddings within our vector database and make sure they stayed up to date with our knowledge bases. Outside of that, when it comes to exploration, and especially for our data scientists as they’re building new LLM and ML products, there’s a very close intersection between the data we have available in our data warehouse and the downstream use cases. I would call those two out as the biggest intersections.

Participant 3: Early in the conversation, you talked about Elasticsearch as your vector database capability for similarity search for RAG purposes. Later, you talked about transitioning to Bedrock. Did you keep Elasticsearch, or did you get off of that when you transitioned to Bedrock?

Gu: We didn’t get off of that. Actually, we’re using OpenSearch, which is AWS’s managed version of Elasticsearch. At the time we chose OpenSearch/Elasticsearch because it was already part of our stack, so it was an easy choice. We didn’t go into it thinking it would be our permanent choice; we understand this is a space that evolves a lot. Right now, Bedrock is still fairly new to us. We’re primarily using it to extend our LLM provider offerings, specifically for Anthropic models. We haven’t evaluated its other capabilities as deeply, like its vector database or fine-tuning support. That’s definitely one of the things we want to dig deeper into in 2025 as we look into what the next iteration of our platform will look like.

Participant 3: Are you happy with the similarity results that you’re getting with OpenSearch?

Gu: I think we are. Studies have shown this is usually not the most effective approach from a performance and relevancy perspective, at least. Where we’re really happy with it is that it’s easy to scale, latency is really good, and it’s overall simple to use. Depending on the use case, using a reranker or a different technique may be better suited.
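For readers unfamiliar with how such a similarity search looks in practice, here is a hedged sketch of a k-NN query against an OpenSearch index over the REST API. The host, index name, embedding field, and vector values are placeholders; the query shape follows OpenSearch’s k-NN query DSL.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of a k-NN similarity search against an OpenSearch index of document
// embeddings. The host, index ("kb-embeddings"), and field ("embedding") are
// placeholders.
public class SimilaritySearch {
    public static void main(String[] args) throws Exception {
        // In practice the query vector comes from the same embedding model
        // used to index the knowledge base; a tiny fake vector is used here.
        String body = """
            {
              "size": 5,
              "query": {
                "knn": {
                  "embedding": {
                    "vector": [0.12, -0.03, 0.88, 0.41],
                    "k": 5
                  }
                }
              }
            }
            """;

        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://opensearch.internal:9200/kb-embeddings/_search"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The hits are the nearest documents; a reranker could reorder them here.
        System.out.println(response.body());
    }
}
```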




Google Go Module Mirror Served Backdoor for 3+ Years

MMS Founder
MMS Craig Risi

Article originally posted on InfoQ. Visit InfoQ

In February 2025, researchers at Socket uncovered a significant supply chain attack within the Go programming ecosystem. A malicious package named github.com/boltdb-go/bolt was discovered impersonating the legitimate and widely used BoltDB module (github.com/boltdb/bolt). This backdoored package exploited the Go Module Proxy’s caching mechanism to persist undetected for years, underscoring vulnerabilities in module management systems.

The Go Module Proxy is designed to cache modules indefinitely to ensure consistent and reliable builds. While this immutability offers benefits like reproducible builds and protection against upstream changes, it also presents a risk: once a malicious module is cached, it remains available to developers, even if the source repository is cleaned or altered. In this incident, the attacker leveraged this feature to maintain the presence of the backdoored package within the ecosystem, despite subsequent changes to the repository.

This case is part of a broader trend where attackers exploit package management systems through techniques like typosquatting. Similar incidents have been observed in other ecosystems, such as npm and PyPI, where malicious packages mimic popular libraries to deceive developers. 

To reduce the risk of supply chain attacks, developers should carefully verify package names and sources before installation, ensuring they’re using trusted libraries. Regular audits of dependencies can help catch signs of tampering or malicious behavior early. Security tools that flag suspicious packages offer another layer of protection, and staying up to date with known vulnerabilities and ecosystem alerts is essential for maintaining safe development practices.

By adopting these practices, developers can enhance the security of their software supply chains and reduce the risk of introducing malicious code into their projects.
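As a rough illustration of the “verify package names” advice, a simple edit-distance check can flag declared dependencies that are suspiciously close to, but not identical to, well-known module paths. The trusted list and threshold below are illustrative only; real audits would rely on curated registries and dedicated tooling.

```java
import java.util.List;

// Illustrative check: flag declared module paths that are suspiciously close
// to (but not the same as) well-known module paths, a common typosquatting tell.
public class TyposquatCheck {

    // A tiny "trusted" list for the example; real audits would use a curated
    // registry or the organization's own allowlist.
    private static final List<String> TRUSTED = List.of(
            "github.com/boltdb/bolt",
            "github.com/stretchr/testify");

    // Classic Levenshtein edit distance between two strings.
    static int editDistance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        }
        return d[a.length()][b.length()];
    }

    public static void main(String[] args) {
        String declared = "github.com/boltdb-go/bolt"; // the impersonating module
        for (String trusted : TRUSTED) {
            int distance = editDistance(declared, trusted);
            if (distance > 0 && distance <= 4) {
                System.out.printf("Suspicious: %s is within %d edits of %s%n",
                        declared, distance, trusted);
            }
        }
    }
}
```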



Podcast: Implement the EU Cyber Resilience Act’s Requirements to Strengthen Your Software Project

MMS Founder
MMS Eddie Knight

Article originally posted on InfoQ. Visit InfoQ

Transcript

Olimpiu Pop: Hello, everybody. I’m Olimpiu Pop, an editor with InfoQ, and tonight, we have Eddie Knight to demystify the European Cyber Resilience Act. I can try to give an introduction, but he’s a proper Knight, and he has so many titles in so many organizations, so I’ll just let Eddie do the intros. Eddie, tell us a bit about what you’re doing these days.

Eddie Knight: Sonatype is my employer. Within Sonatype, I am the OSPO lead, which means I manage our relationships, primarily external ones with the Linux Foundation and a few others, such as the Eclipse Foundation and the Apache Foundation. But most of my scope is within the Linux Foundation personally, which has me on the Technical Oversight Committee for FINOS, the financial technology arm of the Linux Foundation. And that’s because my background is in finance: I was at Bank of America and then at Morgan Stanley before Sonatype. Because I have a security and compliance background, I am the co-chair of the Security Technical Advisory Group for CNCF. And in the course of those duties, some of the stuff that we’ll talk about here in a bit, a lot of the activities have overlapped with OpenSSF, the Open Source Security Foundation, and so I maintain a few projects over there as well. So yes, you’re right. I am kind of everywhere.

Olimpiu Pop: So just to make it short, you are supported by Sonatype, your employer, to do a lot of good stuff in the open source field.

Eddie Knight: That’s exactly it.

Software supply-chain threats are growing at alarming rates [02:03]

Olimpiu Pop: Thank you for simplifying that for me. First of all, congrats on the keynote that you just gave at KubeCon. Michael and you did an excellent job. I will just try to dig deeper on those points, because the Cyber Resilience Act in Europe has raised a couple of eyebrows over the years. But before we go there, as you said, you’re with Sonatype, and Sonatype has a decade of reports behind it on what happens in the open source community and in the supply chain. And as we know, software is eating the world, and obviously now AI is eating the world. So let’s look a bit at that. How did the supply chain evolve in this decade?

Eddie Knight: Yes, so the last decade of the software supply chain. I think one of the biggest wins is that fewer and fewer people are saying, “We don’t use open source”. I should put that in air quotes. Fewer people are saying that, and you’ll find that a lot of the people who do say it are using Java and running Linux machines, and they say, “We don’t use open source”. Like, okay, cool, how are you doing anything? Or Python: “We’re a Python shop that doesn’t use open source”. Exactly, my friend. That is an open-source language. So we’re seeing less and less of that. That’s a big win.

We’re also seeing a massive influx in discussion around supply chain security. A decade ago, we didn’t have the term supply chain security. I wasn’t working in this space. I wasn’t thinking about this space. I was a culprit in this space, backdooring dependencies into my firms and taking shortcuts to increase business value. But we see a lot more people having these discussions now, and that’s a huge benefit, because on the flip side, over the last three years we’ve seen the number of attacks on the open-source supply chain double every single year, and it’s just a huge space.

Olimpiu Pop: That means if it doubled every year, now after a decade, it’s a lot. You are the quant here.

Eddie Knight: That’s a good number.

Artificial Intelligence is an accelerator; it can be used for defending or on the offensive side [04:12]

Olimpiu Pop: You are the quant here. But for me it seems that that’s a lot. I remember the last time when I checked it, and I think that was last year in October, it was a quarter of a million attacks solely in the supply chain. So that’s a lot, and I’m expecting that things changed also, given that AI is creating a new leverage. Do you have any insights about that? How did the AI change the game?

Eddie Knight: On the defensive side, at least at Sonatype, we’ve always used machine learning. Starting before I joined the firm, maybe six or seven years ago, machine learning was being brought into the analysis. That’s how, when you crack open our tools, we’re able to tell you, “Hey, your software has copy-pasted code from a known vulnerability or known exploit”. Whether it’s malware or a vulnerability, the way we’re able to tell you that you copy-pasted it over is through machine learning.

So machine learning in that aspect has been around for a while, but the generative AI that we have these days is having the opposite effect. It’s allowing known exploits to be obfuscated, to be done in different ways, to be manipulated, or just to be performed by people who otherwise knew nothing about them. In the early days of ChatGPT, I was able to go in there, describe the types of attacks I wanted to protect against, and get ChatGPT to explain exactly how I would build malware to distribute. So the doubling we’ve seen every single year since generative AI became public is definitely downstream from AI becoming a more publicly accessible resource, for good and for bad.

Olimpiu Pop: Okay, so just to put it plainly, this new wave of generative AI is a bicycle, more or less. It makes you go faster, but if you’re on the wrong side, you just go over the cliff, or somebody will push you over the cliff and your fall will be much longer. Okay, thank you for that. And to put that in context, because I know I used a very academic term there, it’s a lot in terms of the effects hackers have around us. I just read a paper from Harvard the other day that put it in financial context, and it was something like this: currently the money being put into open source is around $4.15 billion, but the financial impact on the other side is around $8.8 trillion. So for each dollar that we invest in open source, we get $2,100 as a return. That’s a good ROI. I know, and given it’s-

Eddie Knight: I think that would even get past my marketing budget. I think they’d even be happy with that ROI.

Olimpiu Pop: Okay. That’s good. So just to make it even plainer, it could be a return of roughly 2,100X. So that’s a lot; I think everybody would be very happy with that output. Okay. And let me recall the things I found while doing research in this space. Looking at what was happening in the supply chain with the hackers, I found a couple of enemies, let’s call them, of the plain developer. Those were malicious actors, obviously: the people who just want to put a hand in your pocket, take your money, or take private information or anything else that can be sold on the black market. Then a couple of years back, state actors came into play, and that was something new for me. It was becoming espionage, all these kinds of things done for states. I’ll not name countries or anything else, but mainly state actors that just wanted our skin.

The open-source communities worked with the EU to shape the Cyber Resilience Act into the form that allows it to help developers [08:02]

And last but not least, it was used as a weapon, again, in different parts of the world. But the feeling was that bureaucracy is also an enemy of the plain programmer. And I know that you as a company, Sonatype, and especially Brian Fox, were very involved with the Cyber Resilience Act. I know that he was an advocate of keeping things plain. And me being a European, I was very happy to see that the European Union listened, took in the information provided by the companies involved in open source, and that something actually came out that everybody was happy with.

I know that last year, when the final version was signed or voted on, people were very happy, and a bunch of organizations, the Eclipse Foundation and probably the Linux Foundation as well, came together and said, “Okay, now we’re going to work to see it through in terms of implementation”. What’s your insight on that? I mean, you are actually one of the main contributors to making things right. Anything to be added there?

Eddie Knight: Oh, that’s an overloaded question.

Olimpiu Pop: Let’s make it plain. What’s the most important thing that we have to know now as an industry? Put plainly, when should we start worrying about the CRA?

Eddie Knight: I would say don’t worry about it. Think about it. Don’t have anxiety about it. There are so many benefits to the CRA if we all play ball, but that’s not what you meant. You mean when do we need to start acting? And there are two big numbers to remember. The first is, I believe, June 11th, 2026: if you have a known exploitable vulnerability in your software, you will have reporting requirements after that date. The second is going to be... well, there’s a mid-tier milestone, but the end of 2027, December 11th, 2027, is when it goes into full effect. So all of the rules that are written down, which your compliance staff are going to need to understand and metabolize, will be in full effect at the end of 2027, which is a pretty good amount of time to train people up and make sure we have the systems in place.

Olimpiu Pop: So we still have two years more or less?

Eddie Knight: Yes.

Olimpiu Pop: So it would be like a soft landing. There are intermediate steps, right?

Eddie Knight: That’s the intent. That’s the intent. Yes. And I’m actually really proud of the folks who made those decisions and those timelines, because that middle timeline is actually more for themselves, in that in the middle of 2027 there’s a requirement on themselves to have tooling and resources, and they need to be notifying auditors. Those types of activities all need to be done something like half a year before the full rollout. And so there’s a staggering to it that’s actually really beneficial and makes it a lot more possible for this to be done well.

Olimpiu Pop: Okay, good. That sounds digestible, but I know that during your keynote, you had a slide with a lot of dots and a lot of lines. I felt I needed a PhD just to comprehend half of it. So let’s look a bit-

Eddie Knight: Yes, the FUKAMI slide.

Olimpiu Pop: Yes.

Eddie Knight: That’s the FUKAMI slide. It’s the scary slide. All you need to know is how these hundred actors connect to each other, the relationships and responsibilities between them, and then you’re done. There are a lot of different nuanced bits in there. Now, in the keynote, the intention is: hey, you don’t need to worry about all of that right now. If you find yourself in this picture, worry about the lines that you connect to. But this entire ecosystem of the auditing, the regulation, the other regulations that are impacted by this regulation, all of those different types of things — don’t worry about understanding absolutely all of it. Take action on finding yourself in this picture, finding the relationships that you have with other people in this picture and the responsibilities you have because of that. There is action that can be taken here, and it doesn’t mean you need to understand the entire giant picture.

How does the CRA help prevent other “Christmas Miracles” like Log4Shell [12:10]

Olimpiu Pop: Okay, great. So let’s see if I got it right. It depends where I’m positioned in that picture. In my plain understanding, that means I’m either downstream or upstream: somebody is using what I’m building, or I’m using somebody else’s. Of course, it’s oversimplified, because theoretically it goes both ways — I consume other libraries, and other people might use mine. So let’s get back to history. A couple of years back, on a December morning around 4:00 AM, I was just trying to do some proper work, and then I got an email that said that a brown splash had hit the fan, and that brown splash was Log4J.

Eddie Knight: Yes. The Christmas Miracle.

Olimpiu Pop: Exactly, the Christmas Miracle. That pretty much started everything. Well, it didn’t start everything, but it created a domino effect in that area, because from that point on, a lot of countries started putting cyber legislation in place. I know that the United States started doing something; I don’t know if anything from that is still available or still in use today, but that’s what happened. So now I’m just thinking about the guys in Nebraska who were proper heroes for doing that work on the Log4J library and making sure that everything was fixed. What would the CRA mean for them today and tomorrow?

Eddie Knight: Yes. I made Log4J. You’re using Log4J. Somebody found an exploit in my software that I wrote, and now you have to clean up and find every single place that it was installed, do an update, and make sure that that update’s not breaking things. So in that situation, your question is: how does the CRA help you in this story?

Olimpiu Pop: What does it mean to me as a consumer, and what’s the impact for you as the maintainer of Log4J?

Eddie Knight: So in the past, especially prior to this example of Log4J, it was not a universal standard to have much of anything between the maintainer and the consumer. Now, we know that the financial services industry is highly regulated, and it’s an industry standard to have some middle steps in there: an approval process, a scanning process, an artifact storage process in between me as the maintainer and you as the consumer. And when those systems are in place — we saw that the financial services industry, at least the customers of Sonatype who were using those things, where we know the proper tools were properly in place, had an average of four days to remediate all of their Log4J instances for large enterprises.

Comparatively speaking, the universal average of updating all resources from the Log4J incident was four weeks. So not having anything between the maintainer and the consumer is a serious problem because you need that visibility, you need those pre-checks. You need reminders, alerts, tracking, just everything. You need a lot of support there. And when it’s there, life is easy.

The CRA is doing something similar to that, not in a technical sense, but in a kind of a social sense. There’s these rules that are being put in place in between me as the maintainer of this code base and you as the consumer. So that way you know that somebody has come in and looked at this process all along the way before it got to you, and that might mean your bosses had to follow more rules. It might mean that the steward who is hosting and supporting me as a maintainer has more rules that they have to follow, but because of those rules and putting more steps between us, what we’re going to find is that there’s just going to be a lot more of a streamlined relationship so that way there’s less to worry about. And it’s going to be a win. It’s going to shift from compliance being done 100% inside your firm.

When you need to pull something down, you need to have somebody on your team go and research and look at who’s maintaining that. Just all the data that you need to pull in to be able to do a proper analysis of this. Instead of that always being done by you, what we’re going to see is a shifting outward to more shared responsibility, especially for these bigger packages like Log4J. We’re going to see a lot more shared responsibility happening because everybody’s going to be needing to follow the same rules, and it’s just going to be significantly more practical to have the stewards who are supporting those maintainers offset some of those costs and have the audits be done in a public space so that way everybody can share this knowledge and these resources. And when one enterprise is adopting a tool and bringing it up to snuff, everybody in the world is going to benefit from that.

Olimpiu Pop: So, for me, that doesn’t sound that scary. It sounds like we are just putting some steroids in open source, making sure that everybody really benefits from what it’s doing and that it becomes more of a community. Is that correct, more or less?

Eddie Knight: I know not everybody’s looking at it that way. That’s the way I’m looking at it. Absolutely. Yes.

The OpenSSF Baseline assists with CRA adoption [17:36]

Olimpiu Pop: Happy to share the same perspective. Well, obviously, while I was talking about these things — because I did my first share of presentations in this space — I was starting with the line that the European Union doesn’t innovate, it regulates. We are very good at regulating things. But now, looking at this, my feeling is that they are just making sure we do things properly and that people are actually safe, because it carries a huge responsibility. An example popped into my mind: during that period, I was working at a company that was doing only JavaScript, and everybody was laid back — okay, it’s good, we are not using Java, nothing affects us. Two, three hours down the line, everybody was in panic mode, because we were actually using a cloud service that under the hood was obviously using Java, due to its benefits. And then again, we had to start over and work with other stuff. So yes, I understand the benefit of that.

And as you said initially, you are part of a lot of organizations and a lot of working groups in the open source space. What tools can we use? Obviously, understanding the legislation is very complex, but I know that OpenSSF has a bunch of tools that are very useful. For instance, I liked Scorecard a lot. What do you have in the back pocket that we can send developers to?

Eddie Knight: Yes, so Scorecard is a really good tool to get a quick pulse. I would say make sure you’re sending developers to Scorecard and not your regulatory compliance folks, because with Scorecard there’s a set of recommendations in there that are actually really good recommendations, but they’re not vetted by a large community body. They’re not mapped to guidelines such as frameworks and regulations and things like that yet, though conversations are in place about changing that.

But as things currently stand, it’s what a core set of very good engineers has identified as things to improve to lock down your projects, and it’s a really good tool in that you can point it at a million and a half repos every single week and give every single developer a quick little snippet of code to put in their pipeline so they can update their checks whenever they want. You can just run it. And for the general public, there’s this massive database of results, so you can see how any one of a million and a half projects rates according to these checks.

So it’s really good for a quick view. It’s best for developers because it’s giving you actual practical changes that you can make right now. The other thing inside OpenSSF contrasts with that: the downsides I just mentioned for Scorecard are being addressed in a project called the Baseline, which we’ve just been talking about — the Open Source Project Security Baseline. The purpose of the Baseline is to compensate for exactly the things I just listed. We are trying to take a set of known cybersecurity best practices and guidance, things like NIST 800-53, things like the CRA, and bring those down and ask: how does this apply to open source projects? And how does this apply to every single open source project generically, as a literal baseline for open source projects?

In this set of controls there are 40 of them. I think that’s the right number — it’s a round number, and I never trust round numbers. Last I checked, there are 40, and they’re divided into three levels. There’s some topical organization to them, and there are assessment requirements for every single one of them. So you can look at it and say... I almost want to recite some to you, but just as an example: you need to have MFA turned on. And so now I can just stop and think, “Oh yes, for all of my projects MFA’s turned on except for, you know what? I think I didn’t think about it for one of them. Let me go and do that right now”.

And so it’s really good for developers in that aspect to just give this checklist all the way down of, hey, these are the things that are true for every single project. And those 40 are divided up into three levels where the bottom level is 18 or 20 checks that are just like, hey, if you’re a single developer, you could still do this. And the top ones are like, hey, you need to do a security self-assessment. And that’s connecting a lot to the CRA where the CRA is asking open source projects to assess their own security and make attestations and say like, hey, this is where I stand. And that’s the kind of stuff that we would expect from projects with more maintainers, more users, things like that.

But that level one criteria is something that has just never existed for open source projects, so you have that key, central, cohesive set of recommendations. That’s something coming out of OpenSSF that I’m very proud of. I think it’s just a really good project that everybody can benefit from. And on the horizon — the last thing I’ll talk about, because there’s too much, and you’ll never get to talk again if I keep going — on that topic of the Baseline, we are currently working with the Linux Foundation’s LFX Insights platform to get a subset of those checks that can be run against public repos, the way that Scorecard is being run against 1.5 million-odd repos. We are working with LFX Insights to set up a system where that same kind of scanning can be done, but with the results actually mapped to regulatory expectations such as the CRA, and that’s something that’s trying to be provided to open source maintainers, which is really, really exciting.
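As a small illustration of the public results database Knight mentions, the sketch below queries a repository’s published Scorecard result over HTTP. It assumes the public Scorecard results API at api.securityscorecards.dev; the repository is only an example, and the JSON is printed raw rather than parsed.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Fetches the published Scorecard result for a public repository.
public class ScorecardLookup {
    public static void main(String[] args) throws Exception {
        String repo = "github.com/kubernetes/kubernetes"; // example project
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://api.securityscorecards.dev/projects/" + repo))
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The payload contains the aggregate score plus per-check results,
        // the data behind the public results database mentioned above.
        System.out.println(response.body());
    }
}
```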

Olimpiu Pop: So let me break down into levels and points what you said. The Scorecard tool — by the way, it has a really nice logo; I like the logo, so that’s how I chose Scorecard; it’s the best logo ever — is useful for day-to-day developers to check their open source project, or even their internal projects. And my usual recommendation, and you can correct me on whether it was a proper or improper recommendation, was to use it when we are choosing a library to incorporate and adopt in our project. At least that was my view: it was good to check and then compare.

Eddie Knight: Yes, so I’m a little bit on the fence about that, because there’s no guidance on what a good threshold is. It’s a score of zero to 10, and the average score is less than 4.5 — it’s something like four. So if you want a decent number, five is a decent number, but that’s not the whole picture. You might have done some of this stuff without actually securing your project. You might’ve set up a fuzzing tool, but for your project, in your particular situation, that might not be boosting your security as much as some other elements. So you might be increasing your score without necessarily making a significant impact.

Olimpiu Pop: Improving the security.

Eddie Knight: And then on the other side of the exact same point, five out of 10 sounds horrible. That’s horrible. And so I think a lot of folks just don’t know how to read it. If you see a nine, if you see a 10, it’s like, oh, cool, they’re passing. No, no, no. Those projects are doing everything that the engineers maintaining Scorecard could think of to measure. That’s a really, really good score. I looked at one of the OpenTelemetry scores — they have like 70 repos, but one of them was a 9.9 — and I was here in the middle of a conversation with them and I’m like, “How did you do that?” He’s like, “Oh, that’s not my repo. But yes, I guess they just did everything; it’s wild to see scores that high”. But as a reader, as a consumer, we’re not trained to know, “Hey, dude, above four is a pretty good score”.

How the CRA-related controls can be used to enhance the security of your project [25:32]

Olimpiu Pop: So that means that in life, as with everything else, we just have to make sure that the tool is fit for our purpose and understand exactly what’s there. So actually look beyond the number. We’ll have the number as a guiding principle, but then we should look at the things that are actually of interest to us, because as you said, some of the things that are there — okay, we are communicating whether we have bugs or not, and stuff like that. So we actually have to aim to have the most important things in place. Good.

And then you mentioned the Baseline. The Baseline reminded me of the period when I was doing certification. I’m not really happy about those times — I still have some cold sweats during the night about them — but my feeling is that these things are appropriate and they are leveled. So theoretically, if I’m doing those, I’ll be ticking some boxes that will assure my audience that particular operations were performed and that some sanity mechanisms are now in place. One more question on this. How often should it happen? Should it happen every time we do a release, or is it on a time scale?

Eddie Knight: Yes, that’s a really good question. So on one hand, if you have automated checks, you should just run them all the time, put them on a cron, put them on your pull requests, put them on your commits, just run them all the time. Now granted, you’re going to burn a lot of energy doing that, and you might just lose your mind. So I think the thing to consider is, again, which of these values matter to you? With the baseline, what we’ve tried to do is say, hey, every single one of these values always matters. But some of them are like, hey, is this data populated?

The quality of that data might be arbitrary like your security policy. You might have a janky security policy, but you have it. It exists, but it’s maybe not super clear for readers. It doesn’t matter how often you run that check. There’s a degree to which compliance will not always equate to security, and that’s a really important thing that I think we don’t talk about enough.

Olimpiu Pop: So we should talk more about it, right? Okay, so what more... I mean, talking is okay, but beyond that, what actions do we need to take in that spectrum to be on the safer side? Can you name one or two?

Eddie Knight: As far as what controls can we meet or what can we do as a community? Because I’ve got answers for both.

Olimpiu Pop: Let’s have both, because today we are eager to hear more about it.

Eddie Knight: Yes. So there are plenty of controls on multi-factor authentication, which I talked about. There are very technical controls. Those should just always be in place, and you should always check those. You should just be scanning for those. If you’re a user and you’re seeing that some of these detailed technical controls are out of place — like status checks aren’t being run on commits, things like that — that should raise a red flag. I have a repo right now where I am not running my status checks and not requiring code reviewers. I hope that you would come to me and tell me, “Hey, we want to use this”, and this is where the community part comes in. I hope you would come to me and say, “Hey, we want to be using this, but we know that the standard is that you should be doing these certain things”.

And then I would say, “Oh, well, I wasn’t ready for you to be using that in prod”. If your response is, well, we are ready to use it in prod, then you and I should work on implementing those things and making sure they’re in place. And then there’s the other side, which is the clarity of the security documentation. Secure by default is not always possible, so we have to have secure configuration documentation. Take how you turn on Flux, the continuous deployment GitOps platform: you can’t just spin it up and turn it on without there being some security risks that you need to account for in your system, and you need to flip some switches and such. I just came out of a meeting with their maintainers. Flux has documentation around what you should do, but every time a new feature is added, there’s a chance that that documentation is going to go stale.

Knowing what it means to do a securely configured deployment of this application is extremely, extremely important. And so as a community, we need to be more ready and willing to raise our hand when we say, “That wasn’t clear. That looks like it was out of date”. Nobody likes hearing make a pull request, but at least file an issue. At minimum say you have some security documentation that you clearly cared about at one point, and I don’t think it’s up-to-date anymore, or I’m not sure it’s up-to-date anymore. Could you just timestamp this and let everybody know that you reviewed it?

Maintainers are almost universally motivated by end user requests. What the user is asking for is what the maintainers are going to build. The exception is when their bosses are a user. That might be the more powerful line of feedback. But if you’re able to come in and just let folks know like, “Hey, I’m using this. I was trying to use that piece of documentation. Your documentation is a feature for me. Your security documentation is a key feature for me”. Help the maintainer prioritize it, even if you can’t help the maintainer improve it.

Olimpiu Pop: Okay, fair enough. I don’t know why, but in my head a simple rule came up: apply the Girl Scout rule. So either raise a hand and tell somebody that you found a problem, or even better, just go on and fix the issues. And that’s it. Okay. One last question. We are obviously in the land of CNCF. Kubernetes is obviously a very big community, and looking around here, there are a lot of folks. What’s here to be taken away? I mean, not everybody’s on the operations side, but underneath the tools, a lot of things are running on Kubernetes, and there are a lot of other tools here that are used on a day-to-day basis by all of us, either knowingly or unknowingly. Anything else that we have to take as a learning from this event, or even more, what should the folks in the CNCF learn from the CRA, and what do they have to do next?

Eddie Knight: So a lot of what I’ve been doing this week has been the security slam, right? We’ve had four different sessions: OpenTelemetry and Flux are the two graduated projects, and the two sandbox projects are Meshery and OSCAL Compass. And so my headspace is very much in the lessons learned from this experience, where we have been working with the project maintainers to create a backlog of security tasks that could be done and security documentation that could be improved. With the Flux folks, it’s been prototyping a new security feature, which has just been absolutely wild and is clearly occupying a lot of my headspace.

I think the biggest takeaway that I’ve been hammering on and trying to drill in is that all of this needs to be a community effort. The Cyber Resilience Act divides us up into manufacturers, as well as maintainers who might or might not be manufacturers. And then we have stewards, which an open source project may or may not fall under.

So we’ve got maintainers, we’ve got stewards, we’ve got manufacturers, and we’ve got the consumers. And in the middle of all of those are the members. So everybody here is most likely a member of CNCF, and if not, you’re a beneficiary of the members who are paying to keep the lights on here. So the members are at the center of being maintainers and manufacturers and stewards and consumers of all of this open source tooling. And if we can just collectively decide that we’re in this together — for real, for real this time — we can start sharing so much of this burden of regulatory compliance. And instead of just doing compliance, we can start doing compliance in a way that results in real security outcomes. And that is only going to come from actual across-the-aisle collaboration, the JFrog-and-Sonatype-working-together sort of thing, right?

Cats and dogs need to be solving problems together. And when we do that and we approach these really difficult complex topics with an open heart, we are going to be able to upgrade and elevate the community in ways that have just never happened before. And this double every year attack on the software supply chain is going to keep going. People are going to keep just burning electricity on AI programs, trying to get vulnerabilities in trying to get exploits, building malware. And what they’re going to be met with is a mountain of community resistance that is growing just as fast.

How the CRA describes the different roles of individuals and organisations involved in open source [34:32]

Olimpiu Pop: First of all, I think this would be a very good speech for winning the Miss Universe thing — a lot of peace in the world — but yes, you’re totally right. So I think the message is that we should all work together for a brighter future, because it’s us or them. In my presentations, I used to have a pirate flag positioning the dark side of the web — pretty much hacking — right under China in terms of GDP. And that’s scary, because China and the US are in the first and second positions, and we are talking about trillions. It’s important that we all work together, regardless of the name of the company we work for, to make the future brighter and safer for all of us. And I know I promised that that would be the last question for you, but you said something that raised another question. You mentioned open source maintainers and then manufacturers. What’s the difference? How should I position myself from that point of view?

Eddie Knight: In our keynote, we’re not allowed to discuss vendors and manufacturers. We can discuss products and maintainers. With you, I can actually name names and use examples. Take anything that is an open source piece of technology: the maintainers are everybody who brings that thing to life. They’re reputationally associated with it. They’re the leaders in producing this thing.

Olimpiu Pop: That’s an individual.

Eddie Knight: An individual, yes.

Olimpiu Pop: Okay, so if I’m making a pull request to Kubernetes, am I becoming a maintainer of Kubernetes, or is it about the company that is powering it, putting in money and burning hours of their employees?

Eddie Knight: No. So you could be a contributor. Contributors aren’t really captured in the CRA as much; just by making a PR, you’re a contributor. The maintainers also kind of aren’t really called out too much in the CRA, right? Because even though the maintainers are part of the governance structure of the project — open source is software, standards, and community, and this open source project is the software that is built by the community; in the case of Kubernetes, you’ve got all three — those maintainers who are part of the governance structure are not necessarily manufacturers. However, our cloud providers are delivering Kubernetes to us at a price. They are bringing Kubernetes to market. The maintainers are not bringing Kubernetes to market. So Kubernetes, while it is still just a code base, is an idea, it’s a concept, it’s fun, but it’s not a product at that point.

Olimpiu Pop: Okay, good. So that means that if I’m making something — I don’t know, writing books and putting them on a shelf — then I’m more or less a maintainer. But if I transform that into a bookshop, then I’m providing a service, and that’s the point at which I should be worried, right?

Eddie Knight: Yes. There’s another spin on this where you have folks such as Control Plane, who are the exclusive support providers for Flux. They pay their maintainers to work on Flux. So those humans are, at once, themselves maintainers and employed by and associated with the manufacturer. The manufacturer is the corporate entity providing support, but the individuals within that corporate entity might also themselves be maintainers.

Olimpiu Pop: You can see that you work in finance. It’s a lot. It’s a mouthful. So yes, I understood it. Thank you. Any close statements, anything that I missed asking you to just wrap up everything?

Eddie Knight: Yes, I appreciate talking about this. I appreciate you creating space for this. Definitely. I appreciate you entertaining my Miss Universe philosophy. I think there’s going to be an increase in money changing hands where you’re going to have third party audits. You’re going to have a rise of manufacturers providing support, where now the consumers of Kubernetes are going to be incentivized to not run vanilla Kubernetes anymore because they don’t want to be the manufacturers. They want to offset some of that risk. And so their choice is going to be either continue paying their compliance and security staff to lock down their vanilla deliveries prior to bringing it to market or work with a support system, somebody that’s providing support, somebody that’s delivering this. And so where the money is being spent might start changing hands a little bit more. But the net value I anticipate is going to be largely beneficial.

I think that it is a zero-sum game in how we’re spending the money. It’s going to be spent somewhere, but what we’re going to see is we’re going to start consolidating who is actually taking on the liability of securing these different products. And in doing so, they’re going to have the capability of doing it at a much higher level than they’ve ever had before. And that is going to be a change. It’s going to be a different way of doing things, but it will be a net improvement for everybody. And that’s why Miss Universe, I’m saying, this is awesome. This is really good for us. We need to pay attention that we are doing this together.

Olimpiu Pop: Yes, I totally agree with that. It feels like the coming of age of the software industry, because up to now it felt like people were more or less working in their garages, even if they were working in corporations. And now we’re actually putting some structure in place that will allow us to play ball and present a united front. Eddie, thank you. Enjoy the rest of the conference.

Eddie Knight: Thank you. It’s always a pleasure.

Olimpiu Pop: Thank you, Eddie.




Java News Roundup: JDK 25 Schedule, Spring 7.0-M4, Payara Platform, JobRunr 7.5, Jox 1.0, Commonhaus

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for April 14th, 2025 features news highlighting: the JDK 25 release schedule; the fourth milestone release of Spring Framework 7.0.0; the April 2025 edition of the Payara Platform; the release of JobRunr 7.5.0 and Jox 1.0.0; and Kroxylicious having joined the Commonhaus Foundation.

OpenJDK

Oracle has released versions 23.0.2, 21.0.6, 17.0.14, 11.0.26, and 8u441 of the JDK as part of the quarterly Critical Patch Update Advisory for April 2025. More details on this release may be found in the release notes for version 23.0.2, version 21.0.6, version 17.0.14, version 11.0.26 and version 8u441.

It was a busy week in the OpenJDK ecosystem during the week of April 14th, 2025, highlighting eight new JEPs having been elevated from their JEP Drafts to Candidate status. Four of these will be finalized after their respective rounds of preview. Further details may be found in this InfoQ news story.

JDK 25

Build 19 of the JDK 25 early-access builds was made available this past week featuring updates from Build 18 that include fixes for various issues. More details on this release may be found in the release notes.

After its review has concluded, Mark Reinhold, Chief Architect, Java Platform Group at Oracle, formally declared the release schedule for JDK 25 as follows:

  • Rampdown Phase One (fork from main line): June 5, 2025
  • Rampdown Phase Two: July 17, 2025
  • Initial Release Candidate: August 7, 2025
  • Final Release Candidate: August 21, 2025
  • General Availability: September 16, 2025

For JDK 25, developers are encouraged to report bugs via the Java Bug Database.

BellSoft

Concurrent with Oracle’s Critical Patch Update (CPU) for April 2025, BellSoft has released CPU patches for versions 21.0.6.0.1, 17.0.14.0.1, 11.0.26.0.1, 8u451, 7u461 and 6u461 of Liberica JDK, their downstream distribution of OpenJDK, to address this list of CVEs. In addition, Patch Set Update (PSU) versions 24.0.1, 21.0.7, 17.0.15, 11.0.27 and 8u452, containing CPU and non-critical fixes, have also been released.

With an overall total of 740 fixes and backports, BellSoft states that they have participated in eliminating 38 issues in all releases.

Spring Framework

The fourth milestone release of Spring Framework 7.0.0 delivers improvements in documentation, dependency upgrades and new features such as: a new OptionalToObjectConverter class to automatically convert an Optional to its contained object; and a new ClassFileMetadataReader class that supports JEP 484, Class-File API, for reading and writing classes as Java bytecode. Further details on this release may be found in the release notes.
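As a rough sketch of what the Optional unwrapping could look like through Spring’s ConversionService (assuming the new OptionalToObjectConverter is registered with the default service in this milestone; otherwise it can be added via addConverter()):

```java
import java.util.Optional;
import org.springframework.core.convert.support.DefaultConversionService;

// Sketch of Optional unwrapping through the ConversionService.
public class OptionalConversionDemo {
    public static void main(String[] args) {
        var conversionService = new DefaultConversionService();

        Optional<String> maybeName = Optional.of("Spring Framework 7");

        // With the converter in place, an Optional source is unwrapped to its
        // contained object (an empty Optional would convert to null).
        String name = conversionService.convert(maybeName, String.class);
        System.out.println(name);
    }
}
```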

The first release candidate of Spring for GraphQL 1.4.0 ships with improvements in documentation, dependency upgrades and new features such as: a new graphql.dataloader observation that measures data loading operations so that recorded traces are much more precise; and improvements to the server transports so that reactive data fetcher operations will be cancelled in-flight and further data fetching calls (blocking or reactive) will be avoided. More details on this release may be found in the release notes.

Payara

Payara has released their April 2025 edition of the Payara Platform that includes Community Edition 6.2025.4, Enterprise Edition 6.25.0 and Enterprise Edition 5.74.0. All three releases deliver dependency upgrades and new features: the ability to customize logs sent to the remote system log servers for more control over log management; and the addition of a new connection pool property to disable the verification of client information properties when pooling is set to false. Further details on these releases may be found in the release notes for Community Edition 6.2025.4 and Enterprise Edition 6.25.0 and Enterprise Edition 5.74.0.

Micronaut

The Micronaut Foundation has released version 4.8.2 of the Micronaut Framework featuring Micronaut Core 4.8.11, bug fixes and patch updates to modules: Micronaut Maven Plugin; Micronaut JSON Schema; Micronaut Micrometer; and Micronaut Servlet. More details on this release may be found in the release notes.

JobRunr

The release of JobRunr 7.5.0 features: support for Quarkus 3.20.0 and Micronaut 4.8.0; improved detection of misconfiguration between the JobRequest and JobRequestHandler interfaces; and the ability to configure an instance of the InMemoryStorageProvider class using properties. There is a breaking change for developers who use Quarkus and Micronaut: the behavior of automatically falling back to the InMemoryStorageProvider class when no instance of the StorageProvider interface is defined has been removed. Developers will need to explicitly configure this by setting the jobrunr.database.type property to mem or by providing a custom bean. Further details on this release may be found in the release notes.
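For developers affected by the breaking change, a minimal sketch of explicitly wiring the in-memory store with JobRunr’s fluent Java configuration might look like this (the property-based alternative is to set jobrunr.database.type=mem):

```java
import org.jobrunr.configuration.JobRunr;
import org.jobrunr.storage.InMemoryStorageProvider;

// With the automatic fallback removed, the in-memory store must be chosen
// explicitly, here via JobRunr's fluent configuration.
public class JobRunrInMemoryConfig {
    public static void main(String[] args) {
        JobRunr.configure()
                .useStorageProvider(new InMemoryStorageProvider())
                .useBackgroundJobServer()
                .initialize();
    }
}
```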

Jox

The release of Jox 1.0.0, a virtual threads library that implements an efficient Channel data structure in Java designed to be used with virtual threads, features many dependency upgrades and notable changes: the removal of the collectAsView() method from the Source interface and the CollectSource class as this functionality is offered from the Flows class; and configuration of the newly integrated Renovate automated dependency update tool. More details on this release may be found in the release notes.

Micrometer

The first release candidate of Micrometer Metrics 1.15.0 provides bug fixes and new features such as: enhancements to the OtlpMetricsSender interface that provides an immutable Request inner class and a corresponding builder for convenience; and the addition of metrics for the newVirtualThreadPerTaskExecutor() method defined in the Java Executors class. Further details on this release may be found in the release notes.

The first release candidate of Micrometer Tracing 1.5.0 ships with a dependency upgrade to Micrometer Metrics 1.15.0-RC1 and a new feature that removes the dependency on the incubating OpenTelemetry Java Instrumentation API (opentelemetry-instrumentation-api-incubator). More details on this release may be found in the release notes.

Project Reactor

The second milestone release of Project Reactor 2025.0.0 provides dependency upgrades to reactor-core 3.8.0-M2, reactor-netty 1.3.0-M2, reactor-pool 1.2.0-M2. There was also a realignment to version 2025.0.0-M2 with the reactor-addons 3.5.2, reactor-kotlin-extensions 1.2.3 and reactor-kafka 1.3.23 artifacts that remain unchanged. Further details on this release may be found in the release notes.

Similarly, Project Reactor 2024.0.5, the fifth maintenance release, provides dependency upgrades to reactor-core 3.7.5 and reactor-netty 1.2.5. There was also a realignment to version 2024.0.5 with the reactor-addons 3.5.2, reactor-pool 1.1.2, reactor-kotlin-extensions 1.2.3 and reactor-kafka 1.3.23 artifacts that remain unchanged. More details on this release may be found in the release notes.

Commonhaus Foundation

The Commonhaus Foundation, a non-profit organization dedicated to the sustainability of open source libraries and frameworks, has announced that Kroxylicious has joined the foundation this past week. Kroxylicious is an “early-stage project which seeks to lower the cost of developing Kafka proxies by providing a lot of the common requirements out-of-the-box.” This allows developers to focus on the required logic to get their proxies to perform their tasks.



OpenJDK News Roundup: Compact Source, Module Import Declarations, Key Derivation, Scoped Values

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

There was a flurry of activity in the OpenJDK ecosystem during the week of April 14th, 2025, highlighting eight new JEPs having been elevated from their JEP Drafts to Candidate status. Four of these will be finalized after their respective rounds of preview.

JEP 512, Compact Source Files and Instance Main Methods, has been elevated from its JEP Draft 8344699 to Candidate status. Formerly known as Simple Source Files and Instance Main Methods, this JEP proposes to finalize this feature, with improvements, after four rounds of preview, namely: JEP 495, Simple Source Files and Instance Main Methods (Fourth Preview), delivered in JDK 24; JEP 477, Implicitly Declared Classes and Instance Main Methods (Third Preview), delivered in JDK 23; JEP 463, Implicitly Declared Classes and Instance Main Methods (Second Preview), delivered in JDK 22; and JEP 445, Unnamed Classes and Instance Main Methods (Preview), delivered in JDK 21. This feature aims to “evolve the Java language so that students can write their first programs without needing to understand language features designed for large programs.” This JEP moves forward the September 2022 blog post, Paving the on-ramp, by Brian Goetz, Java Language Architect at Oracle. Gavin Bierman, Consulting Member of Technical Staff at Oracle, has published the first draft of the specification document for review by the Java community. More details on JEP 445 may be found in this InfoQ news story.
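For illustration, a complete program as a compact source file might look like the following, assuming the IO console helper class described alongside this feature:

```java
// A complete program in a single compact source file: no class declaration,
// no String[] args, and console output via the IO helper class.
void main() {
    IO.println("Hello, JDK 25!");
}
```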

JEP 511, Module Import Declarations, has been elevated from its JEP Draft 8344700 to Candidate status. This JEP proposes to finalize this feature, without change, after two rounds of preview, namely: JEP 494, Module Import Declarations (Second Preview), delivered in JDK 24; and JEP 476, Module Import Declarations (Preview), delivered in JDK 23. This feature will enhance the Java programming language with the ability to succinctly import all of the packages exported by a module, with the goal of simplifying the reuse of modular libraries without requiring the importing code to be in a module itself.
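A short sketch of a module import declaration, combined with a compact source file, might look like this:

```java
// Everything exported by java.base (List, Map, streams, etc.) is available
// without individual imports once the module is imported.
import module java.base;

void main() {
    List<String> names = List.of("Ada", "Grace", "Linus");
    IO.println(String.join(", ", names));
}
```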

JEP 510, Key Derivation Function API, has been elevated from its JEP Draft 8353275 to Candidate status. This JEP proposes to finalize this feature, without change, after one round of preview, namely: JEP 478, Key Derivation Function API (Preview), delivered in JDK 24. This feature introduces an API for Key Derivation Functions (KDFs), cryptographic algorithms for deriving additional keys from a secret key and other data, with goals to: allow security providers to implement KDF algorithms in either Java or native code; and enable the use of KDFs in implementations of JEP 452, Key Encapsulation Mechanism API.
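A hedged sketch of deriving an AES key with HKDF, following the shape of the examples in the JEP (the key material, salt, and info values below are placeholders):

```java
import java.security.spec.AlgorithmParameterSpec;
import javax.crypto.KDF;
import javax.crypto.SecretKey;
import javax.crypto.spec.HKDFParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Derives a 256-bit AES key from input key material using HKDF-SHA256.
public class KdfExample {
    public static void main(String[] args) throws Exception {
        SecretKey initialKeyMaterial = new SecretKeySpec(new byte[32], "Generic");
        byte[] salt = new byte[16];
        byte[] info = "application-context".getBytes();

        KDF hkdf = KDF.getInstance("HKDF-SHA256");
        AlgorithmParameterSpec params = HKDFParameterSpec.ofExtract()
                .addIKM(initialKeyMaterial)
                .addSalt(salt)
                .thenExpand(info, 32);

        SecretKey derivedKey = hkdf.deriveKey("AES", params);
        System.out.println(derivedKey.getAlgorithm());
    }
}
```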

JEP 509, JFR CPU-Time Profiling (Experimental), has been elevated from its JEP Draft 8337789 to Candidate status. This experimental JEP proposes to enhance the JDK Flight Recorder (JFR) to capture CPU-time profiling information on Linux.

JEP 508, Vector API (Tenth Incubator), has been elevated from its JEP Draft 8353296 to Candidate status. This JEP proposes a tenth incubation in JDK 25, with no API changes and no substantial implementation changes since JDK 24, after nine rounds of incubation, namely: JEP 489, Vector API (Ninth Incubator), delivered in JDK 24; JEP 469, Vector API (Eighth Incubator), delivered in JDK 23; JEP 460, Vector API (Seventh Incubator), delivered in JDK 22; JEP 448, Vector API (Sixth Incubator), delivered in JDK 21; JEP 438, Vector API (Fifth Incubator), delivered in JDK 20; JEP 426, Vector API (Fourth Incubator), delivered in JDK 19; JEP 417, Vector API (Third Incubator), delivered in JDK 18; JEP 414, Vector API (Second Incubator), delivered in JDK 17; and JEP 338, Vector API (Incubator), delivered as an incubator module in JDK 16. This feature introduces an API to “express vector computations that reliably compile at runtime to optimal vector instructions on supported CPU architectures, thus achieving performance superior to equivalent scalar computations.” The Vector API will continue to incubate until the necessary features of Project Valhalla become available as preview features. At that time, the Vector API team will adapt the Vector API and its implementation to use them, and will promote the Vector API from Incubation to Preview.
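
The style of computation the incubating API targets looks roughly like the following sketch, where the same arithmetic is applied across whole vector lanes at a time and a scalar loop handles any tail elements; the jdk.incubator.vector module must be added to the module graph to compile and run it:

```java
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class VectorDemo {
    static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    // Computes c[i] = -(a[i] * a[i] + b[i] * b[i]) using SIMD lanes where the
    // hardware supports it, falling back to a scalar loop for the remainder.
    static void vectorComputation(float[] a, float[] b, float[] c) {
        int i = 0;
        int upperBound = SPECIES.loopBound(a.length);
        for (; i < upperBound; i += SPECIES.length()) {
            var va = FloatVector.fromArray(SPECIES, a, i);
            var vb = FloatVector.fromArray(SPECIES, b, i);
            var vc = va.mul(va).add(vb.mul(vb)).neg();
            vc.intoArray(c, i);
        }
        // Scalar tail for any elements that do not fill a full vector.
        for (; i < a.length; i++) {
            c[i] = (a[i] * a[i] + b[i] * b[i]) * -1.0f;
        }
    }
}
```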

JEP 507, Primitive Types in Patterns, instanceof, and switch (Third Preview), has been elevated from its JEP Draft 8349215 to Candidate status. This JEP, under the auspices of Project Amber, proposes a third round of preview, without change, to gain additional experience and feedback from the previous two rounds of preview, namely: JEP 488, Primitive Types in Patterns, instanceof, and switch (Second Preview), delivered in JDK 24; and JEP 455, Primitive Types in Patterns, instanceof, and switch (Preview), delivered in JDK 23. This feature enhances pattern matching by allowing primitive type patterns in all pattern contexts, and extending instanceof and switch to work with all primitive types. More details may be found in this draft specification by Aggelos Biboudis, Principal Member of Technical Staff at Oracle.
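
A short sketch of what the preview feature permits (compiled with preview features enabled): instanceof can test whether a value fits in a narrower primitive type, and switch can match on primitive values using type patterns:

```java
public class PrimitivePatternsDemo {
    // instanceof with a primitive type pattern: matches only if the value fits in a byte.
    static void printAsByte(int i) {
        if (i instanceof byte b) {
            System.out.println("fits in a byte: " + b);
        } else {
            System.out.println("does not fit in a byte: " + i);
        }
    }

    // switch over an int with a primitive type pattern covering all remaining values.
    static String describeStatus(int status) {
        return switch (status) {
            case 0 -> "ok";
            case 1 -> "warning";
            case int i -> "unknown status: " + i;
        };
    }

    public static void main(String[] args) {
        printAsByte(42);
        printAsByte(1_000);
        System.out.println(describeStatus(7));
    }
}
```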

JEP 506, Scoped Values, has been elevated from its JEP Draft 8352695 to Candidate status. Formerly known as Extent-Local Variables (Incubator), this JEP proposes to finalize this feature, without change, after four rounds of preview, namely: JEP 487, Scoped Values (Fourth Preview), delivered in JDK 24; JEP 481, Scoped Values (Third Preview), delivered in JDK 23; JEP 464, Scoped Values (Second Preview), delivered in JDK 22; JEP 446, Scoped Values (Preview), delivered in JDK 21; and JEP 429, Scoped Values (Incubator), delivered in JDK 20. This feature enables sharing of immutable data within and across threads. This is preferred to thread-local variables, especially when using large numbers of virtual threads.
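
A minimal sketch of the programming model: a ScopedValue is bound for the duration of a call, and any code invoked within that scope can read it without the value being passed as a parameter. The request-ID name here is purely illustrative, and preview features must be enabled on JDK 24 and earlier:

```java
public class ScopedValueDemo {
    // A scoped value holding a per-request identifier; immutable within its binding scope.
    private static final ScopedValue<String> REQUEST_ID = ScopedValue.newInstance();

    public static void main(String[] args) {
        // Bind REQUEST_ID to "req-42" only for the duration of run(); the binding
        // is automatically removed when run() returns.
        ScopedValue.where(REQUEST_ID, "req-42").run(ScopedValueDemo::handle);
    }

    static void handle() {
        // Any code called within the binding scope can read the value directly.
        System.out.println("Handling request " + REQUEST_ID.get());
    }
}
```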

JEP 505, Structured Concurrency (Fifth Preview), has been elevated from its JEP Draft 8340343 to Candidate status. This JEP proposes a fifth preview, with several API changes, to gain more feedback from the previous four rounds of preview, namely: JEP 499, Structured Concurrency (Fourth Preview), delivered in JDK 24; JEP 480, Structured Concurrency (Third Preview), delivered in JDK 23; JEP 462, Structured Concurrency (Second Preview), delivered in JDK 22; and JEP 453, Structured Concurrency (Preview), delivered in JDK 21. This feature simplifies concurrent programming by introducing structured concurrency to “treat groups of related tasks running in different threads as a single unit of work, thereby streamlining error handling and cancellation, improving reliability, and enhancing observability.” One of the proposed API changes is that instances of the StructuredTaskScope interface are now obtained via static factory methods rather than public constructors.
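
Based on the revised API described in the JEP, a scope obtained from the proposed open() factory method forks subtasks, joins them as a single unit, and propagates the failure of any subtask; findUser() and fetchOrder() are placeholder methods for illustration only:

```java
import java.util.concurrent.StructuredTaskScope;
import java.util.concurrent.StructuredTaskScope.Subtask;

public class StructuredConcurrencyDemo {
    record Response(String user, Integer order) { }

    Response handle() throws InterruptedException {
        // open() is the proposed static factory; with the default policy, the scope
        // fails as a whole if any subtask fails, cancelling the remaining subtasks.
        try (var scope = StructuredTaskScope.open()) {
            Subtask<String> user = scope.fork(this::findUser);
            Subtask<Integer> order = scope.fork(this::fetchOrder);
            scope.join();                       // wait for both subtasks as a unit
            return new Response(user.get(), order.get());
        }
    }

    // Placeholder subtasks for illustration only.
    String findUser() { return "alice"; }
    Integer fetchOrder() { return 42; }
}
```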

JDK 25 Feature Set (So Far) and Release Schedule

The JDK 25 release schedule, as approved by Mark Reinhold, Chief Architect, Java Platform Group at Oracle, is as follows:

  • Rampdown Phase One (fork from main line): June 5, 2025
  • Rampdown Phase Two: July 17, 2025
  • Initial Release Candidate: August 7, 2025
  • Final Release Candidate: August 21, 2025
  • General Availability: September 16, 2025

With less than two months before the scheduled Rampdown Phase One, where the feature set for JDK 25 will be frozen, these are the two JEPs included in the feature set so far:

Despite not having been formally targeted at this time, it has already been determined that JEP 508, Vector API (Tenth Incubator), will be included in the feature set for JDK 25.
