Java News Roundup: LangChain4j 1.0, Vert.x 5.0, Spring Data 2025.0.0, Payara Platform, Hibernate

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for May 12th, 2025 features news highlighting: the GA releases of LangChain4j 1.0, Eclipse Vert.x 5.0 and Spring Data 2025.0.0; the May 2025 edition of the Payara Platform; second release candidates for Hibernate ORM 7.0 and Hibernate Reactive 3.0; and the first beta release of Hibernate Search 8.0.

OpenJDK

It was a busy week in the OpenJDK ecosystem during the week of May 12th, 2025, highlighting: two JEPs elevated from Proposed to Target to Targeted status and four JEPs elevated from Candidate to Proposed to Target status for JDK 25; and one JEP elevated from JEP Draft to Candidate status. Two of these will be finalized after their respective rounds of preview. Further details may be found in this InfoQ news story.

JDK 25

Build 23 of the JDK 25 early-access builds was made available this past week featuring updates from Build 22 that include fixes for various issues. More details on this release may be found in the release notes.

For JDK 25, developers are encouraged to report bugs via the Java Bug Database.

Jakarta EE

In his weekly Hashtag Jakarta EE blog, Ivar Grimstad, Jakarta EE Developer Advocate at the Eclipse Foundation, provided an update on Jakarta EE 11 and Jakarta EE 12, writing:

The release of the Jakarta EE 11 Platform specification is right around the corner. The issues with the service outage that affected our Jenkins CI instances are now resolved, and the work is progressing. The release date is expected to be in June.

All the plans for Jakarta EE 12 have been completed and approved (with the exception of Jakarta Activation, which will have its plan review started on Monday [May 19, 2025]).

Two new specifications, Jakarta Portlet 4.0 and Jakarta Portlet Bridge 7.0, have been migrated over from JSR 362 and JSR 378, respectively. They join the new Jakarta Query 1.0 specification.

The road to Jakarta EE 11 included four milestone releases, the release of the Core Profile in December 2024, the release of Web Profile in April 2025, and a fifth milestone and first release candidate of the Platform before its anticipated GA release in June 2025.

Spring Framework

The fifth milestone release of Spring Framework 7.0.0 delivers bug fixes, documentation improvements, dependency upgrades and new features such as: support for the Jackson 3.0 release train, which deprecates support for the Jackson 2.x release train; and updates to the new API versioning feature that allow for validating supported API versions against only explicitly configured ones. The PropertyPlaceholderConfigurer and PreferencesPlaceholderConfigurer classes have also been deprecated for removal. Further details on this release may be found in the release notes.

The releases of Spring Framework 6.2.7 and 6.1.20 address CVE-2025-22233, a follow-up to CVE-2024-38820, a vulnerability in which the toLowerCase() method, defined in the Java String class, had some Locale-dependent exceptions that could potentially result in fields not being protected as expected. This was a result of the resolution for CVE-2022-22968, which made the patterns of the disallowedFields field, defined in the DataBinder class, case insensitive. In this latest CVE, cases where it is possible to bypass the checks on the disallowedFields field still exist.
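The underlying pitfall is easy to reproduce outside of Spring: String.toLowerCase() is locale-sensitive, and in the Turkish locale an uppercase 'I' maps to the dotless 'ı' (U+0131) rather than 'i'. The sketch below is a hypothetical case-insensitive field check, not Spring's actual code, showing how locale-dependent lowercasing can let a field slip past a blocklist:

```java
import java.util.Locale;

public class LocaleCasePitfall {

    // Hypothetical blocklist check that lowercases the field name before
    // comparing, using whatever locale it is handed.
    static boolean isDisallowed(String field, Locale locale) {
        return field.toLowerCase(locale).equals("admin");
    }

    public static void main(String[] args) {
        // With the root locale, "ADMIN" lowercases to "admin" and is caught.
        System.out.println(isDisallowed("ADMIN", Locale.ROOT));                 // true

        // With the Turkish locale, 'I' lowercases to dotless 'ı' (U+0131),
        // so "ADMIN" becomes "admın" and bypasses the check.
        System.out.println(isDisallowed("ADMIN", Locale.forLanguageTag("tr"))); // false
    }
}
```

This is why hardening such checks typically means case-folding with a fixed locale such as Locale.ROOT, or avoiding case folding of untrusted input altogether, which is the spirit of the fixes in this CVE series.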

The release of Spring Data 2025.0.0 ships with new features such as: support for the Vector interface and vector search in the MongoDB and Apache Cassandra databases; and support for the creation of indices using storage-attached indexing from Cassandra 5.0. The upcoming GA release of Spring Boot 3.5.0 will upgrade to Spring Data 2025.0.0. More details on this release may be found in the release notes.

The third milestone release of Spring Data 2025.1.0 ships with: support for JSpecify on sub-projects, such as Spring Data Commons, Spring Data JPA, Spring Data MongoDB, Spring Data LDAP, Spring Data Cassandra, Spring Data KeyValue, Spring Data Elasticsearch; and the ability to optimize Spring Data repositories at build time using the Spring AOT framework. Further details on this release may be found in the release notes.

The first release candidate of Spring AI 1.0.0 features “the final set of breaking changes, bug fixes, and new functionality before the stable release.” Key breaking changes include: renaming of fields, such as CHAT_MEMORY_RETRIEVE_SIZE_KEY to TOP_K, in the VectorStoreChatMemoryAdvisor class; and a standardization in the naming convention of the chat memory repository that now includes repository as a suffix throughout the codebase. The team is planning the GA release for Tuesday, May 20, 2025. More details on this release may be found in the upgrade notes, and InfoQ will follow up with a more detailed news story on the GA release.

Payara

Payara has released its May 2025 edition of the Payara Platform that includes Community Edition 6.2025.5, Enterprise Edition 6.26.0 and Enterprise Edition 5.75.0. All three releases deliver: dependency upgrades; a new feature that adds the capability to move the master password file to a user-defined location; and a resolution to a NullPointerException upon attempting to retrieve the X.509 client certificate sent on an HTTP request using the jakarta.servlet.request.X509Certificate request attribute. Further details on these releases may be found in the release notes for Community Edition 6.2025.5, Enterprise Edition 6.26.0 and Enterprise Edition 5.75.0.

Eclipse Vert.x

After eight release candidates, Eclipse Vert.x 5.0 has been released with new features such as: support for the Java Platform Module System (JPMS); a new VerticleBase class that replaces the deprecated AbstractVerticle class due to the removal of the callback asynchronous model in favor of the future model; and support for binary data in the OpenAPI modules. More details on this release may be found in the release notes and list of deprecations and breaking changes.

LangChain4j

The formal release of LangChain4j 1.0.0 (along with a fifth beta release) promotes the modules that had reached release-candidate status, namely: langchain4j-core; langchain4j; langchain4j-http-client; langchain4j-http-client-jdk and langchain4j-open-ai, with the remaining modules still under the fifth beta release. Breaking changes include: a rename of the ChatLanguageModel and StreamingChatLanguageModel interfaces to ChatModel and StreamingChatModel, respectively; and the OpenAiStreamingChatModel, OpenAiStreamingLanguageModel and OpenAiModerationModel classes now map exceptions to align with the other OpenAI*Model classes. Further details on this release may be found in the release notes.

Hibernate

The second release candidate of Hibernate ORM 7.0.0 delivers new features such as: a new QuerySpecification interface that provides a common set of methods for all query specifications, allowing for iterative, programmatic building of a query; and a migration from Hibernate Commons Annotations (HCANN) to the new Hibernate Models project for low-level processing of an application domain model. There is also support for the Jakarta Persistence 3.2 specification, the latest version targeted for Jakarta EE 11. The team anticipates this to be the final release candidate before the GA release. More details on this release may be found in the release notes and the migration guide.

The second release candidate of Hibernate Reactive 3.0.0 (along with version 2.4.8) provides notable changes such as: the removal of the JReleaser configuration from the codebase, as it is now located inside the release scripts; and the addition of Java @Override annotations where they were missing. These versions upgrade to Hibernate ORM 7.0.0.CR2 and 6.6.15.Final, respectively. Further details on these releases may be found in the release notes for version 3.0.0.CR2 and version 2.4.8.

The first beta release of Hibernate Search 8.0.0 ships with: dependency upgrades; compatibility with the latest versions of Elasticsearch 9.0 and OpenSearch 3.0; and the first implementation of the type-safe field references and the Hibernate Search static metamodel generator. More details on this release may be found in the release notes.

About the Author

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.


Microsoft CTO Details Successes, Challenges, and Commitment to Rust at Rust Nation UK

MMS Founder
MMS Bruno Couriol

Article originally posted on InfoQ. Visit InfoQ

Mark Russinovich, Chief Technology Officer for Microsoft Azure, delved into the factors driving Rust adoption in a recent talk at Rust Nation UK, providing concrete examples of Rust usage in Microsoft products and detailing ongoing efforts to accelerate the migration from C/C++ to Rust at Microsoft by leveraging generative AI.

The original motivation for recommending Rust originated from a detailed review of security vulnerabilities. Russinovich says:

[The] journey actually begins with us looking at the problems we’ve had with C and C++ [… Looking at a] summary of Microsoft security response centers triaging of the vulnerabilities over the previous 10 years across all Microsoft products, 70% of the vulnerabilities were due to unsafe use of memory specifically in C++ and we just see this trend continuing as the threat actors are going after these kinds of problems. It also is causing problems just in terms of incidents as well.

Other major IT companies and security organizations have expressed similar conclusions. Google’s security research team, Project Zero, reported that out of the 58 in-the-wild 0-days for the year, 39, or 67%, were memory corruption vulnerabilities. Memory corruption vulnerabilities have been the standard for attacking software for the last few decades, and it’s still how attackers are having success. Mozilla also estimated a few years back that 74% of security bugs identified in Firefox’s style component could have been avoided by writing this component in Rust. In fact, Rust’s language creator, Graydon Hoare, contended at the Mozilla Annual Summit in 2010, in one of the earliest presentations about Rust, that C++ was unsafe in almost every way, featured no ownership policies and no concurrency control at all, and could not even keep const values constant.

Microsoft’s “Secure Future Initiative”, which Russinovich links to breaches performed by two nation-state actors, commits to expanding the use of memory-safe languages. Microsoft recently donated $1 million to the Rust Foundation to support a variety of critical Rust language and project priorities.

Russinovich further detailed examples of Rust in Microsoft products. In Windows, Rust is used in security-critical software. That includes firmware development (Project Mu), kernel components, a cryptography library (e.g. rustls symcrypt support), and ancillary components (e.g., DirectWrite Core).

In Office, Rust is being used in some performance-critical areas. The Rust implementation of a semantic search algorithm in Office, delivered to customers on CosmosDB and PostgreSQL, proved to be more performant and memory efficient than the C++ version, providing a significant win for large-scale vector searches.

Following a directive mandating that no more systems code be written in C++ in Azure, Rust is used in several pieces of Azure-related software. Caliptra is an industry collaboration for secure cloud server firmware. Key firmware components are written entirely in Rust and are open-sourced. Azure Integrated HSM is a new in-house security chip deployed in all new servers starting in 2025. The firmware and guest libraries are written in Rust to ensure the highest security standards for cryptographic keys. Russinovich also mentioned Azure Boost agents, Hyper-V (Microsoft’s hypervisor), OpenVMM (a modular, cross-platform Virtual Machine Monitor recently open-sourced), and Hyperlight as partly or entirely written in Rust.

Developer feedback at Microsoft has generally been positive but also included negatives. On the positive side, developers liked that if Rust code compiles, it generally works as expected, leading to faster iteration. Reduced friction in development leads to more motivation to write tests. Developers become more conscious of memory management pitfalls. The Rust ecosystem and Cargo are appreciated for dependency management. Performance increases are often observed (though not always the primary goal). Data-race-related concurrency bugs are reduced. Memory-safety-related vulnerabilities are significantly reduced.

On the negative side, developers mentioned that C++ interop remains difficult. The initial learning curve for Rust is further perceived as steep. Dynamic linking is a challenge. Reliance on some non-stabilized Rust features is a concern. Integrating Cargo with larger enterprise build systems requires effort. Foreign Function Interface (FFI) is tough to do safely, even in Rust. Tooling is still behind when compared with other languages.

Russinovich further described Microsoft’s efforts to accelerate the migration of C/C++ legacy code to Rust. One area is verified crypto libraries, using formal verification techniques for C and then transpiling to safe Rust (see Compiling C to Safe Rust, Formalized). Microsoft is also exploring using large language models for automated code translation.

Russinovich concluded by reiterating Microsoft’s strong commitment to Rust across the company and emphasizing Rust’s increasing maturity and adoption:

You know people will come and say, hey wait, there’s this new language that’s even better than Rust. It’s more easy to use than Rust and I say well when is it going to be ready? Because we’re over 10 years into Rust and you know we’re finally ready because it takes a long time for a language to mature, for the tooling to mature, and we’re not even finally, you know, completely done with maturing the Rust toolchain. Anybody that wants to come along at this point and disrupt something that’s already as good as Rust has a very high hill to climb. So I don’t see anything replacing Rust anytime soon […] We’re 100% behind Rust.

Readers are strongly encouraged to view the full talk on YouTube. It contains abundant valuable examples, technical explanations, and demos.

Rust Nation UK is a multi-track conference dedicated to the Rust language and community. The conference features workshops, talks, and tutorials curated for developers of all levels. The conference is held annually at The Brewery.



OpenAI Launches Codex Software Engineering Agent Preview

MMS Founder
MMS Robert Krzaczynski

Article originally posted on InfoQ. Visit InfoQ

OpenAI has launched Codex, a research preview of a cloud-based software engineering agent designed to automate common development tasks such as writing code, debugging, testing, and generating pull requests. Integrated into ChatGPT for Pro, Team, and Enterprise users, Codex runs each assignment in a secure sandbox environment preloaded with the user’s codebase and configured to reflect their development setup.

Codex is powered by codex-1, a version of OpenAI’s o3 model optimized for programming tasks. It was trained using reinforcement learning on real-world examples and is capable of generating code aligned with human conventions. The model iteratively runs code and tests until a correct solution is reached. Once a task is completed, Codex commits its changes within the sandbox and provides test outputs and terminal logs for transparency.

The Codex sidebar in ChatGPT enables users to assign tasks or ask questions about their codebase through a text prompt. The model can edit files, run commands, and execute tests, with typical completion times ranging from one to thirty minutes. Codex supports AGENTS.md files—repository-level instructions that help guide the agent through project-specific practices and testing procedures.
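AGENTS.md has no fixed schema; it is plain Markdown that the agent reads for guidance. The fragment below is a hypothetical example of the kind of instructions a repository might include:

```markdown
# AGENTS.md — guidance for the Codex agent (hypothetical example)

## Project layout
- Application code lives in `src/`; tests live in `tests/`.

## Checks to run before proposing changes
- Run the full test suite and the linter; do not propose a change if either fails.

## Conventions
- Keep changes small and focused, with descriptive commit messages.
- Never modify generated files under `build/`.
```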

Codex CLI, a command-line companion interface, is open source and uses API credits. However, as clarified by Fouad Matin, a member of technical staff at OpenAI, Codex access within ChatGPT is included with Pro, Team, and Enterprise subscriptions:

Codex is included in ChatGPT (Pro, Team, Enterprise) pricing with generous access for the next two weeks.

The system, however, does not yet support full application testing with live user interfaces. As one Reddit user pointed out:

Most software engineering is web development these days. How does it handle that, where you have separate layers for certain things, environment variables, and UI interfaces? Does it actually run the app so the user can test it, or do they need to push the change and then pull down a copy to test locally? That would be very annoying. Ideally, in the future, the agents can just test it themselves, but I guess they are not good enough yet.

Codex runs in an isolated container without internet access or UI execution capabilities. While it can handle test suites, linters, and type checkers, final verification and integration remain in the hands of human developers.

OpenAI has also introduced Codex mini, a lighter model designed for faster interactions and lower latency, now the default engine in Codex CLI and available via API as codex-mini-latest. It is priced at $1.50 per million input tokens and $6 per million output tokens, with a 75% prompt caching discount.
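As a back-of-the-envelope illustration of that pricing, the sketch below (in Java, with made-up token counts) assumes, as with OpenAI’s usual prompt caching, that the 75% discount applies to the cached portion of the input tokens:

```java
public class CodexMiniCost {

    static final double INPUT_USD_PER_MILLION = 1.50;   // $1.50 per 1M input tokens
    static final double OUTPUT_USD_PER_MILLION = 6.00;  // $6.00 per 1M output tokens
    static final double CACHE_DISCOUNT = 0.75;          // 75% off cached input tokens

    // Estimated dollar cost of a single request.
    static double cost(long inputTokens, long cachedInputTokens, long outputTokens) {
        double fresh  = (inputTokens - cachedInputTokens) / 1_000_000.0 * INPUT_USD_PER_MILLION;
        double cached = cachedInputTokens / 1_000_000.0 * INPUT_USD_PER_MILLION * (1 - CACHE_DISCOUNT);
        double output = outputTokens / 1_000_000.0 * OUTPUT_USD_PER_MILLION;
        return fresh + cached + output;
    }

    public static void main(String[] args) {
        // 500k input tokens (400k of them served from the prompt cache)
        // plus 20k output tokens: 0.15 + 0.15 + 0.12 dollars.
        long cents = Math.round(cost(500_000, 400_000, 20_000) * 100);
        System.out.println(cents + " cents"); // prints "42 cents"
    }
}
```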

The release reflects OpenAI’s broader strategy to eventually support both real-time AI coding assistants and asynchronous agent workflows. While Codex currently connects with GitHub and is accessible from ChatGPT, OpenAI envisions deeper integrations in the future, including support for assigning tasks from Codex CLI, ChatGPT Desktop, and tools such as issue trackers or CI systems.



Windsurf Launches SWE-1 Family of Models for Software Engineering

MMS Founder
MMS Daniel Dominguez

Article originally posted on InfoQ. Visit InfoQ

Windsurf has introduced its first set of SWE-1 models, aimed at supporting the full range of software engineering tasks, not limited to code generation. The lineup consists of three models: SWE-1, SWE-1-lite, and SWE-1-mini, each designed for specific scenarios.

SWE-1 is focused on tool-call reasoning and is reported to perform similarly to Claude 3.5 Sonnet, while being more cost-efficient to operate. SWE-1-lite, which replaces the earlier Cascade Base model, offers improved quality and is accessible without restrictions to all users. SWE-1-mini is a compact, high-speed model that enables passive prediction features in the Windsurf Tab environment.

The SWE models are designed to address limitations in existing coding models by introducing flow awareness, a framework that enables models to reason over long-running, multi-surface engineering tasks with incomplete or evolving states. The models are trained on user interactions from Windsurf’s own editor and incorporate contextual awareness from terminals, browsers, and user feedback loops.

Windsurf evaluated the performance of SWE-1 through both offline benchmarks and blind production experiments. The benchmarks included tasks such as continuing partially completed development sessions and completing engineering goals end-to-end. In both cases, SWE-1 showed performance close to current frontier foundation models, and superior to open-weight and mid-sized alternatives.

Production experiments used anonymized model testing to compare SWE-1’s contributions in real-world use cases. Metrics such as daily lines of code accepted by users and edit contribution rates showed that SWE-1 is actively used and retained by developers. SWE-1-lite and SWE-1-mini were developed using similar methodologies, with lite aimed at mid-tier performance and mini tuned for latency-sensitive tasks.

All models are built around the concept of a shared timeline, which allows users and the AI to operate together in a collaborative flow. Windsurf plans to expand this approach and refine the SWE model family by leveraging data generated through its integrated development environment.

Initial community reactions to the SWE-1 model family highlight interest in its broader approach to software engineering tasks beyond coding. Developers have noted the usefulness of SWE-1’s tool-call reasoning and its ability to handle incomplete workflows across different development environments.

Web and app developer Jordan Weinstein shared:

Super impressive so far. Though when testing supabase MCP with SWE1 it errors in Cascade. Lite does not.

And Technical Leader Leonardo Gonzalez commented:

Most AI coding assistants miss 80% of what developers actually do. SWE-1 changes the game.

The release coincides with OpenAI’s acquisition of Windsurf, a move intended to strengthen its presence in the growing market for AI-powered software engineering tools, where competitors such as Anthropic’s Claude and Microsoft’s GitHub Copilot have established a strong foothold. OpenAI is expected to integrate Windsurf’s engineering-focused AI capabilities into its own ecosystem, including platforms like ChatGPT and Codex, further expanding its presence in software development tools.



Podcast: How developer platforms, Wasm and sovereign cloud can help build a more effective organization

MMS Founder
MMS Max Korbacher

Article originally posted on InfoQ. Visit InfoQ

Transcript

Olimpiu Pop: Hello everybody. I’m Olimpiu Pop, an InfoQ editor, and I had the pleasure of intersecting with Max at KubeCon the other day, and we said that we’d stay for a chat to understand better what happens in the platform space, because he’s so focused on multiple things. Max Koerbaecher, please introduce yourself.

Max Koerbaecher: Sure, thank you very much, Olimpiu. My name is Max Koerbaecher. I’m the founder of Liquid Reply, and what I’ve done over the last years is actually a lot of different stuff. Right now, I’m primarily working around platforms, platform engineering, how to build internal development platforms, but also going into sovereign cloud and how, with open source and other technologies, you can provide data sovereignty for your end users. Beyond that, I founded some years ago the technical advisory group for environmental sustainability in the Cloud Native Computing Foundation; today I’m just an emeritus advisor. So, I stepped down to give the more energetic people space to keep pushing on the topic, as it requires a lot of energy.

I’m also part of the Linux Foundation Europe Advisory Board, where I take a look at different initiatives and see how the organization can support the European open-source ecosystem better and give it some room to develop its landscape. And yes, I have a little bit of background in the Kubernetes release team; I was involved for three years in two different roles on the organizational side. I’m now hosting, for the fourth year in a row, the Cloud Native Summit Munich, which was formerly known as KCD Munich, and I have also organised some meetups around Kubernetes, cloud native, platform engineering, and so on and so forth. Long, long story.

Olimpiu Pop: I’m remembering now all the encounters that we had in the last four years that I’ve been attending KubeCon, mainly in Europe. And then I remember that we also met at DevOps events, and it seems that we are always riding ahead of the wave. As you mentioned, green technology was a subject, and I remember a lot of the graphs that you had that remained in my mind. Last year, I don’t recall the presentation, to be honest, but I know that your company had something to do with Spin, and Wasm was connected there. So I’ll leave that for later. But mainly, you always seem to be focused on community-urgent issues and cloud-native, so that’s pretty much the topic you’re looking into. And this year somehow, you try to push all those things in one basket, because you spoke about how a platform can bring a company together. Please tell me more about that. You also wrote a book about it, right?

Do you need a platform? [03:13]

Max Koerbaecher: Exactly. So I wrote last year, together with Andreas Grabner from Dynatrace and Hilliary Lipsig from Red Hat, a book about it called Platform Engineering for Architects. And it’s trying to do what you describe. Platform engineering is not always about the technology. The technology itself is solvable, it’s manageable, and if you cannot fix it, then give it one or two additional months, since someone else will do so. So I think we are in an inspiring time, seeing the change in the technologies used and the approaches taken, but what is always missing is the glue between all of that. And I’m not talking about CI/CD pipelines and bash scripts and whatnot. Here, really, it’s the people and sometimes even the processes, the communication, and how we come to the point of building a platform.

And in our book, we question pretty often: do we need the platform? Do we need to go to the cloud? How to build it the right way? If so, are you sure, really sure, that it is doing the right thing for you? And we help throughout the book to find the relevant decision points so that in the end, you know process-wise precisely what you are doing and come to the point of integrating the different technologies. But our key conclusion is that you need to find a purpose for the platform. If you know what the purpose of the platform is, then you’re good to go. If you cannot define it, if you just say, “I was at the last KubeCon and it was filled with many cool, fantastic talks about platform engineering and how they change the world”. Yes, it might be like that; this is true for many organisations. However, just because this approach has worked for other organisations, it doesn’t mean it will work for you too. And yes, this is what we aim to bring together and provide more experience around.

Olimpiu Pop: That’s very interesting, because it’s usually a discussion of that kind. I mean, now we’re discussing platforms. Before that, we spoke about frameworks, different cloud providers, and so on and so forth. And it’s always the same discussion. Well, it’s a very shallow discussion compared to what you mentioned: buy versus build and the other stuff. So, what you said is that we need a platform with a purpose. When do you feel that, first of all, a platform is needed? What’s the most common goal for having a platform?

Max Koerbaecher: That’s a difficult question, because there is no generic answer to it. Many companies find this point for themselves for different reasons. There are a few public resources around it, like how Expedia measures the success of a platform, how GitHub measures the success of a platform, or how Toyota measures the success of a platform. And all three do it differently. So one is looking into the development speed and how many contributions happen. So, most likely how they ended up building a platform, or thought it was a good idea to create a platform, is maybe because they had a problem in the delivery speed of their software, while the others are looking into numbers, right? Just running on an IDP saves us 5 million euros per year. So maybe it was a cost problem in the past, or perhaps the software development cycle cost too much money. And so, you’re looking for ways to improve it or increase its performance.

Therefore, again, there’s no generic answer available for it. But let’s see, the most interesting part to really find this point is to go through a lot of questions and identify, like, okay, do I have problems in my software development process, or do I believe I have too many bottlenecks in my organization while producing digital products? I may have different hosting requirements, and I cannot force the whole organization to understand five different hosting providers, so I need to find a way to do that, and so on and so forth. We have one customer at the moment who’s focusing a lot on a very, very complex regulation. They have to fulfill a lot of compliance rules, and there are two ways to do it. Still, the only way that I believe will bring long-term success is to build a platform around it, because it’s easier to provide all the compliance rules through one single endpoint and ensure that in the development of the software, the delivery of the software and the operations of the software, this is always already there.

The other way around, you have to enforce the compliance rules for every application that this customer is going to migrate to the cloud, and there will be thousands of applications. Now, some people will say, “Hey, it’s the same if I push it to the cloud or if I push it to, let’s say, Kubernetes as a platform”. Absolutely. But from our experience in the past, it’s easier and faster to build a compliance framework around Kubernetes than to build a compliance framework around a cloud provider or, to be precise, within a cloud provider. And that’s because a cloud provider has tons of limitations and tons of things that do not work the way you would like to have them, and at some point, you very often need to fall back to the good old engineering path and build it by yourself.

Olimpiu Pop: Thank you for that. Let me confirm that I understand it correctly. So, what you’re saying is that most of the technology we face doesn’t have a one-size-fits-all solution, and it’s essential to examine what we have in our courtyard and ensure that we’re solving the problem. Whether the problem is viewed from a business or a technological perspective, you should put a metric in place and ensure that building the platform will enable us to address it effectively. Okay, that sounds good.

Signs that the Kubernetes ecosystem is getting more mature [08:55]

I was just counting the editions of KubeCon that I’ve attended, and this was the fourth one. And I know that I was thinking that technology should have another purpose, as you said, some goal, some primary goal. And in the end, each business we work with, regardless of what they’re building, should have the ultimate goal of helping our customers make their work better through our software. And this year, it was the first time I heard about this at KubeCon during a keynote. The guys from eBay were saying that they are thinking at the SRE level, specifically about the site reliability engineering aspects of user experience. And it was as if I heard the angels sing and the fireworks going off, and I said, “Finally, we are getting there”. And would that be a sign of the platform’s coming of age? People are finally realising how they should use it.

Max Koerbaecher: Well, it’s a sign that Kubernetes is becoming mature. And the platforms on top of that are just representing this maturity now, and how rigid it can be. Earlier this year, I was discussing with someone how boring Kubernetes has become. So, there isn’t much to talk about, but that’s not entirely true, because there are tons of changes happening continuously. But in the past, with every release, you were sitting in the evening waiting for the release notes to come out, and you were looking into them like, “Oh, my God, do I need to kill all the Kubernetes clusters I’ve deployed in the last months, or can I just seamlessly upgrade?” That time is over.

The community, as well as the end users and the people who deliver additional services around it, such as myself, are trying to find a way to explain that now, Kubernetes is no longer the problem. It’s a little overhead, but it’s not a significant problem or headache. If you spend weeks fixing your Kubernetes, you’re doing something wrong. You should focus on delivering something that helps your organization create value. And this slowly comes together one-to-one into a platform.

Now, “platform” is also not a good term, I must say, right? We’ve been discussing different kinds of platforms for 20 years, but a good old cloud-native platform, with some container at its core, is where, at least in our community, we feel well and feel good. And slowly turning away from staring at this tech to look more at the people who are using it, I think, is just the step we’ve been waiting for for a very long time, showing that the technology is rich enough that we can now focus more on the user. Because open source has always had this perception of, “Hey, it always looks a little bit ugly”. It’s always more complicated to use and so on. However, I think that has changed now, and there is now enough space, at least in the cloud-native community, to say, “Hey, make it open source, make it sexy and usable, and give it a purpose or allow it to deliver a purpose”.

When to use Kubernetes [12:01]

Olimpiu Pop: That sounds nice. I’m smiling because earlier in my career, I was chasing the shiniest thing, like pretty much everybody. And then you start looking for the boring technology, and I don’t know why, but Yoda is speaking in my head: “Learn you will, you Padawan”. And now I feel that, as you said, the industry, or the cloud-native space, is becoming more mature, and a lot of other things are building on top of that. However, at least in my circles, there is always the question of whether to use Kubernetes or not, and that’s a sign of the maturity of the engineers. When will it be feasible for someone to consider Kubernetes? What would be the scale at which you have to look into it and say, “We’re going the Kubernetes way”, regardless of whether it’s through a cloud provider or you’re implementing it yourself?

Max Koerbaecher: It’s complicated. Getting to the point where Kubernetes makes sense is more like an evolution, right? If you are a large enterprise company, you will likely find numerous use cases to adopt Kubernetes and make it work effectively. Often, you have the other issue that you then have 10 initiatives at the same time, and practically, you’re wasting nine times more money than you want to save. If you are a medium-sized company and the biggest challenge for you is hosting your ERP system, website, and 10 internal tools, Kubernetes may not be the right solution. If you’re a startup and don’t have money, don’t start with Kubernetes. It’s not that Kubernetes itself is drastically more expensive, but you need to spend a decent amount of engineering time, and you need to hire at least one or two people who are good with that tool to make your infrastructure work.

You can focus on going serverless, for example. When you want to create a prototype and are looking to bring up an MVP quickly, it’s fine to go serverless. At the same time, you can also point to the companies that built everything on serverless in the past and are now migrating back to containers, trying to get into Kubernetes, because they have reached such a scale that they have lost control of their system. Serverless is also not the easiest thing to observe regarding what’s going on. There are, however, numerous cases where it doesn’t make sense to opt for Kubernetes. Earlier, in the Kubernetes book club, we discussed the same question and agreed that there are places where we should not bring it.

And my co-author brought up an example: if you build embedded systems, you should also avoid using Kubernetes. And yes, for me, embedded systems are very far from using Kubernetes. Still, IoT itself and the whole edge deployment topic are a very significant aspect of the cloud-native ecosystem. There’s a user group around it, and there’s a telco group around it, which also works with edge, so it’s reasonable to consider it: where does it make sense, and where doesn’t it?

So, for me, there are other questions about where you can use it. You may want to build a platform to host thousands or hundreds of random applications that are not very special. In that case, a, quote-unquote, “platform” can be a good approach to unify all the different core services which you need: security, compliance, making life easier for operations, unifying how you deploy software and how you keep it up and running, and so on. All cool, a good way. The other way, and for me the ideal use case: if I have one large digital product that is built of many different services, many different components, and they all need to work together in some way, they need to grow and get smaller, they need to be fault resistant, they need to allow a seamless global upgrade of versions distributed around the world, whatever. Then Kubernetes plays to its strengths best.

Olimpiu Pop: Okay, fair enough. To summarise, if you have minimal applications, consider an alternative solution. There are better ways to do it than this. But if you need to normalize your deployments and you have massive operations, it’ll help you. And this reminds me of the talk I attended last year from Mercedes-Benz. They were discussing having thousands, I don’t remember exactly, in the lower thousands, so 2,000 to 4,000 developers. They had a team of six platform engineers who were able to build the platform and deliver all of those features because they had a very proper and disciplined way of doing things. So that was eye-opening for me. However, I’ll return with a challenging yet interesting perspective.

Can WebAssembly and Kubernetes work together? [16:49]

You mentioned IoT, and it’s growing. I have a piece of that pie on my plate as well. And as you said, telco is, well, a larger edge because it’s big, they have other resources, and it has more different types of scenarios. However, I am involved in smaller-scale IoT, where devices are deployed throughout various locations and similar applications are utilised. I’m still considering a platform that can handle heterogeneous ecosystems, as they often involve multiple embedded software components and various versions. There is also an intersection point that is occurring more frequently now, and that’s the combination of WebAssembly and Kubernetes. You were involved in SpinKube, if I remember correctly, as a contributor.

Max Koerbaecher: Yes, partially. Initially, my team developed something called the Spin Operator. The idea was to make it super simple to get started with WebAssembly on Kubernetes. In the end, the team did such a fantastic job that it was almost too boring to create a demo, as it was essentially just two CLI calls and you were simply running on your Kubernetes cluster. The thing is, it wasn’t production-ready; it was lean, simple, and easy, but as is often the case, it was an experimental thing. My team collaborated with Fermyon, Microsoft, and SUSE to take it to the next level, making it production-ready and incorporating their reliability work to ensure it’s robust enough even for production use cases. I do not want to say we developed Spin or SpinKube; that would be incorrect, but my team contributed a small portion to making that happen.

Olimpiu Pop: Nice. Well, it’s good company to be in: SUSE, Microsoft, and Fermyon, which has a cloud focused on WebAssembly. So what’s your thought? Will WebAssembly make your footprint smaller? How much difference would it make?

Max Koerbaecher: First of all, I believe it makes our lives easier in some way because it removes all the things that can go wrong and the dirty tasks that you may not want to do as a developer or sysadmin, right? So for me, it’s not about the size of the container. That’s also interesting, but it’s more about the security factor: you have a compiled binary format there, and practically, it cannot do much if you do not grant it explicit rights. That’s one of the most significant parts that I love about it. In terms of size, performance, or sustainability, we also examined this aspect. WebAssembly isn’t necessarily more resource-efficient or more sustainable because, on the one hand, the execution time of the software remains relatively constant.

There’s no speed improvement just because you run WebAssembly. The difference is that a WebAssembly container can start in 110 to 150 milliseconds. So in the blink of an eye, you can start a container, and that allows a container not to run continuously. A regular Docker container is also swift to start; let’s say it takes a second, sometimes less, sometimes more, but it’s significantly larger in the amount of data that needs to be stored. Yes, you can create a Docker container, or a standard container, that is tiny, a couple of megabytes, or 100 megabytes, depending on your needs. That would be best practice. The reality in large and medium-sized companies, for almost everything, is even more alarming: the container has several hundred megabytes or gigabytes. It sometimes takes dozens of seconds to start up, or even minutes to prepare.

And that’s where, on the other hand, WebAssembly boils it down to, “There’s your code, execute it”. There’s no bullshit going around. There are no unusual scripts triggered in or around the container, no preheating of something. No, you are forced to make your software start and run. That’s it. And through that, yes, the WebAssembly container is comparatively small. Some say you can go down to kilobytes. If you get to one, two, three, four, five megabytes, that is a good medium. Sometimes it can also become bigger, but that’s not the point; it becomes very, very small in this regard. The technology itself is impressive. I like it because it eliminates all the headaches of maintaining, patching, and hardening the operating system in a container, leaving you with simply, “Here’s your software”. You can use it everywhere. You can run it alongside every other container, but the platforms, the cloud providers, and your local platforms do not support the speed of starting a WebAssembly container.

It’s great that I can start and stop a WebAssembly container in milliseconds, but it doesn’t help me if a node on AWS takes 7, 8, 9, or 10 minutes to start up, right? So this is the problem in the end. Either cloud providers need to start providing services again in the serverless space that can catch up with WebAssembly, and that will happen sooner or later, or, and this is where my sustainability side comes in again, for platforms in an enterprise context I see it as a possibility to fill in the blanks.

So, a Kubernetes platform is only sustainable, or sound in its resource consumption, if it utilises almost 100 per cent of its resources; only then is it efficient, right? And so you can fill it up; you can have your static workload, with the heavy applications running at the bottom, and then the more flexible applications running on top. And then, to fill in the blanks, you can include, for example, a WebAssembly container, because you can kill it very quickly and move it somewhere else without even realising it.

How many platform engineers are needed to build your developer portal/platform? [22:49]

Olimpiu Pop: Okay, that’s a nice thought. To summarise those points, you mentioned that WebAssembly can be very fast if used appropriately. Still, companies that simply bloat their WebAssembly binaries, or whatever they are called, should look for an alternative. And to make things sustainable, and I need to mention that sustainability is both financial and environmental, because they’re interconnected, it means that we have to utilise all the resources that we have. WebAssembly can be built on top of that, because if it starts very, very fast, it can just spin up and down left and right, regardless of whether people are seeing it or not. Interesting. This is an interesting space, and we could discuss it at length, but ultimately, this is only the runtime, right? It’s the basic stuff, and then we have the application built on top of it. And you mentioned that currently you’re looking more into internal developer portals, and that’s, again, another buzzword, because I remember that last year Backstage had the most developers allocated to building it.

What’s your experience with developer portals?

Max Koerbaecher: We need to draw a clear line in my wording. It’s always an internal development platform, not a portal, because there are not that many portals. Either you go with Backstage if you want to go open source, and Red Hat has a wrapper around it, or all the other solutions are commercial. And we need to be aware that, as cool as Backstage is, it is a problem child. Many organisations are starting with it and failing with it, and I get very, very mad about that.

A few weeks ago, we were at a platform engineering executive roundtable. Almost everyone reported that they have around three to five people working on their developer portal, and they don’t see any kind of benefit coming out of it. The problem with this standpoint is that platform engineering and building platforms then take a back seat. And that’s a massive problem for me. That’s why I always like to take it down a level; first, we talk about an internal development platform, because it’s bigger than just a UI. The platform itself can provide you with tons of features and capabilities, and it can be adjusted to whatever you need. And then you are back to the example which you mentioned from Mercedes-Benz Tech Innovation, where they, with a reasonably small team, can serve a vast number of developers.

And they’re not alone. Customers we are working with have similar or larger scales, and we can see this ratio in practice. We always say five to five thousand: 5,000 developers require five platform engineers, right? When I look at the organizations which we support, we usually come up to the scale of these teams in this regard. And if you need more people, then you have done something wrong with your platform, something like that. For sure, it’s always more complicated than that, but that’s a prominent little example for me.

But yes, developer portals themselves are a huge buzzword. I think sometimes they’re hyped hotter than what they can really deliver. For the demo of a talk which I gave at KubeCon, I also spun up my own IDP with the Kanoa project. Everything ran very smoothly. Argo ran smoothly, Argo Workflows came up like a charm. External DNS, no problem. AWS EKS in auto mode, also no problem, everything cool. Backstage killed me for weeks. I spent dozens of hours making that little... Sorry, I’m getting angry here. Fill in some bad words that come to mind first. ...getting it up and running. And then when it was up and running, you want to fix something, you want to integrate something else: compile it, push it, try again. It’s like, man, it can’t be.

So I can understand the pain around it. And that’s why I always like to say an IDP is not Backstage. An IDP is a platform that your developers use and love; if you have that, you have an IDP. One of our biggest customers runs everything in GitHub; that’s an IDP. They do not have a portal; the developer portal there is developed and maintained by a different team. That is not to say that no one cares about it, but they are focused on their own tasks, and the platform itself runs without it. So it’s not needed to make your organization successful in this space.

Olimpiu Pop: I zoned out a bit when you mentioned that developers have to be happy, because that’s one of the newest metrics that people are discussing. We had DORA, and then we had SPACE, and then all of a sudden we thought about DevEx, and in the end that’s what’s important: whether people are happy with what they have and, as you said, avoid the frustration, so they can be very creative and deliver the value that they want. Ultimately, for me, the platform, the tool we use, regardless of the label we assign, should serve as a connecting bridge between the operational space and the development space. That allows us to build bridges in the organization and break down the silos. Because, in the end, that’s what we’ve tried for so long to do, and now my feeling is that we’re closer than ever to achieving it.

Okay. We touched on a wide range of topics, and you’re at the forefront of everything. First of all, should I have asked you something else? What would be the topic I should have been keen to ask you?

Max Koerbaecher: I’m glad you didn’t touch AI.

Olimpiu Pop: Well, everybody touches AI. So we can leave it there for now.

Max Koerbaecher: With those few words, it was touched on, too. It’s very complicated to think about what’s next because so many things are happening at the same time right now. What we will see, and that’s what organizations currently struggle with, is still adopting the cloud. We are far ahead in the platform engineering community; we treat the cloud just like an API that is tame. Yes, I need to know some limits and so forth, but usually we don’t invest so much energy in it; we just fix the problem, then focus back on what delivers value to the users. But that’s still one of the most significant problems which we see our enterprises working on. Besides that, I think we can do way better in the space of providing application platforms. When we talk about an IDP, we usually talk just about infrastructure, CI/CD, security, compliance, and so on. If you look at all kinds of reference architectures, they always look into resources, the cloud provider, and all the infrastructure parts. Almost nothing looks at the application level.

For this one, in the past I really loved Heroku, and I think they are currently back on track. Somehow, after the acquisition by Salesforce, they vanished for a few years, and now they’re back and shiny, sticking their nose out and saying, “Hey, here we are. We are still a cool platform. Come to me”. And you see similar platforms like Vercel, for example, which focus a hundred percent on only one problem: give me your application, I put it somewhere and make you happy. And I think in that space we still really miss good solutions that put the right perspective into it, but also because it’s complex to make people in that space really happy in the end. They either want to touch the infrastructure or they don’t want to touch the infrastructure, or the application becomes so complicated that they need 500 different configurations around it, and so on and so forth.

So I can understand that there’s no simple solution to it. On the other hand, we have the same in the whole Kubernetes and infrastructure space. We have thousands of config possibilities and millions of possible compositions and so on. So I think something will come up in time, but at the moment we don’t see anything.

Olimpiu Pop: Okay, so let me see if I understand it correctly. I don’t know if you saw it, but I had a large light bulb over my head when you spoke about internal developer platforms and the application space. Because the thing that I just thought about is how cool it would be to have the infrastructure team working together with the enablement team, delivering interesting building blocks that everybody can use in a uniform manner. And then we are just making things fast, secure, and compliant, because whether we like it or not, compliance is a very important topic. Is that along your thoughts as well?

Max Koerbaecher: Yes, in some way. I mean, that’s the promise which we have carried around for years and decades of doing cloud, right? But the reality is that it really isn’t so; we are getting further and further away from it. We always add layers at the bottom. I don’t know, maybe we hope to fill up the whole thing until we reach the top; maybe that’s the idea behind it. But we don’t think about what we can add from the top. That’s why I like other projects in this space, and also WebAssembly, because it forces me to remove some part of the infrastructure. As I said, even just taking the operating system out of a container is already a big change. It sounds stupid, but how many organizations spend thousands upon thousands of euros, build hours, and people on building so-called golden images and secure images, and then the next developer comes around the corner and wants to use, I don’t know, Rust, and suddenly nothing works again, right?

And so there’s, for example, wasmCloud, a very cool approach, also an open-source project, also part of the CNCF, that can run practically everywhere. If you like Kubernetes, throw it on Kubernetes. If you like VMs, throw it on VMs. If you want to have it somewhere else, it can run somewhere else. There’s no real limit around it. But they have taken out every other complexity: they just take your application, make it deployable, and make it robust. If some connection drops somewhere, the tool will try to find other connection paths. If that doesn’t work, it waits until it gets the connection back, and so on. It almost feels like a serverless platform, but it isn’t one, and it almost feels like a distributed ledger technology, but it isn’t a DLT, it’s not a blockchain, right? It sits somewhere in the middle. And there, I guess, we can see more development and more interest to try this out in the next year.

Would a sovereign cloud make your life easier? [33:17]

Olimpiu Pop: Okay. One of the elephants in the room passed by, and that’s AI. We let it pass by us, but there is another one, and I wouldn’t have mentioned it if you hadn’t touched on it, and that’s sovereign cloud. Unfortunately, the way things are looking, that’s something that people will probably start asking about, if they haven’t already: “So what are we doing? Are we cutting the transatlantic cables and building new clouds, or do they already exist?”

Max Koerbaecher: Well, I think we shouldn’t do the China move, and, as we have learned, trading relationships are very important. But I think people are slowly coming to the point that it doesn’t make sense to just ride on one horse in the end, or just follow one horse, and that we need to find our way back to our own strengths. That always sounds a little bit political, but it’s really not about them versus us or whoever; it’s more that in the last years we have lost a lot of our capabilities, and we run behind the technological innovation. This is for multiple reasons. Part of it is the investment structures, which are way better in the US, where people just give you a couple of million dollars. On the other hand, we have a problem with money here; it’s not easy to give out a full hand of money if you do not have enough money in your other hand, right?

Nevertheless, I don’t think we should cut the cables, but we need to be aware of this. That’s the very fascinating, interesting part of the sovereign cloud itself: you have a legal aspect which you cannot fulfil with any existing solution unless it is provided by a 100% European-owned organization that has zero connections to the US, China, or Russia, because all of them want to sneak a peek into what you’re doing there; it doesn’t matter which. The other part is that you need to ensure that your data is encrypted all the time. And I think this is right now the time not only of sovereign cloud and sovereignty over your data, but also of confidential computing. In many discussions that I see, the second or third thing that comes up is that it’s not enough to just do things legally; we also need to protect them end to end. And I think those are the two things which we see coming up at the moment.

Besides that, cloud providers are developing. I’m super glad that we can work with STACKIT in the German market, or, more internationally, Scaleway, and have tons of other options out there. We can also discuss what a cloud provider is, because we have a lot of providers, infrastructure-as-a-service providers for example, that have almost cloud-like services, but not everyone would consider them a cloud provider.

But I really like to see that there’s a push. I really like to see that people challenge the new infrastructures which are getting built, try to understand how they’re working and how they’re different, and maybe get frustrated about it; that’s also been my experience in some parts. But I also always say the cloud as we know it is broken, because it’s just a digital version of how you have done IT for the last 60 years. So maybe it’s time to really rethink how we are doing it. This will not happen for the large enterprises, they’re stuck in their structures, but everyone else can maybe find a way to do it a little bit better: not spend countless hours on networking and firewalling and whatnot, and instead find ways to really focus on providing, again, more cool features and spend time being innovative and creating new stuff.

Olimpiu Pop: So for me, from my personal experience and what we discussed, the takeaway is that the sovereign movement is more about aligning with our European values and ensuring that the data is safe, that it’s computed safely, and that it’s stored safely inside the European Union, in each entity of the European Union where you need to have the data, and that’s mainly it. Keep your data close and ensure that it’s used by the people who actually have the right to use it. Okay, well, I think this is a nice summary of what we discussed up to now, looking to a bright future. [inaudible 00:37:26] Max, thank you for your time, and I hope to see you soon.

Max Koerbaecher: Thank you for having me. I wish you a nice evening.

Olimpiu Pop: Thank you.




OpenJDK News Roundup: Key Derivation, Scoped Values, Compact Headers, JFR Method Timing & Tracing

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

There was a flurry of activity in the OpenJDK ecosystem during the week of May 12th, 2025, highlighting: two JEPs elevated from Proposed to Target to Targeted and four JEPs elevated from Candidate to Proposed to Target for JDK 25; and one JEP elevated from its JEP Draft to Candidate status. Two of these will be finalized after their respective rounds of preview.

JEPs Targeted for JDK 25

Two JEPs have been elevated from Proposed to Target to Targeted for JDK 25.

JEP 510, Key Derivation Function API, announced here, proposes to finalize this feature, without change, after one round of preview, namely: JEP 478, Key Derivation Function API (Preview), delivered in JDK 24. This feature introduces an API for Key Derivation Functions (KDFs), cryptographic algorithms for deriving additional keys from a secret key and other data, with goals to: allow security providers to implement KDF algorithms in either Java or native code; and enable the use of KDFs in implementations of JEP 452, Key Encapsulation Mechanism.
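As a brief sketch of the finalized API (this requires JDK 25, where `javax.crypto.KDF` is final; the input keying material, salt, and context labels below are illustrative), deriving an AES key with HKDF-SHA256 could look like:

```java
import javax.crypto.KDF;
import javax.crypto.SecretKey;
import javax.crypto.spec.HKDFParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class KdfDemo {
    public static void main(String[] args) throws Exception {
        // HKDF-SHA256 is one of the algorithms named by the KDF API.
        KDF hkdf = KDF.getInstance("HKDF-SHA256");

        // Extract-then-expand: combine the input keying material with a salt,
        // then expand with context info to a 32-byte (AES-256) key.
        HKDFParameterSpec params = HKDFParameterSpec.ofExtract()
                .addIKM(new SecretKeySpec("input-keying-material".getBytes(), "RAW"))
                .addSalt("some-salt".getBytes())
                .thenExpand("context-info".getBytes(), 32);

        SecretKey aesKey = hkdf.deriveKey("AES", params);
        System.out.println(aesKey.getAlgorithm() + ", " + aesKey.getEncoded().length + " bytes");
    }
}
```

The same `KDF` object can also produce raw bytes via `deriveData(params)` when the derived material is not meant to be used as a key.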

JEP 506, Scoped Values, announced here, proposes to finalize this feature, without change, after four rounds of preview, namely: JEP 487, Scoped Values (Fourth Preview), delivered in JDK 24; JEP 481, Scoped Values (Third Preview), delivered in JDK 23; JEP 464, Scoped Values (Second Preview), delivered in JDK 22; JEP 446, Scoped Values (Preview), delivered in JDK 21; and JEP 429, Scoped Values (Incubator), delivered in JDK 20. Formerly known as Extent-Local Variables (Incubator), this feature enables sharing of immutable data within and across threads. This is preferred to thread-local variables, especially when using large numbers of virtual threads.
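As a minimal sketch of the finalized API (this requires JDK 25; on JDK 21 through 24, `ScopedValue` is a preview feature, and the names below are illustrative), a scoped value is bound for a bounded period of execution rather than mutated:

```java
public class ScopedValueDemo {
    // Unlike a ThreadLocal, a ScopedValue is never mutated after binding;
    // it is rebound per call and visible to all code invoked in scope.
    static final ScopedValue<String> REQUEST_USER = ScopedValue.newInstance();

    static String handle() {
        // Any method called (directly or indirectly) within the binding sees it.
        return "handled for " + REQUEST_USER.get();
    }

    static String handleAs(String user) {
        // The binding exists only for the duration of this call() invocation.
        return ScopedValue.where(REQUEST_USER, user).call(ScopedValueDemo::handle);
    }

    public static void main(String[] args) {
        System.out.println(handleAs("alice"));  // handled for alice
    }
}
```

Bindings are also inherited by threads forked inside a `StructuredTaskScope`, which is part of why scoped values suit large numbers of virtual threads better than thread-local variables.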

JEPs Proposed to Target for JDK 25

Four JEPs have been elevated from Candidate to Proposed to Target for JDK 25.

JEP 519, Compact Object Headers, has been elevated from its JEP Draft 8354672 to Candidate to Proposed to Target for JDK 25 (announced here and here, respectively). This JEP proposes to promote this feature from experimental to product. Inspired by Project Lilliput, this feature “reduce[s] the size of object headers in the HotSpot JVM from between 96 and 128 bits down to 64 bits on 64-bit architectures.” More details on JEP 450, Compact Object Headers (Experimental), may be found in this InfoQ news story.

JEP 515, Ahead-of-Time Method Profiling, announced here, proposes to improve application warmup time by “making method-execution profiles from a previous run of an application instantly available, when the HotSpot JVM starts.” This allows the JIT compiler to immediately generate native code upon application startup as opposed to waiting for profiles to be collected.

JEP 514, Ahead-of-Time Command-Line Ergonomics, announced here, proposes to simplify the process of creating ahead-of-time caches, as described in JEP 483, Ahead-of-Time Class Loading & Linking, that may accelerate Java application startup by “simplifying the commands required for common use cases.”

JEP 507, Primitive Types in Patterns, instanceof, and switch (Third Preview), announced here, proposes a third round of preview, without change, to gain additional experience and feedback from the previous two rounds of preview, namely: JEP 488, Primitive Types in Patterns, instanceof, and switch (Second Preview), delivered in JDK 24; and JEP 455, Primitive Types in Patterns, instanceof, and switch (Preview), delivered in JDK 23. Under the auspices of Project Amber, this feature enhances pattern matching by allowing primitive type patterns in all pattern contexts, and extending instanceof and switch to work with all primitive types. More details may be found in this draft specification by Aggelos Biboudis, Principal Member of Technical Staff at Oracle.

Their respective reviews are expected to conclude by May 22, 2025.

New JEP Candidates

JEP 520, JFR Method Timing & Tracing, has been elevated from its JEP Draft 8328610 to Candidate status. This JEP proposes to extend the JDK Flight Recorder with facilities for method timing and tracing via the bytecode Instrumentation interface.

JDK 25 Feature Set (So Far) and Release Schedule

The JDK 25 release schedule, as approved by Mark Reinhold, Chief Architect, Java Platform Group at Oracle, is as follows:

  • Rampdown Phase One (fork from main line): June 5, 2025
  • Rampdown Phase Two: July 17, 2025
  • Initial Release Candidate: August 7, 2025
  • Final Release Candidate: August 21, 2025
  • General Availability: September 16, 2025

With less than three weeks before the scheduled Rampdown Phase One, where the feature set for JDK 25 will be frozen, 13 JEPs have been included in the feature set so far.

JDK 25 is designated to be the next long-term support (LTS) release following JDK 21, JDK 17, JDK 11 and JDK 8.



OpenAI’s Stargate Project Aims to Build AI Infrastructure in Partner Countries Worldwide

MMS Founder
MMS Vinod Goje

Article originally posted on InfoQ. Visit InfoQ

OpenAI has announced a new initiative called “OpenAI for Countries” as part of its Stargate project, aiming to help nations develop AI infrastructure based on democratic principles. This expansion follows the company’s initial $500 billion investment plan for AI infrastructure in the United States.

“Introducing OpenAI for Countries, a new initiative to support countries around the world that want to build on democratic AI rails,” OpenAI stated in its announcement. The company reports that its Stargate project, first revealed in January with President Trump and partners Oracle and SoftBank, has begun construction of its first supercomputing campus in Abilene, Texas.

According to OpenAI, the initiative responds to international interest in similar infrastructure development. “We’ve heard from many countries asking for help in building out similar AI infrastructure—that they want their own Stargates and similar projects,” the company explained, noting that such infrastructure will be “the backbone of future economic growth and national development.”

The company emphasized its vision of democratic AI as technology that incorporates principles protecting individual freedoms and preventing government control concentration. OpenAI believes this approach “contributes to broad distribution of the benefits of AI, discourages the concentration of power, and helps advance our mission.”

The Stargate project operates through a consortium of major technology companies serving as both investors and technical partners. SoftBank, OpenAI, Oracle, and MGX provide the initial equity funding, with SoftBank handling financial responsibilities while OpenAI manages operations.

On the technical side, five major technology companies form the foundation of the project’s implementation. “Arm, Microsoft, NVIDIA, Oracle, and OpenAI are the key initial technology partners,” according to OpenAI. The infrastructure development leverages established relationships between these companies, particularly building upon OpenAI’s long-standing collaboration with NVIDIA that dates back to 2016 and its more recent partnership with Oracle.

The company outlines a comprehensive partnership framework for collaborating with foreign nations.

“OpenAI is offering a new kind of partnership for the Intelligence Age. Through formalized infrastructure collaborations, and in coordination with the US government,” the announcement explains, highlighting the company’s alignment with American foreign policy interests in technology development.

The partnership model includes multiple components addressing infrastructure, access, and economic development. OpenAI plans to “Partner with countries to help build in-country data center capacity” to support data sovereignty while enabling AI customization for local needs.

Citizens of participating countries would receive “customized ChatGPT” services tailored to local languages and cultures, aimed at improving healthcare, education, and public services delivery. OpenAI describes this as “AI of, by, and for the needs of each particular country.”

The company also emphasizes security investments and economic development through a startup funding approach where “Partner countries also would invest in expanding the global Stargate Project—and thus in continued US-led AI leadership,” reinforcing the initiative’s connection to American technological leadership.

OpenAI’s international partnerships incorporate extensive security protocols designed to protect AI models and intellectual property. The company has developed a security approach to address potential vulnerabilities.

“Safeguarding our models is a continuous commitment and a core pillar of our security posture,” OpenAI states, describing their security framework as “rigorous” and “continuously evolving.” This framework encompasses information security, governance, and physical infrastructure protection.

The security architecture adapts to match model capabilities, with OpenAI noting that “Our security measures are not static; they scale with the capabilities of our models and incorporate state-of-the-art protections.” These protections include hardware-backed security, zero-trust architecture, and cryptographic safeguards.

Personnel access represents another critical security dimension. “OpenAI will maintain explicit and continuous oversight over all personnel with access to our information systems, intellectual property, and models,” the company emphasizes, adding that “No individual or entity will gain such access without our direct approval.”

Before deploying models internationally, OpenAI conducts risk assessments through its Preparedness Framework. “Each deployment of new models will undergo risk evaluation prior to deployment,” the company states, acknowledging that some advanced models may present risks incompatible with certain environments.

OpenAI CEO Sam Altman expressed enthusiasm about the progress at the Texas site, tweeting:

Great to see progress on the first stargate in Abilene with our partners at Oracle today. Will be the biggest ai training facility in the world. The scale, speed, and skill of the people building this is awesome.

However, the massive infrastructure development has raised environmental concerns. Greg Osuri, founder of Akash Network, questioned the project’s sustainability approach:

This data center is generating 360 MW by burning natural gas, causing heavy pollution and emitting up to 1 million metric tons of carbon every year. I understand the choices are limited but would like to understand your future plans to switch to more cleaner or sustainable sources.

Zach DeWitt, partner at Wing VC, commented on the broader implications of this move:

OpenAI seems to be building and selling products at every layer of the AI stack: chips, data centers, APIs and the application layer. It’s unclear which layer(s) will and won’t be commoditized and OpenAI is hedging their bets up and down the AI stack. Very smart.

The company has specified geographic limitations for its international expansion strategy, maintaining restrictions on which nations can access its technology through its “Supported countries and territories” documentation.



Llama 4 Scout and Maverick Now Available on Amazon Bedrock and SageMaker JumpStart

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

AWS recently announced the availability of Meta’s latest foundation models, Llama 4 Scout and Llama 4 Maverick, in Amazon Bedrock and AWS SageMaker JumpStart. Both models provide multimodal capabilities and follow the mixture-of-experts architecture.

Launched by Meta last April, Llama 4 Scout and Maverick include 17 billion active parameters distributed across 16 and 128 experts, respectively. Llama 4 Scout is optimized to run on a single NVIDIA H100 GPU for general-purpose tasks. According to Meta, Llama 4 Maverick provides enhanced reasoning and coding capabilities and outperforms other models in its class. Amazon highlights the value of the mixture-of-experts architecture in reducing compute costs, making advanced AI more accessible and cost-effective:

Thanks to their more efficient mixture of experts (MoE) architecture—a first for Meta—that activates only the most relevant parts of the model for each task, customers can benefit from these powerful capabilities that are more compute efficient for model training and inference, translating into lower costs at greater performance.
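The compute saving described above comes from routing: a gating network scores every expert for each token, but only the top-scoring few actually run. The sketch below shows that top-k selection step under the simplifying assumption of precomputed router scores; the expert count and scores are hypothetical:

```java
import java.util.stream.IntStream;

// Minimal sketch of top-k expert routing, the mechanism at the heart of a
// mixture-of-experts (MoE) layer: all experts are scored per token, but only
// the best-scoring k run, keeping per-token compute well below the full
// parameter count.
public class MoeRoutingSketch {
    // Indices of the k experts with the highest router scores.
    static int[] topK(double[] routerScores, int k) {
        return IntStream.range(0, routerScores.length)
                .boxed()
                .sorted((a, b) -> Double.compare(routerScores[b], routerScores[a]))
                .limit(k)
                .mapToInt(Integer::intValue)
                .toArray();
    }

    public static void main(String[] args) {
        // Hypothetical router scores for a 16-expert layer (Scout-sized).
        double[] scores = {0.1, 2.3, -0.4, 0.9, 1.7, 0.0, -1.2, 0.5,
                           0.2, 3.1, -0.8, 0.4, 1.1, 0.3, -0.5, 0.6};
        int[] active = topK(scores, 2);          // only 2 of 16 experts run
        System.out.println(active[0] + "," + active[1]); // 9,1
    }
}
```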

While Llama 4 Scout supports a context window of up to 10 million tokens, Amazon Bedrock currently allows up to 3.5 million tokens, but plans to expand it shortly. Llama 4 Maverick supports a maximum of one million tokens. In both cases, these represent a significant increase over the 128K context window available for Llama 3 models.

On Amazon SageMaker JumpStart, you can use the new models with SageMaker Studio or the Amazon SageMaker Python SDK depending on your use case. Both models default to a ml.p5.48xlarge instance, which features NVIDIA H100 Tensor Core GPUs. Alternatively, you can choose a ml.p5en.48xlarge instance powered by NVIDIA H200 Tensor Core GPUs. Llama 4 Scout also supports the ml.g6e.48xlarge instance type, which uses NVIDIA L40S Tensor Core GPUs.

Llama 4 models are available on several other cloud providers, including Databricks, GroqCloud, Lambda.ai, Cerebras Inference Cloud, and others. Additionally, you can access them on Hugging Face.

In addition to Scout and Maverick, Behemoth is the third model in the Llama 4 family, featuring 288 billion active parameters distributed across 16 experts. Meta describes Behemoth, currently in preview, as the most intelligent teacher model for distillation, having used it to train both Scout and Maverick.



Sora Investors LLC Has $12.66 Million Stock Holdings in MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Sora Investors LLC boosted its stake in shares of MongoDB, Inc. (NASDAQ:MDB) by 443.8% during the 4th quarter, according to its most recent filing with the Securities and Exchange Commission. The firm owned 54,380 shares of the company’s stock after buying an additional 44,380 shares during the period. Sora Investors LLC owned about 0.07% of MongoDB, worth $12,660,000 at the end of the most recent reporting period.
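The 443.8% figure follows directly from the share counts in the filing: 54,380 shares held after adding 44,380 implies a prior position of 10,000 shares. A quick check:

```java
public class StakeMath {
    // Percent increase implied by buying `sharesAdded` on top of a prior position.
    static double percentIncrease(int sharesAfter, int sharesAdded) {
        int sharesBefore = sharesAfter - sharesAdded; // 10,000 for Sora's filing
        return 100.0 * sharesAdded / sharesBefore;
    }

    public static void main(String[] args) {
        System.out.println(percentIncrease(54_380, 44_380)); // 443.8
    }
}
```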

Several other institutional investors and hedge funds have also recently bought and sold shares of the stock. Rafferty Asset Management LLC increased its holdings in shares of MongoDB by 24.1% in the 4th quarter. Rafferty Asset Management LLC now owns 52,321 shares of the company’s stock valued at $12,181,000 after purchasing an additional 10,172 shares during the period. Point72 Hong Kong Ltd increased its holdings in shares of MongoDB by 14.1% in the 4th quarter. Point72 Hong Kong Ltd now owns 33,599 shares of the company’s stock valued at $7,822,000 after purchasing an additional 4,141 shares during the period. Polar Asset Management Partners Inc. acquired a new position in shares of MongoDB in the 4th quarter valued at about $14,458,000. ProShare Advisors LLC increased its holdings in shares of MongoDB by 20.2% in the 4th quarter. ProShare Advisors LLC now owns 84,911 shares of the company’s stock valued at $19,768,000 after purchasing an additional 14,260 shares during the period. Finally, Quantinno Capital Management LP increased its holdings in shares of MongoDB by 100.2% in the 4th quarter. Quantinno Capital Management LP now owns 14,262 shares of the company’s stock valued at $3,321,000 after purchasing an additional 7,137 shares during the period. Institutional investors and hedge funds own 89.29% of the company’s stock.

Analysts Set New Price Targets

A number of equities analysts have recently weighed in on MDB shares. Citigroup dropped their target price on MongoDB from $430.00 to $330.00 and set a “buy” rating on the stock in a report on Tuesday, April 1st. Daiwa America upgraded MongoDB to a “strong-buy” rating in a report on Tuesday, April 1st. Stifel Nicolaus dropped their target price on MongoDB from $340.00 to $275.00 and set a “buy” rating on the stock in a report on Friday, April 11th. Daiwa Capital Markets assumed coverage on MongoDB in a report on Tuesday, April 1st. They issued an “outperform” rating and a $202.00 price objective on the stock. Finally, Canaccord Genuity Group dropped their price objective on MongoDB from $385.00 to $320.00 and set a “buy” rating on the stock in a report on Thursday, March 6th. Eight investment analysts have rated the stock with a hold rating, twenty-four have issued a buy rating and one has issued a strong buy rating to the company’s stock. Based on data from MarketBeat, the stock currently has an average rating of “Moderate Buy” and an average target price of $293.91.

View Our Latest Stock Analysis on MDB

Insider Buying and Selling

In related news, CAO Thomas Bull sold 301 shares of the firm’s stock in a transaction on Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total value of $52,148.25. Following the completion of the transaction, the chief accounting officer now directly owns 14,598 shares of the company’s stock, valued at $2,529,103.50. This trade represents a 2.02% decrease in their ownership of the stock. The sale was disclosed in a document filed with the Securities & Exchange Commission, which is available through this hyperlink. Also, CEO Dev Ittycheria sold 18,512 shares of the firm’s stock in a transaction on Wednesday, April 2nd. The stock was sold at an average price of $173.26, for a total value of $3,207,389.12. Following the completion of the transaction, the chief executive officer now directly owns 268,948 shares of the company’s stock, valued at approximately $46,597,930.48. This trade represents a 6.44% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders sold a total of 34,423 shares of company stock valued at $7,148,369 over the last ninety days. 3.60% of the stock is owned by company insiders.

MongoDB Stock Performance

Shares of MDB traded up $0.79 during mid-day trading on Friday, hitting $191.29. The company’s stock had a trading volume of 7,849,265 shares, compared to its average volume of 1,914,406. MongoDB, Inc. has a one year low of $140.78 and a one year high of $379.06. The firm has a market capitalization of $15.53 billion, a PE ratio of -69.81 and a beta of 1.49. The stock’s 50-day moving average is $174.91 and its two-hundred day moving average is $238.95.

MongoDB (NASDAQ:MDB) last posted its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share for the quarter, missing the consensus estimate of $0.64 by ($0.45). MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The business had revenue of $548.40 million during the quarter, compared to the consensus estimate of $519.65 million. During the same period in the prior year, the company posted $0.86 earnings per share. Analysts expect that MongoDB, Inc. will post -1.78 EPS for the current fiscal year.

About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Further Reading

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)


Article originally posted on mongodb google news. Visit mongodb google news



Barclays Issues Pessimistic Forecast for MongoDB (NASDAQ:MDB) Stock Price

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB (NASDAQ:MDB) had its price target dropped by analysts at Barclays from $280.00 to $252.00 in a report released on Friday, Benzinga reports. The brokerage currently has an “overweight” rating on the stock. Barclays’ price objective suggests a potential upside of 31.74% from the company’s previous close.
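The implied upside follows from the new $252.00 target and the $191.29 close quoted later in the article:

```java
public class UpsideMath {
    // Percent upside implied by a price target relative to the last close.
    static double impliedUpside(double target, double close) {
        return 100.0 * (target - close) / close;
    }

    public static void main(String[] args) {
        // $252.00 Barclays target vs. the $191.29 Friday close.
        System.out.printf("%.2f%%%n", impliedUpside(252.00, 191.29)); // 31.74%
    }
}
```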

A number of other brokerages also recently commented on MDB. Truist Financial reduced their price objective on MongoDB from $300.00 to $275.00 and set a “buy” rating for the company in a research report on Monday, March 31st. Daiwa America upgraded shares of MongoDB to a “strong-buy” rating in a research note on Tuesday, April 1st. China Renaissance assumed coverage on shares of MongoDB in a research note on Tuesday, January 21st. They issued a “buy” rating and a $351.00 price objective for the company. Daiwa Capital Markets initiated coverage on shares of MongoDB in a report on Tuesday, April 1st. They set an “outperform” rating and a $202.00 price objective for the company. Finally, Redburn Atlantic upgraded shares of MongoDB from a “sell” rating to a “neutral” rating and set a $170.00 target price on the stock in a research report on Thursday, April 17th. Eight equities research analysts have rated the stock with a hold rating, twenty-four have issued a buy rating and one has assigned a strong buy rating to the company’s stock. Based on data from MarketBeat, MongoDB presently has a consensus rating of “Moderate Buy” and a consensus price target of $293.91.

Get Our Latest Stock Report on MongoDB

MongoDB Stock Up 0.4%


Shares of NASDAQ MDB opened at $191.29 on Friday. MongoDB has a 12 month low of $140.78 and a 12 month high of $379.06. The company has a market cap of $15.53 billion, a P/E ratio of -69.81 and a beta of 1.49. The business’s 50 day moving average is $174.91 and its two-hundred day moving average is $238.95.

MongoDB (NASDAQ:MDB) last announced its earnings results on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The company had revenue of $548.40 million for the quarter, compared to the consensus estimate of $519.65 million. During the same period last year, the firm posted $0.86 EPS. Sell-side analysts expect that MongoDB will post -1.78 earnings per share for the current fiscal year.

Insider Buying and Selling

In related news, CAO Thomas Bull sold 301 shares of the firm’s stock in a transaction dated Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total value of $52,148.25. Following the transaction, the chief accounting officer now owns 14,598 shares in the company, valued at $2,529,103.50. This trade represents a 2.02% decrease in their ownership of the stock. The transaction was disclosed in a legal filing with the Securities & Exchange Commission, which is available at the SEC website. Also, insider Cedric Pech sold 1,690 shares of MongoDB stock in a transaction dated Wednesday, April 2nd. The stock was sold at an average price of $173.26, for a total value of $292,809.40. Following the completion of the sale, the insider now owns 57,634 shares of the company’s stock, valued at approximately $9,985,666.84. The trade was a 2.85% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders sold 34,423 shares of company stock worth $7,148,369 in the last quarter. Insiders own 3.60% of the company’s stock.

Institutional Inflows and Outflows

Large investors have recently added to or reduced their stakes in the business. OneDigital Investment Advisors LLC lifted its holdings in MongoDB by 3.9% in the fourth quarter. OneDigital Investment Advisors LLC now owns 1,044 shares of the company’s stock valued at $243,000 after buying an additional 39 shares during the period. Avestar Capital LLC raised its position in shares of MongoDB by 2.0% during the 4th quarter. Avestar Capital LLC now owns 2,165 shares of the company’s stock valued at $504,000 after acquiring an additional 42 shares during the last quarter. Aigen Investment Management LP lifted its stake in shares of MongoDB by 1.4% in the 4th quarter. Aigen Investment Management LP now owns 3,921 shares of the company’s stock valued at $913,000 after purchasing an additional 55 shares during the period. Handelsbanken Fonder AB boosted its position in shares of MongoDB by 0.4% in the 1st quarter. Handelsbanken Fonder AB now owns 14,816 shares of the company’s stock worth $2,599,000 after purchasing an additional 65 shares during the last quarter. Finally, Perigon Wealth Management LLC grew its stake in shares of MongoDB by 2.7% during the fourth quarter. Perigon Wealth Management LLC now owns 2,528 shares of the company’s stock worth $627,000 after purchasing an additional 66 shares during the period. 89.29% of the stock is currently owned by institutional investors and hedge funds.

About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

See Also

Analyst Recommendations for MongoDB (NASDAQ:MDB)




Article originally posted on mongodb google news. Visit mongodb google news
