Java News Roundup: Java Operator SDK 5.0, Open Liberty, Quarkus MCP, Vert.x, JBang, TornadoVM

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for January 27th, 2025, features news highlighting: the GA release of Java Operator SDK 5.0; the January 2025 release of Open Liberty; an implementation of Model Context Protocol in Quarkus; the fourth milestone release of Vert.x 5.0; and point releases of JBang 0.123.0 and TornadoVM 1.0.10.

JDK 24

Build 34 of the JDK 24 early-access builds was made available this past week featuring updates from Build 33 that include fixes for various issues. Further details on this release may be found in the release notes.

JDK 25

Build 8 of the JDK 25 early-access builds was also made available this past week featuring updates from Build 7 that include fixes for various issues. More details on this release may be found in the release notes.

For JDK 24 and JDK 25, developers are encouraged to report bugs via the Java Bug Database.

TornadoVM

TornadoVM 1.0.10 features bug fixes, compatibility enhancements, and improvements: a new command-line option, -Dtornado.spirv.runtimes, to select individual (Level Zero and/or OpenCL) runtimes for dispatching and managing SPIR-V; and support for multiplication of matrices using the HalfFloat type. Further details on this release may be found in the release notes.

Spring Framework

The first milestone release of Spring Cloud 2025.0.0, codenamed Northfields, features bug fixes and notable updates to sub-projects: Spring Cloud Kubernetes 3.3.0-M1; Spring Cloud Function 4.3.0-M1; Spring Cloud Stream 4.3.0-M1; and Spring Cloud Circuit Breaker 3.3.0-M1. This release is based upon Spring Boot 3.5.0-M1. More details on this release may be found in the release notes.

Open Liberty

IBM has released version 25.0.0.1 of Open Liberty, featuring: InstantOn support for the updated Batch API (batch-1.0), Jakarta Batch 2.0 (batch-2.0), Jakarta Batch 2.1 (batch-2.1), Java Connector Architecture Security Inflow 1.0 (jcaInboundSecurity-1.0) and Jakarta Connectors Inbound Security 2.0 (connectorsInboundSecurity-2.0) features; and simplified web module migration with the introduction of the webModuleClassPathLoader configuration attribute on the enterpriseApplication element, which controls which class loader is used for the JARs referenced by a web module Class-Path attribute.

Quarkus

The release of Quarkus 3.18.0 provides bug fixes, dependency upgrades and notable changes such as: an integration of Micrometer into the WebSockets Next extension; support for JWT bearer client authentication in the OpenID Connect and OpenID Connect Client extensions using client assertions loaded from the filesystem; and a new extension, OpenID Connect Redis Token State Manager, that stores OIDC token state in a Redis cache datasource. Further details on this release may be found in the changelog.

The Quarkus team has also introduced their own implementation of the Model Context Protocol (MCP), featuring three servers so far: JDBC, Filesystem and JavaFX. These servers have been tested with the Claude for Desktop, Model Context Protocol CLI and Goose clients. The team recommends running these servers with JBang for ease of use, although it is not required.

Apache Software Foundation

Maintaining alignment with Quarkus, the release of Camel Quarkus 3.18.0, composed of Camel 4.9.0 and Quarkus 3.18.0, provides resolutions to notable issues such as: the Kamelet extension being unable to serialize to bytecode objects from an instance of the ClasspathResolver, an inner class defined in the DefaultResourceResolvers class; and the Debezium BOM adversely affecting the unit tests from the Cassandra CQL extension driver since the release of Debezium 1.19.2.Final. More details on this release may be found in the release notes.

Infinispan

The release of Infinispan 15.1.5 features dependency upgrades and resolutions to issues such as: a NullPointerException, due to a concurrent removal with the DELETE statement, causing the cache::removeAsync statement to return null; and an instance of the HotRodUpgradeContainerSSLTest class crashing the test suite due to an instance of the PersistenceManagerImpl class failing to start. Further details on this release may be found in the release notes.

Java Operator SDK

The release of Java Operator SDK 5.0.0 ships with new features and improvements such as: Kubernetes Server-Side Apply elevated to a first-class citizen and now the default approach for patching the status resource; and a change in responsibility for the EventSource interface, which now monitors resources and handles access to cached resources, filtering, and additional capabilities that were once maintained by the ResourceEventSource subinterface. More details on this release may be found in the release notes.

JBang

JBang 0.123.0 provides bug fixes, documentation improvements and new features: options, such as add-open and exports, declared in a bundled MANIFEST.MF file are now honored; and the addition of Cursor, the AI code editor, to the list of supported IDEs. Further details on this release may be found in the release notes.

Eclipse Vert.x

The fourth release candidate of Eclipse Vert.x 5.0 delivers notable changes such as: the removal of deprecated classes, ServiceAuthInterceptor and ProxyHelper, along with two of the overloaded addInterceptor() methods defined in the ServiceBinder class; and support for the Java Platform Module System (JPMS). More details on this release may be found in the release notes and the list of deprecations and breaking changes.

JHipster

Versions 1.26.0 and 1.25.0 of JHipster Lite (announced here and here, respectively) ship with bug fixes, dependency upgrades and new features/enhancements such as: new datasource modules for PostgreSQL, MariaDB, MySQL and MSSQL; and a restructured state ranking system for modules. Version 1.26.0 also represents the 100th release of JHipster Lite. Further details on these releases may be found in the release notes for version 1.26.0 and version 1.25.0.



Podcast: Apoorva Joshi on LLM Application Evaluation and Performance Improvements

MMS Founder
MMS Apoorva Joshi

Article originally posted on InfoQ. Visit InfoQ

Transcript

Srini Penchikala: Hi, everyone. My name is Srini Penchikala. I am the lead director for the AI, ML, and data engineering community at the InfoQ website and a podcast host.

In this episode, I’ll be speaking with Apoorva Joshi, senior AI developer advocate at MongoDB. We will discuss the topic of how to develop software applications that use the large language models, or LLMs, and how to evaluate these applications. We’ll also talk about how to improve the performance of these apps with specific recommendations on what techniques can help to make these applications run faster.

Hi, Apoorva. Thank you for joining me today. Can you introduce yourself, and tell our listeners about your career and what areas have you been focusing on recently?

Apoorva Joshi: Sure, yes. Thanks for having me here, Srini. My first time on the InfoQ Podcast, so really excited to be here. I’m Apoorva. I’m a senior AI developer advocate here at MongoDB. I like to think of myself as a data scientist turned developer advocate. In my past six years or so of working, I was a data scientist working at the intersection of cybersecurity and machine learning. So applying all kinds of machine learning techniques to problems such as malware detection, phishing detection, business email compromise, that kind of stuff in the cybersecurity space.

Then about a year or so ago, I switched tracks a little bit and moved into my first role as a developer advocate. I thought it was a pretty natural transition because even in my role as a data scientist, I used to really enjoy writing about my work and sharing it with the community at conferences, webinars, that kind of thing. In this role, I think I get to do both the things that I enjoy. I’m still kind of a data scientist, but I also tend to write and talk a bit more about my work.

Another interesting dimension to my work now is also that I get to talk to a lot of customers, which is something I always wanted to do more of. Especially in the gen AI era, it’s been really interesting to talk to customers across the board, and just hear about the kind of things they’re building, what challenges they typically run into. It’s a really good experience for me to offer them my expertise, but also learn from them about the latest techniques and such.

Srini Penchikala: Thank you. Definitely with your background as a data scientist and a machine learning engineer, and obviously developer advocate working with the customers, you bring the right mix of skills and expertise that the community really needs at this time because there is so much value in the generative AI technologies, but there’s also a lot of hype.

Apoorva Joshi: Yes.

Srini Penchikala: I want this podcast to be about what our listeners should be hyped about in AI, not all about the hype out there.

Let me first start by setting the context for this discussion with a quick background on large language models. The large language models, or LLMs, have been the foundation of gen AI applications. They play a critical role in developing those apps. We are seeing LLMs being used pretty much everywhere in various business and technology use cases. Not only for the end users, customers, but also for the software engineers in terms of code generation. We can go on with so many different use cases that are helping the software development lifecycle. And also, devops engineers.

I was talking to a friend and they are using AI agents to automatically upgrade the software on different systems in their company, and automatically send the JIRA tickets if there are issues. Agents are doing all this. They're able to cut down the work for these upgrades from days and weeks; the patching process is down to minutes and hours. Definitely the sky is the limit there, right?

Apoorva Joshi: Yes.

Current State of LLMs [04:18]

Srini Penchikala: What do you see? What’s the current state of LLMs? And what are you seeing in the industry, are they being used, and what use cases are they being applied today?

Apoorva Joshi: I think there’s two slightly different questions here. One is what’s the current state of LLMs, and the other is how they’re being applied.

To your first point, I’ve been really excited to see the shift from purely text generation models to models that generate other modalities, such as image, audio, and video. It’s been really impressive to see how the quality of these models has improved in the past year alone. There’s finally benchmarks and we are actually starting to see applications in the wild that use some of these other modalities. Yes, really exciting times ahead as these models become more prevalent and find their place in more mainstream applications.

Then coming to how LLMs are being applied today, like you said, agents are the hot thing right now. 2025 is also being touted as the year of AI agents. Definitely seeing that shift in my work as well. Over the past year, we’ve seen our enterprise customers move from basic RAG, early or mid last year, to building more advanced RAG applications using slightly more advanced techniques, such as hybrid search, parent document retrieval, and all of this to improve the context being passed to LLMs for generation.

Then now, we are also seeing folks further move on to agents, so frequently hearing things like self-querying retrieval, human in the loop agents, multi-agent architectures, and stuff like that.

Srini Penchikala: Yes. You’ve been publishing and advocating about all of these topics, especially LLM-based applications which is the focus of this podcast. We’re not going to get too much into the language models themselves.

Apoorva Joshi: Yes.

Srini Penchikala: But we’ll be talking about how those models are using applications and how we can optimize those applications. This is for all the software developers out there.

LLM-based Application Development Lifecycle [06:16]

Yes, you’ve been publishing about and advocating for how to evaluate and improve LLM application performance. Before we get into the performance side of the discussion, can you talk about the different steps involved in a typical LLM-based application, because the number of steps may differ across applications and organizations?

Apoorva Joshi: Sure. Yes. Thinking of the most common elements, data is the first obvious big one, because LLMs work on some tasks out of the box, but most organizations want them to work on their own data or domain-specific use cases in industries like healthcare or legal. You need something a bit more than just a powerful language model, and that’s where data becomes an important piece.

Then once you have data and you want language models to use that data to inform their responses, that’s where retrieval becomes a huge thing. Which is why things have progressed from just simple vector search or semantic search to some of these more advanced techniques, like again, hybrid search, parent document retrieval, self-querying, knowledge graphs. There’s just so much on that front as well. Then the LLM is a big piece of it if you’re building LLM-based applications.

I think one piece that a lot of companies often tend to miss is the monitoring aspect. Which is when you put your LLM applications into production, you want to be able to know if there’s regressions, performance degradations. If your application is not performing the way it should, so monitoring is the other pillar of building LLM applications.

Srini Penchikala: Sounds good. Once the developers start work on these applications, I think first thing they should probably do is the evaluation of the application.

Apoorva Joshi: Yes.

Evaluation of LLM-based Applications [08:02]

Srini Penchikala: What is the scope? What are the benchmarks? Because the metrics and service level agreements (SLAs) and response times can be different for different applications. Can you talk about evaluation of LLM-based applications, like what developers should be looking for? Are there any metrics that they should be focusing on?

Apoorva Joshi: Yes. I think anything with respect to LLMs is such a vast area because they’ve just opened up the floodgates for being used across multiple different domains and tasks. Evaluation is no different.

If you think of traditional ML models, like classification or regression models, you had very quantifiable metrics that applied to any use case. For classification, you would have accuracy, precision, recall. Or if you were building a regression model, you had mean squared error, that kind of thing. But with LLMs, all that’s out the window. Now the responses from these models are natural language, or an image, or some other generated modality. The metrics, when it comes to LLMs, are hard to quantify.

For example, if they’re generating a piece of text for a Q&A-based application, then metrics like how coherent is the response, how factual is the response, or what is the relevance of the information provided in the response. All of these become more important metrics and these are unfortunately pretty hard to quantify.

There’s two techniques that I’m seeing in the space broadly. One is this concept of LLM as a judge. The premise there is that because LLMs are good at identifying patterns and interpreting natural language, they can also be used as an evaluation mechanism for natural language responses.

The idea there is to prompt an LLM on how you want to go about evaluating responses for your specific task and dataset, and then use the LLM to generate some sort of scoring paradigm on your data. I’ve also seen organizations that have more advanced data science teams actually putting the time and effort into creating fine-tuned models for evaluation. But yes, that’s typically reserved for teams that have the right expertise and knowledge to build a fine-tuned model because that’s a bit more involved than prompting.
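As a minimal sketch of the LLM-as-a-judge idea, the snippet below prompts a judge model to score a single answer for faithfulness and relevance. It assumes the OpenAI Python client; the prompt, model choice, and scoring rubric are illustrative, not a prescribed setup.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; any chat-completion API works similarly

JUDGE_PROMPT = """You are grading an answer produced by a Q&A assistant.
Question: {question}
Answer: {answer}
Rate the answer from 1 (poor) to 5 (excellent) on faithfulness and relevance.
Reply as JSON: {{"faithfulness": <int>, "relevance": <int>}}"""

def judge(question: str, answer: str) -> str:
    """Ask the judge model to score one response; returns its raw JSON reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative judge model
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(question=question, answer=answer)}],
        temperature=0,
    )
    return response.choices[0].message.content

print(judge("What is RAG?", "RAG retrieves relevant documents and passes them to an LLM."))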

Domain-specific Language Models [10:31]

Srini Penchikala: Yes. You mentioned domain-specific models. Do you see, I think this is one of my predictions, that the industry will start moving towards domain-specific language models? Like healthcare would have their own healthcare LLM, and the insurance industry would have their own insurance language model.

Apoorva Joshi: I think that’s my prediction, too. Coming from this domain, I was in cybersecurity, I used to do a lot of that. This was in the world when BERT was supposed to be a large language model. A lot of my work was also on fine-tuning those language models on cybersecurity-specific data. I think that’s going to start happening more and more.

I already see signals for that happening because let’s take the example of natural language to query. That’s a pretty common thing that folks are trying to do. I’ve seen that usually, with prompting or even something like RAG, you can achieve about, I would say, 90 to 95 percent accuracy or recall on slightly complicated tasks. But there’s a small set of tasks that are just not possible by just providing the LLM with the right information to generate responses.

For some of those cases, and more importantly for domain-specific use cases, I think we are going to pretty quickly move towards a world where there’s smaller specialized models, and then maybe an agent that’s orchestrating and helping facilitate the communication between all of them.

LLM Based Application Performance Improvements [12:02]

Srini Penchikala: Yes, definitely. I think it’s a very interesting time not only with these domain-specific models taking shape, and the RAG techniques now, you can use these base models and apply your own data on that. Plus, the agents taking care of a lot of these activities on their own, automation type of tasks. Definitely that’s really good. Thanks, Apoorva, for that.

Regarding the application performance itself, what are the high level considerations and strategies that teams should be looking at before they jump into optimizing or over-optimizing? What are the performance concerns that you see the teams are running into and what areas they should be focusing on?

Apoorva Joshi: Most times, I see teams asking about three things. There’s accuracy, latency, and cost. When I say accuracy, what I really mean is performance on metrics that apply to a particular business use case. It might not be accuracy, it might be, I don’t know, factualness or relevance. But yes, you get the drift. Because that’s how it is, because there are so many different use cases, it really comes down to first determining what your business cares about, and then coming up with metrics that resonate with that use case.

For example, if you’re building a Q&A chatbot, your evaluation parameters would be mainly faithfulness and relevance. But say you’re building a content moderation chatbot, then you care more about recall on toxicity and bias, for example. I think that’s the first big step.

Improvements here, again, depend on what you end up finding are the gaps of the model. Say you’re evaluating a RAG system, you would want to evaluate the different components of the system itself first, in addition to the overall evaluation of the system. When you think of RAG, there’s two components, retrieval and generation. You want to evaluate the retrieval performance separately to see if your gap lies in the retrieval strategy itself or if you need a different embedding model. Then you evaluate the generation to see what the gaps on the generation front are, to see what improvements you need to make there.

I think work backwards. Evaluate as many different components of the system as possible to identify the gaps. And then work backwards from there to try out a few different techniques to improve the performance on the accuracy side. Guardrails are an important one to make sure that the LLM is appropriately responding or not responding to sensitive or off-topic questions.

In agentic applications, I’ve seen folks also implement things like self-reflection and critiquing loops to have the LLM reflect and improve upon its own response. Or even human in the loop workflows, too. Get human feedback and incorporate that as a strategy to improve the response.

Maybe I’ll stop there to see if you have any follow-ups.
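A minimal sketch of evaluating the retrieval component in isolation, as described above: compute recall@k for a retriever against a small, hand-labeled set of queries. The toy retriever and labels are placeholders for your own retrieval function and dataset.

from typing import Dict, List

def recall_at_k(retrieved: List[str], relevant: List[str], k: int) -> float:
    """Fraction of the known-relevant documents that appear in the top-k results."""
    hits = len(set(retrieved[:k]) & set(relevant))
    return hits / len(relevant) if relevant else 0.0

def evaluate_retriever(labeled_queries: Dict[str, List[str]], retrieve, k: int = 5) -> float:
    """Average recall@k over a labeled set; `retrieve` maps a query to a ranked list of document IDs."""
    scores = [recall_at_k(retrieve(query), relevant, k) for query, relevant in labeled_queries.items()]
    return sum(scores) / len(scores)

# Toy example: the lambda stands in for vector or hybrid search over your corpus.
labeled = {"what is hybrid search?": ["doc-12", "doc-37"]}
toy_retriever = lambda query: ["doc-37", "doc-05", "doc-12"]
print(evaluate_retriever(labeled, toy_retriever, k=3))  # 1.0: both relevant docs retrieved in top 3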

Choosing Right Embedding Model [15:02]

Srini Penchikala: Yes. No, that’s great. I think the follow-up is basically we can jump into some of those specific areas of the process. One of the steps is choosing the right embedding model. Some of these tools come with … I was trying out the Spring AI framework the other day. It comes with a default embedding model. What do you see there? Are there any specific criteria we should be using to pick one embedding model for one use case versus a different one for a different use case?

Apoorva Joshi: My general thumb rule would be to find a few candidate models and evaluate them for your specific use case and dataset. For text data, my recommendation would be to start from something like the Massive Text Embedding Benchmark, or MTEB, on Hugging Face. It’s essentially a leaderboard that shows you how different proprietary and open source embedding models perform on different tasks, such as retrieval, classification, and clustering. It also shows you the model size and dimensions.

Yes. I would say choose a few and evaluate for performance and, say latency if that’s a concern for you. Yes, there’s similar ones for multi-modal models as well. Until recently, we didn’t have good benchmarks for multi-modal, but now we have things like MME, which is a pretty good start.
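One way to run that kind of comparison locally is sketched below, assuming the sentence-transformers library: embed a tiny labeled query/passage set with a couple of candidate models shortlisted from the MTEB leaderboard and check top-1 retrieval accuracy. The model names and data are only examples.

from sentence_transformers import SentenceTransformer, util

# Tiny smoke test: does each candidate model rank the known-relevant passage first?
queries = ["how do I reset my password?"]
passages = ["To reset your password, open Settings > Security.",
            "Our offices are closed on public holidays."]
relevant_idx = [0]  # index of the relevant passage for each query

for name in ["all-MiniLM-L6-v2", "BAAI/bge-small-en-v1.5"]:  # example candidates
    model = SentenceTransformer(name)
    q_emb = model.encode(queries, normalize_embeddings=True)
    p_emb = model.encode(passages, normalize_embeddings=True)
    sims = util.cos_sim(q_emb, p_emb)  # shape: (num_queries, num_passages)
    hits = sum(int(sims[i].argmax()) == relevant_idx[i] for i in range(len(queries)))
    print(f"{name}: top-1 accuracy = {hits / len(queries):.2f}")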

Srini Penchikala: Yes. Could we talk about, real quick, about the benchmarks? When we are switching these different components of the LLM application, what standard benchmarks can we look at or run to get the results and compare?

Apoorva Joshi: I think benchmarks apply to the models themselves more than anything else. Which is why, when you’re looking to choose models for your specific use case, you take that with a grain of salt because of the tasks that are involved in a benchmark. If you look at the MMLU Benchmark, it’s mostly a bunch of academic and professional examinations, but that might not necessarily be the task that you are evaluating for. I think benchmarks mostly apply to LLMs, but LLM applications are slightly different.

Srini Penchikala: You said earlier the observability or the monitoring. If you can build it into the application right from the beginning, it will definitely help us pinpoint any performance problems or any latencies.

Apoorva Joshi: Exactly.

Data Chunking Strategies [17:18]

Srini Penchikala: Another technique is how the data is divided or chunked into smaller segments. You published an article on this. Can you talk about this a little bit more, and tell us what are some of the chunking strategies for implementing the LLM apps?

Apoorva Joshi: Sure, yes. I think my disclaimer from before, with LLMs the answer starts from it depends, and then you pick and choose. I think that’s the thumb rule for anything when it comes to LLMs. Pick and choose a few, evaluate on your dataset and use case, and go from there.

Similarly for chunking, it depends on your specific data and use case. For most text, I typically suggest starting with this technique called recursive token splitting with overlap, with, say, a 200-ish token size for chunks. What this does is it has the effect of keeping paragraphs together with some overlap at the chunk boundaries. This, combined with techniques such as parent document or contextual retrieval, could potentially work well if you’re working with mostly text data. Semantic chunking is another fascinating one for text, where you try to find or align the chunk boundaries with the semantic boundaries of your text.

Then there’s semi-structured data, which is data containing a combination of text, images, tables. For that, I’ve seen folks retrieve the text and non-textual components using specialized tools. There’s one called Unstructured that I particularly like. It supports a bunch of different formats and has different specialized models for extracting components present in different types of data. Yes, I would use a tool like that.

Then once you have those different components, maybe chunk the text as you would normally do. Then, two ways to approach the non-textual components. You either maybe summarize the images and tables to get everything in the text domain, or use multi-modal embedding models to embed the non-text elements as is.
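A minimal sketch of that starting point, assuming LangChain's text splitters and a tiktoken tokenizer; the chunk size, overlap, and file name are just illustrative values.

from langchain_text_splitters import RecursiveCharacterTextSplitter

long_document_text = open("handbook.md").read()  # any long text document

# Token-based recursive splitting with ~200-token chunks and a small overlap,
# which tends to keep paragraphs together and carry context across chunk boundaries.
splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="cl100k_base",  # assumes an OpenAI-style tokenizer via tiktoken
    chunk_size=200,
    chunk_overlap=30,
)
chunks = splitter.split_text(long_document_text)
print(len(chunks), chunks[0][:80])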

Srini Penchikala: Yes, definitely. Because if we take the documents and if we chunk them into too small of segments, the context may be lost.

Apoorva Joshi: Exactly.

Srini Penchikala: If you provide a prompt, the response might not be exactly what you were looking for.

Apoorva Joshi: Right.

RAG Application Improvements [19:40]

Srini Penchikala: What are the other, especially if you’re using a RAG-based application which is probably the norm these days for all the companies … They’re all taking some kind of foundation model and ingesting their company data, incorporating it on top of it. What are the other strategies are you seeing in the RAG applications in terms of retrieval or generation steps?

Apoorva Joshi: There’s a lot of them coming every single day, but I can talk about the ones I have personally experimented with. The first one would be hybrid search. This is where you combine the results from multiple different searches. It’s commonly a combination of full text and vector search, but it doesn’t have to be that. It could be vector and graph-based. But the general concept is that you’re combining results from multiple different searches to get the benefits of both.

This is useful in, say ecommerce applications for example, where users might search for something very specific. Or include keywords in their natural language queries. For example, “I’m looking for size seven red Nike running shoes”. It’s a natural language query, but it has certain specific points of focus or keywords in them. An embedding model might not capture all of these details. This is where combining it with something like a full text search might make sense.

Then there’s parent document retrieval. This is where you embed and store small chunks at storage and ingest time, but you fetch the full source document or larger chunks at retrieval time. This has the effect of providing a more complete context to the LLM while generating responses. This might be useful in cases such as legal case prep or scientific research documentation chatbots where the context surrounding the user’s question can result in more rounded responses.

Finally, there’s graph RAG that I’ve been hearing about a lot lately. This is where you structure and store your data as a knowledge graph, where the nodes can be individual documents or chunks. Edges capture which nodes are related and what the relationship between the nodes is. This is particularly common in specialized domains such as healthcare, finance, legal, or anywhere that requires multi-hop reasoning or some sort of root cause analysis or causal inference.
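For the hybrid search case, one common way to combine the two result lists is reciprocal rank fusion; a minimal, library-free sketch is shown below, with toy document IDs standing in for real full-text and vector search results.

def reciprocal_rank_fusion(result_lists, k: int = 60):
    """Merge several ranked result lists (e.g. full-text and vector search) into one ranking."""
    scores = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["shoe-742", "shoe-103", "shoe-009"]  # full-text search on "size seven red Nike running shoes"
vector_hits = ["shoe-103", "shoe-512", "shoe-742"]   # vector search on the same query
print(reciprocal_rank_fusion([keyword_hits, vector_hits]))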

Srini Penchikala: Yes, definitely. The graph RAG has been getting a lot of attention lately. The power of knowledge graph in the RAG.

Apoorva Joshi: But that’s the thing. Going back to what you said earlier on, what’s the hype versus what people should be hyped about. I think a lot of organizations have a hard time balancing that too, because they want to be at the bleeding-edge of building these applications. But then sometimes, it might just be overkill to use the hottest technique.

Srini Penchikala: Where should development teams decide, “Hey, we started with an LLM-based application in mind, but my requirements are not a good fit?” What are those, I don’t want to call them limitations, but what are the boundaries where you say, “For now, let’s just go with the standard solution rather than bringing some LLM in to make it more complex?”

Apoorva Joshi: This is not just an LLM thing. Even having spent six years as a data scientist, a lot of times … ML in general, for the past decade or so, it’s just been a buzzword. Sometimes people just want to use it for the sake of using it. That’s where I think you need to bring a data scientist or an expert into the room and be like, “Hey, this is my use case”, and have them evaluate whether or not you even need to use machine learning, or in this case gen AI for it.

Going from traditional to gen AI, now there’s more of a preference for generative AI as well. I think at this point, the decision is, “Can I use a small language model, or just use XGBoost and get away with it? Or do I really need a RAG use case?”

But I think in general, if you want to reason and answer questions using natural language on a repository of text, then I agree, some sort of generative AI use case is important. But say you’re basically just trying to do classification, or just doing something like anomaly detection or regression, then just because an LLM can do it doesn’t mean you should, because it might not be the most efficient thing at the end of the day.

Srini Penchikala: The traditional ML solutions are still relevant, right?

Apoorva Joshi: Yes. For some things, yes.

I do want to say the beauty of LLMs is that it’s made machine learning approachable to everyone. It’s not limited to data scientists anymore. A software engineer or PM, someone who’s not technical, they can just use these models without having to fine-tune or worry about the weights of the model. Yes, I think that results in these pros and cons, in a sense.

Srini Penchikala: Yes, you’re right. Definitely these LLM models and the applications that use them have brought their value to the masses. Now everybody can use ChatGPT or Copilot and get value out of it.

Apoorva Joshi: Yes.

Frameworks and Tools for LLM applications [25:03]

Srini Penchikala: Can you recommend any open source tools and frameworks for our audience to try out LLM applications if they want to learn about them before actually starting to use them?

Apoorva Joshi: Sure, yes. I’m trying to think what the easiest stack would be. If you’re looking at strictly open source, you don’t want to put down a credit card to just experiment and build a prototype, then I think three things. You first need a model of some sort, whether it’s embedding or LLMs.

For that, I would say use something like Hugging Face. Pretty easy to get up and running with their APIs. You don’t have to pay for it. Or if you want to go a bit deeper and try out something local, then Ollama has support for a whole bunch of open source models. I like LangGraph for orchestration. It’s something LangChain came up with a while ago. A lot of people think it’s an agent orchestration framework only, but I have personally used it for just building control flows. I think you could even build a RAG application by using LangGraph. It just gives you low-level control on the flow of your LLM application.

For vector databases, if you’re looking for something that’s really quick and open source, and easy to start with, then you could even start with something like Chroma or FAISS for experimentation. But of course, when you move from the prototype of putting something in production, you would want to consider enterprise-grade databases such as my employer.
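A minimal local prototype along those lines, assuming FAISS and sentence-transformers (swap in Chroma, or a managed vector database, for anything beyond experimentation); the documents and model name are just examples.

import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = ["Vector search finds semantically similar text.",
        "FAISS is a library for efficient similarity search.",
        "Bananas are a good source of potassium."]

model = SentenceTransformer("all-MiniLM-L6-v2")  # runs locally, no API key required
embeddings = model.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(embeddings.shape[1])   # inner product == cosine on normalized vectors
index.add(np.asarray(embeddings, dtype="float32"))

query = model.encode(["what is FAISS?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), 2)
print([docs[i] for i in ids[0]])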

Srini Penchikala: Yes, definitely. For local, just to get started, even Postgres has a vector extension called pgvector.

Apoorva Joshi: Right.

Srini Penchikala: Then there’s Qdrant and others. Yes.

Apoorva Joshi: Yes.

Srini Penchikala: Do you have any metrics, or benchmarks, or resources that teams can use to look at, “Hey, I just want to see what are the top 10 or top five LLMs before I even start work on this?”

Apoorva Joshi: There’s an LLM leaderboard similar to, what’s the one you were mentioning?

Srini Penchikala: The one I mentioned is Open LLM Leaderboard.

Apoorva Joshi: There’s a similar one on Hugging Face that I occasionally look at. It’s called the LMSYS Chatbot Arena. That’s basically a crowdsourced evaluation of different proprietary and open source LLMs. I think that’s a better thing to look at than just performance on benchmarks because benchmarks can have data contamination.

Sometimes vendors will actually train their models on benchmark data, so certain models could end up looking better on certain tasks than they actually are. Which is why leaderboards such as the one you mentioned and LMSYS are good, because it’s actually people trying these models on real-world prompts and tasks.

Srini Penchikala: Just like anything else, teams should try it out first and then see if it works for their use case and their requirements, right?

Apoorva Joshi: Yes.

Online Resources [27:58]

Srini Penchikala: Other than that, any other additional resources on LLM application performance improvements and evaluation? Any online articles or publications?

Apoorva Joshi: I follow a couple of people and read their blogs. There’s this person called Eugene Yan. He’s an applied scientist at Amazon. He has a blog and he’s written extensively about evals and continues to do extensive research in that area. There’s also a bunch of people in the machine learning community who wrote a white paper titled What We Learned from a Year of Building With LLMs. It’s really technical practitioners who’ve written that white paper based on their experience building with LLMs in the past year. Yes. I generally follow a mix of researchers and practitioners in the community.

Srini Penchikala: Yes, I think that’s a really good discussion. Do you have any additional comments before we wrap up today’s discussion?

Apoorva Joshi: Yes. Our discussion made me realize just how important evaluation is when building any software application, but LLM applications specifically, because while they’ve made ML accessible and usable in so many different domains, what you really need on a day-to-day basis is for the model or application to perform on the use case or task you need. I think evaluating for what you’re building is key.

Srini Penchikala: Also, another key is your LLM mileage may vary. It all depends on what you’re trying to do, and what constraints and benchmarks you are working towards.

Apoorva Joshi: Exactly.

Srini Penchikala: Apoorva, thank you so much for joining this podcast. It’s been great to discuss one of the very important topics in the AI space, how to evaluate the LLM applications, how to measure the performance, and how to improve their performance. These are practical topics that everybody is interested in, not just another Hello World application or ChatGPT tutorial.

Apoorva Joshi: Yes.

Srini Penchikala: Thank you for listening to this podcast. If you’d like to learn more about AI and ML topics, check out the AI, ML, and data engineering community page on the infoq.com website. I also encourage you to listen to the recent podcasts, especially the 2024 AI and ML Trends Report we published last year, and also the 2024 Software Trends Report that we published just after the new year. Thank you very much. Thanks for your time. Thanks, Apoorva.

Apoorva Joshi: Yes. Thank you so much for having me.



Google’s Vertex AI in Firebase SDK Now Ready for Production Use

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

Three months after its launch in beta, the Vertex AI in Firebase SDK is now ready for production, says Google engineer Thomas Ezan, who further explores three dimensions that are essential for its successful deployment to production: abuse prevention, remote configuration, and responsible AI use.

The Vertex AI in Firebase SDK aims to facilitate the integration of Gemini AI into Android and iOS apps by providing idiomatic APIs, security against unauthorized use, and integration with other Firebase services. By integrating Gemini AI, developers can build AI features into their apps, including AI chat experiences, AI-powered optimization and automation, and more.

A few apps are already using the SDK, explains Ezan, including Meal Planner, which creates original meal plans using AI; the journal app Life, which aims to be an AI diary assistant able to convert conversations into journal entries; and the hiking app HiiKER.

Although using an AI service may seem easy, it comes with a few critical responsibilities, namely implementing robust security measures to prevent unauthorized access and misuse, preparing for the quick evolution of Gemini models by using remote configuration, and using AI responsibly.

To ensure your app is protected against unauthorized access and misuse, Google provides Firebase App Check:

Firebase App Check helps protect backend resources (like Vertex AI in Firebase, Cloud Functions for Firebase, or even your own custom backend) from abuse. It does this by attesting that incoming traffic is coming from your authentic app running on an authentic and untampered Android device.

The App Check server verifies the attestation using parameters registered with the app and then returns a token with an expiration time. The client caches the token to use it with subsequent requests. In case a request is received without an attestation token, it is rejected.

Remote configuration can be useful to handle model evolution as well as other parameters that may need to be updated at any time, such as maximum tokens, temperature, safety settings, system instructions, and prompt data. Other important cases where you will want to parametrize your app’s behavior are setting the model location closer to the users, A/B testing system prompts and other model parameters, enabling and disabling AI-related features, and so on.

Another key practice highlighted by Ezan is user feedback collection to evaluate user impact:

As you roll out your AI-enabled feature to production, it’s critical to build feedback mechanisms into your product and allow users to easily signal whether the AI output was helpful, accurate, or relevant.

Examples of this include thumbs-up and thumbs-down buttons and detailed feedback forms in your app UI.

Last but not least, says Ezan, there is responsibility, which means you should be transparent about AI-based features, ensure your users’ data is not used by Google to train its models, and highlight the possibility of unexpected behavior.

All in all, the Vertex AI in Firebase SDK provides an easy road into creating AI-powered mobile apps without developers having to deal with the complexity of Google Cloud or switch to a different programming language to implement an AI backend. However, the Vertex AI in Firebase SDK does not support more advanced use cases, such as streaming, and has a simplified API that is close to direct LLM calls. This makes it less flexible out-of-the-box to build agents, chatbots, or automation. If you need to support streaming or more complex interactions, you can consider using Google GenKit, which additionally offers a free tier for testing purposes.



Cloudflare Open Sources Documentation and Adopts Astro for Better Scalability

MMS Founder
MMS Renato Losio

Article originally posted on InfoQ. Visit InfoQ

Cloudflare recently published an article detailing their upgrade of developer documentation by migrating from Hugo to the Astro ecosystem. All Cloudflare documentation is open source on GitHub, with opportunities for community contributions.

The developers.cloudflare.com site was previously consolidated from a collection of Workers Sites into a single Cloudflare Pages instance. The process used tools like Hugo and Gatsby to convert thousands of Markdown pages into HTML, CSS, and JavaScript. Kim Jeske, head of product content at Cloudflare, Kian Newman-Hazel, document platform engineer at Cloudflare, and Kody Jackson, technical writing manager at Cloudflare, explain the reasons behind the change in the web framework:

While the Cloudflare content team has scaled to deliver documentation alongside product launches, the open source documentation site itself was not scaling well. developers.cloudflare.com had outgrown the workflow for contributors, plus we were missing out on all the neat stuff created by developers in the community.

In 2021, Cloudflare adopted a “content like a product” strategy, emphasizing the need for world-class content that anticipates user needs and supports the creation of accessible products. Jeske, Newman-Hazel, and Jackson write:

Open source documentation empowers the developer community because it allows anyone, anywhere, to contribute content. By making both the content and the framework of the documentation site publicly accessible, we provide developers with the opportunity to not only improve the material itself but also understand and engage with the processes that govern how the documentation is built, approved, and maintained.

According to the team, Astro’s documentation theme, Starlight, was a key factor in the decision to migrate the documentation site: the theme offers powerful component overrides and a plugin system to utilize built-in components and base styling. Jeremy Daly, director of research at CloudZero, comments:

Cloudflare has open sourced all their developer documentation and migrated from Hugo to Astro, with the JavaScript ecosystem claiming another victim. No matter how good your documentation is, user feedback is essential to keeping it up-to-date and accessible to all.

According to the Cloudflare team, keeping all documentation open source allows the company to stay connected with the community and quickly implement feedback, a strategy not commonly shared by other hyperscalers. As previously reported on InfoQ, AWS shifted its approach after maintaining most of its documentation as open source for five years. In 2023, the cloud provider retired all public GitHub documentation, citing the challenge of keeping it aligned with internal versions and the manual effort required to sync with GitHub repositories. Jeff Barr, chief evangelist at AWS, wrote at the time:

The overhead was very high and actually consumed precious time that could have been put to use in ways that more directly improved the quality of the documentation.

Gianluca Arbezzano, software engineer at Mathi, highlights the significance of the topic:

If you are thinking: “it is just documentation”, I think you should care a bit more! We deserve only the best! Nice article from Cloudflare about their migration from Hugo to Astro.

Commenting on the Cloudflare article on Hacker News, Alex Hovhannisyan cautions:

I’m sorry but I have to be honest as someone who recently migrated from Netlify (and is considering moving back): the documentation is not very good, and your tech stack has nothing to do with it. End users don’t care what tech stack you use for your docs.

All Cloudflare documentation is available at developers.cloudflare.com.



Surprising Stock Swings: Missed Opportunities and Bold Moves Shake the Market – elblog.pl

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

  • Momentum stocks experienced setbacks, with the S&P 500 dropping as companies like AppLovin and Palantir faced losses.
  • China’s economic policies spark mixed reactions; success could boost companies like Danaher and GE Healthcare.
  • DuPont’s plan to split into three entities signals potential growth and innovation.
  • Oracle’s report is expected to impact AI stocks, with investor focus also on MongoDB and Toll Brothers.
  • CNBC Investing Club offered strategic trade alerts for smarter market navigation.
  • Investors are advised to remain cautious, balancing opportunities in AI and restructuring with global uncertainties.

In a week filled with dynamic market shifts, investors watched as momentum stocks hit unexpected roadblocks. The S&P 500 dipped as major players like AppLovin tumbled over 11%, narrowly missing entry into the esteemed index. Traders who speculated on index-related profits found themselves in disappointment, while stocks like Palantir saw notable declines despite initial premarket strength.

Across the globe, all eyes were on China’s economic pulse. The government’s promises of a ‘moderately loose’ monetary policy paired with a ‘proactive’ fiscal stance brought a mix of hope and skepticism. If China successfully rolls out these policies, companies like Danaher and GE Healthcare could see a resurgence in demand, but history urges cautious optimism.

Domestic markets buzzed with DuPont’s bold announcement to split into three distinct entities, shedding light on potentially untapped value. This strategic maneuver hints at a future of sharper focus and innovation, igniting investor interest.

On the tech front, Oracle’s upcoming performance report carries the potential to stir excitement in AI stocks once more. Investors are keenly awaiting the outcomes from other big names like MongoDB and Toll Brothers, poised to sway market sentiment with their insights.

Adding a strategic edge, CNBC Investing Club members received preemptive trade alerts, offering them a chance to navigate the market landscape smarter and faster than the average investor.

As market trends unfold, a conservative approach is advised. While intriguing opportunities in AI and company restructurings like DuPont’s are promising, balancing optimism with vigilance in light of global uncertainties is crucial. Stay informed and ahead by continually engaging with reputable financial news.

The Invisible Forces Shaping Market Waves: What You Need to Know Now

Navigating Market Volatility: Key Insights and Strategies

In recent market developments, investors faced unexpected challenges as momentum stocks hit unforeseen obstacles. The S&P 500 experienced a dip, primarily due to significant losses by companies like AppLovin, which fell over 11% and narrowly missed inclusion in this prestigious index. Speculators banking on index-driven gains faced disappointments, particularly as stocks such as Palantir slipped despite their promising premarket performance.

Concurrently, global attention was riveted on China’s economic strategies. The government’s commitment to a ‘moderately loose’ monetary policy combined with a ‘proactive’ fiscal approach sparked both optimism and skepticism. Should China effectively implement these policies, there may be renewed demand for firms like Danaher and GE Healthcare. However, caution is advised given the potential risks associated with such economic shifts.

Domestically, DuPont made a strategic move by announcing its plan to split into three separate entities. This could unlock previously untapped value, offering a future rife with focused innovation and drawing significant interest from investors.

On the tech front, Oracle’s upcoming performance report is set to potentially reignite interest in AI stocks. There’s keen anticipation around results from MongoDB and Toll Brothers, as these could significantly influence market sentiment.

In a proactive measure, CNBC Investing Club members received early trade alerts, enabling them to maneuver the market landscape more effectively than the average investor.

Given these trends, a conservative investment approach is recommended. The allure of opportunities within AI and strategic company restructurings, like DuPont’s, is clear. However, it’s essential to balance optimism with cautious vigilance amidst global uncertainties, ensuring informed and strategic investment decisions.

Key Questions Answered

1. What are the implications of China’s economic policies for global markets?

China’s pursuit of a ‘moderately loose’ monetary policy paired with a ‘proactive’ fiscal stance is designed to stimulate its domestic economy. Should these policies succeed, they may bolster demand for both international and local companies, particularly in the healthcare and technology sectors. However, past experiences suggest that investors should remain cautiously optimistic to mitigate potential risks.

2. How might DuPont’s restructuring impact investor strategies?

DuPont’s decision to divide into three distinct entities is aimed at creating more specialized and agile business units. This restructuring could unlock hidden value, increase efficiency, and inspire innovation across each unit. For investors, this could mean accessing more focused investment opportunities within DuPont’s spectrum, potentially leading to higher returns if executed successfully.

3. What role does Oracle play in the AI stock market?

Oracle’s performance and developments in AI technology are closely watched by the investment community. The company’s upcoming performance report could act as a catalyst for renewed interest in AI stocks, influencing market trends. Oracle’s strategies and earnings could signal the broader trajectory of AI investments, affecting how investors view potential opportunities and risks in this rapidly evolving sector.

Related Links

– CBC Investment News: Stay updated with business news and analysis.
– DuPont: Learn more about DuPont’s strategic initiatives and corporate developments.
– Oracle: Discover Oracle’s latest advancements and performance insights in the tech industry.

For investors, the ability to adapt and thrive amidst these changes hinges on staying informed and diligently scrutinizing each potential investment avenue. Balancing optimism with caution remains critical as market forces continue to evolve.

Article originally posted on mongodb google news. Visit mongodb google news



PayTo Goes Live on Amazon Australia, Thanks to Banked and NAB Collaboration

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Banked, a global provider of Pay by Bank solutions, has partnered with National Australia Bank (NAB) to launch the PayTo payment option at Amazon’s Australian store.

The initiative aligns with the global rise in account-to-account (A2A) payment transactions, projected to reach 186 billion by 2029 due to their lower costs, enhanced security and better user experience.

Their collaboration is set to raise the profile of Pay by Bank in Australia, using Amazon’s platform to familiarise consumers with this payment method. Customers can now make direct bank-to-bank transactions when shopping on Amazon.com.au, offering a payment experience without the need for card details.

Brad Goodall, CEO of Banked, commented: “Enabling Amazon and NAB to launch PayTo in Australia is a huge step in cementing our position as a truly global Pay by Bank platform. Australia is an important market for us and we have worked closely with NAB to ensure Amazon’s PayTo sets a worldwide benchmark for account-to-account payments at scale.

“As more consumers become aware and familiarise themselves with the Pay by Bank experience through major brands like Amazon, we will see a snowball effect of uptake. This announcement today between NAB and Amazon will leapfrog Australia into a commanding position as an account-to-account payments global leader.”

Using ‘PayTo’

Customers shopping on Amazon.com.au now have the option to use ‘PayTo’ for Pay by Bank transactions directly from their bank accounts. This method bypasses the need for card details, aiming to enhance transaction security and user control. The ‘PayTo’ feature also allows for both visibility and control by enabling secure authorisation of transactions through the customer’s online banking platform.

Once set up as a payment method in their online banking, customers can initiate either one-off or recurring payments directly from their bank account with a single click, processed in real time.

Jon Adams, NAB executive, enterprise payments, also said: “It has been a pleasure working with the Banked team on this implementation. They understand tier one merchants and their global insight and experience puts NAB in a great position to provide the scale, security and customer experience that consumers and merchants like Amazon demand from their payment experiences.”

The Amazon launch caps Banked’s recent expansion in Australia through a partnership with NAB, aimed at boosting account-to-account payments for local merchants. This move also follows Banked’s acquisition of the Australian payment firm Waave, and precedes a strategic partnership with Chemist Warehouse to enhance the Pay by Bank experience by 2025.

Article originally posted on mongodb google news. Visit mongodb google news
