Cloud security firm Wiz uncovered an unprotected DeepSeek database that gave full control over database operations and access to internal data, including millions of lines of chat logs. While the vulnerability was quickly fixed, the incident shows the need for the AI industry to enforce higher security standards, the company says.
As Wiz security researcher Gal Nagli explains, Wiz Research found a ClickHouse database linked to DeepSeek that was publicly accessible at oauth2callback.deepseek.com:9000 and dev.deepseek.com:9000. The database required no authentication, exposing a large quantity of data including chat history, backend data, log streams, API secrets, and operational details.
More worryingly, says Nagli, the database exposure allowed an attacker to take full control of the database and to gain higher-privilege access to parts of the DeepSeek environment.
To discover the exposed database, Nagli carried out a straightforward reconnaissance of DeepSeek’s internet-facing services. This led to the discovery of about 30 accessible domains, including admin.deepseek.com, dev.deepseek.com, and others. He then scanned them for open ports and discovered the unauthenticated ClickHouse instance.
Nagli was able to run arbitrary SQL queries through ClickHouse’s web UI, gaining access to a log_stream table containing large amounts of sensitive data. Depending on the ClickHouse instance’s configuration, an attacker could also exfiltrate plaintext passwords and local files using queries like SELECT * FROM file('filename').
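For illustration, here is a minimal sketch of the kind of access an unauthenticated ClickHouse instance permits, using the clickhouse-driver Python client over the native protocol on port 9000; the hostname and table name are placeholders, not the actual DeepSeek endpoints.

```python
# Illustrative sketch only: how an unauthenticated ClickHouse instance can be
# queried over its native protocol. Hostname and table are placeholders, not the
# real DeepSeek endpoints; never run this against systems you do not own.
from clickhouse_driver import Client  # pip install clickhouse-driver

client = Client(host="exposed-host.example.com", port=9000)  # no credentials needed

print(client.execute("SHOW TABLES"))                        # enumerate tables
rows = client.execute("SELECT * FROM log_stream LIMIT 10")   # sample a log table
for row in rows:
    print(row)
```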
As mentioned, DeepSeek quickly fixed the vulnerability upon disclosure by restricting public access and taking the database off the internet. The company has not yet commented on the root cause of the issue. Given that ClickHouse defaults to not allowing external connections, as ClickHouse’s CEO clarified in a message on X, another X user suggested it cannot be ruled out that the Chinese company was the [target of a DoS attack that led to the database configuration being hacked](https://x.com/wu89_j/status/1884885678174986419).
Besides the obvious privacy and security risk of enabling public, unauthorized access to a database, Nagli hints at the larger implications of trusting sensitive and confidential data to AI companies:
As organizations rush to adopt AI tools and services from a growing number of startups and providers, it’s essential to remember that by doing so, we’re entrusting these companies with sensitive data. The rapid pace of adoption often leads to overlooking security, but protecting customer data must remain the top priority.
DeepSeek is a Chinese startup that has recently received huge attention thanks to its DeepSeek-V3 mixture-of-experts LLM and DeepSeek-R1 reasoning model, which rivals OpenAI’s o1 in performance but with a much smaller footprint.
ClickHouse is an open-source database management system designed for fast analytical queries on large datasets. Developed by Yandex, it is used for real-time data processing, log storage, and big data analytics.
About the Author
Sergio De Simone
Aerospike Inc., which sells a commercial version of its namesake open-source, scalable, real-time NoSQL database management system, today unveiled a new version that it said is the first real-time distributed database to guarantee strict serializability of ACID transactions at a small fraction of the cost of competing systems.
Strict serializability is the strongest consistency guarantee for ACID transactions, which combine atomicity (all-or-nothing transactions), consistency, isolation and durability. It ensures that transactions appear to execute instantaneously in a total order while also respecting real-time ordering, meaning that if one transaction completes before another starts, the second transaction must see the effects of the first.
That capability is considered critical in high-volume online transaction processing scenarios. For example, if a shopper’s purchase maxes out a credit card, the credit approval application needs to be aware of that fact before authorizing a second transaction.
“Traditional databases have done this for over 30 years,” said Srini Srinivasan, Aerospike’s founder and chief technology officer. “NoSQL databases relaxed consistency for the sake of performance. Algorithms from the old days didn’t focus on performance but on getting things done right.”
Aerospike “leveraged our ability to use the latest technologies in hardware and networking and invented algorithms that enable us to provide strict serializability,” Srinivasan said. “Many are patented.”
Finding balance
Srinivasan said consistency takes a toll on performance, so NoSQL database makers have worked for years to achieve a balance. With this release, he said, “we have preserved performance while adding consistency.”
Many industries need strict serializability. Telecommunications firms, for example, typically have multiple accounts for each customer based on the services provided. “There are regulations that require these companies to change information on multiple accounts at the same time in the same database while serving customers in real time,” Srinivasan said. “We have customers who run tens of millions of transactions per second. This will let them adopt new use cases without inhibiting performance.”
Aerospike achieved consistency for single-record requests across millions of transactions per second with sub-millisecond latency in 2018. The Database 8 release expands data consistency guarantees to distributed multi-object transactions with low latency, even while serving massive concurrent connections, the company said.
“There are a number of other systems that provide strong consistency, but we expect our performance to be significantly higher,” Srinivasan said, citing MongoDB and CockroachDB as examples. He said Aerospike intends to publish performance figures that prove that assertion but isn’t doing so right now.
Performance tradeoff
He acknowledged that guaranteeing strict serializability across multiple transactions extracts a performance penalty. “Every time you change a record, you create a side note in a transaction monitor that maintains the state of the transaction, so there’s overhead,” he said. “We’ve minimized it, but our multi-record transactions will be somewhat slower than single-record.”
Aerospike simplifies OLTP development by moving the tasks of building and maintaining transaction management logic from the application to the database so developers can access the functionality via application program interfaces. Spring developers can immediately write to the same Spring Data transaction APIs they already use, and Java developers can code to the standard Spring Framework transaction management APIs without knowing anything about Aerospike internal processes.
Aerospike’s multimodel database engine supports document, key-value, graph and vector data types to allow developers to choose the best data model for each use case. The company has raised $241 million in funding, including a $109 million round last spring.
The latest Aerospike database update prioritizes performance efficiency.
Version 8 of Aerospike’s platform, generally available on Wednesday, adds support for online transaction processing (OLTP) applications with distributed ACID (atomicity, consistency, isolation and durability) transactions, also known as multi-object ACID transactions.
OLTP applications are frequently used in industries such as banking, e-commerce and healthcare, in which transactions are common. However, as transaction volumes increase, some data management systems fail to meet workload needs, and their performance decreases as demand scales.
Distributed ACID transactions are a means of improving data management systems’ performance.
Atomicity guarantees that each transaction either succeeds or fails completely. Consistency ensures that all data is stored accurately and uniformly. Isolation lets transactions run concurrently without interfering with one another. Durability means that changes made by a transaction are permanent and won’t be lost even if there’s a system failure while a workload is running.
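To make the all-or-nothing idea concrete, here is a generic sketch using Python’s built-in sqlite3 module rather than Aerospike’s own client: a transfer that touches two account records either commits both updates or neither.

```python
# Generic illustration of multi-record atomicity using Python's built-in sqlite3;
# this is not Aerospike's API, just the all-or-nothing behavior described above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 50)])
conn.commit()

try:
    with conn:  # opens a transaction: commits on success, rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance - 180 WHERE id = 'alice'")
        (balance,) = conn.execute(
            "SELECT balance FROM accounts WHERE id = 'alice'").fetchone()
        if balance < 0:
            raise ValueError("insufficient funds")  # aborts the whole transaction
        conn.execute("UPDATE accounts SET balance = balance + 180 WHERE id = 'bob'")
except ValueError:
    pass  # neither record was modified

print(dict(conn.execute("SELECT id, balance FROM accounts")))  # {'alice': 100, 'bob': 50}
```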
Given that multi-object ACID transactions aim to improve database efficiency, which in turn helps reduce the compute power required to run workloads and the costs associated with cloud consumption, Aerospike’s update is significant, according to Stephen Catanzano, an analyst at Enterprise Strategy Group, now part of Omdia.
“The update … means [Aerospike] can guarantee the highest level of data consistency while maintaining high performance and scalability,” he said.
However, while new for Aerospike, distributed ACID transactions are not new, nor is Aerospike the first database vendor to provide such capabilities. For example, competitors MongoDB and Couchbase added support for distributed ACID transactions more than seven years ago.
Based in Mountain View, Calif., Aerospike provides a multimodel database that supports document, key-value, graph and vector data types. In December, Aerospike upgraded its vector search and storage capabilities. Earlier in the year, the vendor raised $114 million in venture capital funding to finance development of capabilities that help customers develop generative AI tools.
New capabilities
Because of the volume of data needed to train AI models and applications, surging interest in generative AI development has illuminated the need for efficiency when running data workloads.
While Aerospike’s update focuses on transaction processing rather than AI, the problems of performance and resulting costs at scale are similar. Just as AI requires large amounts of data to deliver accurate outputs, transactional data workloads now contain volumes beyond what traditional databases were built to handle, according to Srini Srinivasan, Aerospike’s founder and CTO.
“Performance and efficiency have always been critical, but as applications scale, high-speed processing and strong consistency have become more pressing,” he said.
To meet those needs, Aerospike and other databases need to evolve to provide both consistency and speed, Srinivasan continued.
“The ability to maintain speed while adding strong consistency ensures businesses can operate at scale without sacrificing data integrity,” he said.
Focusing on performance and data consistency is not new for database vendors such as Aerospike. In testing, Aerospike’s database has historically handled millions of single-record requests per second, whether volumes reach gigabytes or petabytes of data, according to the vendor.
But single-record requests — a database operation that interacts with one specific record — are no longer enough because of the growing data needs of customers with large transaction workloads.
Multi-object ACID transactions — transactions that simultaneously access or manipulate data related to multiple records — meet those needs, according to Catanzano. Unlike single-record requests, multi-object ACID transactions provide the atomicity and data consistency across large data workloads that are now needed.
“Multi-object transactions are … critical for applications where multiple data points need to be updated all at once to prevent errors and maintain data integrity,” Catanzano said.
In addition, multi-object ACID transactions are noteworthy because they enable transaction management at the database level, where data is easier to configure than at the application level, where transaction management otherwise takes place, he continued.
Atomicity and data consistency are particularly critical to industries such as finance, gaming and fantasy sports that have high transaction volumes, according to Srinivasan. It is imperative in such industries that account balances are correct and each transaction is accurately processed without losing any data.
“Distributed ACID transactions [enable] companies to build more sophisticated, reliable applications without burdening developers to manage consistency at the application level,” Srinivasan said.
Regarding the impetus for adding distributed ACID transactions to support OLTP applications, customer demand played a significant part, according to Srinivasan.
In particular, reducing complexity and susceptibility to errors by managing transactions at the database level rather than the application level pushed Aerospike to update its platform.
“It was driven by the needs of large-scale applications and customer demand across sectors,” Srinivasan said. “[The update] makes it easier for organizations to scale efficiently and meet their operational needs by bringing this functionality directly into the database with performance intact.”
Aerospike’s latest platform update adds ACID transactions to better support OLTP applications.
Looking ahead
With Aerospike 8 now available, the vendor will continue to focus on improving the performance and scale of its database, according to Srinivasan. In addition, ensuring that cloud deployments perform properly across the major clouds — AWS, Alibaba, Google Cloud and Microsoft Azure — is a priority.
Lastly, as more enterprises invest in developing AI applications to detect fraud, provide recommendations and perform other tasks, Aerospike intends to add capabilities for powering AI and machine learning workloads, according to Srinivasan.
“These priorities highlight our commitment to scaling our technology and business for the next phase of growth,” he said.
Catanzano, meanwhile, suggested that a focus on ease of use and expanding its multimodel database capabilities to include more data models would be logical next steps for Aerospike. In addition, more transparency around Aerospike’s performance relative to its competitors could attract more customers.
“Providing clear benchmarks and comparisons against specific competitors would strengthen their market position,” Catanzano said.
Eric Avidon is a senior news writer for Informa TechTarget and a journalist with more than 25 years of experience. He covers analytics and data management.
On CNBC’s “Mad Money Lightning Round,” Jim Cramer recommended buying AbbVie Inc.ABBV, saying it’s “just a gem” and a “winner.” He added, “I can’t believe the stock dropped so much.”
On Jan. 31, AbbVie reported fourth-quarter adjusted earnings per share of $2.16, beating the street view of $2.11. Quarterly sales came in at $15.102 billion, outpacing the analyst consensus estimate of $14.827 billion. Several analysts also raised their price targets on the stock following the earnings announcement.
Cramer Shares His Views On These Tech Stocks
MongoDB MDB is a trade and not an investment, Cramer said. “You’d be catching it at the right time. I think the analysts are all starting to upgrade the, the enterprise software again. I think it’s worth a stab, I really do.”
In recent news, MongoDB has partnered with Lombard Odier to modernize core banking technology with generative AI.
The Mad Money host recommended pulling the trigger on Cerence Inc.CRNC.
On the earnings front, Cerence is scheduled to announce its first quarter financial results on Thursday, Feb. 6.
Other Stocks In Focus
Cramer recommended buying Bitcoin BTC/USD instead of going with Coinbase Global, Inc. COIN.
Supporting his view, Keefe, Bruyette & Woods analyst Kyle Voigt, on Jan. 13, maintained Coinbase with a Market Perform and lowered the price target from $275 to $255.
Stryker SYK is the better one, Cramer said when asked about Tempus AI, Inc. TEM. “If you want AI, then you go with Medtronic MDT.”
Supporting that view, Loop Capital analyst Mark Schappel, on Jan. 14, maintained Tempus AI with a Buy and lowered the price target from $57 to $52.
Price Action:
MongoDB shares gained 2.7% to settle at $275.97 on Tuesday.
Tempus shares gained 0.5% to close at $61.85 during the session.
Coinbase shares fell 1.4% to settle at $280.39.
AbbVie shares fell 0.1% to close at $189.95 on Tuesday.
Cerence shares gained 6.2% to settle at $12.07.
Montani: I’ll be talking about taking large language models out of the black box, and a few practical tips that hopefully you can apply in your work today. Some of you might know me from my work on spaCy, which is an open-source library for natural language processing in Python. spaCy was really designed from day one to be used in real products and be used in production. That also means we had to do a lot of the boring software development stuff like backwards compatibility, making sure we don’t break people’s code.
This actually had a pretty nice unintended side effect more recently, which is that ChatGPT is actually really good at writing spaCy code. You can try that out if you want. We also developed Prodigy, which is a modern annotation tool that machine learning developers use to create training and evaluation data for their models. Prodigy is fully scriptable in Python, so it allows for a lot of nice, semi-automated workflows, including, actually, a lot of the ideas that I’ll be talking about in my talk.
Software in Industry
As an industry developing software, we have really built up a lot of best practices over the years and a lot of ideas of how we want our software to behave and what’s good software development. One of those is, we want our software to be modular. We want to have components that we can work on independently. We want to have building blocks that we can work with. We also want our tools and our software to be transparent. We want to understand what’s going on, be able to look under the hood, and also debug if something is going wrong. That also means we want to be able to explain to others why our system is doing something, or, in the case of a machine learning model, why we get a specific result.
Many solutions also need to be data private. Who works in a field where data privacy is important? That’s an important topic. We want systems to be reliable, not just randomly go down. Whatever we build also needs to be affordable, whatever that means in a specific context. Maybe you’re working on a budget, maybe you have no money. If we’re building something, it needs to fit. This introduces a lot of problems, if we’re looking at new ways and new technologies. If we’re working with black box models, we can’t really look under the hood, and we can’t really understand what’s going on. For a lot of them, if we’re using an API, we don’t even really know how they work. A lot of that is not public.
We have these big monoliths that do a whole thing at once. That can be very challenging. Because a lot of the newer models are very large, which is also what makes them so good, it’s not really efficient or sustainable to run them yourself in-house, which means we need to consume them via a third-party API. That also means we need to send our data to someone else’s server. We’re at the mercy of an API provider. If it goes down, it goes down. If it’s slow, that’s how it is. The costs can also really add up if you’re using a model via an API at runtime. How can we fix this, and how can we still use all these exciting latest technologies that are really good, while also maintaining the best practices that we’ve built over years of developing software. That’s what I’m going to cover in my talk.
To maybe start off with a practical example, here’s something that’s maybe familiar to you and maybe similar if you’re working in NLP, to something that you were tasked with in your job. Let’s say you work for an electronics retailer or a company producing phones, and you’re getting lots of reviews in from customers, and you’ve collected them all, and now you want to analyze what people are saying about your product. First, what you want to do is you want to find mentions of your products in these reviews and find those that are relevant. You also want to link them to your catalog that you already have, where you have all meta information about your products. Then you want to extract what people are saying, so you have different categories, like battery, camera, performance, design, and you want to extract whether people like these or not, which is also often referred to as aspect-oriented sentiment analysis.
Then, finally, if you have all that information, you want to add those results to a database, maybe the database that also has your catalog. As you can see, there are some parts of this that are actually quite straightforward. We have “battery life is incredible.” That’s easy. You can definitely extract that. There are other parts that aren’t necessarily so straightforward, because it’s language, and language can often be vague. Here, the reviewer says, never had to carry a power bank before, and now I need it all the time. From the context that we have about the language and the world, we know that this means that the battery life is bad. It’s not explicit in the text. That’s really a use case that benefits from machine learning. There are a lot of larger models these days that are really good at that, because they’re trained on such a vast volume of text, and they’re really good at understanding this type of context in the text.
Using in-context learning, you could basically prompt the model. You can provide it examples of your text, and it can respond with an answer, for example, of the sentiment you’re interested in. Of course, if you think about what you actually need here, you have a specific use case, and you really only need a small subset of what the model is able to do. You really need the context. You’re not interested in talking to it about the weather. There’s really a very specific thing that you want. The question is, what if we could just take that out? What if we can extract only the part that we’re interested in into a model that can also then be much smaller and much more specific, because all we want it to do is predict those sentiment categories.
While there has been a lot of talk about in-context learning, because that’s new, and it’s also what a lot of the research focuses on, it doesn’t mean that transfer learning is somehow outdated or has been replaced. It’s simply a different technique. You might be familiar with models like BERT and their embeddings and all kinds of different local variants, like CamemBERT, and these embeddings encode a lot of very important and very relevant contextual information that you can initialize your model with, and add a task-specific network on top. The thing is, if we’re looking at the research, we’ll actually see that even the most classic BERT, base, is still very competitive and achieves very good results compared to zero-shot or few-shot in-context learning. What’s also important to keep in mind here is that these are all academic benchmarks. These are calculated based on datasets we can’t control. That’s the idea of a benchmark.
If we’re already getting these very promising and good results using benchmark datasets, we’ll probably be able to achieve even better results if we have control over the data which we have in our use case. What we want to do is, if we want our large generative model to do a specific thing, we start out with our text, and we start out with the prompt based on a prompt template specific to a thing we want to extract. Then we pass that to the model. What we get back is raw output, usually in the form of an answer, depending on the prompt we gave it.
Then we can use a corresponding parser that corresponds to the prompt template in order to parse out the task specific output. In our case, we’re not after a conversation. We’re after structured data that expresses the categories that we’ve already defined. What we also want to do is we don’t just want to match the large language model’s performance. Ideally, we want to do it even better, because we have the capability to do that. If the model makes mistakes, we want to correct them. What we can do is we can pass our task output to an annotation workflow and really create a dataset that’s very specific to what we’re looking to extract. Using transfer learning, we can use that to distill a task-specific model that only performs the one task we’re interested in, using what we already have in the weights of the large generative model.
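As a minimal sketch of this prompt-template-plus-parser step, here is what it might look like for the phone-review aspects from the earlier example; `call_llm` stands in for whichever generative model or API is actually used.

```python
# Sketch of the prompt-template + parser step described above. `call_llm` is a
# placeholder for whichever generative model or API you use; only the template
# and the parser are the point here.
ASPECTS = ["battery", "camera", "performance", "design"]

PROMPT_TEMPLATE = """You are labelling a product review.
For each aspect in {aspects}, answer POSITIVE, NEGATIVE or NONE.
Reply with one line per aspect in the form "aspect: answer".

Review: {text}"""


def build_prompt(text: str) -> str:
    return PROMPT_TEMPLATE.format(aspects=", ".join(ASPECTS), text=text)


def parse_response(raw: str) -> dict:
    """Turn the model's raw text answer into structured task output."""
    result = {aspect: None for aspect in ASPECTS}
    for line in raw.splitlines():
        if ":" not in line:
            continue
        aspect, _, answer = line.partition(":")
        aspect, answer = aspect.strip().lower(), answer.strip().upper()
        if aspect in result and answer in ("POSITIVE", "NEGATIVE"):
            result[aspect] = answer == "POSITIVE"
    return result


# raw = call_llm(build_prompt("Never had to carry a power bank before, now I need it all the time."))
# parse_response(raw) -> {"battery": False, "camera": None, "performance": None, "design": None}
```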
Close the Gap Between Prototype and Production
This is both pretty exciting and very promising, but one thing we’ve definitely seen, if you get to work, is that a lot of projects these days, they really get stuck in this phase that I also call the prototype plateau. You start working. It’s all very exciting, and it’s working. When it comes to actually shipping your system that you’ve built, you realize that it doesn’t work. There are a lot of reasons for that, and also solutions that are actually really important to keep in mind before you start building. In order to close the gap between prototype and production, one important thing is you want to be standardizing your inputs and outputs. You want to have the same workflow during prototyping as you have during production.
If your prototype takes random human generated text that you type in and outputs a human readable text response, but your production system needs structured data, then you’re going to have a problem, and it might not actually translate so well. You also want to start with an evaluation, just like when you’re writing software, you’re writing tests. For a machine learning model, the equivalent is an evaluation. You want examples where you know the answer, and that you can check and so you actually know if your system is improving or not. That’s something that’s often glossed over.
A lot of people, if they’re excited about building something, might go by just a vibes-based evaluation. Does it feel good? If you actually want to assess how your model is performing, you want an evaluation with accuracy scores that you can compare. Of course, accuracy is not everything. Especially if you’re coming from a research background, you can be very focused on just optimizing those scores. A high accuracy is useless if the model isn’t actually doing what you want it to do and isn’t useful in your application. In an applied context, you don’t just want to optimize for a score, you also want to test: is it actually useful, whatever that means in your context? That also requires working on your data iteratively, just like with code.
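A minimal sketch of such an accuracy-based evaluation, assuming gold examples stored in the same structured format the parser above produces:

```python
# Minimal evaluation sketch: score any predictor (LLM-backed or distilled) against
# a small set of gold examples so changes can be compared by accuracy, not vibes.
def evaluate(predict, gold_examples):
    """gold_examples: list of (text, {aspect: True/False/None}) pairs."""
    correct = total = 0
    for text, gold in gold_examples:
        pred = predict(text)
        for aspect, gold_value in gold.items():
            total += 1
            correct += pred.get(aspect) == gold_value
    return correct / total if total else 0.0

# baseline = evaluate(llm_predict, dev_examples)        # few-shot LLM baseline
# distilled = evaluate(distilled_predict, dev_examples) # should beat the baseline
```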
Usually, the first idea you have is usually not what you ship to production, and the same goes for data. You want to have a workflow where you can quickly try things out, and ideally also tooling to help with that, so you don’t need to schedule large meetings and spend hours to try out every idea you have. Finally, we’re working with language here, and that’s really important to keep in mind. While as developers, we really like to fit things neatly into boxes, language doesn’t work that way. It’s usually vaguely gesturing at things. There’s a lot of ambiguity in language that we have to keep in mind, it’s not just data or it’s not just vectors.
On the other hand, we can also use that to our advantage. There’s a lot in the language that helps us express things and get our point across, and that generalizes across language very well. If we could identify these parts, we can actually use that to our advantage in the application, and make the problem easier for our model. These are also things that we really thought about a lot when developing our tools. Because I think if you’re building developer tools, that’s one of the problems you want to address. How can we make it easier for people to standardize workflows between prototype and production, and actually ship things and not just get stuck in the prototype plateau?
Here’s an example of a prototype we might build for an application. We have a large generative model, and what we can do, and something that we’ve actually built with spaCy LLM is, have a way to prompt the model and transform the output, and parse it into structured data. Even while you’re trying things out without any data required, you can use a large language model to create the structured data for you, and what you get out in the end is an object that contains that structured data. You can, of course, ship this to production the way it is, but you can also work on replacing the large generative model at development time so that at runtime you end up with distilled task-specific components that perform only the parts that you’re interested in, and that are fully modular and also transparent, and usually much smaller and faster as well. The output in that case is also the same. You’re also getting this structured machine facing object that you can standardize on.
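The standardization point can be sketched with plain spaCy: downstream code only ever reads `doc.cats`, whether the categories came from an LLM-backed prototype component or from a distilled text classifier. The pipeline name and labels below are placeholders.

```python
# Sketch: the downstream code reads the same structured object (doc.cats) whether
# the categories were produced by an LLM-backed prototype component or by a small
# distilled text classifier. "reviews_textcat" is a placeholder pipeline name.
import spacy

nlp = spacy.load("reviews_textcat")  # distilled, task-specific pipeline

doc = nlp("Battery life is incredible, but the camera is a letdown.")
for label, score in doc.cats.items():   # e.g. {"battery_positive": 0.97, ...}
    print(label, round(score, 2))
```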
Human in the Loop
As I said previously, of course we don’t just want to match what the large generative model is doing. We actually want to make it better. We want to correct its mistakes. For that, we need a human at some point in the loop. That’s a very important step here. To give you an example how that works, we start off with a model and all the weights it has available. As a first step, as I mentioned before, we want to have a continuous evaluation. We need a way to figure out our baseline. What are we up against? What’s the performance we get out of the box without doing anything?
Otherwise, you’ll have no idea whether what you’re doing actually makes a difference or not. Now we can use all the weights we have available in that model and prompt it, and it will return whatever data we ask for, using everything that it has available. We can pipe that forward into an annotation environment where we can look at just the exact structured data and make corrections and very quickly move through that data and create a dataset that’s really specific to the task, like the aspect-oriented sentiment predictions, for instance. With transfer learning, create a component that only performs that. Of course, here, it comes in handy that we have our evaluation because we want to do that until our distilled model beats and ideally also exceeds that baseline. I’ll show you some examples of this later, but you might be surprised how easily you can actually do this and apply this yourself.
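Assuming corrected annotations in the structured format above, turning them into training data for a distilled spaCy text classifier takes only a few lines; the label names and file paths are placeholders.

```python
# Sketch: turn corrected annotations into spaCy training data for a distilled
# text classifier. Label names and file paths are placeholders.
import spacy
from spacy.tokens import DocBin

nlp = spacy.blank("en")
db = DocBin()

annotations = [
    ("Battery life is incredible.", {"battery_positive": 1.0, "battery_negative": 0.0}),
    ("Now I need a power bank all the time.", {"battery_positive": 0.0, "battery_negative": 1.0}),
]
for text, cats in annotations:
    doc = nlp.make_doc(text)
    doc.cats = cats       # attach the corrected labels
    db.add(doc)

db.to_disk("./train.spacy")
# Then train the task-specific component, e.g.:
#   python -m spacy train config.cfg --paths.train ./train.spacy --paths.dev ./dev.spacy
```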
First, how do we access our human? Going back to that practical example, we have one of these reviews of someone who rated our Nebula phone, “meh”. As an example, the type of structured data we’re after is something like this. For simplicity, for this example, I’ll only focus on assuming we have binary values for those categories. Of course, in some cases, you might want to define some other schema and have a scale of like, how much do people like the battery life, and so on? That’s the structured data, that’s our output. If we’re presenting that to the human, a naive approach would be, let’s just show the human the text, give the human the categories, and then let them correct it.
If you’re looking at this, you’ll see that this doesn’t actually capture these null values. We have no distinction here between a negative response, or no mention of that aspect at all. We can extend this a bit and collect whether it’s positive or negative, and have the large generative model make the selection for us. That means you can move through the examples very quickly, and all you have to do is correct the model if it makes a mistake. The big problem we have is that humans are humans, and have a lot of disadvantages. One of them is that humans actually have a cache too and a working memory. If you ask a human to constantly in their head iterate over your label scheme and every aspect that you’re interested in, humans are actually quite bad at that.
You’ll find that humans really lose focus, end up making mistakes, and humans are very bad at consistency. What you can do instead is you can help the human and the human cache and make multiple passes over the data, one per category and one per aspect. While it might seem like a lot more work at first, because you’re looking at the same example multiple times, and you’re collecting a lot more decisions, it can actually be much faster, because you reduce the cognitive load on the human. I’ll show you an example of this later, where a team actually managed to increase their speed by over 10 times by doing this. You have your human, you have a model that helps you create the data, and you’re collecting a task-specific dataset that doesn’t just match the few-shot or zero-shot baseline, but actually improves upon it.
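A minimal sketch of such per-aspect passes, generating one accept/reject question at a time; the dicts only mirror the kind of task stream an annotation tool like Prodigy consumes and are not its actual recipe API.

```python
# Sketch: instead of asking the annotator to juggle every aspect at once, make one
# pass per aspect and ask a single accept/reject question each time. The dicts are
# a generic illustration of an annotation task stream, not Prodigy's recipe API.
ASPECTS = ["battery", "camera", "performance", "design"]

def binary_passes(reviews):
    for aspect in ASPECTS:          # one full pass over the data per aspect
        for text in reviews:
            yield {
                "text": text,
                "label": f"{aspect.upper()}_POSITIVE",  # annotator answers accept/reject
            }

# for task in binary_passes(["Battery life is incredible.", "Camera is a letdown."]):
#     present_to_annotator(task)
```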
Case Studies
To give you some examples of how this works and how this can look in practice. This is the case study we did based on a workshop we held at PyData in New York. The task here was we want to stream in data from our cooking subreddit, and extract dishes, ingredients, and equipment from it. We did that together with the group, and also discussed the data while we were doing it. We used a GPT model during the annotation process to help create the data. In the workshop, we were actually able to beat the few-shot LLM baseline of 74%, which is actually pretty good out of the box without any training data. We beat that in the workshop and created a task-specific model that performed the same or even better, and that model also was more than 20 times faster.
If you look at the stats here, we have a model that’s 400 megabytes, which is pretty good. You can totally run that yourself, runs on your laptop, runs at over 2000 words per second, so really fast. The data development time, we calculated, how long would it have taken a single person to create all the data for it. That’s about eight hours. That’s one standard workday. If you think about other things you spend time on as part of your work, you probably spend more time trying to get CUDA installed or trying to get your GPU running. It’s really not true anymore that data development is like this absolutely tedious task, even a single developer can do this in a workday. That was very promising.
That also inspired the next case study we did, which was with a company called S&P Global. What they’re doing, in this project, is they’re extracting commodities trading data in real-time. If crude oil is traded somewhere, they extract the price, the participants, the location, and a wide range of other attributes, and they provide that as a structured feed to their customers in real time. Of course, this is information that can really significantly impact the economy and move markets. Their environment is a high security environment. I actually went to visit them in London a while ago, and even within their office, it’s very highly segregated.
They have this glass box that the analysts sit in, you can only access it with a specific card. It’s incredibly important that everything they do runs in-house, and no other third party gets to see it before it’s published, which also is like a promise of the data product. That’s why their customers are using it. What they did was they moved the dependency of the large language model to development and used that to create data for them. This plus some optimizations of how they actually present the questions to the human, including having simple, often binary questions and making multiple passes over the data, that actually made the whole process more than 10 times faster using a human and the model in a loop.
They currently have eight pipelines in production, probably even more by now. This was a very successful project. If you’re looking at the stats again, they’re achieving very high accuracy. The models are 6 megabytes per pipeline. If you’re letting that sink in, this is really tiny. You can train that on your laptop really easily. They run super-fast, at over 16,000 words per second, so they’re really a great fit for processing these insights in real time and as quickly as possible. Again, also data development time, that’s a single person, so in under two workdays, or with two people, you can create the data needed for a distilled task-specific pipeline in about a day. Totally doable, even if you don’t have that many resources.
Think of It as a Refactoring Process
How did they do it? What’s the secret? One of them is, if you’re thinking about developing AI solutions, they’re really code plus data, and so just like you refactor code, you can also refactor your data. Refactoring code is probably something you do all the time and are very familiar with. The same really applies to your data development process. There are different aspects of this. One big part of refactoring is breaking down a large problem into individual components, factoring out the different steps, and creating reusable functions. That’s something we’ve really accepted as a best practice, and it has a lot of advantages. You can do the same for your machine learning problem and your models. As part of that, the goal is you can make your problems easier.
Again, you do this with code a lot, trying to reduce the complexity, and you’re allowed to do that. Have an easier system, and make it easier for the model as well. One part of that is factoring out business logic, and separating logic that’s really specific to your application, from logic that’s general purpose and that maybe applies to any language and doesn’t need any external knowledge. I’ll show you an example of that later. Again, that’s something you do in your code already, and that works well. You can apply that same idea to your data process.
Part of refactoring is also reassessing dependencies. Do you need to pull in this massive library at runtime that you only use a function of, or can you replace that? Is there something you can compile during development time so you don’t need to use it at runtime? The same is true for machine learning models. Can you move the dependency on the really complex and expensive and maybe intransparent model to development, and have a much cleaner and operationally simpler production environment? Finally, choosing the best techniques, you decide how a specific problem is best solved, and you have this massive toolbox of skills and of techniques that are available, and you pick the one that’s the best fit for the task at hand.
Make the Problem Easier
One thing people really easily forget is that you are allowed to make your problem easier. This is not a competition. This is not academia. You’re allowed to reduce the operational complexity, because less operational complexity means that less can go wrong. When I started programming, I didn’t know very much. Of course, what I built was all pretty simple. Then as I got more experience, I learned about all of these new things, and of course, wanted to apply them. My code became a lot more complex. Also, if I’m looking back now, back then, I didn’t really write comments because it felt like a sign of weakness. If I found an especially complex solution to my problem, commenting meant that I’m admitting that this was hard, so I didn’t do that, which also makes it even harder to figure out what was going on and what I was thinking at the time.
Then, of course, with more experience my code also became much more straightforward, and I was able to pick the best techniques to get the job done and actually solve it most efficiently, instead of coming up with the most complex and interesting solution. I think it’s easy to forget this, because we are in a field that is heavily influenced by academia. In research, what you’re doing is you’re really building a Commons of Knowledge. You also want to compare the things you’re building using standard evaluations. If you’re comparing algorithms, everyone needs to evaluate them on the same thing, otherwise, we can’t compare them. You also standardize everything that’s not the novel thing that you are researching and publishing. Even if what you’re standardizing isn’t the best possible solution or isn’t efficient, it doesn’t matter. It needs to be standardized so you can focus on the novel thing you’re exploring.
On the other hand, if you’re building an application and working in applied NLP, what you’re doing is you’re basically learning from that Commons of Knowledge that was built by academia and provided, and basically pick what works best, and follow some of the latest ideas. You also align your evaluation to project goals. You’re not using benchmarks. Your evaluation basically needs to tell you, does this solve the problem, and is this useful in my product or project, or not? You also do whatever works. Whatever gets the job done, you can take advantage of. If that means it’s less operationally complex, then that’s great.
Factor Out Business Logic
One big part, as I said, of the refactoring process is separating out what’s business logic and what’s general-purpose logic. That can be quite tricky, and really requires engaging with your data and your problems. Here we have our SpacePhone review again. If we’re looking at that, we can basically break down the two different types of logic in this pseudocode formula. We have the classification task, which is our model that really predicts and processes the language itself. Then we have the business logic which is specific to our application and which can build on top of that.
To give you some examples here, general-purpose classification in our example would be stuff like, what are the products? There’s a model. What’s the model of this phone? Is it a phone? Are we comparing the phone to something else? That requires no outside context, and that’s really inherent to the language, and not our specific problem. Then, on the other hand, we have stuff like our catalog reference. That’s external. Nothing in the text tells us that. We also have things like, does it have a touch screen? Is it worse than the iPhone 13? The fact, is it the latest model? That is something that can change tomorrow. We have information that can really change over time.
While we can include that in the model and in the predictions we make, we’ll end up with a system that’s immediately outdated, that we constantly need to retrain, and a problem that’s a lot harder for the model to build some reasoning around, because we have nothing in the text that tells us that, whereas what we do have is we have our catalog reference, we have dates, we have things we can do math with. This process can be very powerful, but of course, it really is absolutely specific to your problem and requires engaging with it.
To give you an example of this idea in context, this is a most recent case study that we published with GitLab. What they’re doing is they’ve processed one year’s worth of support tickets and usage questions from different platforms, and they want to extract actionable insights. For example, how can we better support our support engineers in answering questions? What are things that we could add to our docs? Also questions like, how are people adopting new features? How many people have upgraded to the latest version? Are people still stuck on an older version? What are potential problems there, and so on? While these things don’t necessarily sound like particularly sensitive information, if you think about it, support tickets can actually include a lot of potentially sensitive data, like paths, details on people’s setup.
They’re working in a high security environment and a hardened offline machine, so whatever they’re building, it needs to run internally, and it also needs to be rerun whenever they have new tickets coming in and new data. It needs to be very efficient. Another very important feature of this project was, it needs to be easy to adapt it to new scenarios and new business questions. What’s the latest version changes? Features change. Things people are doing change. It needs to be easy to answer different questions that maybe weren’t intended when the system was built. Of course, you can do these things as end-to-end prediction tasks, but that means that every time something changes, you need to redo your entire pipeline. Whereas if you can factor out general-purpose features from product specific logic, it becomes a lot easier to add extraction logic for any other future problems and future questions on top.
A very simple example of this is, you have things like the software version that is very specific business logic, whereas extracting numbers, makes it a lot easier for the model. If you have that, you can add your business logic on top to determine, is this a version of the software? Is this a link to the docs, and so on? I’ve linked the case study, explosion.ai/blog/gitlab-support-insights. They’ve done some pretty interesting things. Also have a pipeline that’s super-fast, and are working on adding a conversational output on top. I hope we’ll be able to publish more on that, because it’s a very cool project that really shows the importance of data refactoring.
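As a rough sketch of that split, assuming a placeholder docs domain and “latest version” value: a general-purpose extraction step finds version-shaped numbers and links, and a separate business-logic step interprets them.

```python
# Sketch of the split described above: a general-purpose step that just finds
# version-shaped numbers and links, and a separate business-logic step that knows
# which versions and docs URLs matter. Domain and latest version are placeholders.
import re

VERSION_PATTERN = re.compile(r"\b\d+\.\d+(?:\.\d+)?\b")      # general-purpose: any x.y(.z)
DOCS_LINK = re.compile(r"https?://docs\.example\.com/\S+")    # placeholder docs domain

LATEST_VERSION = "16.8"  # business logic: changes over time, so keep it out of the model


def extract_general(text: str) -> dict:
    """Generic, language-level extraction with no product knowledge."""
    return {
        "versions": VERSION_PATTERN.findall(text),
        "docs_links": DOCS_LINK.findall(text),
    }


def apply_business_logic(extracted: dict) -> dict:
    """Product-specific interpretation layered on top of the generic output."""
    return {
        "mentions_old_version": any(v != LATEST_VERSION for v in extracted["versions"]),
        "links_to_docs": bool(extracted["docs_links"]),
    }


ticket = "We are still on 15.11, see https://docs.example.com/upgrade for the steps we tried."
print(apply_business_logic(extract_general(ticket)))
```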
Reality is not an End-to-End Problem
What you can see here, is, as developers, we really love to put things into clear boxes and have this idea of like, if we can just have this one model that can do everything, wouldn’t that be great? Unfortunately, reality doesn’t really work that way. Reality isn’t an end-to-end prediction problem. It’s actually very nuanced and very complex. Human-in-the-loop distillation and going from a much larger general-purpose model to a much smaller and more efficient task-specific model really is a refactoring process.
You refactor your code, you refactor your data, and that requires engaging with it. Iteration, which, again, is very heavily influenced by the tooling you use, can be a huge help in getting you past that prototype plateau and closing the gap between prototype and production. Because I think at the moment, we’re seeing a lot of prototypes being built, but a lot of them also don’t make it into production, and that’s sad. If we standardize and align our workflows with better tooling, we’re actually able to build a prototype and translate that directly into a production system.
Again, you are allowed to make your problems easier. I think with other aspects of software development, we’ve really learned that making things less operationally complex is better because it means less can go wrong. If something goes wrong, it becomes a lot easier to diagnose. If you can apply that to machine learning, that’s incredibly helpful, and as a result, you also get systems that are much cheaper, that are much smaller, that are much faster, that are entirely private and much easier to control. There’s no need to give up on these best practices, and it’s totally possible.
Also, we’re working with data here, and as soon as you start engaging with that, you will immediately come across edge cases and things you haven’t considered, and ambiguities in the language that are very hard to think of upfront. It’s very important to engage with your data, and also have a process in place that lets you iterate and make changes as needed. I also highly recommend having little workshops internally, like the one we did at PyData, where you can have long discussions about whether Cheetos are a dish or not, or whether the forehead is part of the face. All of these questions are important, and if you can’t make a consistent decision, no AI model is magically going to save you or be able to do it for you.
Finally, there’s really no need to compromise on software development best practices and data privacy, as you’ve seen in the talk. Moving dependencies to development really changes the calculation. We can be more ambitious than that, and we should be. We shouldn’t stop at having a monolithic model. We can take it one step further and really make the best use of new technologies to allow us to do things that we weren’t able to do before, while not making our overall system worse in the process and throwing out a lot of best practices that we’ve learned. It’s absolutely possible. It’s really something you can experiment with and apply today.
Questions and Answers
Participant 1: You mentioned in your talk that with model assistance it’s totally feasible and quick to create data in-house. In your experience, how many examples do you think you need in order to create good results?
Montani: Of course, it always depends. You’ll be surprised how little you might actually need. Often, even just starting with a few hundred good individual examples can really beat the few-shot baseline. It also depends on the amount you choose. It depends on what accuracy figures you want to report. Do you want to just report whole numbers? Do you want to report accuracies like 98.2? That introduces a different magnitude. I think if you look at some of the case studies I linked, we’ve also done some experiments where we basically took an existing dataset and trained on small portions of the data, then compared when we beat the LLM baseline.
Often, even just using under 10% of the dataset already gave us really good results. It’s really not a lot. I think if you start doing that, you’ll really be surprised how little you need, with transfer learning. That also, of course, means that what you’re doing needs to be good. Garbage in, garbage out. That’s also the reason why it’s more important than ever to have a way that gives you high quality data, because you can get by with very little, but it needs to be good.
Participant 2: Do you have any guidelines when it comes to comparing structured outputs. In the beginning, it seems like a very simple task, but if you start nesting it, in particular, if you have lists on both sides, trying to figure out what entities you’re missing, can just become so complex. How to actually get it down to maybe at least just 5 or 10 numbers, instead of 100, of like, I’m missing the token in entity 5 and missing entity 6 completely.
Montani: There are different ways you can evaluate this. Some evaluations really look at the token level. Others look at the whole entities. Also, there’s a difference in how you calculate it if something is missing. Is that false, or do you count partial matches? That’s a whole other can of worms in itself. More generally, I think it comes back to that refactoring idea: if you have these entities, is this actually a problem where boundaries are important? Some people often go for named entity recognition because you’re like, I can have these spans of text that give me what I want. If you take a step back, in a lot of cases, it actually turns out that you’re not even really interested in the spans. You’re interested in, does this text contain my phone?
Then that becomes a text classification task, which is generally a lot easier and also gives you better results, because you’re not actually comparing boundaries, which are really very sensitive. That’s what makes named entity recognition hard: it’s very hard to do that consistently. I think refactoring can also help there. Or if you have a lot of nested categories, take a step back: do I need these nested categories? Can I maybe come up with a process where I focus on the most important top level first, and then maybe drill down into the subcategories? Or, thinking of that S&P case study, they realized that there are actually some types of information that are relatively straightforward: if we know that it’s of this category, we can deterministically decide which sublabels apply, for example. I think it really ties into the refactoring point.
Often, the first label scheme you come up with is usually not the best. You want to pick something that’s easy for the machine learning model, and not necessarily translating your business question one to one into a label scheme. That’s usually where a lot of the problems happen.
Participant 3: What’s the process to find the baseline? Because in my life, it’s very hard to find the baseline.
Montani: What the case study companies did is, you create evaluation data. Let’s say you have the text categories, is this about battery life, or is the battery life positive? Then you first have evaluation data where you know the correct answer. Then you basically let the LLM predict those, and then you compare it. For example, with spaCy LLM, in that case, you get the exact same output. You can evaluate that pipeline the same way you would evaluate any other model. Or you can try a few-shot approach. Basically, the idea is you let the model make the predictions, and then compare the output to examples where you know the answer, and that gives you the baseline.
Participant 3: For example, you have a multivariable problem when you’re trying to evaluate, for example, risks, and you have so many different points, and you don’t have reliable data to find the baseline.
Montani: That’s also why I think creating data is important. If you’re building something that’s reliable, you can’t really get around creating a good evaluation. I’ve also heard people say, we can just pass it to some other LLM to evaluate. It’s like, you’re still stuck in the cycle. You can build software and not write tests. That’s legal. You don’t want to do that. Even if you’re doing something that’s completely unsupervised at the end, you want a good evaluation that also actually matches what you’re doing, not just some benchmark dataset. I think that is super important.
Then once you have that, it lets you calculate a baseline. It lets you test things. I always recommend, do something really basic, do like a regular expression, and benchmark that, just to have some comparison. Or do something really simple, because if you find out, I already get really good results on that, does it actually make sense? Or what’s my machine learning model up against? I think it’s such an important part of it. I think people should talk about it more. Yes, do it properly.
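A rough sketch of such a trivial baseline, assuming the aspect format used earlier: a keyword/regex predictor that any trained model should comfortably beat, plugged into the same evaluation as any other model.

```python
# Rough sketch of a trivial regular-expression baseline to benchmark against;
# keyword lists are illustrative only. Plug it into the same evaluation used for
# the LLM and distilled models.
import re

NEGATIVE_CUES = re.compile(r"\b(bad|terrible|worse|meh|disappointing)\b", re.I)

def regex_baseline(text: str) -> dict:
    """Predict 'battery' sentiment only when the word appears; crude on purpose."""
    if re.search(r"\bbattery\b", text, re.I):
        return {"battery": not NEGATIVE_CUES.search(text)}
    return {"battery": None}

# evaluate(regex_baseline, dev_examples)  # compare against the LLM and distilled models
```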
Participant 4: Would you also say that the approach that you described here would work in a similar way, if you basically use the model to then interact with users, and maybe, for example, respond based on the comment about the products, and directly interact back.
Montani: How would that look like as an example, if you’re building a chat interface?
Participant 4: In this case, I think there was an evaluation, so you don’t need to really chat. You just need to maybe respond and say, “Thank you for the evaluation. We’re working on improving the battery”, or something like that.
Montani: Here you have the model, actually, you ask, you have a prompt, like, here are the categories: respond with the categories and whether it’s positive or negative. Then you try to get the model to respond as structured as possible, and then also parse that out so you really get label true, false, or none.
Participant 4: Would you, for these kinds of use cases, also use in-house trained LLM, or use the bigger ones on the market?
Montani: You can do both. One thing that’s nice here is that, since you’re moving the dependency to development instead of runtime, it actually becomes a lot more feasible to run your own open-source LLMs. If you’re not relying on it at runtime, it’s actually affordable and efficient to just do it in-house, and you can fine-tune it in-house, or you just use something off the shelves, or you use an API that you have access to. I think having the dependency during development is the key and really changes things so you can use whatever. You’re not using the LLM to create any data itself. You’re using it to add structure to your data.
Participant 5: Once you have an LLM running in production, do you have any tips on what I can check to see how the data works with the model, and how to retrain it?
Montani: What do you mean by how the data works with the model?
Participant 5: How is the model performing in production, and based on the new data that is coming in, can I have an automated retraining of the same model? Any tips on that?
Montani: I think that ties back into the evaluation as well. Even if you have your model running, you want to capture, what does it output? Whatever your context is. Then have a human look at it and see, is this correct, or is this not correct? How does this change over time? Because you also easily can have the problem of data drift, if the input data changes, if the model changes, which is also a problem you have if you have an API and then the model just changes. That changes a lot.
I think having a QA process in place where you really store what your model is doing at runtime, and then review, is it doing the correct thing? Do that regularly, iterate on that, and also see how that is changing over time as things change. That’s the thing with evaluation: you never get out of it. It’s not something you do once and then forget about. You constantly need to iterate and constantly do it if you’re actually interested in really getting reliable feedback on how your system is doing.
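A minimal sketch of the capture side of such a QA process, with an arbitrary file format chosen for illustration: every runtime prediction is appended to a log that a human can later review for correctness and drift.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.Instant;

// Append every runtime prediction to a tab-separated log so a human can
// review it later and watch how behaviour changes over time.
public class PredictionLog {
    private final Path logFile;

    public PredictionLog(Path logFile) {
        this.logFile = logFile;
    }

    public void record(String input, String prediction) throws IOException {
        String line = Instant.now() + "\t" + input.replace('\t', ' ')
                + "\t" + prediction + System.lineSeparator();
        Files.writeString(logFile, line, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    public static void main(String[] args) throws IOException {
        PredictionLog log = new PredictionLog(Path.of("predictions-for-review.tsv"));
        log.record("Battery dies after an hour", "BATTERY_NEGATIVE");
    }
}
```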
RALEIGH, N.C., Feb. 4, 2025 — Percona, a global leader in enterprise-grade open source database software and services, today announced the availability of Percona Support for Valkey—a comprehensive, responsive, and flexible service offering that promises to maintain reliable, optimized Valkey deployments whether on-premises, hybrid, or in the cloud.
For organizations not yet using the database management system, Percona has also introduced comprehensive Valkey Migration services, to help make the switch from Redis OSS to Valkey as seamless, secure, and non-disruptive as possible.
“When the Linux Foundation first announced its plans for Valkey last March, Percona immediately threw its support behind the project,” said Ann Schlemmer, CEO of Percona. “Beyond the simple fact that many of our customers also relied on Redis for their data operations, we at Percona were motivated by our fundamental belief in the principles of open source—that an open world is a better world—and we remain committed to doing everything we can to help foster and promote such a world.”
Valkey: The Open Source Response to Redis’s Controversial Relicensing Decision
Announced by The Linux Foundation in March of 2024, Valkey is an open source fork of the popular Redis in-memory, NoSQL key-value datastore. The project was launched in response to Redis’s decision to abandon its long-held commitment to open source licensing in favor of a much more restrictive, source-available model. Redis is now dual-licensed under the Redis Source Available License (RSALv2) and the Server Side Public License (SSPLv1), while Valkey has maintained the datastore’s original open source BSD license.
In the wake of these decisions, Valkey has enjoyed an outpouring of support from both the open source and business communities alike. Since its very first announcement, a number of major tech industry players—including Amazon Web Services (AWS), Google Cloud, Ericsson, and Percona—have put their support behind Valkey in a variety of ways. In addition to its commercial support offerings, Percona has dedicated staff to building new capabilities around programmability and security, as well as fixing bugs, on core Valkey projects.
Thanks in no small part to such support, Valkey has already established itself as far more than a mere maintenance fork. In the five months between its first formal release (Valkey 7.2.5) in April 2024 and its most recent (Valkey 8.0.0) in September, Valkey has seen major improvements in performance, reliability and efficiency—proving itself to be a dynamic, innovative open source project on course to remain competitive with Redis and other popular in-memory datastores.
As a result, Valkey is already enjoying significant levels of interest and adoption. In a survey report from Percona, 75% of Redis users said they were actively testing or considering Valkey, or had already adopted it. The study also found that more than three-quarters (76%) of organizations plan to rely on third-party enterprise support for their Valkey deployments.
Percona Support for Valkey: Enterprise-Grade Services Tailored to Your Business
Percona Support for Valkey boasts several key benefits designed to maintain optimal performance with minimal headaches:
Real-Time Expertise: 24x7x365 responsive support from Valkey specialists.
Industry-Best SLAs: Reliable and transparent response times for incidents of every severity level.
Open Source Commitment: Freedom from vendor lock-in with full support for Valkey.
Customizable Solutions: Tailored plans to fit your unique business needs.
Continuous Development: Percona experts continue to build out new capabilities around programmability and security.
Percona Support for Valkey is offered in two core tiers or packages—Advanced, for production environments, and Premium, for mission-critical environments. Both offerings help mitigate and resolve some of the most common issues that negatively impact Valkey performance, including inefficient key-value store configurations; high-latency operations and response times; complex Redis-to-Valkey migrations; and misconfigured or under-optimized settings.
In addition to the aforementioned Valkey Migration services, Percona has also introduced the all-new Redis OSS Health Assessment—an in-depth, preemptive evaluation of an organization’s existing Redis deployment(s) and overall migration readiness. Together with its core migration services, Percona helps organizations avoid licensing limitations while optimizing database performance and minimizing downtime, delivering a reliable, highly performant open source database solution.
Valkey-Interested Organizations Needn’t Venture Alone Any Longer
“The almost immediate explosion of interest and support for Valkey goes beyond just the technology itself,” said Peter Zaitsev, co-founder at Percona. “It wasn’t simply a matter of companies looking to control costs. It was an expression of widespread dissatisfaction and frustration that people are feeling with self-proclaimed ‘open source’ software vendors suddenly deciding to abandon the model. It was a palpable response, and one we felt here at Percona. That’s why Valkey is the first addition to our list of supported technologies in nearly half a decade. And we will continue to support Valkey, as well as other open source initiatives like it, for as long as they’re here.”
To learn more about Percona Support for Valkey, including our dedicated Valkey Migration and Redis OSS Health Assessments, please visit here.
For more information on Valkey, please visit www.valkey.io.
About Percona
Percona is a world-class open source database software, support, and services company. The organization is dedicated to helping businesses ensure their databases — and the applications that depend on them — are secure, compliant, performant, and highly available. Through a unique combination of database expertise and enterprise-grade open source software, Percona empowers organizations with the freedom to choose, the freedom to create, and the freedom to innovate with speed as they grow.
Source: Percona
InfoQ is introducing its first hands-on software architecture certification at QCon London 2025 (April 7-11), the international software development conference. The certification will combine practitioner-led conference sessions with a hands-on workshop focused on real-world architectural challenges.
“While many certification programs exist, few address the practical challenges of implementing emerging technologies at enterprise scale,” said Reisz. “This certification bridges that gap by focusing on real-world scenarios software architects face daily.”
The certification requires participants to have a minimum of five years of senior technical experience. Participants will work through architectural challenges drawn from actual enterprise implementations inspired by the conference sessions, including “Architectures You’ve Always Wondered About” which features real-world examples from companies scaling systems for massive traffic and complexity, “Modern Data Architectures” which addresses critical challenges in building scalable systems that integrate AI and machine learning, and “The Changing Face of Architectural Practice” that will examine how traditional approaches are evolving to meet new challenges.
The InfoQ Certified Software Architect in Emerging Technologies (ICSAET) certification differs from traditional architecture certifications through its:
Exclusive focus on senior-level practitioners
Integration with QCon’s cutting-edge software architecture tracks
Focus on practical implementation challenges
Real enterprise implementation scenarios
Hands-on workshop format
Peer learning emphasis
“Participants will work through actual challenges we’ve encountered in enterprise implementations,” Reisz explained. “The focus is on the trade-offs and constraints that shape real-world solutions, not theoretical frameworks.”
Dio Synodinos, president of C4Media, Inc., the makers of InfoQ and QCon, said:
“This certification emerges from nearly two decades of InfoQ and QCon’s work with international enterprise software teams. Through our conferences and community, we’ve helped senior software developers navigate emerging technologies and architectural challenges since 2006. We’ve seen firsthand what senior architects need: practical insights into implementing emerging technologies at scale. That’s exactly what this certification delivers – real-world knowledge from practitioners who are actively solving today’s complex enterprise-scale architectural challenges.”
The ICSAET certification signals participants’ authority as senior technical leaders and their dedication to continuous growth. Successfully completing both the QCon London conference and the workshop will earn participants the InfoQ Certified Software Architect in Emerging Technologies (ICSAET) credential, which demonstrates leadership and expertise in modern software architecture.
The certification will initially be available at QCon London 2025, with plans to expand to QCon San Francisco 2025 in November. The workshop component will be limited in size to ensure quality interactions and meaningful peer discussions.
Registration for the ICSAET certification is now open. More information and registration details can be found at http://qconlondon.com/.
About the Author
Ian Robins
Cummins: I’m Holly Cummins. I work for Red Hat. I’m one of the engineers who’s helping to build Quarkus. Just as a level set before I start, how many of you are Java folks? How many of you are using Quarkus? How many of you have not even heard of Quarkus? I’ve worked on Java for most of my career. I’m here to talk about Java. I want to actually start by talking a little bit about Rust. I’m not a Rust developer. I have never developed Rust. I’m not here to criticize Rust, but actually I’m going to start by criticizing Rust. Of course, Rust has so many amazing features. It’s so well engineered. It’s really a needed language. It is incredibly efficient, but Rust does have a problem.
There’s a reason I have never learned Rust, which is, Rust has a reputation for being really hard to learn, and I am lazy. This is something that you see everywhere in the community. People talk about how hard Rust is. It’s too difficult to be widely adopted. Even people who really advocate strongly for Rust will talk about how hard it is. I love the title of this article, “Why Rust is Worth the Struggle”. They start by saying, with Rust, you approach it with trepidation, because it’s got this notoriously difficult learning curve. I love this, “Rust is the hardest language up to that time I’ve met”.
When people talk about Rust, people will tell you that Rust doesn’t have garbage collection, and that’s one of the things that makes it efficient. I have some questions about that. If we start with the assumption that not having garbage collection makes a language performant, which is wrong, but if we start with that assumption, what happens if we add garbage collection to Rust? Now at this point, all of the people who are Rust developers are sort of screaming quietly in the corner, going, why would you do that? What happens if you do that? It turns out, if you do that, Rust becomes much easier to use. They added a layer of garbage collection on top of Rust, and then they had a bunch of volunteers do a coding task. The people who had the garbage collected version were more likely to complete the task, and they did it in a third of the time.
Now I think we really need to rethink the efficiency of Rust, because Rust is very efficient in terms of its computational resources. If you can make something so much easier to use by adding garbage collection, is it really an efficient language? Rust maybe is not so efficient. There’s always this tradeoff of, you’ve got your human efficiency and your machine efficiency, and with Rust, they’ve really gone all in on the machine efficiency at the expense of human efficiency. That’s the tradeoff. I don’t like that tradeoff. In fairness to Rust, I think the Rust folks don’t like that tradeoff either, which is why they have all of the things like the really powerful compiler. That’s something that we’ll come back to as well.
Quarkus (Java Framework)
The question is, can we do better? This is where Quarkus comes in. Quarkus is a Java framework. The programming model will be very familiar to you. We have integrations with the libraries that you’re almost certainly already using, like Hibernate, like RESTEasy, but it’s got some really nice characteristics. One of those, and this is probably the thing that people think of when they think of Quarkus, is that Quarkus applications start really fast. You can run Quarkus with GraalVM as a natively compiled binary, or you can run it on OpenJDK. Either way, it starts really fast. If you run it with GraalVM, it actually starts faster than an LED light bulb. Just to give you a scale of how instantaneous the start is. Quarkus applications also have a really low memory footprint. When we used to run on dedicated hardware, this didn’t really matter.
Now that we run in the cloud where memory footprint is money, being able to shrink our instances and have a higher deployment density really matters. If you compare Quarkus to the cloud native stack that you’re probably all using, if you are architecting for Java, we are a lot smaller. You can fit a lot more Quarkus instances in. It’s not just when you compare it to other Java frameworks. When you compare Quarkus even to other programming languages, you can see that we’re competing with Go in terms of our deployment density. Node.js has a higher deployment density than old-school Java, but it’s not as good as Quarkus. This is cool.
There’s another thing that Quarkus is quite good at which we don’t talk about so much, and I wish we would talk about it more, and that’s throughput. If you look at your traditional cloud native stack, you might get about 3000 requests per second. If you are taking Quarkus with the GraalVM native compilation, the throughput is a little bit lower, same order of magnitude, but it’s lower. This is your classic tradeoff. You’re trading off throughput against footprint. This is something that I think we’re probably all familiar with in all sorts of contexts. With native compilation, you get a really great startup time, you get a great memory footprint, but at the expense of throughput.
Many years ago, I worked as a Java performance engineer, and one of the questions we always got was, I don’t like all of this stuff, this JIT and that kind of thing, couldn’t we do ahead-of-time compilation? The answer was, at that time, no, this is a really terrible idea. Don’t do ahead-of-time compilation. It will make your application slower. Now the answer is, it only makes your application a little bit slower, and it makes it so much more compact. Native compilation is a pretty reasonable choice, not for every circumstance, but for some use cases, like CLIs, like serverless. This is an awesome tradeoff, because you’re not losing that much throughput. This is a classic tradeoff. This is something that we see. I just grabbed one thing off core, but we see this sort of tradeoff all the time like, do I optimize my throughput or do I optimize my memory? Depends what you’re doing.
Let’s look at the throughput a little bit more, though, because this is the throughput for Quarkus native. What about Quarkus on JVM? It’s actually going faster than the alternative, while having a smaller memory footprint and a better startup time. That’s kind of unexpected, and so there is no tradeoff, we just made it better. Really, we took this tradeoff that everybody knows exists, and we broke it. Instead of having to choose between the two, you get both, and they’re both better. I always try to think of a name for it, this double win. I’ve tried a few. I’ve tried 2FA.
Someone suggested I should call it the überwinden. I don’t speak German, and so it sounded really cool to me, but it’s become clear to me now that the person who suggested it also didn’t speak German, because whenever I say it to a German person, they start laughing at me. German’s a bit like Rust. I always felt like I should learn it, and I never actually did. You may think, yes, this isn’t realistic. You can’t actually fold a seesaw in half. You can’t beat the tradeoff. It turns out you can fold a seesaw in half. There are portable seesaws that can fold in half.
What Are the Secrets?
How does this work? What’s the secret? Of course, there’s not just one thing. It’s not like this one performance optimization will allow you to beat all tradeoffs. There’s a whole bunch of things. I’ll talk about some of the ones that I think are more interesting. Really, with a lot of these, the starting point is, you have to challenge assumptions. In particular, you have to challenge outdated assumptions, because there were things that were a good idea 5 years ago, things that were a good idea 10 years ago, that now are a bad idea. We need to keep revisiting this knowledge that we’ve baked in. This, I was like, can I do this? Because I don’t know if you’ve heard the saying, when you assume you make an ass of you and me, and this is an African wild ass.
The first assumption that we need to challenge is this idea that we should be dynamic. This one I think is a really hard one, because anybody knows being dynamic is good, and I know being dynamic is good. I was a technical reviewer for the book, “Building Green Software”, by Anne. I was reading through, and I kept reading this bit where Anne and Sarah would say, “We need to stop doing this because it’s on-demand”. I was thinking, that’s weird. I always thought on-demand was good. I thought on-demand made things efficient. This is sort of true. Doing something on-demand is a lot better than doing it when there’s no demand, and never will be a demand. When you do something on-demand, you’re often doing it at the most expensive time. You’re often doing it at the worst time. You can optimize further, and you can do something when it hurts you least.
This does need some unlearning, because we definitely, I think, all of us, we have this idea of like, I’m going to be really efficient. I’m going to do it on-demand. No, stop. Being on-demand, being dynamic is how we architected Java for the longest time. Historically, Java frameworks, they were such clever engineering, and they were optimized for really long-lived processes, because we didn’t have CI/CD, doing operations was terrible. You just made sure that once you got that thing up, it stayed up, ideally, for a year, maybe two years.
Of course, the world didn’t stay the same. What we had to do was we had to learn how to change the engine while the plane was flying, so we got really good at late-binding. We got really good at dynamic binding, so that we could change parts of the system without doing a complete redeployment. Everything was oriented towards, how can I reconfigure this thing without restarting it? Because if I restart it, it might never come up again, because I have experience of these things.
We optimized everything. We optimized Java itself. We optimized all of the frameworks on top of it for dynamism. Of course, this kind of dynamism isn’t free, it has a cost. That cost is worth paying if you’re getting something for it. Of course, how do we run our applications now? We do not throw them over the wall to the ops team who leave it up for a year, we run things in the cloud.
We run things in containers, and so our applications are immutable. That’s how we build them. We have it in a container. Does anybody patch their containers in production? If someone said to you, I patch my containers in production, you’d be like, “What are you doing? Why are you doing that? We have CI/CD. Just rebuild the thing. That’s more secure. That’s the way to do it”. Our framework still has all of this optimization for dynamism, but we’re running it in a container, so it’s completely pointless. It is waste. Let’s have a look at how we’ve implemented this dynamism in Java. We have a bunch of things that happen at build time, and we have a bunch of things that happen at runtime.
Actually, the bunch of things that happen at build time, it’s pretty small. It’s pretty much packaging and compilation to bytecode, and that is it. All of the other excitement happens at runtime. The first thing that happens at runtime is the files are loaded. Config files get parsed. Properties files get parsed. The YAML gets parsed. The XML gets parsed. Then once we’ve done that, then there’s classpath scanning, there’s annotation discovery. Quite often, because things are dynamic, we try and load classes to see if we should enable or disable features. Then we keep going. Then, eventually the framework will be able to build this metamodel.
Then, after that, we do the things that are quite environment specific. We start the thread pools. We initialize the I/O. Then eventually, after all of that, we’re ready to do work. We’ve done quite a lot of work before we did any work, and this is even before we consider any of the Java features, like the JIT. What happens if we start this application more than once, then we do all of that work the first time. We do it again the second time. We do it again the third time. We do it again the fourth time, and there’s so much work each time. It’s a little bit like Groundhog Day, where we’re doing the same work each time. Or it’s a little bit like a goldfish, where it’s got this 30-second memory, and the application has no memory of the answers that it just worked out and it has to do the same introspection each time.
Let’s look at some examples. In Hibernate, it will try and bind to a bunch of internal services. For example, it might try and bind to JTA for your transactions. The first thing it does is it doesn’t know what’s around it, so it says, ok, let me do a reflective load of an implementation. No, it’s not there. Let me try another possible implementation. No, it’s not there. Let me try another implementation. No, it’s not there. It keeps going. Keeps going. It keeps going. Of course, each time it does this reflective load, it’s not just the expense of the load, each time a class not found exception is thrown. Throwing exceptions is expensive, and it does this 129 times, because Hibernate has support for a wide range of possible JTA implementations. It does that every single time it starts. This isn’t just JTA, there are similar processes for lots of internal services. We see similar problems with footprint.
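The lookup pattern being described looks roughly like the sketch below; this is illustrative only, not Hibernate’s actual code, and the candidate class names are hypothetical:

```java
// Try to reflectively load candidate implementations one by one, paying for a
// thrown ClassNotFoundException on every miss - the cost described above.
public class ServiceLookup {
    static Class<?> findFirstAvailable(String... candidateClassNames) {
        for (String className : candidateClassNames) {
            try {
                return Class.forName(className);
            } catch (ClassNotFoundException e) {
                // Not on the classpath; throwing and catching is the hidden expense.
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Hypothetical class names, standing in for the many possible JTA platforms.
        Class<?> jta = findFirstAvailable(
                "com.example.FirstPossibleJtaPlatform",
                "com.example.SecondPossibleJtaPlatform");
        System.out.println(jta == null ? "No JTA platform found" : jta.getName());
    }
}
```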
Again, with Hibernate, it has support for lots of databases, and so it loads the classes for these databases. Then eventually, hopefully, they’re never used, and the JIT works out that they’re not used, and it unloads them, if you’re lucky. Some classes get loaded and then they never get unloaded. For example, the XML parsing classes, once they’re loaded, that’s it. They’re in memory, even if they never get used again. This is that same thing. It’s that really sort of forgetful model. There’s a lot of these classes. For example, for the Oracle databases, there’s 500 classes, and they are only useful if you’re running an Oracle database. It affects your startup time. It affects your footprint. It also affects your throughput.
If you look, for example, at how method dispatching works in the JVM, if you have an interface and you’ve got a bunch of implementations of it. When it tries to invoke the method, it kind of has to do quite a slow path for the dispatch, because it doesn’t know which one it’s going to at some level. This is called a megamorphic call, and it’s slow. If you only have one or two implementations of that interface, the method dispatching is fast. By not loading those classes in the first place, you’re actually getting a throughput win, which is quite subtle but quite interesting. The way you fix this is to initialize at build time.
The idea is that instead of redoing all of this work, we redo it once at build time, and then at runtime we only do the bare minimum that’s going to be really dependent on the environment. What that means is, if you start repeatedly, you’ve got that efficiency because you’re only doing a small amount of work each time. That is cool. Really, this is about eliminating waste. As a bonus with this, what it means is that if you want to do AOT, if you want to do native in GraalVM, you’re in a really good place. Even if you don’t do that, even if you’re just running on the JVM as a normal application, you’ve eliminated a whole bunch of wasted, repeated, duplicate, stupid work.
Really, this is about doing more upfront. The benefits that you get are, it speeds up your start. It shrinks your memory footprint. Then, somewhat unexpectedly, it also improves your throughput. What this means is that, all of the excitement, all of the brains of the framework is now at build time rather than at runtime, and there’s lots of frameworks.
One of the things that we did in Quarkus was we said, we have to make the build process extensible now. You have to be able to extend Quarkus, and extensions have to be able to participate in the build process, because that’s where the fun is happening. I think with anything that’s oriented around performance, you have to have the right plug-points so that your ecosystem can participate and also contribute performance wins. What we’ve done in Quarkus is we have a framework which is build steps and build items, and any extension can add build steps and build items.
Then, what we do is, build steps get declared, and then an extension can declare a method that says, I take in this build item, and I output that build item. We use that to dynamically order the build to make sure that things happen at the right time and everything has the information that it needs. The framework automatically figures out what order it should build stuff in. Of course, if you’re writing an extension, or even if you’re not, you can look to see what’s going on with your build, and you can see how long each build step is taking, and get the introspection there.
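For readers who have not seen a Quarkus extension, a build step looks roughly like this minimal sketch; it assumes the Quarkus deployment API (quarkus-core-deployment) is on the classpath, and the extension itself is hypothetical:

```java
import io.quarkus.deployment.annotations.BuildStep;
import io.quarkus.deployment.builditem.FeatureBuildItem;

// A build step in a (hypothetical) extension's deployment module. Quarkus
// orders steps automatically from the build items they consume and produce;
// anything that consumes FeatureBuildItem will run after this method.
class MyExtensionProcessor {

    @BuildStep
    FeatureBuildItem feature() {
        return new FeatureBuildItem("my-extension");
    }
}
```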
Some of you are probably thinking, if you move all of the work to build time, and I, as a developer, build locally a lot, that sounds kind of terrible. What we’ve done to mitigate this is we’ve got this idea of live coding. I’ve been in the Quarkus team for about two years. When I joined the team, I always called live coding, hot reload. Every time my colleagues would get really annoyed with me, and they’d be like, it’s not hot reload, it’s different from hot reload. I think I now understand why. We have three levels of reload, and the framework, which knows a lot about your code, because so much excitement is happening at build time, it knows what the required level of reload is. If it’s something like a config file, we can just reload the file, or if it’s something like CSS or that kind of thing. If it’s something that maybe affects a little bit more of the code base, we have a JVM agent, and so it will do a reload there. It will just dynamically replace the classes.
Or, if it’s something pretty invasive that you’ve changed, it will do a full restart. You can see that full restart took one second, so even when it’s completely bringing the whole framework down and bringing it back up again, as a developer, you didn’t have to ask it to do it, and as a developer, you probably don’t even notice. That’s cool. I think this is a really nice synergy here, where, because it starts so fast, it means that live coding is possible. Because as a developer, it will restart, and you’ll barely notice. I think this is really important, because when we think about the software development life cycle, it used to be that hardware was really expensive and programmers were cheap.
Now, things have switched. Hardware is pretty cheap. Hardware is a commodity, but developers are really expensive. I know we shouldn’t call people resources, and people are not resources, but on the other hand, when we think about a system, people are resources. Efficiency is making use of your resources in an optimum way to get the maximum value. When we have a system with people, we need to make sure that those people are doing valuable things, that those people are contributing, rather than just sitting and watching things spin.
How to Make People Efficient
How do you make people efficient? You should have a programming language that’s hard to get wrong, idiot proof. You want strong typing and you want garbage collection. Then, it’s about having a tight feedback loop. Whether you’re doing automated testing or manual testing, you really need to know that if you did get it wrong despite the idiot proofing, you find out quickly. Then, typing is boring, so we want to do less typing. Java gives us those two, the strong typing and the garbage collection. I just showed that tight feedback loop. What about the typing? With Quarkus, we’ve looked at the performance, but then we’ve also really tried to focus on developer joy and making sure that using Quarkus is delightful and fast. One of the things that we do to enable this is indexing. Indexing seems like it’s actually just a performance technique, but we see it gives a lot of interesting benefits in terms of the programming model.
Most frameworks, if it’s doing anything framework-y and non-trivial, it needs to find all of the classes. It needs to find all of the interfaces that have some annotation, because everything is annotations, because we’ve learned that everything shouldn’t be XML. You also really often have to find all of the classes that implement or extend some class. Annoyingly, even though this is something that almost every Java library does, Java doesn’t really give us a lot of help for this. There’s nothing in the reflection package that does this. What we’ve done is we have a library called Jandex, which is basically offline reflection. It’s really fast. It indexes things like the annotations, but it also indexes who uses you. You can start to see, this could be quite useful.
What kind of things can we do with the index? What we can do is we can go back and we can start challenging more assumptions about what programming looks like, and we can say, what if developers didn’t have to do this and that, and this and that? As an example, a little example, I always find it really frustrating when I’m doing logging that I have to initialize my logger, and I have to say, Logger.getLogger, whatever the call is, and tell it what class it’s in. I only half the time know what class I’m programming in, and I get this wrong so often because I’ve cut and paste the declaration from somewhere else.
Then there’s this mistake in the code base, and the logging is wrong. I was like, why do I have to tell you what class you’re in when you should know what class you’re in, because you’re the computer, and I’m just a stupid person? What we’ve done with Quarkus is exactly that. You don’t have to declare your logger. You can just call the static Log.info, with a capital L, and it will have the correct logging with the correct class information. This is so little, but it just makes me so happy. It’s so nice. I think this is a good general principle of like, people are stupid and people are lazy. Don’t make people tell computers things that the computer already knows, because that’s just a waste of everybody’s time, and it’s a source of errors. When I show this to people, sometimes they like it, and go, that’s cool.
Sometimes they go, no, I don’t like that, because I have an intuition about performance, I have an intuition about efficiency, and I know that doing that kind of dynamic call is expensive. It’s not, because we have the Jandex index, so we can, at build time, use Jandex to find everybody who calls that Log class, inject a static field in them, and initialize the static field correctly. Because it’s done at build time, you don’t get that performance drag that you get with something like aspects. Aspects were lovely in many ways, but we all stopped using them, and one of the reasons was the performance of them was a bit scary. We assume that we can’t do this thing that we really want to do because we assume it’s expensive; it’s not anymore. It gets compiled down to that. You can see that that is pretty inoffensive code. I don’t think anybody would object to that code in their code base.
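In application code, the simplified logging looks like the sketch below; it assumes a Quarkus REST resource, and io.quarkus.logging.Log is the Quarkus API being described:

```java
import io.quarkus.logging.Log;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

// No logger field and no Logger.getLogger(GreetingResource.class): the static
// Log call is rewritten at build time to use a correctly named logger.
@Path("/hello")
public class GreetingResource {

    @GET
    public String hello() {
        Log.info("hello endpoint called");
        return "hello";
    }
}
```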
Let’s look at a more complex example. With Hibernate, obviously, Hibernate saves you a great deal of time, but you still end up with quite a bit of boilerplate in Hibernate, and repeated code. Things like, if I want to do a listAll query, you have to declare that for every entity. It’s just a little bit annoying. You think, couldn’t I just have a superclass that would have all of that stuff that’s always the same? What we can do with Hibernate, if you have your repository class, what we can do is we can just get rid of all of that code, and then we can just have a Panache repository that we extend.
That’s the repository pattern where you have a data access object because your entity is a bit stupid. For me, I find an active record pattern a lot more natural. Here I just acquire my entity, and everything that I want to do in my entity is on the entity. That’s normally not possible with normal Hibernate, but with Hibernate with Panache, which is something that the Quarkus team have developed, you can do that. Again, you’ve got that superclass, so you don’t have to do much work, and it all works. One interesting thing about this is it seems so natural. It seems like, why is this even hard?
Of course, I can inherit from a superclass and have the brains on the superclass. With how Hibernate is working, it’s actually really hard. If I was to implement this from scratch, I might do something like, I would have my PanacheEntity, and then it would return a list. The signature can be generic. It’s ok to say, it just returns a list of entities. In terms of the implementation, I don’t actually know what entity to query, because I’m in a generic superclass. It can’t be done, unless you have an index, and unless you’re doing lots of instrumentation at build time. Because here what you can do is you use the superclass as a marker, and then you make your actual changes to the subclass, where you know what entity you’re talking to. This is one of those cases where we broke the tradeoff: the machine efficiency of having the index enabled the human efficiency of the nice programming model.
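A small sketch of the active record style being described, using Hibernate ORM with Panache; the entity and its fields are invented for illustration:

```java
import io.quarkus.hibernate.orm.panache.PanacheEntity;
import jakarta.persistence.Entity;
import java.util.List;

// Active record style: listAll(), list(), persist() and friends come from the
// PanacheEntity superclass; the build-time index is what lets that generic
// superclass know which entity a given call refers to.
@Entity
public class Review extends PanacheEntity {
    public String product;
    public String text;

    public static List<Review> forProduct(String product) {
        return list("product", product);
    }
}
```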
Some people are probably still going, no, I have been burned before. I used Lombok once, and once I got into production, I knew that magic should be avoided at all cost. This is something that the Quarkus team have been very aware of. When I was preparing for this talk, I asked them, under the covers, what’s the difference between what we do and something like Lombok? Several of the Quarkus team started screaming. They know that, with this, what you want is you want something that makes sense to the debugger, and you want something where the magic is optional. Like that logging, some of my team really like it.
Some of my team don’t use it because they want to do it by hand. Panache, some people really like it. Some of the team just use normal Hibernate. All of these features are really optional. They’re a happy side effect. They’re not like the compulsory thing. I think again, this is a question of efficiency. What we see with a lot of these frameworks, or some of these low-code things, is they make such good demos, but then as soon as you want to go off the golden path, off the happy path, you spend so long fighting it that you lose any gain that you maybe had from that initial thing. Really, we’ve tried to optimize for real use, not just things that look slick in demos.
The Common Factor Behind Performance Improvements
I’ve talked about a few of the things that we do, but there’s a lot of them. When I was preparing this talk, I was trying to think, is there some common factor that I can pull out? I started thinking about it. This is my colleague, Sanne Grinovero. He was really sort of developer zero on Quarkus. He did the work with Hibernate to allow Hibernate to boot in advance. This is my colleague, Francesco Nigro. He’s our performance engineer, and he does some really impressive performance fixes. This is another colleague, this is Mario Fusco. He’s not actually in the Quarkus team. He tends to do a lot of work on things like Drools, but he’s given us some really big performance fixes too.
For example, with Quarkus and Loom, so virtual threads, we had really early support for virtual threads back when it was a tech preview. What we found was that virtual threads, you hope that it’s going to be like a magic go faster switch, and it is not, for a number of reasons. One of the reasons is that some libraries interact really badly with virtual threads, and so some libraries will do things like pinning the carrier thread. When that happens, everything grinds to a halt. Jackson had that behavior. Mario contributed some PRs to Jackson that allowed that problem in Jackson to be solved, so that Jackson would work well with virtual threads.
I was looking and I was like, what is that common factor? What is it? I realized they’re Italian. This is a classic example of confirmation bias. I decided the key to our performance was being Italian. Without even realizing it, I looked for the Italians who’d done good performance work. When we do a Quarkus release, we give out a T-shirt that says, I made Quarkus. On the most recent release, we gave out 900 T-shirts. There’s a lot of contributors. A lot of people have done really cool engineering on Quarkus, only some of them were Italian. You don’t have to be Italian to be good at performance, in case anybody is feeling anxious. The title of this talk is Italian graft, and so being Italian is optional, but the graft part is not. This stuff is work. When you’re doing that kind of performance optimization, you have to be guided by the data, and you have to do a lot of graft. You measure, because you don’t want to do anything without measuring.
Then you find some tiny improvement, and you shave it off. Then you measure and you find some tiny improvement, and you shave a little bit of time off. You measure, and then you find some tiny improvement. This was very much what we saw in this morning’s talk as well. It was in C rather than Java, but it was the same thing. If I’m going to profile, then I’m going to find some tiny optimization that I’m going to do. You keep going and you keep going. It’s not easy, so it needs a lot of skill, and it also needs a lot of hard work. I mentioned Francesco, our performance engineer, and he really is like a dog with a bone. When he sees a problem, he’ll just go and go. I think a lot of the rest of us would have gone, “Ooh”, and he just keeps going. He has this idea that what he offers to the team is obsession as a service. You need people like that.
I want to give one example. We run the TechEmpower benchmarks, and what we found was we were behaving really unexpectedly badly when there was a large number of cores. With a small number of cores, our flame graph looked as we hoped. When it was a lot of cores, all of a sudden, our flame graph had this really weird shape, and there was this flat bit, and we’re like, what’s going on there? Why is no work happening in this section of the flame graph? Again, many people would have gone, what a shame. To find out, Francesco and Andrew Haley, another colleague, they read 20,000 lines of assembler. What they found was worth it. They found the pattern that was causing the scalability problem, and the pattern was checking if something is an instanceof.
At this point, hopefully some of you are screaming as well and going, I think there’s a lot of that. That’s not a weird, obscure pattern, that is a very common pattern. Once Franz had found the problematic pattern, he started to look at what other libraries might be affected. We found Quarkus was affected. Netty was affected. Hibernate was affected. Camel was affected. The Java Class library was affected. This was a really big, really bad bug. He found actually that there was an existing bug, but nobody had really realized the impact of it. I think this is partly because it happens when you’ve got like 32 cores, when you’ve got like 64 cores. We’re now much more often running at that kind of scale. It’s a cache pollution problem.
The problem is, when you do this check, the cache that is used for this check is shared across all of the cores. If you’ve got a lot of code running in parallel, basically the cache just keeps getting corrupted, and then you just keep having to redo the work. This was a bad problem. This was not like saving 2%. This is one of the TechEmpower benchmarks, and this was running before the fix and running after the fix. You can see we went from 1.8 million requests per second to 5.8 million requests per second. That’s just a small benchmark, but it was a huge improvement.
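Very roughly, the problematic shape is an instanceof check against interface types on a hot path. The sketch below is illustrative only, not the actual Netty or Hibernate code, and a single-threaded run will not reproduce the contention:

```java
// Illustrative only: repeated instanceof checks against interface types on a
// hot path. The JVM caches the result of such checks per class, and because
// that cache is shared across cores, heavy parallel use can keep invalidating it.
interface Encoder {}
interface Decoder {}
class Codec implements Encoder, Decoder {}

public class TypeCheckHotPath {
    static int classify(Object o) {
        int score = 0;
        // Checking the same object's class against different interfaces is
        // the shape that stressed the shared cache described above.
        if (o instanceof Encoder) score += 1;
        if (o instanceof Decoder) score += 2;
        return score;
    }

    public static void main(String[] args) {
        Object codec = new Codec();
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += classify(codec);
        System.out.println(sum);
    }
}
```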
What we did was, Franz wrote a little tool, because not every instanceof call is problematic. It depends on various factors. He wrote a tool that would go through and detect the problematic case. We ran it through the whole code base, and we started doing the fixes. It’s very sad, because this is fixed in the JVM now, but only on the sort of head, so people won’t get the benefit of the fix for quite a while. We had code that was, for example, like this. Then after the fix, you can see we had to do all of this stuff.
Again, you don’t need to necessarily read the code, but you can just see that the throughput is a lot higher, but the code is a lot longer, so it’s again exactly the same as Alan’s talk. You have this tradeoff. I love it for this one, because the developer did the PR and then they basically apologized for the code that they’re doing in the PR. I’m not a fan of the fix. It’s not idiomatic. It’s difficult to maintain, but it gives us so much more throughput that we have to do it. Again, it’s that tradeoff of machine efficiency against human efficiency. Only in this case, it’s not everybody else’s efficiency, it’s just my team’s efficiency. This is what Anne was talking about when she said, you really want your platform to be doing the hard, grotty, nasty work so that you can have the delightful performance experience. We do the nasty fixes so that hopefully other people don’t have to.
Another thing to note about efficiency is it’s not a one-time activity. It’s not like you can have the big bang, and you can go, yes, we halved the throughput, or halved the cost. Life happens, and these things just tend to backslide. A while ago, Matt Raible was doing some benchmarking, and he said, this version of Quarkus is much slower than the previous version. We thought, that’s odd. That’s the opposite of what we hoped would happen. Then we said, “Are we measuring our performance?” Yes. “Don’t we look to see if we’re getting better or worse?” Yes. “What happened?” What it is, is, if you look at that bit of the chart, is the performance getting better or worse here? It looks like the performance is getting much better. If you look at it over the longer picture, you can see that actually it’s probably getting a little bit worse. Because we had this really big regression that masked a series of smaller regressions.
We had a change detection algorithm that was parametric, and it meant that we missed this regression. We did the work and we fixed it, and we fixed a lot. It was very cool. That was another engineer who was not Italian, called Roberto Cortez. One of the things that Roberto did, which just makes me smile, is, again, it’s about the assumptions. We do a lot of string comparison in config. Config tends to be names based, and so the way any normal human being would do a string comparison is you start at the first character, and then you go. The interesting bit is always at the end. Roberto worked out, if I go from the other end, the config is much faster. I would recommend you all to have a Francesco, to have a performance engineer. You can’t have Francesco, he’s ours, but you need to find your own. It does need investment.
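The reversed comparison is easy to sketch; configuration property names tend to share long prefixes, so starting from the last character finds a mismatch sooner. This is illustrative only, not Quarkus’ actual implementation:

```java
// Illustrative sketch: compare two strings starting from the last character,
// which pays off when names share long common prefixes.
public class ReverseEquals {
    static boolean equalsFromEnd(String a, String b) {
        if (a.length() != b.length()) {
            return false;
        }
        for (int i = a.length() - 1; i >= 0; i--) {
            if (a.charAt(i) != b.charAt(i)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Same length, shared prefix: scanning backwards finds the difference
        // at the second character checked instead of the fourteenth.
        System.out.println(equalsFromEnd("quarkus.http.port", "quarkus.http.host")); // false
    }
}
```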
I’ve got one last tradeoff I want to talk about. This is the efficient languages track, but we really do have a green focus here. There’s this classic tradeoff with sustainability between doing the stuff that we want to do and saving the planet. In general, historically, we have always tended to do the stuff we want to do rather than save the planet. I think there is some hope here. I’ve started talking about something called the vrroooom model. Naming is the hardest problem in computer science, because I didn’t think to actually do a Google before I did the name. It turns out there is a vroom model, which is a decision model. That’s with a slightly different spelling than I did. I did 3r’s and 2o’s and stuff, which was another terrible mistake.
If you Google, vrroooom, it thinks you want to do it with the conventional spelling, but then it says, but would you like to search instead for the vrroooom model with the idiosyncratic spelling? If you click on that, what do you think happens? The hope is that you get my stuff. The reality is rather different. Everything here, it’s all about cars, and hot babes. That is what you get if you search for the vrroooom model. Even you can see there, that’s a Tesla advert. It says sexy above it. It’s all about sexy cars. Naming, hardest problem in computer science. I should have thought about that.
My vrroooom model, the one that doesn’t involve sexy cars, I really started thinking about this when I looked at the paper. We were talking about this before, and Chris said, you know that stupid paper that compares the programming languages, and there’s a lot of problems with this paper. What I want to show you is not the details of it, but something that I noticed, which is, it has a column for energy and it has a column for time, and they look kind of almost the same.
If you plot it, you can confirm that this trend line is basically straight. It means languages that go fast have a low carbon footprint. We see this with Quarkus. With Quarkus on this graph, we benchmarked the energy consumption of Quarkus native, Quarkus on JVM, the other framework on JVM, the other framework on native. What we did was we had a single instance, and we just fired load at it until it ran out of throughput. The shorter lines are where it ran out of throughput earlier. Lower is better. Lower is the lower carbon footprint. You can see that there’s, again, this really strong correlation. Quarkus on JVM has the lowest carbon footprint of any of these options because it has the highest throughput. It’s the win-win again, that you get to have the really fast language and have the nice programming model and also save the world. We beat the tradeoff.
I just love this that instead of having this opposition between machine efficiency and human efficiency, the one helps us gain the other. If you start with efficient languages, you really need to consider both machine efficiency and human efficiency. When you’re looking at your machine efficiency, you need to challenge your assumptions. Only do work once, obviously. Move work to where it hurts the least. Index. Indexes are so cheap, they’re so good, they solve so many problems. Unfortunately, this isn’t a one-off activity. You do need that continued investment in efficiency. Then, when you look at your human efficiency again, same thing, you need to challenge your assumptions. You need to get those feedback loops as small as you can. Don’t make people tell the computer what the computer already knows, because that’s a waste of everybody’s time.
Hugging Face has integrated four serverless inference providers, Fal, Replicate, SambaNova, and Together AI, directly into its model pages. These providers are also integrated into Hugging Face’s client SDKs for JavaScript and Python, allowing users to run inference on various models with minimal setup.
This update enables users to select their preferred inference provider, either by using their own API keys for direct access or by routing requests through Hugging Face. The integration supports different models, including DeepSeek-R1, and provides a unified interface for managing inference across providers.
Developers can access these services through the website UI, SDKs, or direct HTTP calls. The integration allows seamless switching between providers by modifying the provider name in the API call while keeping the rest of the implementation unchanged. Hugging Face also offers a routing proxy for OpenAI-compatible APIs.
Rodrigo Liang, co-founder & CEO at SambaNova, stated:
We are excited to be partnering with Hugging Face to accelerate its Inference API. Hugging Face developers now have access to much faster inference speeds on a wide range of the best open source models.
And Zeke Sikelianos, founding designer at Replicate, added:
Hugging Face is the de facto home of open-source model weights, and has been a key player in making AI more accessible to the world. We use Hugging Face internally at Replicate as our weights registry of choice, and we’re honored to be among the first inference providers to be featured in this launch.
Fast and accurate AI inference is essential for many applications, especially as demand for more tokens increases with test-time compute and Agentic AI. Open-source models help optimize performance on SambaNova’s RDU (Reconfigurable Dataflow Unit), enabling developers to achieve up to 10x faster inference with improved accuracy.
Billing is handled by the inference provider if a user supplies their own API key. If requests are routed through Hugging Face, charges are applied at standard provider rates with no additional markup.
About the Author
Daniel Dominguez
Block’s Open Source Program Office has launched Codename Goose, an open-source, non-commercial AI agent framework designed to automate tasks and integrate seamlessly with existing tools. Goose provides users with a flexible, on-machine AI assistant that can be customized through extensions, enabling developers and other professionals to enhance their productivity.
Goose is designed to integrate seamlessly with existing developer tools through extensions, which function using the Model Context Protocol (MCP). This enables users to connect with widely used platforms such as GitHub, Google Drive, and JetBrains IDEs while also allowing them to create custom integrations. The AI agent is positioned as a tool for both software engineers and other professionals looking to optimize their workflows.
Goose functions as an autonomous AI agent that can carry out complex tasks by coordinating various built-in capabilities. Users can integrate their preferred LLM providers, ensuring flexibility in how the tool is deployed. Goose is designed for easy adaptation, allowing developers to work with AI models in a way that fits their existing workflows.
The agent supports a range of engineering-related tasks, including:
Code migrations
Generating unit tests for software projects
Scaffolding APIs for data retention
Managing feature flags within applications
Automating performance benchmarking for build commands
Increasing test coverage above specific thresholds
As an open-source initiative, Goose has already attracted attention from industry professionals. Antonio Song, a contributor to the project, highlighted the importance of user interaction in AI tools:
Most of us will have little to no opportunity to impact AI model development itself. However, the interface through which users interact with the AI model is what truly drives users to return and find value.
Goose takes flight. Open-source AI agents are no longer a side project—they are defining the future. Codename Goose 1.0 signals a paradigm shift: decentralized, non-commercial AI frameworks bridging intelligence and real-world execution. The AI race has been dominated by centralized models with restricted access. Goose challenges that by enabling modular AI agents that can install, execute, edit, and test with any LLM, not just a select few.
Goose is expected to evolve further as more contributors refine its capabilities. The tool’s extensibility and focus on usability suggest it could become a widely adopted resource in both engineering and non-engineering contexts.
About the Author
Robert Krzaczyński