How Developers Can Eliminate Software Waste and Reduce Climate Impact

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

High performance and sustainability correlate; making software go faster by improving the efficiency of algorithms can reduce energy requirements, Holly Cummins said at QCon London. She suggested switching systems off when not in use to reduce the environmental footprint. Developers can achieve more by doing less, improving productivity, she said.

A high performance sustainable system should have a low memory footprint, high throughput, avoid excessive networking, and support elastic scaling. These are characteristics we already want for our software, Cummins said.

Building hardware has an environmental impact, both in terms of raw materials, and embodied carbon from the energy required, Cummins said. When the hardware reaches the end of its life, it ends up in landfills. E-waste, or electronic waste, takes up space, and puts non-renewable resources like copper, platinum, and cobalt out of circulation.

E-waste can pose health hazards to the people doing the recycling, so the best way to solve the problem is to just generate less of it, Cummins suggested.

Often, software has obsolete assumptions baked into its design. If we can identify those assumptions and update the design, we can improve performance, reduce latency, reduce costs, and save energy, Cummins explained:

Many Java frameworks make heavy use of reflection, which allows the behaviour to be updated dynamically. But for modern applications, that dynamism requirement isn’t there anymore. We don’t swap out application components at deploy-time or run-time. Applications are often deployed in containers, or the complete deployment package is generated by a CI/CD run.
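As an illustrative sketch (the class and method names here are invented, not taken from any framework), the reflective form of a call is exactly what build-time frameworks try to eliminate:

```java
import java.lang.reflect.Method;

// Sketch: the same call made directly and via reflection.
// Frameworks that resolve wiring at build time can emit the direct form,
// avoiding reflection metadata that must be retained and scanned at run time.
public class ReflectionCost {
    public static String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) throws Exception {
        // Direct call: resolvable at compile/build time, friendly to dead-code elimination.
        String direct = greet("world");

        // Reflective call: class and method looked up at run time,
        // which forces metadata retention and defeats ahead-of-time optimisation.
        Method m = ReflectionCost.class.getMethod("greet", String.class);
        String reflective = (String) m.invoke(null, "world");

        // Same result, very different cost profile.
        System.out.println(direct.equals(reflective));
    }
}
```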

To reduce the environmental footprint, Cummins suggested switching systems off when they’re not in use. Many organisations will run a batch job only at the weekend, but keep the system that does the job up all week. Or they’ll keep staging systems running overnight, when no one is using them:

People are nervous about doing this because we’ve been burned in the past by systems that never behaved correctly again after being turned off. This fear of turning things off is kind of unique to computer systems. Nobody goes out of a room leaving the light on because it’s too risky and complicated to turn the light back on.

Boilerplate code, code that’s pretty much the same in every application, is a sign that the API design, or maybe even the language design, isn’t quite right. It’s a waste of time for developers to write code that isn’t really adding differentiated meaning, Cummins explained:

The solution to boilerplate is not to get AI to write the boilerplate; the solution is to design more expressive APIs.
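As a hedged illustration of that point in Java (the types here are invented for the example), compare a hand-written value class with the record the language now provides; the record is the more expressive API, so there is no repeated code to write in the first place:

```java
// The same value type written twice: once as classic boilerplate,
// once as a Java record (Java 16+).
public class ExpressiveApi {
    // Boilerplate version: every application repeats this pattern.
    public static final class PointClassic {
        private final int x;
        private final int y;
        public PointClassic(int x, int y) { this.x = x; this.y = y; }
        public int x() { return x; }
        public int y() { return y; }
        @Override public boolean equals(Object o) {
            return o instanceof PointClassic p && p.x == x && p.y == y;
        }
        @Override public int hashCode() { return 31 * x + y; }
    }

    // Expressive version: constructor, accessors, equals/hashCode for free.
    public record Point(int x, int y) {}

    public static void main(String[] args) {
        // Both behave the same; only one required writing boilerplate.
        System.out.println(new Point(1, 2).equals(new Point(1, 2)));
    }
}
```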

Cummins mentioned that there’s evidence that we can achieve more per working hour, and more overall, if we work less:

Henry Ford moved his factories from a 48-hour, six-day working week to a 40-hour, five-day working week, after his research showed that the longer hours did not result in more output. More recently, a study found that companies experimenting with four-day weeks report 42% less attrition, which makes sense, and a 36% increase in revenue, which is perhaps more surprising.

At an individual level, studies find that switching off can improve productivity, Cummins said. One mechanism for this is the default mode network, an area of the brain that becomes more active when we’re not doing anything. The default mode network is involved in problem solving and creativity, which is why so many of us have great ideas in the shower, she said.

Cummins mentioned the Jevons paradox, which says that increasing capacity increases demand. This is why widening highways doesn’t reduce travel times: more cars use the new, wider road, and so traffic jams still happen. We can take advantage by leveraging an inverse Jevons manoeuvre. If we work shorter hours, the demands on our time become lower, and we can still achieve important things, she concluded.

InfoQ interviewed Holly Cummins about eliminating software waste and reducing the environmental footprint.

InfoQ: How can we eliminate software waste?

Holly Cummins: Dynamism has a cost; many Java applications are paying the dynamism tax without getting a benefit. Quarkus fixes this by providing a framework which allows libraries to do more up-front, at build time. That shift to build time yields applications with a smaller memory footprint that run much faster.

Also, smaller, fine-tuned, generative AI models can sometimes give better results than big models, for a lower cost, and with lower latency. Or for more complex problems, linking a few smaller models together with an orchestration model can work great. It’s challenging the assumption that bigger is always better.

InfoQ: How can we build systems with a smaller environmental footprint?

Cummins: We should design systems to have a light-switch-like ease of turning them on and off. That means idempotency, resiliency, and infrastructure as code. That’s more or less what you need anyway if you’re designing cloud native systems. Once the systems support it, we can automate turning systems off when they’re not needed. I call the two of these together LightSwitchOps.

Just turning things off can generate pretty huge savings. For example, a Belgian school saved €12,000 a year with a script to shut computers off overnight, and a US company reduced their AWS bill by 30% by stopping instances out of working hours. Scripts don’t need to be home-rolled, either. For a more interactive solution, Daily Clean gives a nice UI for setting power schedules.
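The core decision inside such a script is small. The sketch below is illustrative only: the `LightSwitch` class and its weekday 07:00–19:00 window are assumptions, and a real script would read its schedule from configuration and call the cloud provider’s stop API with the result:

```java
import java.time.DayOfWeek;
import java.time.LocalDateTime;

// Minimal sketch of the decision at the heart of a LightSwitchOps script:
// is this (non-production) system inside its working window right now?
public class LightSwitch {
    static final int START_HOUR = 7;   // assumed working window
    static final int END_HOUR = 19;

    public static boolean shouldBeRunning(LocalDateTime now) {
        DayOfWeek day = now.getDayOfWeek();
        boolean weekday = day != DayOfWeek.SATURDAY && day != DayOfWeek.SUNDAY;
        boolean workingHours = now.getHour() >= START_HOUR && now.getHour() < END_HOUR;
        return weekday && workingHours;
    }

    public static void main(String[] args) {
        // Monday 10:00 is inside the window; Sunday 23:00 is not.
        System.out.println(shouldBeRunning(LocalDateTime.of(2025, 1, 6, 10, 0)));
        System.out.println(shouldBeRunning(LocalDateTime.of(2025, 1, 5, 23, 0)));
    }
}
```

Everything else in the script is plumbing: enumerate the stoppable systems, and start or stop each one to match what `shouldBeRunning` says.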

Podcast: InfoQ Architecture and Design Trends in 2025

MMS Founder
MMS Eran Stiller Daniel Bryant Sarah Wells Thomas Betts

Article originally posted on InfoQ. Visit InfoQ

Transcript

Thomas Betts: Hello and welcome to the InfoQ podcast. I’m Thomas Betts, and today we will be discussing the current trends in software architecture and design for 2025. Among the regular features on InfoQ are the trends reports, and each one focuses on a different aspect of software development. These reports provide InfoQ readers with a high-level overview of the topics you want to pay attention to. We look at the innovator and early adopter trends to see what they’re doing and whether it might be appropriate for you to adopt in the next year or so.

Now this conversation today is just one half of the trends report. On infoq.com, you can find the trends graph and the written report, which has a lot more information about topics we might not be able to go into detail today in this conversation. Joining me today are a panel of InfoQ editors and QCon program committee members, including Eran Stiller, Daniel Bryant and Sarah Wells. So let’s go around and do quick introductions, starting with Eran.

Eran Stiller: Hi everyone. My name is Eran Stiller. I’m a chief architect at Cartesian, which is a small startup, six people startup, which is always exciting. I’m also an InfoQ editor for the past, I don’t know how many years. And I think as every year, I always wait for this trends report. I think it’s one of the fun things they do every year. So yes, can’t wait to start.

Thomas Betts: Daniel?

Daniel Bryant: Hi everyone, Daniel Bryant. I work at Syntasso on platform engineering. Long career in development and architecture as well. And again, I’m very lucky these days I get to work with a lot of architects in my day job. Super excited to be writing for InfoQ as well, doing news management and helping out at QCon whenever I can as well.

Thomas Betts: And finally, Sarah.

Sarah Wells: Hi. Yes, I’m Sarah. I am chairing the QCon London program committee this year. I am an independent consultant, an author with O’Reilly, and my background is as a tech director and principal engineer in content API, microservices, DevOps, platform engineering, a whole bunch of areas.

Thomas Betts: A little bit of everything.

Sarah Wells: Yes.

Thomas Betts: And myself, like I said, I’m Thomas Betts. My day job, I’m a solution architect for Blackbaud, which is a software provider for Social Good. I’ve been doing this software development thing for longer than I care to admit and been an InfoQ editor for almost a decade. So like Eran said, this is one of my highlights of the year to be able to talk about these things and write up the report.

AI for software architects – LLMs, RAG, and agentic AI [02:43]

Thomas Betts: So I think to get started, it’s 2025. I think we have to call out the 800 pound artificial gorilla in the room, AI. Now there’s a separate trends report on InfoQ that’s dedicated to AI, ML and data engineering. I highly recommend it. It’s great for getting into those details of what’s underneath that AI umbrella, but this is the architecture and design trends discussion.

So I want to focus on things that architects care about and we can’t ignore AI anymore. So what are the big concepts that are bubbling up and we have to integrate into our systems? What are the trade-offs that architects need to consider regarding solutions? I know a year ago the trends report, we called out LLMs, were just brand new, but within a year they were already early adopter technology, but a lot of those cases were glorified chat bots. So Eran, I’m going to start with you. What’s changed in the past year? What are we seeing, the innovation that architects had to respond to?

Eran Stiller: Yes, I think that the landscape is just exploding. Like, in terms of architects, the amount of things that we need to know now that we didn’t need to know, I don’t know, two years ago, it’s just mind-blowing. For example, whenever I look for articles for InfoQ, like to cover in InfoQ, the amount of AI topics that I go through that are directly architecture related is amazing. You mentioned LLMs, which is the big elephant in the room. We’ve all been using it, but now it has all kinds of these satellite technologies around it that we also need to be aware of. And sometimes we use them as a service, we don’t care how it’s built, we just have an API, we just use it and that’s it. And that’s great. As an architect, you don’t need to know everything, but you need to know it exists. And sometimes you need to implement it yourself.

So some examples include RAG, retrieval-augmented generation, which is a simple way to enrich the context for an LLM. You want to use an LLM, you don’t want to train the model yourself, but you want to fine-tune it to your own needs. For example, you’re in an organization, you have a bunch of documents, or maybe you’re an architect, you have a bunch of ADRs, you want to ask questions about them, and you want to know why was something done two years ago, three years ago and who did it. So you can use RAG to enrich that context. Again, as I said, we’re not going to go into detail of how it’s done, but when I look at various articles that authors at big companies write, they often use it to achieve various things in their systems. So RAG is one.
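The retrieval step Stiller describes can be sketched as follows. This is a toy illustration: real RAG systems score documents with embedding vectors produced by a model, whereas here a simple word-overlap score stands in so the top-k mechanics stay visible. The ADR titles are invented:

```java
import java.util.*;

// Sketch of the retrieval step in RAG: score stored documents against
// a query and keep the top-k to enrich the LLM prompt.
public class RagRetrieval {
    // Toy relevance score: number of shared lowercase words.
    public static double overlap(String query, String doc) {
        Set<String> q = new HashSet<>(Arrays.asList(query.toLowerCase().split("\\s+")));
        Set<String> d = new HashSet<>(Arrays.asList(doc.toLowerCase().split("\\s+")));
        q.retainAll(d);
        return q.size();
    }

    // Return the k best-matching documents, highest score first.
    public static List<String> topK(String query, List<String> docs, int k) {
        return docs.stream()
                .sorted(Comparator.comparingDouble((String d) -> overlap(query, d)).reversed())
                .limit(k)
                .toList();
    }

    public static void main(String[] args) {
        List<String> adrs = List.of(
                "ADR-12: adopt event sourcing for payments",
                "ADR-7: choose PostgreSQL over MySQL",
                "ADR-3: retire the monolith login service");
        // The retrieved context would be prepended to the LLM prompt.
        System.out.println(topK("why did we adopt event sourcing", adrs, 1));
    }
}
```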

We’ve also recently started seeing growth in systems that are called Agentic AI. Before that it was called AI Agents. I think they changed the name to Agentic AI. Again, in a nutshell, basically what it means is you give AI a bunch of tools that it can use. So for example, it can go search the web, it can go and call some API call. And you give them, the AI, some description of these tools, what they can do, how to use them. There’s a protocol around it which Anthropic introduced. It’s called MCP, like a model context protocol, if I remember. I hope I’m not wrong in the acronym. And basically, it makes things more interoperable. It can use more stuff around it. So again, a lot of acronyms, a lot of things you need to know. As an architect, you don’t necessarily need to know the nitty-gritty of all of it, but you definitely need to know what you’re doing and you don’t want to be caught off guard.
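The tools-plus-descriptions idea can be sketched as a registry. Everything here is illustrative: the tool names are invented, the dispatch is hand-rolled, and in a real agent loop the LLM (via a protocol such as MCP) is what chooses the tool and its arguments:

```java
import java.util.*;
import java.util.function.Function;

// Sketch of the core agent/tool idea: the model is given named tools
// plus human-readable descriptions of what they do and how to call them.
public class ToolRegistry {
    record Tool(String description, Function<String, String> run) {}

    private final Map<String, Tool> tools = new HashMap<>();

    public void register(String name, String description, Function<String, String> run) {
        tools.put(name, new Tool(description, run));
    }

    // In a real agent loop, the tool name and argument come from the LLM,
    // which has read the registered descriptions.
    public String call(String name, String arg) {
        Tool t = tools.get(name);
        if (t == null) throw new IllegalArgumentException("unknown tool: " + name);
        return t.run().apply(arg);
    }

    public static void main(String[] args) {
        ToolRegistry registry = new ToolRegistry();
        registry.register("search_web", "Search the web for a query",
                q -> "results for: " + q);
        registry.register("get_weather", "Look up today's weather for a city",
                city -> "sunny in " + city);
        System.out.println(registry.call("get_weather", "London"));
    }
}
```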

Thomas Betts: Yes, what is the shape of the box on my diagram? Like, “Oh, here’s an LLM. Well, what is it used for? When would I use it?” And I’ve seen a lot of people thinking you can use an LLM for everything, I can put AI in anywhere. And they don’t realize, just like any software tool, there are trade-offs, right? These are non-deterministic software. If you wanted to have the ability to calculate numbers exactly, it’s not good at that. It’s great at predicting the next word, but it’s not great at doing simple math, so don’t ask it to do that. Don’t think you can replace all of your software with a bunch of AI boxes. I think the Agentic is one of those things where it goes one level higher, and this is where architects have to start thinking about, “Okay, so do I have different components in my system that I can have parts of AI operating together?” Daniel, have you seen stuff like that too?

Daniel Bryant: Yes, I mean I think the interesting thing, to me, in what you were saying there, Eran, is the MCP stuff. Because my previous space was very much focused on APIs. I think the angle I’d look at it from, what I’m most interested in, is as an architect thinking about my responsibilities in relation to the API. So, I mean, it applies for whatever you’re sending over the wire to any third-party AI, but to your point, people are almost putting everything into LLMs. Before you know it you’ve got PII, personally identifiable information, private information, stuff that shouldn’t be going over the wire.

Now we as architects need to have a handle on that, right? What’s being sent, how’s it being processed? In the UK and EU we have GDPR, but there are many laws like it around the world. But I think that’s what I’m most interested in at the moment: as we’ve all said, these tools have taken the world by storm and we as architects are almost not getting time to catch up on some of the impacts and choices we are making around security, observability, extensibility, maintainability. All the other things we love as architects are kind of being thrown a little bit out of the window. So I’m encouraging folks to think good design principles, think good data hygiene, think good API design. That is really important regardless of what you’re calling into, but very much so with LLMs and these AI services.

Sarah Wells: I think that’s a really interesting challenge for architects, because every business wants to tell a story about using AI. So there’s a lot of pressure to get something out there really quickly. So you want to put in all of the -ilities, but how do you push back on someone saying, “Well, no, but I need to have it. I need to be able to tell this story next week”. So I think it’s always true that you’re balancing those as an architect, but there’s a lot of pressure on it right now.

I was talking to Hannah Foxwell earlier today. She’s going to give a talk about Agentic AI at QCon London, the closing keynote. And she mentioned just in passing about thinking of the agents like they’re microservices, because when you’ve got different agents that have different skills, so they have different things they can do, you’re composing them together to deliver some level of functionality. And I think that seems to me a really useful way for architects to think about this. How are you going to put multiple boxes together and the things around that to check whether you are actually just passing rubbish information on between agents.

Thomas Betts: Right, right. And then it goes back to some of the things Daniel was mentioning. You talked about the security aspect of it. You don’t want to send the LLM all of your personal PII data and all your customer data. How do you scrub that down so you send enough information that’s useful? And when you think about it, like microservices, I like that analogy because a well-designed microservice architecture, you have very clearly defined context boundaries, right? Like here’s the contract of how you talk to me, and here’s what I share with you. I may have a lot of private information stored in my database that I’m not going to share with you, but here’s what is in that public contract. So the thing about the agents, in that same way, really seems to mesh with my idea of how to design a system into reusable elements.

Eran Stiller: Or as an architect you can decide, “Well, I really do need to send that private information”. And then you can think, “Okay, maybe I’ll run that LLM locally”. And then you get into the big, entire world of open source LLMs and how do you run them? It goes also into the platform architecture. How do we scale them? How much would it cost to run all these LLMs? Maybe something about, “Okay, maybe..”. We haven’t mentioned, there’s also something that’s called small language models. “Maybe I’ll run this fine-tuned small thing that can do what I want”, but you really need to understand all these things in order to make these decisions.

Thomas Betts: We have a trend, I think it’s on the early adopters, the edge computing. Architects are thinking about how much more code can I put closer to the user, because then there’s no latency going over the wire. And if I can put the large language model, well that’s too big, but I can put a small language model and it can do enough, or I can make it specially trained for that use case, that’s going to run in that environment. This gets back to, we now label everything with AI and a lot of times it means LLMs, but some of this is just core machine learning. And like I said, we’re not going to go into the details, but this is… Create a small model. Maybe it’s a language model, maybe it’s some other thing. So those ideas have been around for a while. We’ve been able to distribute smaller models before, it was just when the large language models became big a few years ago that they became enormous, right?

This wasn’t something you could run on your machine. So how do we get to the other side of the teeter totter and move the scales? So the last thing I wanted to talk about with AI was the role of the architect in an AI-assisted world. Because we’ve had this topic on the trends report, in various forms, for the last, I don’t know, five, six, seven years at least. The architect is a technical leader. There are socio-technical considerations architects need to worry about. I’m not saying I need to bring the AI into my socio-technical aspect, but maybe I do. Are there things that I can design with Conway’s law around a team? And then well, is it a team that’s made up of AI programmers or software engineers who are augmented by more AI? Can I make my architecture decisions better because I’m using the AI to search my previous ADRs? Or I can ask it to review an ADR before I go and present it to someone else? Has anyone else run into people doing their job differently because this is just a new tool they have at their disposal?

Eran Stiller: I think I’ve started doing my job differently as an architect because of these tools. For example, as an architect, one of the things you’re naturally expected to do other than architecting the system is mentoring team members, deciding on coding standards, coding guidelines, code quality, and how we do code reviews, stuff like that. And now that I have all these tools, I can actually use them as part of my job to do all these things. So for example, we have various code assistant tools like GitHub Copilot, Cursor, whatever. There are many of them, and I know at least some of them, I don’t know all of them, but at least some of them have this mechanism where you can fine-tune them, you can provide rules on how to help your developers. So for example, specifically in my company, we’re using Cursor and it has a mechanism that’s called Cursor Rules.

And what that means, you can provide the… You can tell it, “Okay, these folders are React apps and these are the best practices we implement when we write React, and these folders do that and these folders do that”. And you can provide as an architect all your guidelines there. And then when your developers use the tool to write code, because I hope they do it because it makes them much faster developers, more efficient developers, then they get all of these code guidelines already built into what they’re writing, which makes my work easier as an architect. So I think definitely there’s lots of places we can use them, we can extend it to how do we do code reviews and so on, but that’s my 2 cents.
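As a rough illustration of the kind of rule file Stiller describes (the path, scoping, and syntax below are assumptions for the sketch; check the Cursor documentation for the exact format your version expects), a rule is essentially plain-language architectural guidance scoped to part of the repository:

```text
# Illustrative project rule file; exact location and syntax vary by Cursor version.
# Scope: src/web/**
- These folders are React apps; use function components and hooks.
- Fetch data only through the shared API client, never with raw fetch().
- Follow the team's naming conventions documented in docs/frontend-style.md.
```

The point is that the architect writes the guidelines once, and the assistant applies them every time a developer generates code in those folders.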

Thomas Betts: Yes, I think we’ve had things like linters and an editor config file that says, “Oh, you need to put your curly braces at the end of the line. No, on the next line and two space tabs”, and whatever.

Eran Stiller: It’s the high level.

Thomas Betts: You can define those things, but they’re very much at the syntax level. Some companies have been able to introduce fitness functions and say, “Okay, this can tell that this module is adhering to our architecture guidelines”, but those are often more squishy. And so I think there’s going to be a place where we might see some AI-based tools able to just analyze the code, not from static analysis like, “Oh, you’re using an out-of-date library”, or “You didn’t follow this”, but does it smell right? Did you do the thing correctly? Did you follow what we’re trying to do? Help me with my pull requests before the person with 10 years of experience weighs in and says, “You know that’s not how we write things. You should pull this out into a separate method”, that type of stuff.

Cell-based architecture for resilience [15:31]

Thomas Betts: So last year we added cell-based architecture as an innovator trend and we’ve seen a lot more companies adopting this as a strategy to improve resiliency, but it also has potentially good cost and performance benefits. Daniel, you want to give us a quick refresher on what’s the idea behind cell-based architecture and when is that pattern appropriate?

Daniel Bryant: Yes, sure thing, Thomas. I mean, it definitely dovetails with microservices, and I’ll doff my cap to Sarah here; you’re the expert in the room on microservices, so we’ll definitely hear from you in a second. But Rafal Gancarz did a fantastic eMag for us at InfoQ talking about cell-based architecture.

For me it is an extension to microservices, and you focus a lot more on the cell as the unit of work here, and you are isolating… It’s almost like a bounded context plus plus. It’s a cell of work and you’ve got very strict guidelines around bulkheads and failure. A lot of companies… We saw great talk, I think it was QCon San Francisco by Slack who used a cell-based architecture and they’d architected these cells, so if one cell breaks, falls over, gets compromised or whatever, it doesn’t take out the whole system. So it’s microservices, but with a much stricter boundary in my mind, and the way you operationally run them is interesting from an architecture standpoint, because they have got to be isolated. But I know Thomas, you’ve done a fair bit, I think, of work with cell-based architecture as well, right?

Thomas Betts: Yes, I mean I haven’t personally implemented them, but the coverage on InfoQ definitely exposed me and we’ve seen a lot more companies writing on their own blogs of, “Here’s how we did our cell-based architecture”. So that’s how we can tell it’s being adopted more. I like that it was, “Okay, we’re going to build all these microservices”, and then things like a service mesh, because I need to do service discovery. We had all these other tools come in, but you can get to the point where if one goes down, then I don’t know how to contain the blast radius and how do you impose logical boundaries around those things? So it’s a little bit of, “I want to be deployed into multiple availability zones, but thinking about that a little bit more and sometimes bringing in the business context, not just the technology side”.

So it’s like, “I want to make sure that customers in the East Coast or Europe or wherever they are are served the closest traffic”. We’ve had those patterns for a while, but when something fails, how do we make sure that that doesn’t blow up the rest of the region or take them down entirely and they’re still able to do their work? I mentioned the cost and performance benefits. Sometimes because you start restricting the boundaries and say “You can only call somebody else in the cell”. You’re not sending that traffic over the wire, over the ocean maybe. So you’re making a lot more closer calls. So sometimes things just get faster because you’ve put these constraints around it. But Sarah, I’d love to hear your ideas, because I think this is core to things that are in your book.
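One way to picture the isolation is the routing layer: each customer is pinned to a single cell, so a failure’s blast radius is that cell’s slice of customers. The sketch below is a minimal, assumed implementation; production routers also handle cell capacity, draining, and migration:

```java
// Sketch of cell routing: consistent assignment of customers to cells.
// If one cell fails, only the customers hashed to it are affected.
public class CellRouter {
    private final int cellCount;

    public CellRouter(int cellCount) { this.cellCount = cellCount; }

    public int cellFor(String customerId) {
        // floorMod keeps the result non-negative for any hashCode value.
        return Math.floorMod(customerId.hashCode(), cellCount);
    }

    public static void main(String[] args) {
        CellRouter router = new CellRouter(4);
        // The same customer always lands in the same cell, so all their
        // calls stay inside one boundary (and often one availability zone).
        System.out.println(router.cellFor("customer-42") == router.cellFor("customer-42"));
    }
}
```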

Sarah Wells: So it’s really interesting because I think I was thinking about these things without knowing cell-based architecture as a term. Certainly when we were doing microservices at the FT, we had a lot of cases where we thought, “Well, how can we simplify what we’re trying to do to make it easy for us to reason about what might happen when things fail?” So for example, we had parts of our architecture where we decided we didn’t want to try… We basically were going to have databases in different regions that didn’t know anything about each other. So that in theory, the same data in both, but we’re not even attempting to double-check other than occasionally counting records, just to simplify what we’re trying to do.

We did the same thing with calls between instances. Well, do we want to call to a different availability zone to a different region? We’re not going to go across regions. So I think providing the context for how you decide the terminology for making those decisions, because people often talk about things like bulkheads, but it’s quite far down the list of things you do in a microservices architecture, because it’s confusing. Everything that you have to consider is confusing. So you’re like, until you have a massive incident, it’s normally not the top of your list. But if you can start off by thinking in terms of, “Well, which of these services hang together? How could we split them up? Where are we separating our data?” I love the idea that you can save money by deciding that you don’t make calls across availability zones. I just think, well, that seems like a really sensible idea.

Monzo Bank’s unorthodox backup strategy [19:47]

Thomas Betts: And I think there’s a parallel discussion here. Eran, you wrote the article about Monzo and their backup strategy, which is sort of related. That was their thinking of, “If something goes down, how do we have a backup?” Because an online bank, “How do we have a backup bank?” But it wasn’t as simple as like, “Oh, we have two cells that are identical”. Can you go into the details of how they thought through that problem? I think it’s an interesting one.

Eran Stiller: An interesting one, and I saw some interesting discussions about this item online after I posted it. I think we should probably have a link to the article somewhere. Basically, what they’ve done is instead of having an identical replica, which is what we usually do, this standby or active-active or whatever it is that we do, they asked, “Okay, in case of an outage, what, from a business perspective, are the most important things that we want to keep alive? What brings us money?” In their case, it was transactions. We want people to be able to use their credit cards and buy things, because if they can’t, it’s going to be bad for the bank obviously. And once they had that list, they architected their, how do you call it, secondary zone or backup deployment to only implement those things.

And that means that the environments are not identical. And that means they brought business insights into the way they implemented the backup strategy. And so what they said, “We have this main environment, it does everything, but we have this small thing here on the side that’s isolated, and that’s very much aligned to the cell-based architecture concept”, even though it’s not a cell-based architecture. But the concepts are the same. “We want to have this isolated, we want to have separate, we don’t want to have common points of failure”. So one of the things they’ve done, for example, is they didn’t take a subset of the services and just deploy it to the secondary environment. They wrote a new set of services, it’s a different code base. And I remember reading it, even when I was reading it first, I said, “But why? It’s such a waste. You’re writing all this code and how are you going to test it? It’s on backup. How do you know it’s even going to work?”

And I actually chatted with them, asked them a bunch of questions about it, and I came to learn and appreciate this approach where it’s a separate code base. So if you have a bug in the main environment, that bug is not going to overflow to the secondary environment. It’s still going to be there. And in regards to how do you test it, you test it in production: every day, it takes some percentage of the traffic and you route it there and you see it works and you compare it at the end. So it’s not only when there’s an outage, you run it all the time just on a small subset of users who get the emergency experience. Fits well with the cell-based architecture theme of, “We want this stuff isolated”.
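The continuous-testing idea Stiller describes can be sketched as a simple traffic split. All names and percentages below are illustrative, not Monzo’s implementation:

```java
import java.util.function.Function;

// Sketch of continuously exercising a backup environment: a small,
// fixed fraction of requests is routed to the secondary stack, so the
// backup is proven to work every day, not just during an outage.
public class BackupCanary {
    public static String handle(String request,
                                int requestId,
                                int backupPercent,
                                Function<String, String> primary,
                                Function<String, String> backup) {
        if (requestId % 100 < backupPercent) {
            // This user gets the "emergency experience" today.
            return backup.apply(request);
        }
        return primary.apply(request);
    }

    public static void main(String[] args) {
        Function<String, String> primary = r -> "primary:" + r;
        Function<String, String> backup = r -> "backup:" + r;
        // With backupPercent = 1, roughly 1% of traffic hits the backup stack.
        System.out.println(handle("authorise card payment", 0, 1, primary, backup));
        System.out.println(handle("authorise card payment", 50, 1, primary, backup));
    }
}
```

Comparing the two stacks’ answers for the mirrored slice is what gives confidence the backup will actually work when it is needed.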

Thomas Betts: And it goes back to what are the -ilities that the architects care about? In this case resiliency, the ability to survive an outage, whether that’s code that didn’t run or the availability zone goes down. Like, “We need to be able to still run the business. How do we solve that?” And it’s clearly a case of one size does not fit all. Even the cell-based architecture is not a, “I can just press this button and boom, I have a cell-based architecture”. You have to think about it. In Monzo’s case, they made a lot of decisions to implement things differently. Like you said, a different code base that seems like a waste actually provides extra reliability.

Privacy engineering and sustainability [23:20]

Thomas Betts: So those types of trade-offs show up everywhere. And I think it’s just one of those things as an architect you have to consider, “Is there some different way we want to solve this?” And that’s what I love hearing about these innovative stories. I would not have built a separate smaller backup bank, but I can definitely see the benefit of it. I think the other thing we’ve seen about, not just thinking about reliability, but some of the new trends we had were privacy engineering, and sustainability and green software came onto the list last year. Sarah, you made an interesting comment before we started recording, that we need to have a shift left mindset around these things, about the adoption. Can you tell us more about what you were thinking of when you said that?

Sarah Wells: Well, I just thought that this is the architecture equivalent of all the shift left things that were happening in software engineering where people were involving testing earlier, involving security earlier. It feels like it’s the same pattern of, “If you consider this early, it’s less costly to do it. You can build it in from the beginning”. So if you think about you’re building your architecture so that you are considering privacy, so you’re thinking carefully about where you’re putting that data. And I think there are lots of places where people suddenly go, “I have no idea where I’m exposed and where private data may be stored”. And I think with sustainability and making sure you’re not incurring too many costs, often you only look at that after you realize your bill is really high. But if you think about it early, it’s great. But I think you might face some of the same challenges, because you have to convince people that you should spend more time investing in the thing that the moment isn’t critical.

So you're starting to think about how to build this so it won't cost a lot, at the point where it isn't costing a lot because you haven't built that much of the system. But it's kind of tough, in the same way that it can be hard to persuade people to invest a lot of time in thinking about security when they haven't yet built a thing that they know they can sell. So I thought that was just an interesting pattern, and I wondered what else might move earlier in the architectural thinking, because obviously with shift-left in engineering, everything gradually just moved left.

Thomas Betts: The left keeps moving, right? The goalposts keep going. You just have to keep going more and more left, because who’s doing this. Daniel, you were nodding along. I know this is some of the platform thinking. I think that we’re going to get to that a little bit more, but how am I going to deploy it? How do I secure it? How do I make it sustainable or use less energy or less carbon? All those things come into play at some point. Are you seeing in your day job, or in stuff on InfoQ, people talking about these trends earlier than they were before?

Daniel Bryant: Yes, for sure. And to Sarah's point, I definitely see a split, because I'm very lucky, I get to work with startups and then big banks and other enterprises, as we call them, as well. And the startups are fundamentally like, "Can I get product market fit?" So they are not even threat modeling sometimes, they're literally just like, "Hey, I'm going to put this thing out there, if I get some customers, we'll get some funding and then we'll think about doing this stuff". And more power to you if you are a startup, that's how I like to work. Sounds like, Eran, you're in that space too.

I like the startup vibe, but you’ve got to respect it’s a slight hustle until you get product market fit. But the big banks and so forth, they are doing what is akin to threat modeling, but more green modeling, if I’m making up the term, I’m not sure if that is a term, but you know what I mean, right? They’re doing more analysis. They realize that running things green, not only is it good for the mission, but often it’s cheaper, more cost-effective should I say. So they are spending time now. I think the shift left message has landed with a lot of architects, particularly in enterprise space.

Eran Stiller: And they even showed on the UI, they show, “Hey, you’re in this mode right now because we’re testing and they can opt out if you want”. So that’s like an innovative approach. Also, as I said-

Daniel Bryant: They have a clear line to justify it: if we threat model now and spend a week doing this, it's going to save us maybe six months of battle testing at the end. If we bake it in now, it's going to cost less. Same thing with the green modeling or thinking about your carbon footprint. It's so much harder to get rid of that cost at the end of the process than it is at the beginning.

So I'm definitely seeing folks doing that. Softly, softly. And I do think, to the points we've made at several points in this podcast, the tools really need to help you do this, whether it's AI powered or not. Something I talk about a lot on my team is there's always a certain amount of cognitive load. And the danger is if you're shifting stuff left, you often shift that load left onto other people. So it used to be on the security folks and the cost and ops folks at the end of the pipeline; now it's being shifted onto us as developers, us as architects, and without the tools, you just get cognitive overload. You just get overly burdened.

Commoditization of platform engineering [28:00]

Sarah Wells: Isn’t this an opportunity for platform architecture, for building the stuff into that platform so that people don’t have to take on that cognitive load? It’s already there in whatever you’re deploying, the tools that you’re using, everything that’s there is your foundation?

Daniel Bryant: A hundred percent agree, Sarah. This is definitely what I'm seeing in my day job. Folks are trying to build platforms, even more so in the context I'm working in, with compliance in mind. "Making it easy to do the right thing" is a phrase I find myself saying a lot. And again, I'm quoting other people, standing on the shoulders of giants here, but yes, a hundred percent, Sarah, and to Eran's point, they're baking in certain rules and checks along the way, and even some guidelines and so forth in the tools early on in the process to make it easy to do the right thing.

Thomas Betts: Yes, and that’s where the architecture… And this is where platform architecture I think is the term, is the thinking about how do we make a platform that takes some of that cognitive load off the people who are going to be actually driving the carbon usage, the privacy concerns, the security concerns. If I have to have 10 teams and they all have to think about, “How do I make my data secure”, and “How do I make this call”, or whatever it is, if they all have to think about that, that’s distributing the energy and the cognitive load around versus, “Okay, we have taken the time to figure this out, solve it in the best practice way. And if you get on our paved road platform solution, maybe you could do a little better, but you’re going to do pretty good for everybody”.

And I think that’s the good enough approach that architects need to try and get to, not the, “Perfect as the enemy of good enough”, that we want to have the solution that applies to enough people. Because if you wait until the end and then you get some, “Oh, here’s a GDPR”, and pretty soon the only solution is, “Let’s just secure everything because we don’t know how to secure in just the little bits that we need to do”.

Sarah Wells: I always think that you don’t want people to have to go and read something that explains to them what they’re expected to do. You want to just make it so they can’t do it wrong. That’s an architectural thing. Make it so that people can’t shoot themselves in the foot.

Daniel Bryant: Yes, well said, Sarah.

Thomas Betts: The last thing I wanted to talk about is platforms. My sense, Daniel, is that five or 10 years ago, you'd have your DevOps experts on staff and they would have to be building your paved road, your engineering system, whatever it is. A lot more of that custom code has just become commodity off-the-shelf software. Two questions. One, am I correct in that assumption? And second, how do architects respond when we're shifting from a build to a buy mentality for the platform?

Daniel Bryant: Yes, I'm seeing that a lot of the platform architecture and the software architecture is symbiotic. I actually did a talk at KubeCon last year about… It was platform engineering for software architects, was my pitch, because I'm bumping into a lot of folks. I think I'm going to KubeCon soon, and the largest chunk of the audience are actually application architects going to KubeCon, which has historically been very much a Kubernetes platform shaped conference, which again, I grew up in, I know and love, but when architects started rolling in, I was like, "Oh, these conversations I'm having with folks are very different".

But there, to Sarah’s point earlier, they’re being told that you’ve got to go Kubernetes, because some vendors are actually packaging their software as Kubernetes packages there. “If you are not running Kubernetes, you don’t get access to the latest version of the certain checking software or verification software”.

I’ve literally heard that in banks where they’re buying off the shelf components, and it’s now being packaged as Kubernetes components. And if they don’t have Kubernetes, that’s a problem. So I think the application architects are having to be much more aware of what platforms… How to define them, what they offer, these things. Now, I talk a lot about platform as a product. You have to have a platform that makes it easy to do the right thing from getting your ideas, your code tested, verified, secured, all the way through to adding value to users in production.

And as we've said a few times in this podcast, make that process as pain-free as possible. But as an architect, you've got to know all these things. It is another group of things to learn about: Kubernetes, Serverless. There are all those various vendors, like Heroku and Vercel, offering different forms of platform as a service, and you often get them popping up as shadow IT in big enterprises. So you have to be aware of these things and offer guidance as to what is the way to do things in this company. "We lean towards this approach. We don't go here because here's the trade-offs and we don't like these trade-offs".

Thomas Betts: Yes. I made a comment when I met with some of my coworkers last week and the product manager for our platform that we build, our engineering system. I told him, "I don't speak Kubernetes, and thank you for that", because that's a huge win for me when I'm doing the architecture. I know that the pods are going to get created and I can say, "Oh, I need four, I need two". Now, there are times when I need to know what that means under the covers, but most of my day I don't need to. So it's like when I need to dive deep, and this is the T-shaped architect: I need to be broad in a lot of topics and then be able to dive down, and for the things that I don't know about, I want to know who the experts are that know that.

And so that goes back to the same ideas of compartmentalization. Put the microservices together, have these little pods, whether it’s people or servers of where’s the specialty? How do I do that? Is it an Agentic AI that I’m calling to do that, or is it the platform team or is it this very specific stream aligned development team? So being able to think in those pieces helps reduce the cognitive load on the architect, because now you don’t have to know about all the details all the time.

Eran Stiller: But I think it’s very similar to what you mentioned about the AI, that you don’t necessarily need to know the nitty-gritty, how it works and how to implement it, but you need to know it’s there and what it gives you and how you can use it to your benefit, and what are the pros and cons of each, again, at a high level. Because if you do know these things, it’ll make your job as an architect much easier, and you’ll probably be able to do a much better job with it. And that’s where I think also QCon fits in and all the other conferences that you go and learn this stuff.

Closing thoughts [34:15]

Thomas Betts: I'm going to leave it there. That was a fantastic wrap up, Eran, saying that we need to know the high-level aspects of what these things are, and then if I need to dive deep, just go read QCon.

Eran Stiller: I love that.

Thomas Betts: Go attend QCon. So I want to thank everyone again for attending today. Eran, Daniel and Sarah, thank you for being here.

Eran Stiller: Thanks.

Sarah Wells: Thank you.

Daniel Bryant: Thank you very much.

Thomas Betts: And we hope you’ll join us again for another episode of the InfoQ podcast.



PayPal’s New Agent Toolkit Connects AI Frameworks with Payment APIs Through MCP

MMS Founder
MMS Vinod Goje

Article originally posted on InfoQ. Visit InfoQ

PayPal has released its Agent Toolkit, designed to help developers integrate PayPal’s API suite with AI frameworks through the Model Context Protocol (MCP). The toolkit provides access to APIs for payments, invoices, disputes, shipment tracking, catalog management, subscriptions, and analytics capabilities.

The company is adopting MCP, a standard proposed by Anthropic that aims to standardize how agents access third-party services and data sources. PayPal’s official MCP server is now available for developers, offering remote MCP servers with authentication integration in the cloud environment. The technology will support users working across multiple devices with simplified login procedures.

The PayPal Agent Toolkit provides developers with a streamlined way to incorporate PayPal’s commerce capabilities into AI agent workflows. This library serves as an intermediary between PayPal’s API ecosystem and modern AI frameworks, enabling AI agents to handle tasks like order management, invoice generation, and subscription control without requiring developers to implement complex API integrations manually.

The toolkit offers several important capabilities, including “Easy integration with PayPal services, including functions that correspond to common actions within the payments, invoices, disputes, shipment tracking, catalog, subscriptions, reporting and insights APIs, eliminating the need to delve deep into each API endpoint.” Currently supporting TypeScript with Python support planned for future release, the toolkit works with leading AI frameworks such as Model Context Protocol servers and Vercel’s AI SDK.

The PayPal Agent Toolkit opens new avenues for businesses to integrate AI-powered workflows with financial operations. By connecting AI frameworks directly to PayPal services, developers can create specialized agents that handle various commerce-related tasks.

For Order Management and Shipment tracking, businesses can implement AI agents that create orders from conversational interactions, process payments with proper authentication, and provide shipment updates. A customer service agent could use the toolkit to complete transactions when customers confirm purchases through chat interfaces.

The toolkit also supports Intelligent Invoice Handling, allowing AI assistants to generate and manage invoices based on service completions. These agents can use natural language instructions to define invoice parameters, send them to clients via email, and monitor payment status.

With Streamlined Subscription Management capabilities, businesses can develop agents that handle the entire subscription lifecycle. This includes creating products, setting up subscription plans, and processing recurring payments through PayPal’s payment system. As described in the documentation, “a membership agent could use the toolkit to set up a recurring PayPal payment when a new user signs up for a service and approves the payment.”

The PayPal Agent Toolkit implementation requires minimal setup, as demonstrated in the code examples. Developers first initialize the PayPal workflows by importing the necessary components and configuring client credentials:

import { PayPalWorkflows, ALL_TOOLS_ENABLED } from '@paypal/agent-toolkit/ai-sdk';

const paypalWorkflows = new PayPalWorkflows({
  clientId: process.env.PAYPAL_CLIENT_ID,
  clientSecret: process.env.PAYPAL_CLIENT_SECRET,
  configuration: {
    actions: ALL_TOOLS_ENABLED,
  },
});

Once configured, the toolkit can be used with any compatible language model through a simple API call structure. The code shows how developers can integrate PayPal’s functionality with an AI model:

const llm: LanguageModelV1 = getModel(); // The model to be used with ai-sdk

const { text: response } = await generateText({
  model: llm,
  tools: paypalToolkit.getTools(),
  maxSteps: 10,
  prompt: `Create an order for $50 for custom handcrafted item and get the payment link.`,
});

This integration allows AI models to execute PayPal operations through natural language instructions, such as creating orders and generating payment links, without requiring developers to write extensive custom code for API integrations.

David Paluy, CTO at Suppli, raised important questions about identity and authentication in agentic commerce:

Financial systems rely on identifying who is making a transaction. An AI agent has no legal identity of its own – it operates under a user’s or organization’s identity.

Paluy suggested that the industry may need to extend the concept of Know-Your-Customer to “Know-Your-Agent” – verifying an agent’s identity and authorization as an intermediary acting for a user.

Abdul Abdirahman, a principal at F-Prime, highlighted existing challenges in API infrastructure:

Today’s APIs are designed for human developers to engage with and thus not optimized for the agentic world. Many modifications are needed to enable agents to execute complex tasks efficiently.

The announcement also triggered reactions on social media. Kenneth Auchenberg, Partner at AlleyCorp, expressed surprise at PayPal’s position in the market, stating:

Wait, Paypal shipped a remote MCP server, before Stripe?

Developers interested in exploring the PayPal Agent Toolkit can access the official repository at https://github.com/paypal/agent-toolkit/ for documentation, examples, and implementation guides. The repository contains detailed information on getting started with the toolkit, supported features, and best practices for integrating PayPal’s payment capabilities into AI agent workflows.



Google Releases Last Android 16 Beta Before Official Launch

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

With the release of the last Android 16 beta, developers should ensure their apps or libraries are free of any compatibility issues. Google warns of changes—including JobScheduler quotas, stronger intent security, 16KB page size—that might affect apps even if they do not specifically target Android 16.

The JobScheduler in Android 16 will enforce runtime quotas based on several factors, including the app's standby bucket, such as active, working, frequent, rare, and restricted; whether a background job was started while the app was active; and whether a job executes concurrently with a foreground service. When a job runs out of its quota, it will be blocked.

An important, and potentially breaking, change is the update to the Android Runtime (ART), which affects apps that use reflection or JNI to access Android internals. Google warns that relying on internal ART structures, such as non-SDK interfaces, is risky, as ART updates are delivered via Google Play and are decoupled from the device's platform version.

If an app uses intents, it must account for Android 16's new protections against redirection attacks, which occur when an attacker hijacks an intent to trigger a different component. These attacks can exploit serialized intents passed via the extra field or intents marshaled into strings. For instance, a malicious app might pass a null value to getCallingActivity() hoping that a vulnerable app will not validate it. Likewise, ignoring the return code from checkCallingPermission() is a common mistake that creates risk.

Android 16 introduces by-default security hardening against Intent redirection exploits. In most cases, apps that use intents normally won't experience any compatibility issues.

The removeLaunchSecurityProtection method allows apps to opt out of the new protections, Google says. However, developers should thoroughly test their intent handling and opt out only when absolutely necessary.

Android 15 introduced 16 KB page size as a performance optimization, improving app launch times on systems under memory pressure, reducing power consumption during app launch, and shortening system boot time. In Android 16, while 4 KB-aligned apps will work in compatibility mode, the user will be shown a potentially annoying warning dialog. To suppress this dialog, developers can set the android:pageSizeCompat property in the AndroidManifest.xml.
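Assuming the property is set on the application element, suppressing the dialog could look like the sketch below; the attribute name comes from the text above, but the value and placement should be verified against the Android 16 documentation:

```xml
<!-- AndroidManifest.xml: sketch of suppressing the 16 KB page-size
     compatibility dialog. Verify the exact value in the official docs. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
    <application
        android:pageSizeCompat="enabled">
        <!-- activities, services, etc. -->
    </application>
</manifest>
```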

Additional Android 16 changes potentially affecting existing apps include the removal of the option to opt out of edge-to-edge mode, a new predictive-back behavior, and optimizations to fixed-rate scheduling. On large-screen devices, Android 16 also fully enforces orientation, resizability, and aspect ratio settings.

This is just a glimpse of what developers should take into account in the upcoming release of Android. Refer to the official documentation for the full picture.



Google Cloud Enhances Databases with Firestore and MongoDB Features

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

At the recent Google Cloud Next 2025, Google Cloud announced the preview of Firestore with MongoDB compatibility, introducing support for the MongoDB API and query language. This capability allows users to store and query semi-structured JSON data within Google Cloud’s real-time document database.

Firestore with MongoDB compatibility is backed by a serverless infrastructure offering single-digit-millisecond read performance, automatic scaling, and high availability. Minh Nguyen, senior product manager at Google Cloud, and Patrick Costello, engineering manager at Google Cloud, stated:

“Firestore developers can now take advantage of MongoDB’s API portability along with Firestore’s differentiated serverless service, to enjoy multi-region replication with strong consistency, virtually unlimited scalability, industry-leading high availability of up to 99.999% SLA, and single-digit milliseconds read performance.”

The backend of Firestore automatically replicates data across availability zones and regions, ensuring no downtime or data loss. Future developments will include data interoperability between Firestore’s MongoDB compatible interface and Firestore’s real-time and offline SDKs.
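One practical implication of the API portability described above is that standard MongoDB query and update documents should work unchanged against the compatible endpoint. As a minimal sketch (the collection and field names below are hypothetical, not from the announcement):

```typescript
// Sketch: plain MongoDB-style query and update documents. Against
// Firestore's MongoDB-compatible endpoint, these would be passed to the
// usual driver calls, e.g. collection.find(filter) or
// collection.updateOne(filter, update). Field names are hypothetical.
function buildInventoryFilter(maxQty: number) {
  // Match active items whose quantity is below the threshold.
  return { status: "active", qty: { $lt: maxQty } };
}

function buildRestockUpdate(amount: number) {
  // Increment the quantity and record the restock time server-side.
  return { $inc: { qty: amount }, $currentDate: { restockedAt: true } };
}
```

Because these are plain documents, the same application code could target MongoDB Atlas or the Firestore endpoint by changing only the connection string.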


Details about supported query and projection operators and features by MongoDB API version are available, along with limitations and behavior differences. Text search operators are currently unsupported.


MongoDB announced several updates at Google Cloud Next ’25, emphasizing their collaboration with Google Cloud.

MongoDB Atlas has expanded availability to Google Cloud regions in Mexico and South Africa. The company was recognized as the 2025 Google Cloud Partner of the Year for Data & Analytics – Marketplace, marking MongoDB’s sixth consecutive year as a partner of the year.

MongoDB is enhancing developer productivity through integrations with Google Cloud’s Gemini Code Assist. This tool allows developers to access MongoDB documentation and code snippets directly within their Integrated Development Environments (IDEs).

Additionally, MongoDB is now integrated with Project IDX, which helps developers set up MongoDB environments quickly. Developers can also easily integrate MongoDB Atlas with Firebase, using a new Firebase extension for MongoDB Atlas for real-time synchronization.



Google Cloud introduced numerous database enhancements at its Next 2025 conference, including new AI features in AlloyDB and a MongoDB-compliant API for Firestore. AlloyDB has incorporated the open-source pgvector extension, allowing storage and querying of vector embeddings.

In April 2024, Google Cloud added the Scalable Nearest Neighbor (ScaNN) algorithm to AlloyDB, significantly enhancing performance metrics. This was highlighted by Andi Gutmans, VP & GM for databases at Google:

“We’re seeing thousands and thousands of customers doing vector processing. Target went into production with AlloyDB for their online retail search and they have a 20% better hit rate on recommendations. That’s real revenue.”



Google Cloud Migration Center simplifies the transition from on-premises servers to Google Cloud, which significantly enhances performance and reduces operational costs. The integration of MongoDB cluster assessment into the Migration Center allows for deeper insights into MongoDB deployments.

Using Migration Center provides significant advantages, including:

  • Automated discovery and inventory of MongoDB clusters.
  • Enhanced understanding of configuration and resource utilization.
  • A unified platform for asset discovery and migration tools.



MongoDB Atlas now supports native JSON for BigQuery, simplifying data transformations and reducing operational costs. This integration allows businesses to analyze data with better flexibility and efficiency.

MongoDB also introduced support for Data Federation and Online Archive, enabling users to manage and archive cold data seamlessly.




*** This is a Security Bloggers Network syndicated blog from MojoAuth – Go Passwordless authored by Dev Kumar. Read the original post at: https://mojoauth.com/blog/google-cloud-enhances-databases-with-firestore-and-mongodb-features/

Article originally posted on mongodb google news. Visit mongodb google news



Spring News Roundup: RCs of Spring Boot, Data, Security, Auth, Session, Integration, Web Services

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

There was a flurry of activity in the Spring ecosystem during the week of April 21st, 2025, highlighting first release candidates of Spring Boot, Spring Data 2025.0.0, Spring Security, Spring Authorization Server, Spring Session, Spring Integration, Spring Modulith and Spring Web Services. There were also second milestone releases of Spring Data 2025.1.0 and Spring for Apache Kafka and a first milestone release of Spring Vault.

Spring Boot

The first release candidate of Spring Boot 3.5.0 delivers bug fixes, improvements in documentation, dependency upgrades and new features such as: new annotations, @ServletRegistration and @FilterRegistration, as an annotation-based alternative to registering servlet and filter beans using the ServletRegistrationBean and FilterRegistrationBean classes; and new classes that support Docker credential stores and helpers. More details on this release may be found in the release notes.

The releases of Spring Boot 3.4.5 and 3.3.11 (announced here and here, respectively) provide bug fixes, improvements in documentation and dependency upgrades. More importantly, the Spring Boot team has disclosed that these two releases, along with versions 3.2.14, 3.1.16 and 2.7.25, address CVE-2025-22235, a vulnerability in which the overloaded to() method, defined in the EndpointRequest class, creates an incorrect null/** matcher under certain conditions if the actuator endpoint is not exposed. Further details on these releases may be found in the release notes for version 3.4.5 and version 3.3.11.

Spring Data

The first release candidate of Spring Data 2025.0.0 features: refinements to the Hibernate Query Language (HQL), Elastic Query Language (EQL) and Jakarta Persistence Query Language (JPQL) to resolve various query issues; and new deprecation warnings for intended breaking changes, such as the removal of support for JMX, planned for Spring Data 4.0. This version aligns with Spring Boot 3.5.0-RC1 and the Spring Data team plans a GA release in May 2025.

The second milestone release of Spring Data 2025.1.0 ships with support for JSpecify on sub-projects: Spring Data Commons, Spring Data JPA, Spring Data MongoDB, Spring Data LDAP, Spring Data Cassandra, Spring Data KeyValue, and Spring Data Elasticsearch. There was also a breaking change with a significant rewrite of the QueryEnhancer interface such that configuration via the spring.data.jpa.query.native.parser property is no longer available. Configuration is now possible via the @EnableJpaRepositories annotation. More details on this release may be found in the release notes.

Spring Security

The first release candidate of Spring Security 6.5.0 delivers bug fixes, dependency upgrades and new features such as: refinements to the implementation of the OAuth 2.0 Demonstrating Proof of Possession (DPoP) specification that include a new AuthenticationEntryPoint interface that returns the WWW-Authenticate header upon failure of a DPoP authentication; and refinements to the PathPatternRequestMatcher class to use a servlet in the path pattern instead of implementing the RequestMatcher interface for the servlet. Further details on this release may be found in the release notes and what’s new guide.

The releases of Spring Security 6.4.5 and 6.3.9 (announced here and here, respectively) provide bug fixes, improvements in documentation and dependency upgrades. More importantly, the Spring Security team has disclosed that these two releases, along with versions 6.2.11, 6.1.15, 6.0.17, 5.8.19 and 5.7.17, address CVE-2025-22234, a follow-up to CVE-2025-22228, where the timing attack mitigation, implemented in the DaoAuthenticationProvider class, had been inadvertently broken. More details on these releases may be found in the release notes for version 6.4.5 and version 6.3.9.

Spring Authorization Server

The first release candidate of Spring Authorization Server 1.5.0 provides dependency upgrades and new features such as: the addition of authorization server metadata for the OAuth 2.0 DPoP and Pushed Authorization Requests (PAR) specifications; and a new REQUEST_URI constant, defined in the Spring Security OAuth2ParameterNames class, to facilitate flow in PAR. Further details on this release may be found in the release notes.

Spring Session

The first release candidate of Spring Session 3.5.0 ships with bug fixes, dependency upgrades and new features: a new CompositeHttpSessionIdResolver class, an implementation of the HttpSessionIdResolver interface, that iterates over a given collection of delegate instances of the HttpSessionIdResolver; and an optimization of the JdbcIndexedSessionRepository class to start JDBC transactions only when there are session updates with a JDBC-based repository. More details on this release may be found in the release notes.

Spring Integration

The first release candidate of Spring Integration 6.5.0 provides bug fixes, improvements in documentation, dependency upgrades and new features such as: discontinued use of the logger.error() method in the TcpSendingMessageHandler class that was deemed unnecessary; and a new LockRequestHandlerAdvice class, based on the LockRegistry interface, that maintains mutual access to underlying services. Further details on this release may be found in the release notes.

Spring Modulith

The first release candidate of Spring Modulith 1.4.0 delivers bug fixes, dependency upgrades and improvements such as: performance improvements in use of the DefaultEventPublicationRegistry class and the publishEvent() method defined in the Spring Framework AbstractApplicationContext class; and state change detection for instances of the Scenario class should only accept non-empty collections by default. More details on this release may be found in the release notes.

Spring for Apache Kafka

The second milestone release of Spring for Apache Kafka 4.0.0 provides bug fixes, improvements in documentation, dependency upgrades and new features such as: client dependency upgrades to Apache Kafka 4.0.0; and an optimization in the MessagingMessageListenerAdapter class that now returns null from the invoke() method, defined in the DelegatingInvocableHandler class, avoiding the unnecessary return of an InvocationResult instance. Further details on this release may be found in the release notes.

Spring Web Services

The first release candidate of Spring Web Services 4.1.0 ships with bug fixes, improvements in documentation, dependency upgrades and new features such as: support for configuring arbitrary options for Apache Web Services Security for Java (WSS4J) via the Wss4jSecurityInterceptor class; and the ability to create custom implementations of the MethodArgumentResolver and MethodReturnValueHandler interfaces. More details on this release may be found in the release notes.

Spring Vault

The first milestone release of Spring Vault 3.2.0 delivers bug fixes, improvements in documentation, dependency upgrades and new features such as: support for Instance Metadata Service Version 2 (IMDSv2) on AWS EC2; and the ability to use the GitHub token authentication mechanism. Further details on this release may be found in the release notes.

About the Author

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



MongoDB, Inc. (NASDAQ:MDB) Stock Position Lessened by Voya Investment Management LLC

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Voya Investment Management LLC reduced its position in shares of MongoDB, Inc. (NASDAQ:MDB) by 96.4% in the fourth quarter, according to the company in its most recent disclosure with the Securities and Exchange Commission. The firm owned 474,019 shares of the company’s stock after selling 12,794,369 shares during the period. Voya Investment Management LLC owned 0.64% of MongoDB worth $110,356,000 as of its most recent SEC filing.

Several other large investors also recently modified their holdings of MDB. B.O.S.S. Retirement Advisors LLC bought a new position in MongoDB in the fourth quarter worth about $606,000. Union Bancaire Privee UBP SA bought a new position in shares of MongoDB in the 4th quarter worth approximately $3,515,000. HighTower Advisors LLC raised its position in shares of MongoDB by 2.0% in the 4th quarter. HighTower Advisors LLC now owns 18,773 shares of the company’s stock worth $4,371,000 after acquiring an additional 372 shares in the last quarter. Nisa Investment Advisors LLC lifted its stake in shares of MongoDB by 428.0% in the 4th quarter. Nisa Investment Advisors LLC now owns 5,755 shares of the company’s stock valued at $1,340,000 after purchasing an additional 4,665 shares during the period. Finally, Covea Finance bought a new stake in shares of MongoDB during the fourth quarter valued at approximately $3,841,000. 89.29% of the stock is owned by institutional investors.

Insider Activity at MongoDB

In other MongoDB news, CEO Dev Ittycheria sold 8,335 shares of the business’s stock in a transaction on Tuesday, January 28th. The shares were sold at an average price of $279.99, for a total value of $2,333,716.65. Following the sale, the chief executive officer now owns 217,294 shares in the company, valued at $60,840,147.06. The trade was a 3.69% decrease in their position. The sale was disclosed in a document filed with the Securities & Exchange Commission, which is available through the SEC website. Also, Director Dwight A. Merriman sold 3,000 shares of MongoDB stock in a transaction dated Monday, February 3rd. The stock was sold at an average price of $266.00, for a total value of $798,000.00. Following the completion of the sale, the director now directly owns 1,113,006 shares in the company, valued at $296,059,596. This trade represents a 0.27% decrease in their ownership of the stock. The disclosure for this sale can be found here. In the last 90 days, insiders have sold 47,680 shares of company stock valued at $10,819,027. 3.60% of the stock is owned by company insiders.

Wall Street Analysts Weigh In

Several equities analysts have commented on the company. Robert W. Baird lowered their price objective on MongoDB from $390.00 to $300.00 and set an “outperform” rating for the company in a research report on Thursday, March 6th. The Goldman Sachs Group cut their price objective on shares of MongoDB from $390.00 to $335.00 and set a “buy” rating on the stock in a report on Thursday, March 6th. Piper Sandler lowered their target price on shares of MongoDB from $280.00 to $200.00 and set an “overweight” rating for the company in a report on Wednesday. Needham & Company LLC reduced their price objective on MongoDB from $415.00 to $270.00 and set a “buy” rating for the company in a research report on Thursday, March 6th. Finally, China Renaissance initiated coverage on MongoDB in a report on Tuesday, January 21st. They issued a “buy” rating and a $351.00 target price on the stock. Eight analysts have rated the stock with a hold rating, twenty-four have assigned a buy rating and one has assigned a strong buy rating to the stock. Based on data from MarketBeat.com, MongoDB presently has a consensus rating of “Moderate Buy” and an average target price of $294.78.

Check Out Our Latest Stock Analysis on MDB

MongoDB Stock Performance

Shares of MDB traded up $0.29 during mid-day trading on Friday, reaching $173.50. The stock had a trading volume of 2,105,392 shares, compared to its average volume of 1,841,392. The firm has a market cap of $14.09 billion, a price-to-earnings ratio of -63.32 and a beta of 1.49. MongoDB, Inc. has a 12 month low of $140.78 and a 12 month high of $387.19. The business has a fifty day moving average of $195.15 and a 200 day moving average of $248.95.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings results on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). The business had revenue of $548.40 million during the quarter, compared to analysts’ expectations of $519.65 million. MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. During the same quarter in the previous year, the company posted $0.86 EPS. On average, research analysts forecast that MongoDB, Inc. will post -1.78 EPS for the current fiscal year.

About MongoDB

(Free Report)

MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Further Reading

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

Before you consider MongoDB, you’ll want to hear this.

MarketBeat keeps track of Wall Street’s top-rated and best performing research analysts and the stocks they recommend to their clients on a daily basis. MarketBeat has identified the five stocks that top analysts are quietly whispering to their clients to buy now before the broader market catches on… and MongoDB wasn’t on the list.

While MongoDB currently has a Moderate Buy rating among analysts, top-rated analysts believe these five stocks are better buys.

View The Five Stocks Here

7 AI Stocks to Invest in Today: Capitalizing on AI and Tech Trends in 2025 Cover

Discover the top 7 AI stocks to invest in right now. This exclusive report highlights the companies leading the AI revolution and shaping the future of technology in 2025.

Get This Free Report


Article originally posted on mongodb google news. Visit mongodb google news



Stifel Financial Corp Increases Position in MongoDB, Inc. (NASDAQ:MDB) – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Stifel Financial Corp grew its position in shares of MongoDB, Inc. (NASDAQ:MDB) by 6.4% in the 4th quarter, according to the company in its most recent filing with the Securities and Exchange Commission (SEC). The institutional investor owned 114,216 shares of the company’s stock after purchasing an additional 6,894 shares during the quarter. Stifel Financial Corp owned approximately 0.15% of MongoDB worth $26,590,000 as of its most recent SEC filing.

A number of other large investors also recently bought and sold shares of the business. TD Waterhouse Canada Inc. raised its position in shares of MongoDB by 7.2% during the 4th quarter. TD Waterhouse Canada Inc. now owns 1,557 shares of the company’s stock valued at $362,000 after buying an additional 105 shares during the period. Tower Research Capital LLC TRC boosted its stake in shares of MongoDB by 554.0% in the 4th quarter. Tower Research Capital LLC TRC now owns 8,077 shares of the company’s stock valued at $1,880,000 after buying an additional 6,842 shares during the period. Teachers Retirement System of The State of Kentucky grew its holdings in MongoDB by 138.1% during the 4th quarter. Teachers Retirement System of The State of Kentucky now owns 51,432 shares of the company’s stock worth $11,974,000 after acquiring an additional 29,832 shares in the last quarter. Transatlantique Private Wealth LLC increased its holdings in MongoDB by 15.5% in the fourth quarter. Transatlantique Private Wealth LLC now owns 3,273 shares of the company’s stock valued at $762,000 after buying an additional 440 shares during the last quarter. Finally, Thematics Asset Management boosted its holdings in MongoDB by 49.9% in the fourth quarter. Thematics Asset Management now owns 132,313 shares of the company’s stock valued at $30,804,000 after purchasing an additional 44,061 shares in the last quarter. Institutional investors and hedge funds own 89.29% of the company’s stock.

Wall Street Analysts Forecast Growth

MDB has been the topic of several recent research reports. Daiwa America upgraded MongoDB to a “strong-buy” rating in a research report on Tuesday, April 1st. Barclays cut their price objective on shares of MongoDB from $330.00 to $280.00 and set an “overweight” rating for the company in a research report on Thursday, March 6th. Daiwa Capital Markets began coverage on shares of MongoDB in a report on Tuesday, April 1st. They issued an “outperform” rating and a $202.00 target price on the stock. Stifel Nicolaus dropped their price target on shares of MongoDB from $340.00 to $275.00 and set a “buy” rating on the stock in a research note on Friday, April 11th. Finally, Scotiabank restated a “sector perform” rating and issued a $160.00 price objective (down previously from $240.00) on shares of MongoDB in a research report on Friday. Eight investment analysts have rated the stock with a hold rating, twenty-four have given a buy rating and one has assigned a strong buy rating to the company’s stock. According to data from MarketBeat, the stock currently has an average rating of “Moderate Buy” and an average price target of $294.78.

Read Our Latest Research Report on MongoDB

MongoDB Stock Performance

MongoDB stock traded up $0.29 during midday trading on Friday, reaching $173.50. 2,105,392 shares of the company were exchanged, compared to its average volume of 1,841,392. The company has a market capitalization of $14.09 billion, a price-to-earnings ratio of -63.32 and a beta of 1.49. The firm’s 50-day simple moving average is $195.15 and its 200-day simple moving average is $248.95. MongoDB, Inc. has a 1 year low of $140.78 and a 1 year high of $387.19.

MongoDB (NASDAQ:MDB) last announced its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing the consensus estimate of $0.64 by ($0.45). The firm had revenue of $548.40 million for the quarter, compared to analysts’ expectations of $519.65 million. MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. During the same period in the previous year, the company posted $0.86 EPS. On average, analysts forecast that MongoDB, Inc. will post -1.78 EPS for the current fiscal year.

Insider Activity

In other MongoDB news, CAO Thomas Bull sold 301 shares of MongoDB stock in a transaction on Wednesday, April 2nd. The shares were sold at an average price of $173.25, for a total value of $52,148.25. Following the transaction, the chief accounting officer now owns 14,598 shares in the company, valued at $2,529,103.50. This trade represents a 2.02% decrease in their position. The sale was disclosed in a document filed with the Securities & Exchange Commission, which is available through this link. Also, CFO Srdjan Tanjga sold 525 shares of the business’s stock in a transaction on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total transaction of $90,961.50. Following the sale, the chief financial officer now directly owns 6,406 shares of the company’s stock, valued at $1,109,903.56. This represents a 7.57% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders sold a total of 47,680 shares of company stock valued at $10,819,027 over the last quarter. Company insiders own 3.60% of the company’s stock.

MongoDB Company Profile

(Free Report)

MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Read More

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)



Article originally posted on mongodb google news. Visit mongodb google news



Scotiabank Reduces MongoDB (MDB) Price Target Amid Market Caution – GuruFocus

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Scotiabank has adjusted its outlook for MongoDB (MDB, Financial), decreasing the price target from $240 to $160. Despite recognizing several strengths within MongoDB’s business model, analyst Patrick Colville maintains a Sector Perform rating for the company. This cautious stance suggests that the firm advises a prudent approach and does not recommend that investors quickly increase their positions in MongoDB at this time.

Wall Street Analysts Forecast


Based on the one-year price targets offered by 34 analysts, the average target price for MongoDB Inc (MDB, Financial) is $278.18 with a high estimate of $520.00 and a low estimate of $160.00. The average target implies an upside of 64.47% from the current price of $169.14. More detailed estimate data can be found on the MongoDB Inc (MDB) Forecast page.

Based on the consensus recommendation from 38 brokerage firms, MongoDB Inc’s (MDB, Financial) average brokerage recommendation is currently 2.0, indicating “Outperform” status. The rating scale ranges from 1 to 5, where 1 signifies Strong Buy, and 5 denotes Sell.

Based on GuruFocus estimates, the estimated GF Value for MongoDB Inc (MDB, Financial) in one year is $432.68, suggesting an upside of 155.81% from the current price of $169.14. GF Value is GuruFocus’ estimate of the fair value that the stock should be traded at. It is calculated based on the historical multiples the stock has traded at previously, as well as past business growth and future estimates of the business’ performance. More detailed data can be found on the MongoDB Inc (MDB) Summary page.

MDB Key Business Developments

Release Date: March 05, 2025

  • Total Revenue: $548.4 million, a 20% year-over-year increase.
  • Atlas Revenue: Grew 24% year-over-year, representing 71% of total revenue.
  • Non-GAAP Operating Income: $112.5 million, with a 21% operating margin.
  • Net Income: $108.4 million or $1.28 per share.
  • Customer Count: Over 54,500 customers, with over 7,500 direct sales customers.
  • Gross Margin: 75%, down from 77% in the previous year.
  • Free Cash Flow: $22.9 million for the quarter.
  • Cash and Cash Equivalents: $2.3 billion, with a debt-free balance sheet.
  • Fiscal Year 2026 Revenue Guidance: $2.24 billion to $2.28 billion.
  • Fiscal Year 2026 Non-GAAP Operating Income Guidance: $210 million to $230 million.
  • Fiscal Year 2026 Non-GAAP Net Income Per Share Guidance: $2.44 to $2.62.

For the complete transcript of the earnings call, please refer to the full earnings call transcript.

Positive Points

  • MongoDB Inc (MDB, Financial) reported a 20% year-over-year revenue increase, surpassing the high end of their guidance.
  • Atlas revenue grew 24% year over year, now representing 71% of total revenue.
  • The company achieved a non-GAAP operating income of $112.5 million, resulting in a 21% non-GAAP operating margin.
  • MongoDB Inc (MDB) ended the quarter with over 54,500 customers, indicating strong customer growth.
  • The company is optimistic about the long-term opportunity in AI, particularly with the acquisition of Voyage AI to enhance AI application trustworthiness.

Negative Points

  • Non-Atlas business is expected to be a headwind in fiscal ’26 due to fewer multi-year deals and a shift of workloads to Atlas.
  • Operating margin guidance for fiscal ’26 is lower at 10%, down from 15% in fiscal ’25, due to reduced multi-year license revenue and increased R&D investments.
  • The company anticipates a high-single-digit decline in non-Atlas subscription revenue for the year.
  • MongoDB Inc (MDB) expects only modest incremental revenue growth from AI in fiscal ’26 as enterprises are still developing AI skills.
  • The company faces challenges in modernizing legacy applications, which is a complex and resource-intensive process.

Article originally posted on mongodb google news. Visit mongodb google news



Activision Reduces Build Time of Call of Duty by 50% with MSVC Build Insights

MMS Founder
MMS Matt Foster

Article originally posted on InfoQ. Visit InfoQ

Activision has cut build times for Call of Duty: Modern Warfare II (COD) in half by profiling and optimizing their C++ build system with MSVC Build Insights to uncover bottlenecks in their compilation pipeline.

The effort unblocked developers, accelerated delivery, and reduced idle time. Their success reflects a broader trend across the industry, with teams at Netflix, Canva, and Honeycomb investing in CI performance engineering as a way to improve both productivity and developer experience.

Activision observed that persistent build delays were eroding developer flow and limiting delivery velocity. In response, the Activision team collaborated with Microsoft’s Xbox Advanced Technology Group to instrument and streamline their compilation pipeline. By using MSVC (Microsoft Visual C++) Build Insights, a profiling tool for C++ builds, engineers identified a number of key inefficiencies in their build process. While these specific issues are rooted in C++, they reflect familiar challenges faced when working with large codebases and compute-heavy builds.

Among the core inefficiencies, excessive inlining was inflating compile units, link-time optimizations were dragging due to complex initializations, and inefficient symbol resolution was creating CPU stalls during the final linking stage. Each issue contributed to delay in a different part of the process, and together they highlighted how localized inefficiencies – when multiplied across a large codebase – significantly extended build time.

These targeted optimizations led to a substantial reduction in build times – from approximately 28 minutes to 14 minutes. This improvement had significant implications for Activision’s development workflow. Faster builds meant more pull requests merged, more builds, less idle time, and ultimately more frequent feature delivery.

But reducing build time isn’t just a technical improvement – it has measurable effects on the developer experience. Michael Vance, SVP and software engineer at Activision, noted that “slow builds create bottlenecks in our continuous integration pipelines, delaying the verification of every piece of code and content that goes into our games.” The team’s build time improvements were not just a performance win, but a way to unblock developers and maintain velocity in a tightly integrated workflow.

This aligns with broader industry findings that highlight developer experience as a key contributor to engineering throughput. Research from GitHub and Microsoft suggests that satisfaction with internal tooling, including CI/CD pipelines, correlates strongly with productivity metrics such as PR cycle time, deployment frequency, and time to resolve issues.

Activision’s experience is indicative of a broader shift in how organizations approach CI performance. As build and test pipelines grow in complexity, teams are applying similar discipline to their profiling and instrumentation as they are with the build artifacts. Netflix reported faster iteration cycles and improved efficiency for Android developers after tuning their Gradle builds. Canva reduced CI durations from over 80 minutes to under 30, improving release velocity and reducing developer frustration. Honeycomb set internal objectives to keep build times under 15 minutes, framing CI speed as a first-class developer productivity metric. In each case, pipeline performance improvements were directly tied to happier, more effective engineering teams.
