Apple Completes Migration of Key Ecosystem Service to Swift, Gains 40% Performance Uplift

MMS Founder
MMS Matt Foster

Article originally posted on InfoQ. Visit InfoQ

Apple has migrated its global Password Monitoring service from Java to Swift, achieving a 40% increase in throughput and significantly reducing memory usage—freeing up nearly 50% of previously allocated Kubernetes capacity. 

In a recent post, Apple engineers detailed how the rewrite helped the service scale to billions of requests per day while improving responsiveness and maintainability. The team cited lower memory overhead, improved startup time, and simplified concurrency as key reasons for choosing Swift over further JVM optimization.

“Swift allowed us to write smaller, less verbose, and more expressive codebases (close to 85% reduction in lines of code) that are highly readable while prioritizing safety and efficiency,” the engineers wrote.

Apple’s Password Monitoring service, part of the broader Passwords app ecosystem, is responsible for securely checking whether a user’s saved credentials have appeared in known data breaches, without revealing any private information to Apple. It handles billions of requests daily, performing cryptographic comparisons using privacy-preserving protocols.

This workload demands high computational throughput, tight latency bounds, and elastic scaling across regions. Traffic fluctuates significantly over the course of a day, with regional peaks differing by up to 50%. To accommodate these swings, the system must quickly spin up or wind down instances while maintaining low-latency responses.

Apple’s previous Java implementation struggled to meet the service’s growing performance and scalability needs. Garbage collection caused unpredictable pause times under load, degrading latency consistency. Startup overhead from JVM initialization, class loading, and just-in-time compilation slowed the system’s ability to scale in real time. Additionally, the service’s memory footprint, often reaching tens of gigabytes per instance, reduced infrastructure efficiency and raised operational costs.

Originally developed as a client-side language for Apple platforms, Swift has since expanded into server-side use cases. Apple’s engineering team selected Swift not just for its ecosystem alignment, but for its ability to deliver consistent performance in compute-intensive environments. 

The rewrite also used Vapor, a popular Swift web framework, as a foundation. Additional custom packages were implemented to handle elliptic curve operations, cryptographic auditing, and middleware specific to the Password Monitoring domain.

Swift’s deterministic memory management, based on reference counting rather than garbage collection (GC), eliminated latency spikes caused by GC pauses. This consistency proved critical for a low-latency system at scale. After tuning, Apple reported sub-millisecond 99.9th percentile latencies and a dramatic drop in memory usage: Swift instances consumed hundreds of megabytes, compared to tens of gigabytes with Java.

Startup times also improved. Without JVM initialization overhead or JIT warm-up, Swift services could cold-start more quickly, supporting Apple’s global autoscaling requirements.

Apple’s migration reflects a broader trend: the shift toward performance-oriented languages for services operating at extreme scale. Meta has a long history with Rust, from high-performance source-control tooling to programming languages for blockchain. Netflix introduced Rend, a high-performance proxy written in Go, to take over from a Java-based client interacting with Memcached. AWS increasingly relies on Rust in services where deterministic performance and low resource usage improve infrastructure efficiency.

While this isn’t a sign that Java and similar languages are in decline, there is growing evidence that at the uppermost end of performance requirements, some are finding that general-purpose runtimes no longer suffice.



Presentation: Rust: A Productive Language for Writing Database Applications

MMS Founder
MMS Carl Lerche

Article originally posted on InfoQ. Visit InfoQ

Transcript

Lerche: I’m Carl. I work on Tokio primarily, the open source async runtime. I probably started that about six, seven, eight years ago now. Now I’m doing that. I’m still working on that at Amazon. I’m at Amazon, but I’m working on Tokio there, so I’m on the open-source team. I’m going to try to convince you that Rust can be a productive language for building higher level applications, like those web apps that sit on top of databases, like the apps that back mobile apps or web apps.

Even if I don’t convince you, I’m going to try to have you leave with something of value, I’m going to give you some tips and tricks that will be generally useful working with Rust. How many people here have already written Rust? You’ve heard of Rust, I assume here. You’re not here for the Rust computer game. It’s ok. You don’t have to know Rust, but you know a little bit about it. Who already believes that Rust is generally useful for higher level use cases, that are not performance sensitive? Hands up, you’re like, “Yes, I will use Rust for building a web app now. It’s the best language for everything”.

Overview of Rust

Rust is a programming language that has roughly the same runtime performance as C or C++, but does this while maintaining memory safety. What’s novel about Rust is it’s got a borrow checker, which does that enforcement of memory safety at compile time, not at runtime. Rust is still relatively new compared to other programming languages, but at this point it’s established. The rate of growth over the past few years has been really quite stunning. It’s gained adoption at quite a number of companies, small companies and big companies, Google, Amazon, Dropbox, Microsoft, they all use Rust these days. Amazon, where I’m at, we’re using Rust to deliver services like EC2, S3, CloudFront. You might have heard of them. Rust is being used more within these services to power them. It’s become an established language.

The vast majority of Rust’s adoption these days is at that infrastructure level. For networking stuff, I’m talking about databases, proxies, routers. It’s definitely less common today to see Rust being used for higher level applications like those web applications. I’m not saying no one does it. Some people definitely have, I’ve spoken with them. Mostly Rust is used at that infrastructure level. I’ve been using Rust for 10, 11 years now, which, now that I think about it, is a good chunk of my life, kind of scary.

When I started using Rust, when I got involved with Rust, I also was like, ok, Rust is a systems level programming language. It’s used for those lower-level cases. I myself did not really think, Rust is a good use case for those web apps. That’s not something I considered. Over the past couple years, personally, my mind’s been changing on that. I started asking myself, is that really an inherent truth? Is Rust really only a systems language? I’m not a Rust maximalist by any means. I know I probably might sound like one, who’s like, use Rust for everything. I don’t actually believe that. I believe you should just use the best language for the job. When people ask me, what language should I use? Oftentimes I’ll say something else. We should pick the best tool for the job.

That said, what the best tool for the job is, is not necessarily a black-and-white answer. You really want to pick the language that’s going to be as productive for you for that use case, but productivity has many aspects. There’s the obvious one, how quickly can you ship your features? How quickly can developers on a team work together? How quickly can developers ramp up? It’s also the context of like, what do developers know coming in? Because you take a bunch of, it doesn’t matter, JavaScript developers, Java developers, and you put them on a Rust project solo, they’re not going to do very well. The reverse is also true. Throw me on a JavaScript project, I’m like, I don’t know. I probably wrote JavaScript a while ago. I forgot everything.

Then, besides just shipping features, there’s just actually getting the level of quality that’s required by the project. By that, I mean not all software projects have the same quality requirements. I’m sure we all believe we ship great software all the time. Realistically, sometimes you just got to ship. Bugs are ok. When you’re building a car, hopefully that’s not true. Different levels of quality depending on what you’re actually working on. Lots of aspects to consider.

How Rust Fits in Different Dimensions

Let’s talk about how Rust fits within those different dimensions. The first one being quality, that’s where Rust really shines. That’s its entire value proposition. Rust is a really good language for writing high-quality code with, both from a performance point of view, but also from the point of view of minimizing defects and bugs. On the performance side of things, that’s what you hear about the most. Rust is really fast. It’s compiled. There’s no garbage collector. That’s not new. C and C++ do that. Those have been around for a while. Why haven’t those gained as much adoption as Java? Because there is that quality side of things. With C or C++, about 70% of all high-severity security issues are memory-related. If quality is an issue, maybe C and C++ aren’t the right choice, which is probably why there are languages like Java. Less obvious, you’ve probably heard of some stuff like fearless concurrency.

Rust’s type system can prevent a whole bunch of other bug categories, like data races. Rust’s really good for writing high-quality code. Now for some less good things. If all things were equal, Rust would be a pretty slam-dunk sell for both high-level and infrastructure cases, but all things are not equal. Most of the complaints I hear about Rust when talking with developers can be summarized as, Rust is not as productive. Usually, that’s not what people tell me directly. They’ll say things like, when I tried to use Rust, I ended up fighting with the borrow checker. Or maybe you hear things like, Rust is great when it compiles, my code just works, but getting it to compile can be challenging. These are the kinds of things I hear. That really does boil down to that question of productivity.

Right now, the choice developers are making when picking Rust is to trade that development time, so longer development times for higher-quality code, but less development time than if you’re going to use a different language to reach that same level of quality. If you have a software project, that performance bar is high and that quality is high, you’re going to actually be able to reach that goal quicker with Rust than other languages. If maybe you’re willing to sacrifice a bit of quality for faster development time, maybe it doesn’t make sense. That’s the general sentiment you hear around online discussions with Rust for those higher-level use cases. The borrow checker, it’s all just unnecessary overhead.

Is that actually true? So far, maybe it doesn’t sound like I’m making a great pitch. Is the type system and the borrow checker fundamental overhead that comes with Rust? I do think there’s a kernel of truth, but reality is a bit more subtle. Again, in my Rust journey, I started with that same belief that Rust is not as productive as other languages, that it really is only good for systems-level programming. After talking with a whole bunch of teams that have been adopting Rust within their organization, that’s not really always what I heard. More often than not, the stories I heard started with: a team had a performance requirement for some feature, and they decided to look at Rust for that project, so their team learned Rust. They were able to ship their code and meet those performance requirements, oftentimes with minimal tuning.

Then they started noticing, over time, their software ran more reliably. They got paged less. They also noticed, as their team got more familiar with Rust, because they had to keep working on that software over the lifetime of that project, as a team, they didn’t actually notice their productivity drop as maybe they would have expected going into it. They still maintained that productivity. Also, they found lots of other different advantages as they started adopting Rust more in other cases, like in more higher-level cases themselves, that they were able to get more code reuse and other benefits like that. I started hearing the story over and again. I started to reevaluate my own assumption that Rust is not as productive.

Yes, it’s true, Rust is maybe not the best language for prototyping. First, that type system really does push you to write correct code. When you’re prototyping, you want to get your code running fast, even if it’s mostly broken, and Rust’s type system might get in the way of that. What Rust lacks for prototyping, it makes up for in the long run by speeding up development over the entire lifespan of a software project. Code tends to live a lot longer than you might originally expect. How many times do you write something thinking it won’t last long, and then 5, 10 years later, it’s still there? That happens more often than we’d like to believe. The type system, yes, it can add friction when prototyping. Then the other side of the coin, it makes Rust more explicit. If you’re just looking at a piece of Rust code in isolation, you know a lot more about it than with other languages.

For example, if you hold a mutable reference, you know that data can’t be mutated anywhere else. That matters a lot because we’re going to be reading code a lot more than writing it over the lifetime of the project. Besides just references, in general, Rust tends to prevent a lot more of the hidden dependencies or magic. Just looking at Rust code, you can tell a lot more about what’s going on, and that has benefits for that maintenance aspect. Things like during code reviews, debugging, all the stuff that you have to do to maintain a software project over its lifetime, Rust helps speed that up. It helps improve the productivity of the team over that lifespan. If you’re spending less time on that maintenance aspect, it also does mean you’re spending more time building new features. Anecdotally, this is what people tell me as they’ve used Rust over a couple years. That is where they’re seeing that tradeoff happen and part of why they are seeing their productivity with Rust stay just as high as with other languages.
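As a tiny illustration of that explicitness, a mutable reference in Rust is exclusive: while it is live, no other reference to the same data may exist. The helper below is made up for the example:

```rust
// Illustrative sketch of exclusive mutable borrows; `bump_twice`
// is a made-up helper, not from any library.
fn bump_twice(count: &mut i32) {
    // Through `count` we have exclusive access: nothing else can
    // read or write the underlying value while this borrow is live.
    *count += 1;
    *count += 1;
}

fn main() {
    let mut count = 0;
    let r = &mut count; // exclusive mutable borrow begins here

    // Uncommenting the next line fails to compile (error[E0502]):
    // a shared borrow can't coexist with the live mutable borrow `r`.
    // let alias = &count;

    bump_twice(r); // last use of `r`; the exclusive borrow ends after this
    println!("{count}"); // prints 2
}
```

Reading `bump_twice` in isolation, you know nothing else is mutating `count` concurrently, which is exactly the property that helps during code review and debugging.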

Rust’s Learning Curve

You may have noticed up until now, I’ve been qualifying things with, once they have successfully adopted Rust. What I think is true today is, Rust is harder to learn than other programming languages. There are a number of reasons. While it’s true Rust as a language isn’t trivial, I think a bigger reason why Rust is harder to learn is that it’s a pretty different language. One, it looks like an object-oriented language if you squint a lot, but it’s not. It’s not at all an object-oriented language. One big pitfall I see when people are coming to Rust and learning Rust, especially coming from object-oriented languages, is they take their patterns and try to apply them to Rust, and that just goes poorly. What if we could make Rust easier to learn? I think that is going to be a big step towards making it a compelling language for that higher level, because the learning curve is a big part of that initial productivity friction that teams see.

Second, and this applies more to Rust at that higher level, which is what we’re talking about now, is that the Rust ecosystem is a lot less developed than something like JavaScript. JavaScript ecosystem has tons of libraries, off-the-shelf components. Other languages do too. Rust, less so for the higher-level use case, and that’s in part because of Rust’s history coming up as a systems level language. Because if you’re building something at the systems level, the ecosystem is actually really developed there. There are libraries for a lot of different things and they’re all really nice. That’s part of that self-fulfilling cycle where Rust says it’s a great language for systems level programming. Developers come, they build stuff they need, more developers come, there’s like a self-reinforcing cycle that hasn’t really happened at that higher level. The second big aspect, I think, that we really need for Rust to really get to that level of being a great language for higher level is a more developed ecosystem there.

There’s not nothing. What libraries are there today for building those higher-level web apps, database-backed apps? At a very high level, to build a database application, you’ll need to have some HTTP router, takes inbound requests, and you, as the developer, handle those requests by using a database client, an ORM or something, and then you send the result back over the HTTP response. What is there as the ecosystem? There are libraries to do the router side, definitely a lot of good options there. There’s Axum, there’s Warp, there’s Actix, and probably others.

That website, arewewebyet.org, is definitely something you should go to if you want a more comprehensive list to find things. I’m personally partial to Axum, so if you go look at one, I recommend Axum. The state there is pretty strong. For the database client side of things, I think there are fewer options. There’s Diesel, the original ORM for Rust. If you’ve tried to use Diesel, it works, though I’ve heard it can be harder to use. The main other option is something like SQLx. It’s a nice little library if you like writing your SQL queries by hand, but personally, I think those higher-level use cases really need a nice high-level ORM.

Personally, over the past year, I’ve been working on that. Toasty, it’s open on GitHub, but be warned, it’s still in the very early days. It’s more of a preview, it’s not released on crates.io. The examples work. Lots of panics. Again, very early days. I’m hoping by sometime next year, hopefully mid, probably later, it’ll be ready for real-world apps, but I really want to get it out there and get people looking at and providing feedback early. Goals for Toasty. Toasty doesn’t just target SQL. Toasty does not abstract away the datastore, so you can’t use Toasty assuming SQL, then swap out the backend transparently to a datastore like DynamoDB or Cassandra. I don’t think that’s something a library can reasonably do.

However, personally, what’s bugged me when I’ve looked at ORM libraries in the past is there tends to be this full ecosystem split between ORMs and libraries that support other types of databases when there really is 90% overlap. The majority of the work that these libraries do is that mapping between structs and your database and doing create, read, update, the basics, so basic queries. Having to always have complete splits between those two ecosystems always bugged me. Toasty starts with the basic features that are common across all of these different flavors, and then lets you opt in to database-specific capabilities, whether that’s SQL or DynamoDB or Cassandra, but also prevents you from doing things that wouldn’t work on each target database. You obviously don’t want to do a three-way join on Cassandra.
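The opt-in capability idea can be sketched roughly like this; the types below are hypothetical stand-ins for the sake of illustration, not Toasty’s actual API:

```rust
// Hypothetical sketch (not Toasty's real API) of gating
// database-specific capabilities per backend.
#[derive(Clone, Copy, PartialEq)]
enum Backend {
    Sql,
    Cassandra,
    DynamoDb,
}

struct Query {
    joins: usize, // joins are a SQL-specific capability in this sketch
}

impl Query {
    // Reject queries that use features the target datastore lacks,
    // instead of pretending the backend can be swapped transparently.
    fn validate_for(&self, backend: Backend) -> Result<(), String> {
        if self.joins > 0 && backend != Backend::Sql {
            return Err(String::from("joins are not supported on this backend"));
        }
        Ok(())
    }
}

fn main() {
    let q = Query { joins: 3 };
    println!("{:?}", q.validate_for(Backend::Sql)); // Ok(())
    println!("{:?}", q.validate_for(Backend::Cassandra)); // Err(...)
    println!("{:?}", Query { joins: 0 }.validate_for(Backend::DynamoDb)); // Ok(())
}
```

The design point is the same one Lerche describes: share the common create/read/update core, but surface backend-specific features explicitly rather than abstracting the datastore away.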

Second and more importantly, I think, is that I really wanted to build a library that prioritized ease of use over maximizing performance. That isn’t to say that Toasty doesn’t care about performance, but you’re using Rust, you are coming here for things to be pretty fast. When designing the flow of Toasty, when designing the happy path specifically, I’m focusing on ease of use. If there’s a design tradeoff that I have to make between ease of use and really getting that last bit of performance, I’m going to pick ease of use here. That brings me, again, back to learnability. I do think Rust can become easier to learn. Yes, Rust has features that can be complicated and harder to use.

If you’ve looked at Rust, you’ve probably had these and you probably know what I’m talking about. I believe you don’t need to use these features to be productive with Rust. The basic Rust language is not that hard and you can be very productive with it. For Rust to really get to that point where it can really be a productive language for that higher level case, one, learning materials need to focus on that core, easy part of the language, and libraries need to focus as well, not bring in all the hard features.

Hard Parts: Traits and Lifetimes

What are the hard parts? When talking with developers who say Rust is hard, it really comes down to either traits or lifetimes, somehow. Both of these topics are not trivial. If you’re new to Rust and you structure your code wrong with these two features, it’s really easy to dig yourself into a hole that’s hard to get back out of. I think that part is really the biggest part of what contributes to that feeling that Rust is hard to use or not as productive. Because once you become more experienced with Rust, and that experience comes over a non-trivial amount of time, you know how to use these features, you know how to avoid the pitfalls, but that’s not really something helpful to tell a new developer that has to ship something next month.

It’s like, don’t worry, in like six months, you’ll be an expert at these things, or something like that. What do we do about it? If traits and lifetimes are hard, maybe the answer is as simple as avoid using them. Maybe it’s a little controversial, but I think that most developers using Rust can become very productive with hardly touching these. The problem is that, again, learning materials, will introduce these early, and a lot of libraries use traits and lifetimes heavily as part of their core APIs. You pick some of these beginning libraries and you’re like, ok, and there’s like five lifetimes stuck in this basic API that you’re supposed to call. I’m like, why?

Tips and Tricks for Using Rust

Personally, I started compiling a set of tips and tricks for using Rust. At Amazon, we’ve got a lot of new developers onboarding to Rust, and I’ve had to compile some things that I tell them on their learning journey. They’re not just for beginners. I find myself following these as well when writing Rust. I’m not going to be giving you a tutorial on writing web apps with Rust. I don’t think that’s super useful. You can go and look at the guides, like the Axum guides and Toasty, if that’s what you want. Instead, I’m going to go over some of the Rust features that I like and try to put those in the context of building web apps. Hopefully, those tips and tricks will be helpful if you go and read the guides and learn Rust, and maybe even teach Rust to other developers within your org. The first tip is: really try to prefer using enums when possible. A trait is a way of defining generic code. You should use traits if you don’t know all the possible types that are going to be passed in. This is especially true for libraries.

You might want to write a generic function and you don’t know all the possible types ahead of time. That is true. You probably need to use a trait there. When building the application, like the end product, we do know all the types that are going to get passed in. We don’t need to use a trait. We can use an enum instead. This is going to greatly simplify our code. This principle applies in many cases. One place I see it come up often is that question of mocking.

This comes up a lot, how do I mock in Rust? It’s so hard. I get these questions a lot because, again, at Amazon, I’m on the Rust team and we get all of the questions like, how do we do this in Rust? I know this question comes up a lot when building these apps. Let’s look at a quick example. Imagine you’re building a very basic payment processing routine. You have your billing client that issues network calls, and you want to test this by mocking the billing client. This is almost always what I see people try the first time. They define a billing trait and then they go make their billing logic generic over that trait. The problem is that this trait bound is going to leak everywhere in your application. Not just that, it’s going to start at a very high level and then propagate everywhere.

Then you have all of these different traits. If you keep adding more traits for every single thing you want to mock out, this is going to just compound and become super complicated. Again, this is an application where you control all the types that come in. You know there are only going to be two implementations, the real billing client and the mock one. The easier option is just to use an enum here and list out all the billing clients as variants. You avoid using traits. There are no more trait bounds. It adds a little bit of boilerplate to define that enum, but there are crates out there that can help get rid of that boilerplate, and you’ll now have no more traits in your payment handler, and that won’t cascade everywhere.
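A minimal sketch of that enum approach, with illustrative names (a real version would also have a variant wrapping the production HTTP client):

```rust
// Sketch of the enum-based mocking pattern described above;
// `BillingClient` and `handle_payment` are illustrative names,
// not from a real library.
#[derive(Debug)]
enum ChargeError {
    Declined,
}

enum BillingClient {
    // Real(HttpBillingClient), // production variant, elided in this sketch
    Mock { should_succeed: bool },
}

impl BillingClient {
    fn charge(&self, _user: &str, amount_cents: u64) -> Result<u64, ChargeError> {
        match self {
            BillingClient::Mock { should_succeed: true } => Ok(amount_cents),
            BillingClient::Mock { should_succeed: false } => Err(ChargeError::Declined),
        }
    }
}

// The handler takes the concrete enum: no generic trait bound
// to propagate through the rest of the application.
fn handle_payment(client: &BillingClient, user: &str, cents: u64) -> Result<u64, ChargeError> {
    client.charge(user, cents)
}

fn main() {
    let mock = BillingClient::Mock { should_succeed: true };
    println!("{:?}", handle_payment(&mock, "alice", 500)); // Ok(500)
}
```

Because `handle_payment` takes `&BillingClient` rather than a generic `B: Billing`, no trait bound leaks into its callers, which is the cascade problem the talk describes.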

A nice segue to procedural macros. Procedural macros let you write a Rust function that generates code for the user at compile time. That enables a lot. I do think it’s one of Rust’s superpowers that can unlock a lot of productivity, and it really is one of the reasons why Rust can be competitive for productivity at that higher level. Let’s look at it a bit. Here’s a Hello World example with the Axum library. That json! macro call, the contents of it, that’s clearly not Rust syntax. It’s JSON syntax. Rust has no support in the language for JSON syntax, but this compiles. The way it works is that there’s the serde_json library.

If you use Rust, you’ve probably already heard of Serde. It provides the implementation for that macro call. It’s implemented as a Rust function that takes a syntax tree and transforms it into something else, in this case, an instantiation of a Rust value representing that JSON. I’m not going to belabor this too much. Again, you probably know Serde. Here’s a derive attribute macro, and it works in a similar way. That struct definition is passed to Serde as an AST. Serde takes that and then transparently generates all the code needed to serialize that struct. Now you can use it as an Axum response type, and that’s really powerful.
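To make the mechanics concrete, here is a stdlib-only sketch of what a derive-style macro effectively does. The `ToJson` trait below is a simplified stand-in, not serde’s real API; the point is that the macro writes an impl of roughly this shape for you at compile time:

```rust
// Stdlib-only sketch of what a derive-style proc macro generates.
// Given the struct, a derive like serde's #[derive(Serialize)] would
// emit a trait implementation at compile time, roughly like the
// hand-written one below. `ToJson` is a simplified stand-in.
struct User {
    name: String,
    age: u32,
}

// Simplified stand-in for the trait the macro targets.
trait ToJson {
    fn to_json(&self) -> String;
}

// Roughly the shape of code the macro would emit, derived from the
// struct's AST: one field-serializing expression per field.
impl ToJson for User {
    fn to_json(&self) -> String {
        format!("{{\"name\":{:?},\"age\":{}}}", self.name, self.age)
    }
}

fn main() {
    let u = User { name: "carl".into(), age: 40 };
    println!("{}", u.to_json()); // prints {"name":"carl","age":40}
}
```

With the real derive macro, this boilerplate never appears in your source files at all, which is both the productivity win and the debuggability cost discussed next.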

Applying this at Toasty, the ORM library I’m working on, I think the initial obvious way to design the library would be to use a procedural macro on structs that define the database schema, something like this. I decided not to do this, at least initially. I’m going to tell you why. Procedural macros are one of Rust’s superpowers. As you start using Rust and you end up using it more, you’re probably going to start writing some. They do come at some amount of cognitive cost. You’ll even notice this, like there’s definitely an undercurrent of pushback to procedural macros within the Rust community. I don’t think it’s because procedural macros are bad. They’re definitely great, I love them. You need to be aware, again, of this cognitive cost. They generate all of that output transparently at compile time. If you need to debug the output or look at the output, that is, I think, where some of the problem comes. Just ask anyone who’s tried to debug proc_macro output. It can be challenging.

For Toasty, instead I took inspiration from Prisma, which is a JavaScript ORM client. They do a separate schema file, and code is generated from there. In a lot of ways that’s similar to procedural macros in that there’s a program that generates code for you. The difference being, it generates real files that you can open up and read to see all the generated output. I think specifically for Toasty, that’s pretty useful because Toasty generates a lot of structs and methods that the developer is supposed to use. For example, this user find_by_email method is generated by Toasty. If you can just open a file, read it, find all of those methods, and explore it like real code, I think that’s useful. Does that mean this code generation strategy is superior to proc_macros? Not at all. They’re different. I think it depends on the context. The reason I’m bringing it up, again, is that if you get to the point where you’re starting to write some libraries and introduce proc_macros, I think this is going to be something to keep in mind.

How do you decide between these two strategies? For me, there are two different factors that I consider. First is, how much context from the surrounding Rust code is required by the macro? If the answer is any, I think odds are that you’ll be better off with a procedural macro instead of that external code generation strategy. A quick example, again, revisiting that json! macro: you can see that the contents of the macro reference variables, so that’s highly contextual. Just at the conceptual level, the response struct is very tied to that specific request handler. It would be a bit jarring to have to jump to a different file to see how each is defined. Highly contextual case, and I think this is a really good use case for procedural macros.

Then, the second factor is, how important is it for the user to discover the details of that generated code? Just how important is it to read the generated code? I’m going to consider the Serde derive example again. The procedural macro here generates an implementation of that Serialize trait. The trait definition itself is public. The specifics of the implementation don’t really matter as much, because it’s just an implementation. It’s a lot less important for the user to open up that generated code and read that implementation. Again, I think this is a great use case for procedural macros. Toasty, on the other hand, is going to generate a lot of bespoke methods, which is why I decided, again, initially to go with a code generation strategy. In short, code generation is a great strategy to reduce boilerplate. Proc macros are one of Rust’s superpowers and a super-helpful way to do that. Just be sensitive to how much code is generated and how the user is supposed to learn to use that proc_macro.

Back to traits. Yes, you should prefer enums over traits, but there will be times when a trait is appropriate, especially when building libraries, where traits are a necessity, like I said. I still think, even for the library case, preferring enums over traits applies. When adopting a trait is necessary, just try to keep it as simple as possible. Doing something like this is probably ok. It lets the caller pass in any type that can be converted to a string. This helps avoid boilerplate. It can be good. This, on the other hand, is what I’m calling a second-order trait bound. The more complicated the trait bound, the harder it becomes for the user to reason about what types you can pass in. The compiler messages get harder. Now you can start to see, this is hard to reason about. This is why new people come to Rust and say it’s so hard. It’s stuff like this. To have a trait bound like this, there has to be a ton of value to that trait bound, so that the value outweighs the complexity. I think, historically, Rust libraries have leaned too much on traits.
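The kind of simple, first-order bound being described looks roughly like this; `greet` is an illustrative function, not taken from the talk’s slides:

```rust
// Sketch of a simple, first-order trait bound: the caller can pass
// anything convertible to a String, with no bound that propagates
// into the rest of the application.
fn greet(name: impl Into<String>) -> String {
    let name: String = name.into();
    format!("hello, {name}")
}

fn main() {
    // Both &str and String work at the call site with no boilerplate.
    println!("{}", greet("carl")); // prints "hello, carl"
    println!("{}", greet(String::from("rust"))); // prints "hello, rust"
}
```

A second-order bound would be something like `T: IntoIterator<Item = impl Into<String>>`, where the caller now has to reason about two layers of conversions at once; that is the kind of API the talk argues needs to carry a lot of value to justify its complexity.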

Over the years, I’ve definitely been a big offender of overusing traits. This example here comes from Tower, a library I worked on that uses traits heavily. It’s a simplified version. There is an argument for it, but the short of it is, I think it’s not worth it. The theme of this talk is really, as you get familiar with Rust, you’re going to be lured in by the power of Rust’s advanced features. Try to push back and really focus on how newcomers to your code, like a new developer coming in from the organization, are going to be able to read it and understand it.

For Toasty, this is the generated find_by_email method I mentioned earlier. The argument is a trait. It’s a first-order bound. I’m hoping that this is the most complicated usage of traits that 95% or more of Toasty users will experience. I did include a lifetime. Lifetimes are one of the hard parts as well. There’s a similar theme of trying to avoid lifetimes and instead pass return values. Here, I’m including a lifetime. I’m not 100% sure it carries its weight yet, which is why I’m hoping you’ll try Toasty and tell me what kind of experience you have. I may or may not end up getting rid of this lifetime.

Result vs. Panic

Let’s talk a bit about result versus panic. Result is a type. It’s typically used as a return type to make it explicit to the caller of a function that the function could encounter an error. Languages like Java would usually handle this with an exception. The advantage of making error handling an explicit part of the return type is that it forces the caller to be aware of the error and handle it, or their program will not compile. That is a big part of what leads to fewer bugs with Rust, because, unlike with exceptions, you can’t simply forget to handle the edge cases. Rust also has panic, which is a different way of modeling errors. Panics are a lot like exceptions in that, if you panic, it stops the execution flow and starts unwinding the stack. A panic is pretty harsh and really is for when something goes quite wrong.
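For example, a fallible parse naturally returns a Result (parse_port here is a made-up helper, not from the talk):

```rust
use std::num::ParseIntError;

// The Result in the signature forces the caller to deal with the
// error case; the compiler warns if the return value is ignored.
fn parse_port(input: &str) -> Result<u16, ParseIntError> {
    input.trim().parse::<u16>()
}

fn main() {
    match parse_port("8080") {
        Ok(port) => println!("listening on {port}"),
        Err(err) => eprintln!("invalid port: {err}"),
    }
}
```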

Two ways of handling errors. Which one do you choose? It really comes down to whether the caller is expected to handle the error case or not. Let’s say you have a socket. You’re reading data from the socket, and the socket unexpectedly closes. That is an error case that will happen in real life. You, as the programmer using the socket, should gracefully handle that case somehow. With socket operations in Rust, all the methods are going to return result. Now, when to panic. This is for error cases that have no sane way to be handled at runtime. What I mean by that is, oftentimes, it’s a bug in the caller’s code that ends up in an unexpected error case. Handling bugs in your own code as recoverable runtime errors is asking a lot. A hard stop makes sense there, because if you have a bug in your code, you are now in an unexpected state that is hard to recover from. A panic is usually for when there’s a bug in the code.
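A small sketch of the distinction (both functions are hypothetical examples):

```rust
// An expected failure: the file may legitimately be missing, so the
// caller gets a Result and decides what to do about it.
fn read_config(path: &str) -> Result<String, std::io::Error> {
    std::fs::read_to_string(path)
}

// A contract violation: calling this with an out-of-range index is a
// bug in the caller's code, so indexing panics rather than returning
// a Result the caller would just unwrap anyway.
fn nth(values: &[i32], i: usize) -> i32 {
    values[i]
}

fn main() {
    if let Err(err) = read_config("/etc/app/missing.toml") {
        eprintln!("config not found: {err}");
    }
    println!("{}", nth(&[10, 20, 30], 1));
}
```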

To illustrate what I mean by a bug in the code, let’s look a bit at Toasty. I want to talk about how Toasty handles the n plus 1 problem, which is a textbook ORM problem. Here, when we’re loading a user, we’re iterating the todos, and we’re printing each todo’s category. The n plus 1 problem is that if the ORM implicitly and lazily loads associations, issuing queries as it loads them, there’s going to be a database query issued for every iteration of that loop. That’s bad. What you actually want to do is load all the necessary data up front. In this code example, when you’re calling find_by_email, Toasty doesn’t know that you’re going to want the category. Also, with async Rust, it’s not actually possible for Toasty to implicitly load that data on demand, because every point where a network call might happen needs a .await. That’s actually pretty nice, because now you can look at this code sample and immediately know where the database queries might happen.

Recall I mentioned earlier that hidden dependencies are magic. This is another illustration of where Rust, the language, can prevent that. Here, the only database query that gets issued is to load that user up front. If only the user is loaded, what happens when you call user.todos right there? Toasty panics. I specifically didn’t want that todos method to return a result, because as a caller, what would you do with that result? You’d have to add boilerplate every place to handle it, and probably the only way to really handle a result in that case would be a .unwrap. That’s going to add a whole bunch of unnecessary friction when using Toasty the library. To avoid that panic, the caller specifies which associations they want to eagerly load at query time. If you try to access an association without eagerly loading it, that’s a runtime bug, so a panic.
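The pattern described here can be sketched outside of Toasty; this is a hypothetical miniature, not Toasty’s actual types:

```rust
struct Todo {
    title: String,
}

struct User {
    // None until the caller eagerly loads the association at query time.
    todos: Option<Vec<Todo>>,
}

impl User {
    fn todos(&self) -> &[Todo] {
        // Accessing an association that wasn't eagerly loaded is a bug
        // in the caller's code, so panic instead of returning a Result
        // the caller could only .unwrap() anyway.
        self.todos
            .as_deref()
            .expect("association `todos` was not eagerly loaded")
    }
}

fn main() {
    let user = User {
        todos: Some(vec![Todo { title: "write docs".to_string() }]),
    };
    for todo in user.todos() {
        println!("{}", todo.title);
    }
}
```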

Using Indices for Complex Relationships

One quick tip for the road. You may have heard it said, as an example of the borrow checker’s limitations, that you can’t implement a doubly-linked list in Rust. That’s not actually true. You can implement a doubly-linked list in Rust without using any unsafe code if you store the nodes in a Vec and use indices to represent the links. That pattern is super useful once you get to modeling more complex data, and it scales up to more complex data relationships. If you want to watch another video, this one covers it in great depth: youtube.com/watch?v=aKLntZcp27M. I highly recommend it to everyone as almost required viewing.
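A minimal sketch of that index-based pattern (the names here are illustrative, not from any particular library):

```rust
// Nodes live in a Vec; links are indices instead of references, so
// there's no unsafe code and no fight with the borrow checker.
struct Node<T> {
    value: T,
    prev: Option<usize>,
    next: Option<usize>,
}

struct DoublyLinkedList<T> {
    nodes: Vec<Node<T>>,
    head: Option<usize>,
    tail: Option<usize>,
}

impl<T> DoublyLinkedList<T> {
    fn new() -> Self {
        Self { nodes: Vec::new(), head: None, tail: None }
    }

    // Append a value and return its index (the "link" other code uses).
    fn push_back(&mut self, value: T) -> usize {
        let idx = self.nodes.len();
        self.nodes.push(Node { value, prev: self.tail, next: None });
        match self.tail {
            Some(tail) => self.nodes[tail].next = Some(idx),
            None => self.head = Some(idx),
        }
        self.tail = Some(idx);
        idx
    }

    // Walk forwards, following the next links.
    fn iter(&self) -> impl Iterator<Item = &T> {
        std::iter::successors(self.head, move |&i| self.nodes[i].next)
            .map(move |i| &self.nodes[i].value)
    }

    // Walk backwards, following the prev links.
    fn iter_rev(&self) -> impl Iterator<Item = &T> {
        std::iter::successors(self.tail, move |&i| self.nodes[i].prev)
            .map(move |i| &self.nodes[i].value)
    }
}

fn main() {
    let mut list = DoublyLinkedList::new();
    list.push_back("a");
    list.push_back("b");
    list.push_back("c");
    for value in list.iter() {
        println!("{value}");
    }
}
```

Deletion is where this sketch would grow: removing a node means patching its neighbors’ indices, and reusing freed slots typically calls for a free list, but none of it requires unsafe code.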

Summary

What’s my point with all this? I do think Rust could be a great general-purpose language, including for those higher-level use cases like database applications, where productivity is more important than performance. There’s still some work to get there. Primarily, we need a more fleshed-out ecosystem of those higher-level libraries. I’m trying to do my part. Part of why I’m giving this talk is to convince you that Rust has that potential. If we can reach that potential, growing the ecosystem and making Rust easier to learn, with libraries that are easier to learn and learning materials that focus on ease of use, we can get to the point where we have a language that is really fast, lets you write more reliable code, lets you get paged less often, and is just as productive.

At the end of the day, building these web apps is almost just taking all these components and gluing them together. I don’t think you need to be an expert in lifetimes and traits and all that to do that kind of work. That is my main point. Try out Toasty. Give me feedback. I’m still trying to figure out myself what exactly the right API is for that ease of use. Feedback is super useful.

Questions and Answers

Participant: Your argument of enums versus traits, you show an example where it’s pretty messy if you use traits. You have a very clean implementation, but the trick is the messed up function. When you argue for enums, you intentionally or unintentionally skip the implementation part, which I think is where the messy part will be. How do you then argue for enums if your code becomes forced everywhere?

My argument is, if you have enums, then you have to write code twice to handle enums.

Lerche: Yes, if you have enums, you have to write code twice to handle enums. There are proc macros to handle that for you. Look at enum_dispatch, a proc macro that lets you use that enum-style pattern just like you’d use a trait. If you have an enum, the only place you’re going to have duplication is in the implementation.
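A hand-rolled sketch of that enum pattern (Shape is a made-up example; crates like enum_dispatch can generate this kind of match for you):

```rust
// The enum alternative to a trait object: the set of variants is
// closed, and dispatch is an exhaustive match in one place.
enum Shape {
    Circle { radius: f64 },
    Rect { width: f64, height: f64 },
}

impl Shape {
    fn area(&self) -> f64 {
        // The "write it twice" duplication lives in this one match.
        match self {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { width, height } => width * height,
        }
    }
}

fn main() {
    let shapes = vec![
        Shape::Circle { radius: 1.0 },
        Shape::Rect { width: 2.0, height: 3.0 },
    ];
    for shape in &shapes {
        println!("area = {}", shape.area());
    }
}
```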


Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



AWS CodeBuild Introduces Docker Server Capability to Accelerate CI/CD Pipelines

MMS Founder
MMS Craig Risi

Article originally posted on InfoQ. Visit InfoQ

On May 15, 2025, AWS announced a significant enhancement to its CodeBuild service: the Docker Server capability. This new feature allows developers to provision a dedicated and persistent Docker server within their CodeBuild projects, aiming to streamline and expedite the Docker image build process.

Traditionally, building Docker images in CI/CD pipelines can be time-consuming, especially when dealing with multi-layered images. With the Docker Server capability, AWS addresses this challenge by centralizing image building to a remote host. This approach reduces wait times and increases overall efficiency by maintaining a persistent Docker layer cache. In practical terms, AWS reports a dramatic reduction in build times when utilizing this feature.

The persistent Docker server supports multiple concurrent build operations, with all builds benefiting from the shared centralized cache. This setup not only accelerates the build process, but also ensures consistency across builds, which is crucial for maintaining reliable deployment pipelines.

To leverage this capability, developers can enable the Docker Server option within their CodeBuild project settings. Once activated, CodeBuild provisions the dedicated Docker server with persistent storage, facilitating faster and more efficient builds.

To set up the new Docker Server capability in AWS CodeBuild (as detailed in the AWS blog), begin by creating a new CodeBuild project or editing an existing one in the AWS Management Console. In the environment configuration, select “Managed image” and choose Amazon Linux 2 as the operating system. Then, within the new Docker configuration section (available for supported standard images such as aws/codebuild/standard:7.0 or later), enable the “Docker Server mode” option. This activates a lightweight Docker daemon without the performance drawbacks typically associated with Docker-in-Docker (DinD). Next, update your buildspec.yml file to include Docker commands (for example, building and pushing images to Amazon ECR) just as you would in a local Docker setup.
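Assuming placeholder values for the account ID, region, and repository name, a buildspec.yml along those lines might look like:

```yaml
version: 0.2
phases:
  pre_build:
    commands:
      # Log in to Amazon ECR; the account ID and region are placeholders.
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com
  build:
    commands:
      # With Docker Server mode enabled, image layers are cached on the
      # persistent remote Docker server and reused across builds.
      - docker build -t 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-app:latest .
      - docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
```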

The instructions remind you to make sure the IAM role used by CodeBuild has the necessary permissions to interact with services like Amazon ECR. Once everything is configured, you can then trigger your build.

The introduction of the Docker Server capability in AWS CodeBuild has sparked some discussion among developers and DevOps professionals. While there is appreciation for significantly reduced build times, there are currently limitations in integration with infrastructure-as-code tools.

For instance, a GitHub issue in the AWS Cloud Development Kit (CDK) repository highlights that:

“As of now, the AWS CDK does not support this capability because CloudFormation also does not expose it yet. CDK can only provide support once CloudFormation does.”

This suggests that while the feature is promising, its adoption may be hindered until full support is available in tools such as CloudFormation and CDK.

Despite these integration challenges, the Docker Server capability has been lauded for its performance improvements. In the official AWS blog post, Donnie Prakoso shared benchmark results demonstrating a 98% reduction in build time, from nearly 25 minutes down to just 16 seconds, when utilizing this feature.

This new feature competes with existing solutions, such as Docker Inc’s Docker Build Cloud, GCP’s Cloud Build, and GitHub Actions Docker Layering.

This enhancement to AWS CodeBuild underscores AWS’s commitment to improving developer productivity and optimizing the CI/CD workflow. By reducing build times and streamlining the image creation process, the Docker Server capability enables development teams to deploy applications more rapidly and reliably.



Applying Observability to Leadership to Understand and Explain your Way of Working

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

Leadership observability means observing yourself as you lead. Alex Schladebeck shared at OOP conference how narrating thoughts, using mind maps, asking questions, and identifying patterns helped her as a leader to explain decisions, check bias, support others, and understand her actions and challenges.

Employees and other leaders around you want to understand what leads to your decisions, Schladebeck said. Leadership observability is treating oneself as the system that is under observation, she explained:

It’s being able to ask questions and get meaningful answers because “my brain” is capable of telling me why I acted / reacted / communicated / decided the way I did.

Heuristics give us our “gut feeling”. And that’s useful, but it’s better if we’re able to take a step back and get explicit about how we got to that gut feeling, Schladebeck mentioned. If we categorise and label things and explain what experiences lead us to our gut feeling, then we have the option of checking our bias and assumptions, and can help others to develop the thinking structures to make their own decisions, she explained:

I had a colleague who had to decide which of their team to put forward for a customer project. I was able to share my context of assessing the person on their technical skills and their ability to represent the company.

Schladebeck recommends that leaders narrate their thoughts to reflect on, and describe their own work to the ones they are leading. They can do this by asking themselves questions like, “Why do I think that?”, “What assumptions am I basing this on?”, “What context factors am I taking into account?” Look for patterns, categories, and specific activities, she advised, and then you can try to explain these things to others around you.

To visualize her thinking as a leader, Schladebeck uses mind maps. She groups things and experiences together, and makes different branches for different topics:

I categorise the mindmap branches with categories of what I’m doing, such as “making decisions”, “dealing with conflict”, “managing time and tasks”. Then I collect activities within them like “collecting options”, “pros and cons”, “personal preference”. And then I add examples as they happen.

Schladebeck also describes general continuums of thought, such as “planning versus exploring” and “directing versus letting others lead”. Using these things, she tries to make her inner workings clearer for others.

Observing her self-confidence as a leader, Schladebeck found out how she increases it:

I try to remind myself that I’ve managed all my hard days and tasks so far. I call my coach or my friends if I need extra support. And a look at the world stage reminds me that there are people in much higher jobs with much lower qualifications than me!

She also observed how she deals with conflicts, which is something she is still working to improve:

At the moment my focus is to listen carefully and ask for the facts / concrete examples on both sides. Based on those, we can start conversations.

Schladebeck mentioned that by observing how she leads, she has learned which activities are hard for her (like “keeping quiet and letting others decide!” and “interacting to understand when I disagree”) – and, more importantly, why they are hard. Once she can identify why she doesn’t manage a specific thing very well, she can choose to work specifically on those aspects. It helps her to be aware of “what I’m doing” on a daily basis:

Being able to link your current activity back to a higher goal is very important in leadership work!

InfoQ interviewed Alex Schladebeck about what she learned from observing how she leads.

InfoQ: How do you balance making decisions versus having other people decide?

Alex Schladebeck: I’m still working on this! I’m currently trying to leave a gap, some seconds, before I jump into the conversation with my input. This leaves room for others, and for my brain to catch up to the situation and think about whether my input is needed right now.

InfoQ: What if a decision may disappoint people, how would you handle it?

Schladebeck: It’s not often that absolutely no one is disappointed! This is why being clear and explicit about how and why you make decisions is important. What is the context? What are the risks or opportunities?

And – of course we’re going to disappoint people, and it’s ok that they are disappointed. I don’t try to convince them otherwise. Accepting the feeling doesn’t mean that you’ll change the decision. It does mean you understand the human who is affected.

InfoQ: Has there been an observation that surprised you?

Schladebeck: When my then-boss would cancel a meeting at short notice, I would wonder why he did that. It didn’t feel respectful. And then I was the proud owner of the manager-level calendar when my role changed… I realised that short-term management of your calendar is necessary. Sometimes you have three meetings planned in parallel! And only really on the day do you find out which ones are really happening.

On the other hand, if you try to clear your calendar weeks in advance to make sure you only have one meeting at a time, the effort is often wasted. By the time the week in question rolls around you have three again!

I’ve also become a short-term calendar manager. What I try to do though is be very clear about how and why that happens. And if I have people who really don’t like it, then I make sure their meetings don’t get moved at short notice. That, however, does mean that they might get moved more.



GitHub Unveils Prototype AI Agent for Autonomous Bug Fixing

MMS Founder
MMS Mark Silvester

Article originally posted on InfoQ. Visit InfoQ

GitHub recently introduced a prototype AI coding agent designed to fix bugs and propose code changes through pull requests autonomously.

Unlike GitHub Copilot, which assists developers in real time, the new agent operates independently, scanning codebases, identifying issues, and submitting suggested fixes as pull requests. This represents a shift from developer assistance to a more autonomous code-maintenance model.

According to GitHub, the agent builds on the capabilities of Copilot and leverages CodeQL for semantic code analysis, which enables understanding the meaning and structure of code beyond simple text matching. It is also integrated with a software library of common vulnerability and bug patterns. Once it detects a relevant issue, the agent formulates a potential fix and opens a pull request, complete with code changes and a descriptive message outlining the rationale. Developers can then review, modify, or merge the pull request as needed.

The announcement coincides with the rise of autonomous AI agents in software development. Tools like SWE-agent from Princeton have demonstrated early results in multi-step bug fixing and test-driven development. These tools are part of a broader trend towards software that can not only assist but also act, handling iterative development tasks with minimal human oversight. GitHub CEO Thomas Dohmke described this shift by stating, “Instead of you just asking a question and it gives you an answer, you give it a problem and then it iterates on that problem together with the code that it has access to”.

The GitHub team emphasised that this prototype is still in early development and is being tested internally. It is not yet available for public use, and GitHub has not announced a timeline for broader rollout. However, the company said that the technology represents a long-term investment in reducing the manual burden of software maintenance and improving code health at scale.

Developers have shown interest in GitHub’s coding agent as a way to automate routine bug fixing. In a Reddit thread, early users described successful test runs and called the tool a potential “game changer.” However, some raised concerns about trust, testing coverage, and change management. A GitHub Community discussion also highlighted worries around the implications of AI-generated pull requests, particularly in complex codebases.

The move aligns with GitHub’s broader AI strategy, which includes integrating large language models into workflows beyond code generation, such as documentation, issue triaging, and now, autonomous pull request creation. As part of this strategy, GitHub continues to explore how AI can take on repetitive engineering tasks, freeing developers to focus on higher-level design and problem-solving.



Codefacture Expands Operations to Enhance Service Capabilities and Reach New Markets

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

PRESS RELEASE

Published June 12, 2025

Codefacture is a software development company founded in Türkiye that is accelerating its growth to meet increasing global demand for custom-built CRM, ERP, custom software, and enterprise automation solutions.

Ankara, Türkiye – In response to surging demand for intelligent, adaptable enterprise software, Codefacture, a fast-growing innovator in CRM, ERP, and custom software development, has announced a major operational expansion. This strategic initiative is designed to support larger-scale development, broaden client services, and extend the company’s reach across Türkiye, Europe, and North America.

Founded in 2024, Codefacture is a software development company that has quickly established itself as a trusted partner for businesses seeking flexible, scalable, and fully customized digital solutions. From manufacturers needing smarter resource planning to service providers optimizing their customer engagement workflows, Codefacture’s platforms are designed to fit the specific way each business operates – not the other way around.

With this expansion, the company is positioning itself to serve an even broader array of industries and geographies while maintaining the high level of customization and personal attention it’s known for.

A Custom-First Approach to Business Software

Unlike many enterprise software providers who offer rigid, off-the-shelf systems, Codefacture’s approach is grounded in adaptability. Every platform it delivers is built around the unique processes, data structures, and operational goals of the client. This has made the company particularly valuable to mid-size and enterprise businesses that are outgrowing legacy tools or facing challenges adapting generic platforms to complex, industry-specific needs.

All of Codefacture’s platforms and custom software solutions are built using a modern, scalable technology stack designed for performance, flexibility, and rapid development. The team leverages powerful frameworks and libraries such as Node.js, Laravel, React, Vue.js, Next.js, and Tailwind CSS to deliver intuitive and responsive user experiences. On the backend, Codefacture works with a range of SQL databases as well as NoSQL solutions, ensuring data structures are optimized for each client’s specific needs. The use of TypeScript and Python further enhances code quality, scalability, and integration capabilities across complex systems.

CRM and ERP Solutions Built for Real-World Complexity

Modern businesses demand more than just functionality – they need systems that map perfectly to their workflows. Codefacture designs and delivers custom-built CRM and ERP solutions that go beyond conventional software limitations. From streamlining sales pipelines to automating inventory and financial processes, each platform is configured to match the exact operational structure of the client.

Whether replacing outdated legacy tools or introducing ERP to a fast-growing enterprise for the first time, Codefacture’s systems are engineered for performance, transparency, and adaptability.

Custom Software Development for Specialized Business Needs

Codefacture delivers fully custom software solutions designed to meet the unique operational challenges of each business. From internal tools to external platforms, every product is built from the ground up to reflect the client’s exact workflows, goals, and data environment. By avoiding one-size-fits-all templates, Codefacture ensures that each solution is intuitive, scalable, and aligned with long-term strategy. This approach enables organizations to streamline operations, improve performance, and respond faster to change through software that’s specifically designed – not adapted.

Strategic Growth for Sustainable Impact

The newly announced expansion involves key investments in three core areas:

  1. Team Growth: Codefacture is significantly increasing its workforce, hiring software engineers, UI/UX designers, project managers, and customer success specialists. This will allow the company to support larger client portfolios and reduce turnaround times for product development and support.
  2. Technology Infrastructure: The company is upgrading its development stack and infrastructure to support larger, more complex deployments. New integrations and APIs are being developed to enhance cross-platform functionality, and a new DevOps pipeline is being rolled out to accelerate delivery cycles.
  3. Client Experience: Codefacture is enhancing its onboarding and support processes with expanded documentation, dedicated account management, and a new analytics dashboard that gives clients real-time insights into their system performance and business metrics.

Expanding Into New Markets

As Codefacture expands, it is actively seeking opportunities in new markets. With strong demand from Europe and North America, the company is exploring partnerships with resellers, consultants, and implementation partners to bring its platforms to a wider range of organizations.

Early interest has been particularly strong from sectors such as:

  • Manufacturing – Where demand for modular, transparent ERP systems is critical for cost control and agility.
  • Healthcare – Where custom CRM solutions are being used for patient outreach, scheduling, and compliance.
  • Education – Where schools and universities need systems to manage admissions, communications, and internal workflows.
  • Logistics – Where real-time visibility and data-driven decision-making are becoming industry standards.

Codefacture’s expansion strategy includes not only growth in client acquisition but also knowledge-sharing: the company plans to launch webinars, industry whitepapers, and case studies over the next year to help businesses better understand how tailored software can reshape their operations.

A Vision-Driven Company in a Fast-Changing World

Digital transformation is no longer optional – it’s an operational necessity. Yet for many businesses, transformation is delayed by the complexity of their processes and the lack of software flexible enough to fit them. Codefacture exists to solve this exact problem.

Driving Digital Transformation with Adaptability at the Core

The global business landscape is evolving faster than ever. Off-the-shelf tools often create more friction than they solve. Codefacture exists to eliminate that gap – by delivering tailored platforms that enable companies to streamline operations, improve visibility, and scale without compromise.

About Codefacture

Codefacture was founded in 2024 in Ankara, Türkiye, with a mission to revolutionize how businesses adopt and scale enterprise software. By focusing on deeply customized CRM, ERP, business automation platforms and custom software, the company empowers its clients to unlock greater efficiency, transparency, and scalability.

Whether automating inventory control, managing large sales teams, or coordinating procurement, Codefacture’s tools are designed to adapt, evolve, and grow with every business they touch.

Vehement Media



Mistral Releases Its Own Coding Assistant Mistral Code

MMS Founder
MMS Daniel Dominguez

Article originally posted on InfoQ. Visit InfoQ

Mistral has introduced Mistral Code, a new AI-powered development tool aimed at improving the efficiency and accuracy of coding workflows. Mistral Code utilizes advanced AI models to offer developers intelligent code completion, real-time suggestions, and the capability to interact with the codebase using natural language. By understanding the structure and relationships within a project, Mistral Code delivers context-aware support for a range of tasks, helping developers write and optimize code more effectively.

Mistral Code can assist with real-time code completion, offering suggestions for code as developers type, which helps reduce errors and speed up the development process. It also identifies syntax and logical errors, providing suggestions for correcting them, which minimizes the time spent on debugging.

In addition to its code completion and debugging capabilities, Mistral Code also generates code documentation automatically. This includes inline comments and API documentation, improving the maintainability of the code and making it easier for teams to collaborate. The tool can even generate unit and system tests to ensure the code produced is fully functional, reducing the burden on developers to manually create tests.

Mistral Code is also designed to assist with code migration. It can generate code snippets in target languages, allowing teams to adapt their existing codebases to new frameworks or languages with ease. Furthermore, the platform analyzes code performance, identifying bottlenecks and offering suggestions for optimizing speed and efficiency.

The AI models behind Mistral Code, such as Codestral and Devstral, are built to be fully customizable and tunable, allowing developers to adjust the models to fit the specific needs of their codebase. This flexibility enables the platform to integrate into different development environments, whether for individual developers or larger teams. The platform supports enterprise-grade features, including team management, detailed analytics, and deployment flexibility.

Community feedback on Mistral Code is largely positive, with developers praising its efficient, clean code generation across languages. AI and data specialist Shubham Sharma commented:

Mistral Code revolutionizes enterprise AI development—delivering frontier-grade coding models directly into secure, compliant workflows. No more POC purgatory.

And Fahim in Tech shared:

If your team needs an AI assistant that understands your code, respects your security, and actually helps ship features, not just autocomplete lines, Mistral Code might be your new favourite tool.

Mistral Code is integrated directly into JetBrains and VS Code, which simplifies the workflow by allowing developers to stay within their existing development environment. While tools like Windsurf, Cursor, and Copilot offer code completion and assistance, Mistral Code allows natural language interactions with the codebase and offers customizable AI models.



AWS Unveils Independent European Governance and Operations for European Sovereign Cloud

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

AWS has announced key components of its independent European governance for the AWS European Sovereign Cloud, including a new EU-controlled parent company and a dedicated Security Operations Center. With this strategic move, AWS aims to launch its first region in Brandenburg, Germany, by the end of 2025, specifically to meet the stringent digital sovereignty requirements of European governments and enterprises.

The AWS European Sovereign Cloud is designed to combine operational autonomy with the expansive service portfolio of the AWS Cloud. AWS emphasizes its long-standing “sovereign-by-design” approach, where customers control data location and movement. These stringent requirements are primarily driven by industry concerns over data access and extraterritorial laws, such as the U.S. CLOUD Act. This new cloud builds on that commitment, offering the same performance, innovation, security, and scale AWS customers expect.

A new European organization and operating model will be established for the AWS European Sovereign Cloud, comprising a new parent company and three subsidiaries incorporated in Germany, all of which will be led by EU citizens residing in the EU and subject to local laws. Kathrin Renz, currently vice president of AWS Industries, will serve as the company’s first managing director, legally bound to act in the best interest of the AWS European Sovereign Cloud.

Furthermore, the model ensures that customer content and metadata remain within the EU, with operations managed exclusively by personnel residing in the EU. Its dedicated infrastructure will be entirely located within the EU, physically and logically separate from other AWS regions, with no critical dependencies on non-EU infrastructure. It will also feature independent services, such as its own Amazon Route 53 (utilizing European Top-Level Domains) and a dedicated “root” European Certificate Authority for SSL/TLS certificates, alongside Euro currency billing.

An independent advisory board, comprising at least four EU citizens (including one independent member not affiliated with Amazon), will be established. This board will provide expertise and accountability on sovereignty-related aspects and act in the best interest of the AWS European Sovereign Cloud. The design also enables continuous operation even in the event of a connectivity interruption with the rest of the world.

The AWS European Sovereign Cloud will feature a dedicated European Security Operations Center (SOC) mirroring global security practices. This SOC will be led by an EU citizen residing in the EU, who will be responsible for advising the managing director and supporting customers and regulators on security matters.

AWS has also closely collaborated with European regulators, including the German Federal Office for Information Security (BSI), signing a co-operation agreement to further governance and technical standards for operational separation and data flow management. To provide verifiable trust and adherence to sovereignty controls, AWS is introducing the Sovereign Requirements Framework (SRF). The AWS European Sovereign Cloud will maintain key certifications, including ISO/IEC 27001:2013, SOC 1/2/3 reports, and BSI C5 attestation, with independent third-party audits based on the SRF, available via AWS Artifact.

(Source: About Amazon News)

The AWS European Sovereign Cloud initiative comes amidst a significant and ongoing push in Europe for greater technological sovereignty. David Linthicum, a prominent industry analyst, commented on this broader trend in a tweet on X:

The growing push in Europe to reduce reliance on US-based cloud providers is a bold and important move toward technological sovereignty. By promoting homegrown solutions, the EU is striving for greater control over sensitive data and reduced dependency on external powers. However, this shift raises an important question: Could Europe be sacrificing access to the advanced capabilities offered by leading US cloud providers in the process? US cloud giants have set the standard for cutting-edge innovation in areas such as artificial intelligence, machine learning, and scalable global infrastructure.

Linthicum further noted that while Europe’s push to develop its own cloud ecosystem (citing initiatives like Gaia-X) is a step in the right direction, it faces “steep challenges in terms of infrastructure investment, scalability, and competing with over a decade of expertise and innovation.” He concluded that:

Striking the right balance will be critical. Europe must ensure its sovereignty efforts don’t unintentionally limit access to crucial capabilities that drive innovation and competitive advantage in a globally connected economy.

This initiative from AWS comes as other major cloud providers are also making significant commitments to address European concerns about digital sovereignty. For instance, Microsoft recently announced five digital commitments to strengthen its support for Europe’s technological landscape, including a 40% expansion of its European datacenter capacity, a pledge to uphold digital resilience (including a “European cloud for Europe” overseen by a European board), and the completion of its EU Data Boundary project.

The first AWS European Sovereign Cloud Region is set to launch in the State of Brandenburg, Germany, by the end of 2025, backed by a €7.8 billion investment – a commitment that ensures customers can meet their evolving digital sovereignty needs without compromising on the full power of AWS.

It will offer a comprehensive suite of services, including artificial intelligence (Amazon Bedrock, Amazon Q, Amazon SageMaker), compute, containers, database, networking, and security. These services, built on AWS’s sovereign-by-design foundation, will simplify how customers achieve digital sovereignty while gaining the security, control, compliance, and resilience they need. Customers will also benefit from a wide range of AWS Partner Solutions. Once launched, the AWS European Sovereign Cloud will be open to all customers and partners, reinforcing AWS’s long-term commitment to Europe’s digital future.



Traders Buy Large Volume of MongoDB Call Options (NASDAQ:MDB) – Defense World

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) saw some unusual options trading activity on Wednesday. Investors bought 36,130 call options on the stock, an increase of approximately 2,077% compared to the average volume of 1,660 call options.
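For readers who want to check that figure, the percentage increase is simply the day's call volume measured against the trailing average. A minimal sketch using the numbers reported above:

```python
# Unusual options activity: percentage increase of the day's call volume
# over the trailing average volume (figures as reported in the article).
day_volume = 36_130  # call options bought on Wednesday
avg_volume = 1_660   # average daily call volume

increase_pct = (day_volume - avg_volume) / avg_volume * 100
print(f"{increase_pct:.0f}%")  # prints 2077%
```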

Insider Buying and Selling at MongoDB

In related news, insider Cedric Pech sold 1,690 shares of the firm’s stock in a transaction that occurred on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total transaction of $292,809.40. Following the transaction, the insider now owns 57,634 shares in the company, valued at $9,985,666.84, a 2.85% decrease in their ownership of the stock. The transaction was disclosed in a legal filing with the Securities & Exchange Commission, available through the SEC website. Also, Director Hope F. Cochran sold 1,175 shares of the firm’s stock in a transaction that occurred on Tuesday, April 1st. The shares were sold at an average price of $174.69, for a total transaction of $205,260.75. Following the transaction, the director now owns 19,333 shares in the company, valued at $3,377,281.77, a 5.73% decrease in her position. That sale was likewise disclosed in an SEC filing. Over the last 90 days, insiders have sold 49,208 shares of company stock worth $10,167,739. Company insiders own 3.10% of the stock.
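The totals and ownership percentages in filings like these follow directly from the reported share counts and prices. A quick sketch using Pech's reported figures:

```python
# Insider sale arithmetic, using the figures reported above:
# shares sold, average sale price, and shares still held afterwards.
shares_sold = 1_690
avg_price = 173.26
shares_after = 57_634

proceeds = shares_sold * avg_price
pct_decrease = shares_sold / (shares_after + shares_sold) * 100

print(f"${proceeds:,.2f}")     # prints $292,809.40
print(f"{pct_decrease:.2f}%")  # prints 2.85%
```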

Institutional Investors Weigh In On MongoDB

A number of institutional investors have recently modified their holdings of the stock. OneDigital Investment Advisors LLC boosted its stake in shares of MongoDB by 3.9% during the 4th quarter. OneDigital Investment Advisors LLC now owns 1,044 shares of the company’s stock valued at $243,000 after buying an additional 39 shares during the period. Avestar Capital LLC raised its holdings in shares of MongoDB by 2.0% during the 4th quarter. Avestar Capital LLC now owns 2,165 shares of the company’s stock valued at $504,000 after purchasing an additional 42 shares in the last quarter. Aigen Investment Management LP raised its holdings in shares of MongoDB by 1.4% during the 4th quarter. Aigen Investment Management LP now owns 3,921 shares of the company’s stock valued at $913,000 after purchasing an additional 55 shares in the last quarter. Handelsbanken Fonder AB raised its holdings in shares of MongoDB by 0.4% during the 1st quarter. Handelsbanken Fonder AB now owns 14,816 shares of the company’s stock valued at $2,599,000 after purchasing an additional 65 shares in the last quarter. Finally, O Shaughnessy Asset Management LLC raised its holdings in shares of MongoDB by 4.8% during the 4th quarter. O Shaughnessy Asset Management LLC now owns 1,647 shares of the company’s stock valued at $383,000 after purchasing an additional 75 shares in the last quarter. Institutional investors and hedge funds own 89.29% of the company’s stock.

MongoDB Stock Down 1.1%


Shares of NASDAQ:MDB opened at $210.60 on Thursday. The firm has a fifty-day moving average of $179.38 and a two-hundred-day moving average of $227.94. The firm has a market capitalization of $17.10 billion, a P/E ratio of -76.86 and a beta of 1.39. MongoDB has a 12-month low of $140.78 and a 12-month high of $370.00.
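The moving averages quoted here are simple arithmetic means of recent closing prices. A minimal sketch (the `closes` series below is hypothetical, for illustration only):

```python
# Simple moving average over the most recent `window` closing prices.
def simple_moving_average(closes, window):
    if len(closes) < window:
        raise ValueError("not enough data points")
    return sum(closes[-window:]) / window

# Hypothetical closing prices, illustration only.
closes = [208.0, 209.5, 211.2, 210.1, 210.6]
print(simple_moving_average(closes, 3))  # mean of the last three closes
```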

MongoDB (NASDAQ:MDB) last issued its earnings results on Wednesday, June 4th. The company reported $1.00 earnings per share (EPS) for the quarter, beating analysts’ consensus estimate of $0.65 by $0.35. The company had revenue of $549.01 million during the quarter, compared to analyst estimates of $527.49 million. MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The company’s revenue for the quarter was up 21.8% on a year-over-year basis; during the same quarter last year, the business posted $0.51 EPS. Equities research analysts expect that MongoDB will post -1.78 earnings per share for the current fiscal year.
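The size of the beat follows directly from the reported and estimated figures; as a quick sketch:

```python
# Quarterly beat arithmetic, using the figures reported above.
eps_actual, eps_estimate = 1.00, 0.65
rev_actual, rev_estimate = 549.01, 527.49  # millions of dollars

eps_beat = eps_actual - eps_estimate
rev_beat = rev_actual - rev_estimate

print(f"EPS beat: ${eps_beat:.2f}")       # prints EPS beat: $0.35
print(f"Revenue beat: ${rev_beat:.2f}M")  # prints Revenue beat: $21.52M
```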

Wall Street Analysts Forecast Growth

A number of research analysts recently commented on MDB shares. JMP Securities reiterated a “market outperform” rating and set a $345.00 price objective on shares of MongoDB in a research note on Thursday, June 5th. Macquarie reiterated a “neutral” rating and set a $230.00 price objective (up from a previous $215.00) on shares of MongoDB in a research note on Friday, June 6th. Wedbush reiterated an “outperform” rating and set a $300.00 price objective on shares of MongoDB in a research note on Thursday, June 5th. Morgan Stanley reduced their target price on shares of MongoDB from $315.00 to $235.00 and set an “overweight” rating for the company in a report on Wednesday, April 16th. Finally, KeyCorp cut shares of MongoDB from a “strong-buy” rating to a “hold” rating in a report on Wednesday, March 5th. Eight equities research analysts have rated the stock with a hold rating, twenty-four have given a buy rating and one has given a strong buy rating to the stock. Based on data from MarketBeat.com, the stock has a consensus rating of “Moderate Buy” and a consensus target price of $282.47.


About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database that includes the functionality developers need to get started with MongoDB.




Investors Purchase Large Volume of Put Options on MongoDB (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) was the recipient of some unusual options trading on Wednesday. Investors purchased 23,831 put options on the stock, an increase of 2,157% compared to the average daily volume of 1,056 put options.

Insider Activity

In other news, insider Cedric Pech sold 1,690 shares of MongoDB stock in a transaction dated Wednesday, April 2nd. The stock was sold at an average price of $173.26, for a total transaction of $292,809.40. Following the completion of the sale, the insider now owns 57,634 shares in the company, valued at approximately $9,985,666.84, a 2.85% decrease in their position. The sale was disclosed in a legal filing with the SEC. Also, Director Hope F. Cochran sold 1,175 shares of MongoDB stock in a transaction dated Tuesday, April 1st. The shares were sold at an average price of $174.69, for a total value of $205,260.75. Following the sale, the director now owns 19,333 shares of the company’s stock, valued at approximately $3,377,281.77, a 5.73% decrease in her position. That sale was likewise disclosed in an SEC filing. Insiders sold a total of 49,208 shares of company stock valued at $10,167,739 in the last ninety days. Insiders currently own 3.10% of the stock.

Institutional Inflows and Outflows

A number of institutional investors and hedge funds have recently made changes to their positions in the stock. Cloud Capital Management LLC purchased a new position in MongoDB during the 1st quarter worth $25,000. Hollencrest Capital Management purchased a new stake in MongoDB during the 1st quarter valued at about $26,000. Cullen Frost Bankers Inc. grew its stake in MongoDB by 315.8% during the 1st quarter. Cullen Frost Bankers Inc. now owns 158 shares of the company’s stock valued at $28,000 after purchasing an additional 120 shares during the last quarter. Strategic Investment Solutions Inc. IL purchased a new stake in MongoDB during the 4th quarter valued at about $29,000. Finally, NCP Inc. acquired a new position in shares of MongoDB in the 4th quarter valued at about $35,000. 89.29% of the stock is currently owned by hedge funds and other institutional investors.

MongoDB Stock Down 1.1%


Shares of NASDAQ:MDB opened at $210.60 on Thursday. The business has a fifty-day simple moving average of $179.38 and a two-hundred-day simple moving average of $227.94. The firm has a market cap of $17.10 billion, a P/E ratio of -76.86 and a beta of 1.39. MongoDB has a twelve-month low of $140.78 and a twelve-month high of $370.00.

MongoDB (NASDAQ:MDB) last announced its quarterly earnings results on Wednesday, June 4th. The company reported $1.00 earnings per share (EPS) for the quarter, beating analysts’ consensus estimate of $0.65 by $0.35. MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The business had revenue of $549.01 million for the quarter, compared to the consensus estimate of $527.49 million. During the same period last year, the business posted $0.51 EPS; revenue was up 21.8% compared to the same quarter last year. On average, research analysts expect that MongoDB will post -1.78 earnings per share for the current year.

Wall Street Analysts Weigh In

A number of equities analysts have commented on the company. Citigroup decreased their price target on MongoDB from $430.00 to $330.00 and set a “buy” rating on the stock in a research note on Tuesday, April 1st. Monness Crespi & Hardt upgraded MongoDB from a “neutral” rating to a “buy” rating and set a $295.00 price target on the stock in a research report on Thursday, June 5th. Canaccord Genuity Group dropped their price target on MongoDB from $385.00 to $320.00 and set a “buy” rating on the stock in a research report on Thursday, March 6th. Cantor Fitzgerald boosted their target price on MongoDB from $252.00 to $271.00 and gave the stock an “overweight” rating in a report on Thursday, June 5th. Finally, Needham & Company LLC reissued a “buy” rating and issued a $270.00 target price on shares of MongoDB in a report on Thursday, June 5th. Eight analysts have rated the stock with a hold rating, twenty-four have issued a buy rating and one has given a strong buy rating to the stock. Based on data from MarketBeat, the company presently has an average rating of “Moderate Buy” and an average target price of $282.47.


MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database that includes the functionality developers need to get started with MongoDB.

