Meta Introduces V-JEPA 2, a Video-Based World Model for Physical Reasoning

MMS Founder
MMS Robert Krzaczynski

Article originally posted on InfoQ. Visit InfoQ

Meta has introduced V-JEPA 2, a new video-based world model designed to improve machine understanding, prediction, and planning in physical environments. The model extends the Joint Embedding Predictive Architecture (JEPA) framework and is trained to predict outcomes in embedding space using video data.

The model is trained in two phases. In the first, over one million hours of video and one million images are used for self-supervised pretraining without any action labels. This enables the model to learn representations of motion, object dynamics, and interaction patterns. In the second phase, it is fine-tuned on 62 hours of robot data that includes both video and action sequences. This stage allows the model to make action-conditioned predictions and support planning.
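
To make training in embedding space concrete, here is a minimal sketch of a JEPA-style self-supervised step in PyTorch: encode the visible patches of a clip, predict the embeddings of the hidden patches, and compute the loss in latent space rather than in pixels. The module sizes, the L1 loss, and the EMA target encoder are illustrative assumptions, not Meta's actual V-JEPA 2 implementation.

```python
# Sketch of a JEPA-style training step: predict embeddings of masked video
# patches from visible context, entirely in latent space. All sizes and the
# L1 objective are illustrative assumptions, not Meta's implementation.
import copy
import torch
import torch.nn as nn

DIM, PATCHES, FEAT = 256, 196, 768  # embedding width, patches per clip, raw patch features

context_encoder = nn.Sequential(nn.Linear(FEAT, DIM), nn.GELU(), nn.Linear(DIM, DIM))
# The target encoder is a slowly updated copy (EMA) of the context encoder,
# which keeps the prediction targets stable and prevents collapse.
target_encoder = copy.deepcopy(context_encoder)
for p in target_encoder.parameters():
    p.requires_grad_(False)
predictor = nn.Sequential(nn.Linear(DIM, DIM), nn.GELU(), nn.Linear(DIM, DIM))

opt = torch.optim.AdamW(
    list(context_encoder.parameters()) + list(predictor.parameters()), lr=1e-4
)

def jepa_step(patches: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """patches: (B, PATCHES, FEAT) raw patch features; mask: (B, PATCHES) bool,
    True where a patch is hidden from the context encoder."""
    ctx = context_encoder(patches * (~mask).unsqueeze(-1))  # embed visible context only
    with torch.no_grad():
        tgt = target_encoder(patches)                       # embed the full clip
    pred = predictor(ctx)                                   # guess the hidden embeddings
    loss = (pred[mask] - tgt[mask]).abs().mean()            # L1 loss in embedding space
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                                   # EMA update, momentum 0.99
        for pt, pc in zip(target_encoder.parameters(), context_encoder.parameters()):
            pt.mul_(0.99).add_(pc, alpha=0.01)
    return loss

# One step on random tensors standing in for video patch features.
print(jepa_step(torch.randn(4, PATCHES, FEAT), torch.rand(4, PATCHES) > 0.5))
```

Note that no action labels appear anywhere in this phase, which mirrors the first training stage described above.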

One Reddit user commented on the approach:

Predicting in embedding space is going to be more compute efficient, and also it is closer to how humans reason… Really feeling the AGI with this approach, regardless of the current results using the system.

Others have noted the limits of the approach. Dorian Harris, who focuses on AI strategy and education, wrote:

AGI requires broader capabilities than V-JEPA 2’s specialised focus. It is a significant yet narrow breakthrough, and the AGI milestone is overstated.

In robotic applications, V-JEPA 2 is used for short- and long-horizon manipulation tasks. For example, when given a goal in the form of an image, the robot uses the model to simulate possible actions and select those that move it closer to the goal. The system replans at each step, using a model-predictive control loop. Meta reports task success rates between 65% and 80% for pick-and-place tasks involving novel objects and settings.
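
The replanning loop described here is a form of sampling-based model-predictive control: roll candidate action sequences forward in the model's embedding space, score each rollout by its distance to the goal embedding, execute only the best first action, then replan from the new observation. The sketch below shows that pattern; `encode` and `predict_next` are toy stand-ins for V-JEPA 2's encoder and action-conditioned predictor, and the random-shooting sampler is a simplification of whatever optimizer Meta actually uses.

```python
# Sampling-based MPC with a learned latent world model. The encoder and
# predictor are toy stand-ins, not V-JEPA 2; only the control-loop
# structure is the point of this sketch.
import numpy as np

rng = np.random.default_rng(0)
DIM, ACT = 64, 7                      # latent and action dimensions (illustrative)

def encode(obs: np.ndarray) -> np.ndarray:
    """Stand-in for the video encoder: map an observation to an embedding."""
    return np.tanh(obs[:DIM])

def predict_next(z: np.ndarray, action: np.ndarray) -> np.ndarray:
    """Stand-in for the action-conditioned predictor in embedding space."""
    return np.tanh(z + 0.1 * action.sum())

def plan_step(obs: np.ndarray, goal_image: np.ndarray, horizon=5, n_samples=256):
    """One replanning iteration: sample action sequences, imagine each rollout
    in latent space, and return the first action of the lowest-cost sequence."""
    z0, z_goal = encode(obs), encode(goal_image)
    best_cost, best_action = np.inf, None
    for seq in rng.normal(size=(n_samples, horizon, ACT)):
        z = z0
        for a in seq:
            z = predict_next(z, a)            # imagined step, no real robot motion
        cost = np.linalg.norm(z - z_goal)     # distance to the goal embedding
        if cost < best_cost:
            best_cost, best_action = cost, seq[0]
    return best_action                        # execute, observe, then call plan_step again

action = plan_step(rng.normal(size=128), rng.normal(size=128))
print(action)
```

Executing only the first action and then replanning is what makes the loop robust to prediction error: the model never has to be accurate more than a few imagined steps ahead.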

The model has also been evaluated on benchmarks such as Something-Something v2, Epic-Kitchens-100, and Perception Test. When used with lightweight readouts, it performs competitively on tasks related to motion recognition and future action prediction.

Meta is also releasing three new benchmarks focused on physical reasoning from video: IntPhys 2, which tests for recognition of physically implausible events; MVPBench, which assesses video-question answering under minimal changes; and CausalVQA, which focuses on cause-effect reasoning and planning.

David Eberle, CEO of Typewise, noted:

The ability to anticipate and adapt to dynamic situations is exactly what is needed to make AI agents more context-aware in real-world customer interactions, too, not just in robotics.

Model weights, code, and datasets are available via GitHub and Hugging Face. A leaderboard has been launched for community benchmarking.



Podcast: Technology Radar and the Reality of AI in Software Development

MMS Founder
MMS Rachel Laycock

Article originally posted on InfoQ. Visit InfoQ

Transcript

Shane Hastie: Good day folks. This is Shane Hastie for the InfoQ Engineering Culture podcast. Today I get to sit down with Rachel Laycock. Rachel, welcome. Thank you so much for taking the time to talk to us.

Rachel Laycock: Hi, Shane. Thanks for having me.

Shane Hastie: Now, there’s probably a few folks in our audience who don’t know who you are. So let’s start with who’s Rachel.

Introductions [00:56]

Rachel Laycock: So I am the global CTO for Thoughtworks. I’ve been at Thoughtworks for 15 years and I play lots of different technology leadership roles, but my background is as a software developer.

Shane Hastie: In your global CTO role at Thoughtworks, you get to see a lot of what is happening and you're across many of the trends. One of the things that I know you are responsible for, or certainly deeply involved in, is the Thoughtworks Technology Radar. Can you tell us a little bit about how that came about?

The Technology Radar Process [01:29]

Rachel Laycock: So I am responsible for it, and it's been running for over 10 years. So my predecessor started it, and it started as basically a way for her to understand what was going on. Going to your point of the role, we've got 10,000 people across the globe in many different countries and regions. So getting a view of what's going on and what trends are important was really challenging. And that was when we were probably a third of the size that we are now. And so essentially what we do is twice a year we kind of put a call-out to the Thoughtworkers on the ground: what's happening, what tools and techniques and platforms and languages and frameworks are you using that you think are interesting? What have you managed to get into production, and what have been your experiences with it?

And we mine that from across the globe. And then we get together in person and spend a week basically debating. So we get tech leaders from across the different parts of the globe in different roles, whether they might be a head of tech or a regional CTO or a practice lead, and we debate about where these things should go. So they're in different quadrants. So it could be a language or a framework or a platform or a technique. But the real debate is whether it's something we're assessing, whether it's something that goes into trial or we think it's something that people should adopt as a default, or if we say hold and you should proceed with caution, which is actually what the hold ring means. People often take it to mean don't use it at all, but it just means like, "Hey, we've identified some challenges with this, so you should probably proceed with caution".

And so we spend a week debating it and getting it down to around a hundred blips. We use the metaphor of the Radar, so things coming in and out. And then over the course of that week, what kinds of themes come up? The topics that dominate the discussion end up as our three to five key themes. And so it ends up being a little bit of a trend report, but it's based completely on our experience. It's not external research, it's not peer reviewed except by the people with deep experience in the room. And people often think it's a forward-looking report, but that's just because it's literally a snapshot and we get it out within a few weeks. We are pretty fast in terms of publishing, but it's actually a look back: the last six months of technology at Thoughtworks.

Shane Hastie: So what has been most interesting for you in facilitating that?

What Makes the Radar Interesting [03:56]

Rachel Laycock: That's a really good question. So what's really interesting is where there's a hot debate: where one region or one team is finding success with something and another has a different opinion, or there are maybe two tools that roughly do the same thing and people have different perspectives on them. That's when it gets really interesting. The ones where everybody agrees it's a good idea are less interesting; the debates are really interesting. But they're also really hard to blip.

So we have this concept of what we call too complex to blip, where we’re basically like, we’re never getting this in two paragraphs, this whole discussion, so we’re going to have to put out an article or a podcast or something like that. So those basically go into our thought leadership backlog of things that we might write about.

So then you might see them on MartinFowler.com, you might see them on the Thoughtworks podcast, you might see a longer form article on our website that kind of gets into the nitty-gritty of the pros and cons and the nuances that are sometimes involved in discussing especially techniques, though even tools and languages and frameworks can sometimes be hotly debated. To me that's the really interesting part, because as a leader, especially in technology, there's no one tool to rule them all. There's never one true answer; it always depends. And those conversations and discussions give me, as a technology leader, a deeper understanding of where those "it depends" cases lie, which gives me better tools and insights for sharing with our clients and helping them think about it as well.

Shane Hastie: What's been the most surprising thing that's come out of that Radar for you?

Rachel Laycock: I think the surprising thing that came out of the Radar is the number of books and the key thought leadership that set the tone in the industry. And I'll use microservices as an example. I remember being in the room when that was being discussed, and like any new thing, it was very hotly debated. Some people were like, that doesn't seem like a good idea, here are all the problems associated with it, because there are lots of challenges. We're talking about a very complex architecture that requires a lot of skills in the teams to be able to build and run software in that way. And so it was the kind of thing that was hotly debated. And then it started off as an article, eventually landed on MartinFowler.com, and then became the book. And I myself did plenty of talks on the conference circuit about the pros and cons of microservices and when you should do it and when you shouldn't.

It was a big side effect that I don't think anyone planned. It wasn't like we went into it saying, we're going to get together every six months and produce this Radar, and then just assumed that books and other great things were going to come off the back of that. It was much more organic. First of all, the Radar was only supposed to be an internal publication, but when we started sharing the insights with clients, they were like, "Oh, that's really helpful". So then we started publishing externally and then the books followed from that.

Data mesh is another example. I remember that also being discussed in the Radar conversation as another technique, another approach. Again, very hotly debated internally. It wasn't just that thousands of Thoughtworkers said, yes, this is a great idea. It was, let's see some use cases and see how it plays out. And then it eventually became kind of the canonical book. So it's been exciting to be part of that journey, but it's surprising. You wouldn't have expected it.

Shane Hastie: So this is the engineering culture podcast. What is it about the culture of the organization and that group that enables this to happen?

Culture and Organizational Dynamics [07:23]

Rachel Laycock: Well, at Thoughtworks itself, I recently published an article, 'cause obviously everybody's talking about AI and software right now and how productive we can be, and I pointed out that I don't think we've ever hired anyone at Thoughtworks just because of how fast they can code, because it's never been just about coding. It's always been about attitude, aptitude, integrity. Those were the three, I guess, values that we hired for. But there's also a constant curiosity to Thoughtworkers. If you look at our sensible defaults and continuous improvement, continuous learning, curiosity, these types of things, there are a lot of, I guess, statements and things we say at Thoughtworks, like strong opinions, loosely held. And so when you then bring together leaders, especially if they've grown up at Thoughtworks, you come into the room and people are not afraid to say what they think, and they're also happy to be told they're wrong.

And I've heard that people who have come from other organizations are not used to that at all; they come from places where you have to be careful what you say in front of which people, and if you say the wrong thing, it could be a career-limiting move. At Thoughtworks, it's really not like that, for better or worse. It's very challenging as a leader when everybody's always challenging you and asking you, "But did you think about this? And have you asked the right question?" But you bring that into the room when we're discussing technology and you end up with really thoughtful perspectives that have taken into account different opinions, and people change their minds throughout the process of that week. Maybe not always, and maybe not on everything, but I do find that quite unique to Thoughtworks. I will say that a lot of clients really like the Radar, and we've found that helping them build their own radar for their own organization has been super helpful.

And then as they progress down the path and they've done it a few times, they'll be like, "Well, how do you handle this and how do you handle this?" And I'm like, oh, those are all the challenging exceptions of just dealing with people in a room with lots of different opinions. But this is a great thing. It means you've created the culture of people being able to express their opinion and hear the different voices in the room and come to a reasonable conclusion. So I just think that without the Thoughtworks kind of culture, we wouldn't have the Radar.

Shane Hastie: You've given us some pointers there, but if I wanted to instill something like that culture in an organization, where do I start?

You Can’t Copy Culture – You Can Encourage It [09:46]

Rachel Laycock: That's really challenging because it's hard to change the culture of an existing organization. They say if it's easy to change, it was never part of the culture in the first place. So I think the thing about Thoughtworks is that its culture is kind of what I said around attitude, aptitude, integrity, curiosity, continuous improvement, and the fact that it was built around agile right from the start has created that kind of culture. But that's not to say you can't introduce some of those into a different organization.

So whenever I've been helping clients go on the agile journey, the continuous delivery journey, the microservices journey, the digital transformation journey, all the different journeys (we're constantly renaming things), it's often the same kinds of concepts that we bring to the fore. And I always say to clients, you won't be Thoughtworks at the end of this. And that's not the intent, right? The intent is that you take some of the best things about us that fit in with your culture and you transform your culture, because certain aspects of the culture have to change in order to move from an old mindset of build, deploy, run, move on into a mindset of continuous improvement and continuously evolving software. And you can bring those to the fore.

And some of the ceremonies help, although I’m not a fan of certifications that are built around just ceremonies, but they have to have intent. The reason why you do a stand-up every morning is so that you can quickly adjust if people are heading in the wrong direction and everyone has a shared context. The reason why you do retrospectives is so that you actually improve how the team is working and the ways of working for the team. If you just do those things, but you’re not clear on the intent, then you don’t get the value. And so I think when you start to introduce these types of ceremonies that are a part of XP, that are part of agile, that are part of what people have been doing with digital transformation with clear intent, then you can start to bring some of that culture along.

And then of course, another critical piece is the recruiting, as I said, attitude, aptitude, integrity has always been our thing. It was never about, we must hire from these universities and people have to have these things. It was always about who they are and what they brought to the table and what their approach was. And if they were essentially up for continuously learning and adapting, and most of the time we got that right, nobody gets recruiting right a hundred percent of the time, but most of the time we got that right and we were able to continue to grow the culture that we wanted.

Shane Hastie: Shifting tack a tiny bit, and you did touch on it when we were talking about the Radar, the efficiency focus that seems to be so prevalent today with generative AI. We’re going to bring in the co-pilots, we’re going to generate huge amounts of code and we’re going to be so much more efficient. I don’t see that really happening, do you?

AI Efficiency Hype and Reality [12:41]

Rachel Laycock: No, I don't, to be honest, to be totally blunt. And even when we are more efficient, people will build that in by default and it will no longer show up as being more efficient. I'll tell you what I mean by that. So let's say you measure efficiency in your organization through velocity, or how long it takes you to do so many story points. Well, you can almost do a kind of before-and-after comparison at this point in time, but once people start adopting those tools, they're going to estimate those story points and that velocity based on the tools that they're using.

What I've noticed is this is not like the agile movement, which came from the development teams, driven by engineers recognizing that the waterfall approach was not helping us build software in many cases. And so if we take XP practices, from pair programming to test-driven development, continuous integration, these kinds of things, and then some of the things I talked about earlier, like stand-ups and retrospectives, that's going to help us move fast as well as get high-quality, resilient features out the door.

But it was driven by engineers and by software development teams, not just engineers, also project managers and other folks that were part of the development teams. This focus on efficiency comes from the top down. And most of the technology leaders that I speak to are like, "My board's putting pressure on me to measure efficiency and then tell me how much faster I am". And I've been hearing this for a year and a half now, where they're coming to me and saying, "What's your efficiency metric? How are you measuring it?" And it's notoriously hard to measure, by the way, for more reasons than I can even name here. But it's also the wrong focus, because the issue of building high-quality products at speed and at scale that are resilient in production has never been how much faster can I write code. That's never been the problem.

The problem is often the legacy systems that are hampering their ability to move forward, or it could be some processes and ways of working that are hampering it. It could be that, alongside the legacy, they don't have the right continuous integration, continuous delivery, and deployment pipelines in place, and they don't have the right testing in place. And these are problems that I've seen time and time again in organizations, and they are the real barriers to moving effectively and achieving results effectively. And honestly, at the end of the day, these tools, whether it's code generation or coding assistants, amplify indiscriminately. So you can write code faster, but that doesn't mean it's high-quality code, not if you don't have the right guardrails in place. And so you could actually create more problems, right?

It's like, okay, now I can write twice as much code. And it's like, cool. Now you've got twice as much technical debt as you had before. And what was your biggest problem before in terms of being able to move quickly? Oh, it was technical debt. It wasn't actually writing features faster.

And so I’m hopeful that as an industry, we’ll kind of move away from board-driven development as I’ve started calling it and back into, okay, let’s get these tools into the hands of the engineers and into the people that are part of the product software development life cycle. And then let’s see what great things they can do with them to solve some of the really intractable problems in software around technical debt, around legacy modernization, around running existing systems, around making systems more resilient instead of the hyper focus on let’s just build more and build it faster.

But I have a strong opinion on that. I’m happy for it to be weakly held and somebody to prove me wrong, but it hasn’t happened yet and it’s been two years.

Shane Hastie: If we do get these tools in the hands of the right people for the right reasons, what are some of the potential outcomes that you can see happening there?

Practical AI Applications [16:25]

Rachel Laycock: Well, one of the things I saw really early on, when we gave these tools to some of our developers who were dealing with some really challenging problems in modernizing legacy systems, is the ability to use these tools alongside other techniques to do code comprehension. So to understand code bases that you could previously only understand with an SME. And I'm looking at things like mainframes and COBOL, but those aren't the only ones. There are plenty of other code bases written in all kinds of languages that very few people in an organization really understand or really have context on. They were written in a time when maybe there wasn't great documentation, there wasn't much testing. They require that SME. And we saw people immediately starting to see results from just being able to comprehend and interrogate what a code base was doing. And I did a video on this at YOW! last year, so you can Google it and find it, where I talk about the techniques that we used to do that. So that was one.

There's another organization that we just started partnering with called Mechanical Orchard, which was founded by the people that founded Pivotal Labs, again, big proponents of XP practices. And they've started to use generative AI not only to understand existing code bases, but to actually transform them from old-style code bases into new-style code bases. And I'm not talking about just moving it from COBOL to Java where it still looks like COBOL (affectionately called JOBOL), but really being able to build out the test harness, then transform the code, and then check that it's performing at the other end. And so there's some really interesting stuff going on there as well. And then I think what's also an important factor is when you get these tools in the hands of really experienced developers, they can test the edges of what these things can do really well and where the gaps still are.

AI Coding Mirroring the Microservices Intent [18:16]

And I'll use the example of when we first were introducing the concept of microservices. One of the early concepts was these very modular, small pieces: if you build it in such a way that it's small enough that you can easily comprehend it, then when you want to make changes to it, you don't change it, you just rebuild it, which I don't think anyone ever really did because there's still effort involved in doing that. But let's say you did have a really nicely architected modular system and you've built great test harnesses, and that's all in a pipeline. Well, maybe with generative AI you could rebuild small modular components quite easily. So that's where I think it starts to get interesting: what can we do with the code base based on the current state of that code base? If it's well architected and very modular, you could probably do different things with it versus it being a legacy code base. How can we take a legacy code base and turn it into something that's well architected and modular?

But I think what will be really important will be how we specify the software and how we verify it. And so organizations that have gone to the effort of having really strong guardrails and really good verification in their systems with continuous deployment and continuous delivery, I think, are going to be able to do more interesting things with these tools earlier than those that are not in that state. And so I'm not predicting exactly where it's going, but I think those are the things that we're exploring. And I think those are the things that start to get interesting when you put it in the hands of the people who are really solving problems day in and day out in software.

Shane Hastie: Will I one day be able to take my monolith and drop it into a funnel and out of the end comes full microservices?

The Monolith to Microservices Question [19:58]

Rachel Laycock: One day, maybe, but not in the near term. In the near term, these tools can help you do that, but they're not going to do it for you. It's not: insert code base here, out pops a well-architected modular system. It's still going to take a lot of humans in the loop along the way. And that's probably a good thing, 'cause I think the hype around the end of the software engineer is greatly overestimated right now. But I do think the role will change, and the kinds of things that you do day in and day out could change based on the tools, but that's always been true. Once we started using IntelliJ and IDEs, we typed a lot less, and we're probably going to be typing even less, but the understanding of the architecture of the system and getting that right for how you want the thing to run in production, that still requires real depth of experience. And I don't think that's going anywhere anytime soon.

All the POCs and all the hype I see are like, "Oh, look, I can build this app in five minutes and it used to take me days". And I'm like, yes, but it's a single app. That's not what most of us are doing. When we're building scaled enterprise software, we're not just building one random app. That's really not the problem we have to solve. It's fun and it's cool, a great side project, and I'm all for vibe coding, building my own little apps at home, but in production guardrails are required.

Shane Hastie: So one of the things that sits in my mind is these tools are really good for experienced engineers. How do we build that experience?

Building Developer Experience [21:34]

Rachel Laycock: That's a great question. It's going to get a lot more challenging. Right now, the way we build that experience is we have what we call leverage across a product team. So you have experienced people, you have some mid-level experienced people, and then you'll have some fresh out of university or one to three years into their role, and mixing them together, you get that kind of mentoring situation where they learn from each other. And then obviously most of us learned from the first time we put something in production and it went wrong; it's usually the hard lesson that really teaches you about the importance of good testing and feature toggles and all of this good stuff. And we do a lot of that. At least since I've been in the industry, we've been using IDEs more or less. And so yes, a lot of it is autocomplete, but most of the debugging and everything you had to figure out yourself.

Now, if you introduce tools that are helping you do the debugging or helping you fix the tickets, to me that's where a lot of the learning often is: when things go wrong. If it always goes right, then you only ever learn the happy path. And the thing that's puzzling to me is that if we get to a stage where we're able to build more software and we're able to rebuild more software faster and more effectively, then fewer people will be running bigger systems, because there'll always be more software to build. Nobody's run out of ideas for products, things that they want to build. But it brings up the question of, well, how do you grow really deep expertise in folks when they're working at higher levels of abstraction? So when something goes wrong, if the AI can't help them, how are they going to figure it out?

And that's kind of a puzzle to me. And I think there are various tests we're running in terms of different team shapes and sizes that you can leverage with different tools, but I don't have an answer to what the shape of a team will look like and how we'll grow experience in engineers in the future when it changes dramatically. And I'm sure the industry faced this problem when we moved into the layer of abstraction where we no longer had to worry so much about the performance of the machine, and a lot of that was taken care of. People were like, "Well, who's going to care about that? And what if something goes wrong?"

And in the end, we have the kinds of verification in place that tell us if the performance is not going well, and then we'll go in there and debug stuff. And we do seem to figure it out. But yes, it's an open question for me. I'm sure we'll get to a place in the industry where we figure out how to create new career paths for people, but there are a lot of unknowns right now, and that in itself just generates a lot of risk and fear.

Shane Hastie: Risk and fear and the turbulence that we’ve seen in the industry. And as a result of that massive disengagement and all of those things, how do we shift as an industry? What are the things we can do to get better?

Addressing Workforce Concerns and Hype-cycle Effects [24:34]

Rachel Laycock: Yes, it's a good point. I think the challenge with all this hype and all this noise around, oh, we won't need software engineers anymore, the agent's going to do everything, is that it does disengage the workforce. And in fact, what I predict is we're still going to need engineers that really understand systems. It's just that they'll probably have coverage over more systems because a lot more of it can be automated, which is great, but we still need them in those roles. I don't see them going anywhere anytime soon. And I am worried that, with all the hype, the technologists are getting disengaged. It's hard to get people excited and say like, "Hey, use this tool and see what we can do with it", if they're being told, "Use this tool and you won't have a job in five years".

So I guess I'm just hoping that as an industry we get over this hype cycle, and I'm starting to see signs of it in the news, but we'll see: the models settle down a little bit, the tools settle down a little bit, and then the change is more incremental. And then we'll start to know, okay, well, how do things change with these models and with these tools?

But certainly inside Thoughtworks, what I've been saying is that it's not me, the CTO, or the technology leadership that's doing this to you; this is happening in the industry. And I recognize this is coming from the top down. It's not you guys saying, "Hey, we found these cool tools", which is kind of the Radar, going back to the earlier conversation. The Radar helps us identify the kinds of things we wanted to tell our clients to use because our teams were like, "These tools are so much better. Can we get clients to use them?" But I'm trying to engage people with: I believe that we're still going to need deep technology professionals, and I want to help everybody at Thoughtworks learn to be whatever the new version of that is.

Now, I could be wrong. But so could the 200 or 300 other people espousing different perspectives out in the world. No one knows exactly how things are going to turn out. But I do think it's really important as technology leaders to try and figure out how to engage people and get them excited about this. Because if we don't, it's just going to keep coming from the outside in, from the board, from organizations that are incentivized to make a lot of noise about this. And I believe we're going to need deep technologists, especially if they're covering more systems in the future. And so I don't want people to get disinterested in the industry or look to go into another field or decide not even to join. And that's what worries me the most right now, actually: people coming out of university are like, "Oh, well, there's no point. Everybody's saying that this won't be a job in two years". I just don't buy that. And I think that's really problematic.

Shane Hastie: How do I tell my granddaughter that there’s a good career in technology today?

Technology Careers Remain Viable Despite AI Advances [27:25]

Rachel Laycock: Well, my view is technology is not going anywhere. The machine right now can't do complex design. It doesn't know intent. It doesn't know why the software was built the way it was. It can tell you what it's doing, but it doesn't know why. And so that's where humans fit in. Good design still comes from humans. And at the end of the day, these large language models are just built off past knowledge. There's still a lot of creativity that goes into software. And there's this constant thinking about software in terms of building bridges; we lean into these engineering metaphors, and I just wish that they would die, actually, because it's just as much an art and a craft as it is a science, and it's evolved so much because there's so much creativity that humans can bring to it.

And so as I said earlier, will the day-to-day tasks of a software developer change? For sure, but is technology going anywhere? Is the machine going to build it and run it all for us? Not anytime soon, I believe. And so I do think that there’s still plenty of roles in the technology industry for all of us. But predicting what’s going to happen in 10 years, that’s much more challenging.

Shane Hastie: Rachel, really interesting conversation. Thank you so much for taking the time to talk to us today. If people want to continue the conversation, where do they find you?

Rachel Laycock: Oh, that’s easy. I’m on LinkedIn. You can just search for me and ping me and I’m happy to talk.

Shane Hastie: Thank you so much.

Rachel Laycock: All right. Nice to meet you, Shane. Thank you.



Dev Proxy v0.28 Introduces Telemetry for LLM Usage and Cost Analysis

MMS Founder
MMS Almir Vuk

Article originally posted on InfoQ. Visit InfoQ

The .NET team has released Dev Proxy version 0.28, introducing new capabilities to improve observability, plugin extensibility, and integration with AI models. A central feature of this release is the OpenAITelemetryPlugin, which, as reported, allows developers to track usage and estimated costs of OpenAI and Azure OpenAI language model requests within their applications.

The plugin intercepts requests and records details such as the model used, token counts (prompt, completion, and total), per-request cost estimates, and grouped summaries per model.

According to the announcement, this plugin supports deeper visibility into how applications interact with LLMs, which can be visualized using external tools like OpenLIT to understand usage patterns and optimize AI-related expenses.
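
Dev Proxy plugins are switched on in its configuration file, so enabling telemetry is a configuration change rather than a code change. The sketch below is reconstructed from the release notes rather than copied from them, so the key names, plugin path, and URL pattern should be verified against the official documentation.

```jsonc
// devproxyrc.json -- illustrative sketch; verify key names against the docs
{
  "plugins": [
    {
      "name": "OpenAITelemetryPlugin",
      "enabled": true,
      // Standard plugin assembly shipped with Dev Proxy (path assumed)
      "pluginPath": "~appFolder/plugins/dev-proxy-plugins.dll"
    }
  ],
  // Intercept only LLM traffic (pattern assumed)
  "urlsToWatch": ["https://api.openai.com/*"]
}
```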

The update also supports Microsoft’s Foundry Local, a high-performance local AI runtime stack introduced at the Build conference last month. Foundry Local enables developers to redirect cloud-based LLM calls to local environments, reducing cost and enabling offline development.

As stated, Dev Proxy can now be configured to use local models, quoting the following from the dev team: 

Our initial tests show significant improvements using Phi-4 mini on Foundry Local compared to other models we’ve used in the past. We’re planning to integrate with Foundry Local by default, in the future versions of Dev Proxy.

To configure Dev Proxy with Foundry Local, developers can specify the local model and endpoint in the languageModel section of the proxy’s configuration file. This integration offers a cost-effective alternative for developers working with LLMs during local development.
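
Based on that description, the relevant configuration might look roughly like the following; the key names, model identifier, and Foundry Local endpoint are assumptions to check against the documentation rather than values taken from it.

```jsonc
// Illustrative sketch of pointing Dev Proxy at Foundry Local
{
  "languageModel": {
    "enabled": true,
    "model": "Phi-4-mini",               // local model served by Foundry Local (assumed id)
    "url": "http://localhost:5273/v1"    // OpenAI-compatible local endpoint (assumed port)
  }
}
```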

For .NET Aspire users, a preview version of Dev Proxy extensions is now available. These extensions simplify integration with Aspire applications, allowing Dev Proxy to run either locally or via Docker with minimal setup. As reported, this enhancement improves portability and simplifies the configuration process for distributed development teams.

In addition, support for OpenAI payloads has been expanded. As stated, previously limited to text completions, Dev Proxy now includes support for a wider range of completion types, increasing compatibility with OpenAI APIs.

The release also brings enhancements to TypeSpec generation. In line with TypeSpec v1.0 updates, the plugin now supports improved PATCH operation generation, using MergePatchUpdate to clearly define merge patch behavior.

As noted in the release, Dev Proxy now supports JSONC (JSON with comments) across all configuration files. This addition enables developers to add inline documentation and annotations, which can aid in team collaboration and long-term maintenance.

Concurrency improvements have also been made in logging and mocking. These changes ensure that logs for parallel requests are grouped accurately, helping developers trace request behavior more effectively.

Two breaking changes are included in this release. First, the GraphConnectorNotificationPlugin has been removed, following the deprecation of Graph connector deployment via Microsoft Teams.

Furthermore, the --audience flag in the devproxy jwt create command has been renamed to --audiences, while the shorthand alias -a remains unchanged.
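
Constructed from that description, the change looks like this on the command line; the audience value is a made-up example:

```shell
# Before v0.28
devproxy jwt create --audience https://api.example.com

# From v0.28 on: long form renamed, shorthand alias unchanged
devproxy jwt create --audiences https://api.example.com
devproxy jwt create -a https://api.example.com
```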

The CRUD API plugin has been updated with improved CORS handling and consistent JSON responses, enhancing its reliability in client-side applications.

Finally, the Dev Proxy Toolkit for Visual Studio Code has been updated to version 0.24.0. This release introduces new snippets and commands, including support for the already-mentioned OpenAITelemetryPlugin, improved Dev Proxy Beta compatibility, and better process detection.

For interested readers, full release notes are available in the official repository, providing a complete overview of the features, changes, and guidance for this version.



How Tripadvisor Migrated to The Composable Architecture for Their SwiftUI App

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

In a thorough article, Tripadvisor iOS principal engineer Ben Sarrazin described their journey toward adopting The Composable Architecture (TCA) for their existing iOS app, moving away from the Model-View-ViewModel-Coordinator (MVVM-C) architecture.

Sarrazin explains that the decision to move away from MVVM-C was driven by several factors, all tied to increasing app complexity and a growing team. One of the pain points was navigation:

Perhaps the most painful aspect of our MVVM-C implementation is the navigation structure — or more accurately, the lack of one. Our coordinators can launch any other coordinator, creating a web of navigation possibilities that is nearly impossible to document or reason about.

For example, this complexity became evident when an anonymous user visiting the site attempted an action requiring authentication. Even in such a common scenario, the MVVM-C navigation involved numerous coordinators, view models, and event mappers, making the codebase hard to understand and modify.

Another challenge stemmed from the coordinators’ reliance on UIViewControllers, which added complexity and required using Combine as a communication layer between SwiftUI and UIKit.

Debugging Combine-based event chains proves exceptionally difficult, especially when they traverse multiple coordinators and are composed of several layers of publishers and operators.

TCA, by contrast, promised several advantages, such as seamless integration with SwiftUI, robust testing capabilities, and improved composability. The Tripadvisor team also valued TCA’s evolution and maturity, along with its high-quality documentation and support.

To handle the migration, the Tripadvisor iOS team adopted a bottom-up approach. They began by identifying view models without children and replacing them with TCA stores, then gradually worked their way up to parent view models.

A similar “leaf-to-root” strategy was applied to navigation elements, but with a twist. In fact, since TCA requires centralized, state-driven navigation, coordinators were not replaced one-to-one. Instead, parent coordinators assumed the navigation responsibilities of their children. This ultimately resulted in a single source of truth for navigation: a global router implemented as a TCA reducer.

This navigation consolidation represents perhaps the most transformative aspect of our migration. Where we currently have dozens of coordinators with overlapping responsibilities and complex interactions, we’ll eventually have a clean, state-based navigation system that’s both more powerful and significantly easier to understand.

The migration required a complete mindset shift, explains Sarrazin, and came with some challenges. One key insight was that replicating the existing feature hierarchies in TCA wasn’t always the best approach. Instead, the team learned to take into account the implications of sending actions between parent and child components, which can lead to excessive bidirectional communication. A better pattern, they found, was to centralize shared behaviors in parent components where possible.

Another challenge arose when too many actions were dispatched in a short time span, for example, while scrolling through a list. To address this, the team found it effective to debounce high-frequency inputs and minimize the number of actions sent to the store, favoring simple state updates within reducers instead.

An area where TCA also brought many benefits was testing, helping reduce test brittleness.

We found that tests written with TCA’s TestStore provided much stronger guarantees about application behavior. A test that passes gives us high confidence that the feature works as expected, which wasn’t always true with our previous testing approach, especially with the heavy dependency on Combine and schedulers.

Additionally, the team found that TCA tests often served as a form of design feedback: when a test became hard to read or write, it was usually a sign that the underlying code could be improved.

Overall, the migration proved very effective, according to Sarrazin. His article offers many valuable insights that go beyond what can be covered here. Do not miss it if you’re interested in the full details.



Anthropic Releases Claude Code SDK to Power AI-Paired Programming

MMS Founder
MMS Robert Krzaczynski

Article originally posted on InfoQ. Visit InfoQ

Anthropic has launched Claude Code SDK, a new toolkit that extends the reach of its code assistant, Claude, far beyond the chat interface. Designed for integration into modern developer workflows, the SDK offers a suite of tools for TypeScript, Python, and the command line, enabling advanced automation of code review, refactoring, and transformation tasks.

At its core, Claude Code SDK is built around the Model Context Protocol (MCP), a system that allows Claude to understand the developer's environment by injecting relevant tools, file systems, and context into its reasoning. Developers can now run Claude as a subprocess, use it in GitHub Actions, or call it in local scripts with structured JSON or streamed responses. The SDK is designed to address a common limitation of AI-assisted coding: the lack of context and integration.
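
As a concrete illustration of the subprocess mode, the snippet below calls the Claude Code CLI from a Python script and parses a structured JSON response. The `-p` (print) and `--output-format json` flags follow Anthropic's published CLI documentation, but treat the exact invocation as an assumption to verify against the current docs; the prompt is a made-up example.

```python
# Run Claude Code non-interactively from a script and read back structured JSON.
# Flag names follow Anthropic's CLI docs but should be verified; the prompt and
# use case here are made-up examples.
import json
import subprocess

result = subprocess.run(
    ["claude", "-p", "Review this diff for missing null checks",
     "--output-format", "json"],
    capture_output=True,
    text=True,
    check=True,  # raise if the CLI exits non-zero
)

response = json.loads(result.stdout)  # structured payload: output plus run metadata
print(response)
```

The same pattern extends naturally to CI: a GitHub Actions step can invoke the CLI, parse the JSON, and fail the build or post a review comment based on the result.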

Early adopters are already weighing in. David Richards, a principal software engineer, shared his experience:

Claude Code’s capabilities are a huge leap forward. I was initially skeptical of coding assistants due to the technical debt they often caused, but Claude Code changes the game completely. Its ability to understand context and generate production-ready code has transformed my development workflow.

David Richards points to a common sentiment among senior engineers who have previously found AI assistants lacking in nuance, particularly in large and complex codebases. Claude Code appears to address that challenge directly, integrating with tools like TypeScript servers, linters, and static analysis to produce suggestions that require less cleanup and fewer manual corrections.

However, not all feedback has been positive. Wajahat Islam Gul, a React and Next.js developer, raised a concern about the implications for learning and mentorship:

Wouldn’t it kill one of the main purposes of these code reviews? That is learning.
If a junior engineer is running Claude to automatically fix issues marked by a senior engineer, what will happen in a few years when the junior becomes a senior?

As Claude Code SDK becomes more widely adopted, engineering leaders will need to evaluate how it fits into their team’s development workflows, including its impact on code quality, collaboration, and skill development.

Anthropic has also highlighted security and control features in the SDK, ensuring that teams can manage API access, customize tooling integrations, and audit AI-driven code changes. This level of configurability is expected to appeal to large organizations with stringent development standards.

More technical details can be found in the official documentation.



MongoDB (NASDAQ:MDB) Trading 3.3% Higher – Here’s Why – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB)'s stock price shot up 3.3% during mid-day trading on Thursday. The company traded as high as $217.96 and last traded at $217.54. 350,033 shares changed hands during the session, a decline of 82% from the average session volume of 1,949,819 shares. The stock had previously closed at $210.60.

Analyst Upgrades and Downgrades

A number of research firms have issued reports on MDB. The Goldman Sachs Group decreased their target price on MongoDB from $390.00 to $335.00 and set a “buy” rating for the company in a research report on Thursday, March 6th. Guggenheim lifted their price target on MongoDB from $235.00 to $260.00 and gave the company a “buy” rating in a research report on Thursday, June 5th. Canaccord Genuity Group reduced their price target on MongoDB from $385.00 to $320.00 and set a “buy” rating for the company in a research report on Thursday, March 6th. JMP Securities reiterated a “market outperform” rating and issued a $345.00 price target on shares of MongoDB in a research report on Thursday, June 5th. Finally, Loop Capital cut MongoDB from a “buy” rating to a “hold” rating and reduced their price target for the company from $350.00 to $190.00 in a research report on Tuesday, May 20th. Eight equities research analysts have rated the stock with a hold rating, twenty-four have issued a buy rating and one has given a strong buy rating to the company’s stock. According to MarketBeat.com, the stock has a consensus rating of “Moderate Buy” and a consensus target price of $282.47.

View Our Latest Stock Analysis on MongoDB

MongoDB Price Performance

The company has a 50 day simple moving average of $179.38 and a 200-day simple moving average of $227.94. The stock has a market cap of $17.10 billion, a P/E ratio of -76.88 and a beta of 1.39.

MongoDB (NASDAQ:MDB) last posted its quarterly earnings results on Wednesday, June 4th. The company reported $1.00 earnings per share (EPS) for the quarter, topping the consensus estimate of $0.65 by $0.35. MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The business had revenue of $549.01 million for the quarter, compared to analyst estimates of $527.49 million. During the same period last year, the company earned $0.51 EPS. MongoDB’s revenue for the quarter was up 21.8% compared to the same quarter last year. As a group, research analysts expect that MongoDB, Inc. will post -1.78 earnings per share for the current year.

Insider Transactions at MongoDB

In other MongoDB news, CAO Thomas Bull sold 301 shares of the business’s stock in a transaction dated Wednesday, April 2nd. The shares were sold at an average price of $173.25, for a total value of $52,148.25. Following the transaction, the chief accounting officer now owns 14,598 shares in the company, valued at approximately $2,529,103.50. The trade was a 2.02% decrease in their ownership of the stock. The transaction was disclosed in a legal filing with the Securities & Exchange Commission, which can be accessed through the SEC website. Also, insider Cedric Pech sold 1,690 shares of the business’s stock in a transaction dated Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total transaction of $292,809.40. Following the completion of the transaction, the insider now owns 57,634 shares in the company, valued at $9,985,666.84. The trade was a 2.85% decrease in their position. The disclosure for this sale can be found here. Over the last 90 days, insiders have sold 49,208 shares of company stock worth $10,167,739. Insiders own 3.10% of the company’s stock.

Institutional Inflows and Outflows

A number of hedge funds and other institutional investors have recently bought and sold shares of MDB. Summit Trail Advisors LLC bought a new position in MongoDB during the 4th quarter valued at approximately $224,000. Allspring Global Investments Holdings LLC raised its position in MongoDB by 9.0% during the 4th quarter. Allspring Global Investments Holdings LLC now owns 191,115 shares of the company’s stock valued at $46,691,000 after purchasing an additional 15,825 shares during the last quarter. Van ECK Associates Corp bought a new position in MongoDB during the 4th quarter valued at approximately $211,000. Stanley Laman Group Ltd. bought a new position in MongoDB during the 4th quarter valued at approximately $7,520,000. Finally, Avestar Capital LLC raised its position in MongoDB by 2.0% during the 4th quarter. Avestar Capital LLC now owns 2,165 shares of the company’s stock valued at $504,000 after purchasing an additional 42 shares during the last quarter. Institutional investors own 89.29% of the company’s stock.

About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Article originally posted on mongodb google news. Visit mongodb google news



MongoDB Target of Unusually High Options Trading (NASDAQ:MDB) – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) saw some unusual options trading on Wednesday. Traders acquired 23,831 put options on the company. This is an increase of approximately 2,157% compared to the typical volume of 1,056 put options.

MongoDB Price Performance

Shares of NASDAQ MDB traded up $0.06 during midday trading on Thursday, hitting $210.66. 2,180,893 shares of the company’s stock were exchanged, compared to its average volume of 1,956,303. The firm’s fifty day simple moving average is $179.38 and its 200-day simple moving average is $227.94. The company has a market cap of $17.10 billion, a PE ratio of -76.88 and a beta of 1.39. MongoDB has a twelve month low of $140.78 and a twelve month high of $370.00.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings data on Wednesday, June 4th. The company reported $1.00 earnings per share for the quarter, topping the consensus estimate of $0.65 by $0.35. The business had revenue of $549.01 million during the quarter, compared to the consensus estimate of $527.49 million. MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The company’s revenue was up 21.8% compared to the same quarter last year. During the same quarter in the previous year, the firm earned $0.51 EPS. Equities analysts anticipate that MongoDB will post -1.78 EPS for the current fiscal year.

Insider Activity at MongoDB

In other news, Director Hope F. Cochran sold 1,175 shares of the company’s stock in a transaction on Tuesday, April 1st. The stock was sold at an average price of $174.69, for a total transaction of $205,260.75. Following the transaction, the director now directly owns 19,333 shares of the company’s stock, valued at $3,377,281.77. This represents a 5.73% decrease in their ownership of the stock. The sale was disclosed in a document filed with the SEC, which is available through the SEC website. Also, insider Cedric Pech sold 1,690 shares of MongoDB stock in a transaction dated Wednesday, April 2nd. The stock was sold at an average price of $173.26, for a total value of $292,809.40. Following the completion of the sale, the insider now owns 57,634 shares of the company’s stock, valued at $9,985,666.84. The trade was a 2.85% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders sold a total of 49,208 shares of company stock valued at $10,167,739 in the last 90 days. Corporate insiders own 3.10% of the company’s stock.

Institutional Trading of MongoDB

Several institutional investors and hedge funds have recently made changes to their positions in MDB. Sumitomo Mitsui Trust Group Inc. grew its position in shares of MongoDB by 12.2% during the fourth quarter. Sumitomo Mitsui Trust Group Inc. now owns 195,443 shares of the company’s stock worth $45,501,000 after acquiring an additional 21,308 shares during the last quarter. Sumitomo Mitsui DS Asset Management Company Ltd raised its holdings in shares of MongoDB by 20.9% in the fourth quarter. Sumitomo Mitsui DS Asset Management Company Ltd now owns 7,382 shares of the company’s stock valued at $1,719,000 after buying an additional 1,274 shares during the last quarter. Summit Trail Advisors LLC acquired a new stake in shares of MongoDB during the fourth quarter worth about $224,000. UNICOM Systems Inc. acquired a new position in MongoDB in the 4th quarter valued at about $4,889,000. Finally, Allspring Global Investments Holdings LLC increased its position in MongoDB by 9.0% in the 4th quarter. Allspring Global Investments Holdings LLC now owns 191,115 shares of the company’s stock valued at $46,691,000 after acquiring an additional 15,825 shares during the period. 89.29% of the stock is owned by hedge funds and other institutional investors.

Wall Street Analysts Weigh In

Several research analysts have recently weighed in on the stock. Truist Financial decreased their price objective on shares of MongoDB from $300.00 to $275.00 and set a “buy” rating for the company in a report on Monday, March 31st. The Goldman Sachs Group dropped their price target on MongoDB from $390.00 to $335.00 and set a “buy” rating on the stock in a research note on Thursday, March 6th. Redburn Atlantic upgraded MongoDB from a “sell” rating to a “neutral” rating and set a $170.00 price objective for the company in a research report on Thursday, April 17th. Citigroup dropped their target price on MongoDB from $430.00 to $330.00 and set a “buy” rating on the stock in a research report on Tuesday, April 1st. Finally, Macquarie reaffirmed a “neutral” rating and set a $230.00 price target (up previously from $215.00) on shares of MongoDB in a research report on Friday, June 6th. Eight analysts have rated the stock with a hold rating, twenty-four have issued a buy rating and one has given a strong buy rating to the company. According to data from MarketBeat.com, MongoDB currently has a consensus rating of “Moderate Buy” and a consensus target price of $282.47.


About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.



Article originally posted on mongodb google news. Visit mongodb google news



Traders Purchase High Volume of Call Options on MongoDB (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) was the recipient of unusually large options trading on Wednesday. Investors acquired 36,130 call options on the company. This represents an increase of 2,077% compared to the typical volume of 1,660 call options.

Analyst Upgrades and Downgrades

MDB has been the subject of several recent research reports. Cantor Fitzgerald boosted their price objective on MongoDB from $252.00 to $271.00 and gave the company an “overweight” rating in a report on Thursday, June 5th. Canaccord Genuity Group decreased their price target on shares of MongoDB from $385.00 to $320.00 and set a “buy” rating for the company in a research note on Thursday, March 6th. JMP Securities restated a “market outperform” rating and set a $345.00 price objective on shares of MongoDB in a report on Thursday, June 5th. Scotiabank lifted their price objective on shares of MongoDB from $160.00 to $230.00 and gave the company a “sector perform” rating in a report on Thursday, June 5th. Finally, Macquarie reissued a “neutral” rating and set a $230.00 target price (up from $215.00) on shares of MongoDB in a research report on Friday, June 6th. Eight equities research analysts have rated the stock with a hold rating, twenty-four have given a buy rating and one has given a strong buy rating to the company’s stock. According to data from MarketBeat, the company currently has a consensus rating of “Moderate Buy” and an average target price of $282.47.


MongoDB Trading Down 1.1%


Shares of NASDAQ MDB opened at $210.60 on Thursday. The stock has a fifty day simple moving average of $179.38 and a two-hundred day simple moving average of $227.94. MongoDB has a 1-year low of $140.78 and a 1-year high of $370.00. The firm has a market cap of $17.10 billion, a price-to-earnings ratio of -76.86 and a beta of 1.39.

MongoDB (NASDAQ:MDB) last posted its quarterly earnings results on Wednesday, June 4th. The company reported $1.00 earnings per share for the quarter, topping the consensus estimate of $0.65 by $0.35. MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The company had revenue of $549.01 million during the quarter, compared to the consensus estimate of $527.49 million. During the same quarter in the prior year, the company earned $0.51 EPS. The firm's quarterly revenue was up 21.8% compared to the same quarter last year. Analysts forecast that MongoDB will post -1.78 EPS for the current year.

Insiders Place Their Bets

In other MongoDB news, insider Cedric Pech sold 1,690 shares of the firm's stock in a transaction on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total value of $292,809.40. Following the completion of the sale, the insider now directly owns 57,634 shares in the company, valued at approximately $9,985,666.84. This trade represents a 2.85% decrease in their ownership of the stock. The transaction was disclosed in a document filed with the SEC. Also, Director Hope F. Cochran sold 1,175 shares of the company's stock in a transaction dated Tuesday, April 1st. The shares were sold at an average price of $174.69, for a total transaction of $205,260.75. Following the transaction, the director now owns 19,333 shares in the company, valued at $3,377,281.77. This represents a 5.73% decrease in their position. That sale was likewise disclosed in an SEC filing. In the last ninety days, insiders sold 49,208 shares of company stock worth $10,167,739. Company insiders own 3.10% of the company's stock.

Institutional Trading of MongoDB

A number of institutional investors have recently modified their holdings of MDB. HighTower Advisors LLC raised its holdings in MongoDB by 2.0% during the 4th quarter. HighTower Advisors LLC now owns 18,773 shares of the company’s stock valued at $4,371,000 after acquiring an additional 372 shares during the period. Jones Financial Companies Lllp grew its position in shares of MongoDB by 68.0% in the fourth quarter. Jones Financial Companies Lllp now owns 1,020 shares of the company’s stock valued at $237,000 after purchasing an additional 413 shares in the last quarter. Smartleaf Asset Management LLC raised its stake in shares of MongoDB by 56.8% during the 4th quarter. Smartleaf Asset Management LLC now owns 370 shares of the company’s stock valued at $87,000 after purchasing an additional 134 shares during the period. 111 Capital purchased a new stake in MongoDB during the 4th quarter worth about $390,000. Finally, Steward Partners Investment Advisory LLC boosted its stake in MongoDB by 12.9% in the 4th quarter. Steward Partners Investment Advisory LLC now owns 1,168 shares of the company’s stock worth $272,000 after purchasing an additional 133 shares during the period. 89.29% of the stock is currently owned by hedge funds and other institutional investors.

About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Article originally posted on mongodb google news. Visit mongodb google news



Apple Completes Migration of Key Ecosystem Service to Swift, Gains 40% Performance Uplift

MMS Founder
MMS Matt Foster

Article originally posted on InfoQ. Visit InfoQ

Apple has migrated its global Password Monitoring service from Java to Swift, achieving a 40% increase in throughput and significantly reducing memory usage—freeing up nearly 50% of previously allocated Kubernetes capacity. 

In a recent post, Apple engineers detailed how the rewrite helped the service scale to billions of requests per day while improving responsiveness and maintainability. The team cited lower memory overhead, improved startup time, and simplified concurrency as key reasons for choosing Swift over further JVM optimization.

Swift allowed us to write smaller, less verbose, and more expressive codebases (close to 85% reduction in lines of code) that are highly readable while prioritizing safety and efficiency.

Apple’s Password Monitoring service, part of the broader Password app’s ecosystem, is responsible for securely checking whether a user’s saved credentials have appeared in known data breaches, without revealing any private information to Apple. It handles billions of requests daily, performing cryptographic comparisons using privacy-preserving protocols.

This workload demands high computational throughput, tight latency bounds, and elastic scaling across regions. Traffic fluctuates significantly over the course of a day, with regional peaks differing by up to 50%. To accommodate these swings, the system must quickly spin up or wind down instances while maintaining low-latency responses.

Apple’s previous Java implementation struggled to meet the service’s growing performance and scalability needs. Garbage collection caused unpredictable pause times under load, degrading latency consistency. Startup overhead from JVM initialization, class loading, and just-in-time compilation slowed the system’s ability to scale in real time. Additionally, the service’s memory footprint, often reaching tens of gigabytes per instance, reduced infrastructure efficiency and raised operational costs.

Originally developed as a client-side language for Apple platforms, Swift has since expanded into server-side use cases. Apple’s engineering team selected Swift not just for its ecosystem alignment, but for its ability to deliver consistent performance in compute-intensive environments. 

The rewrite also used Vapor, a popular Swift web framework, as a foundation. Additional custom packages were implemented to handle elliptic curve operations, cryptographic auditing, and middleware specific to the Password Monitoring domain.

Swift’s deterministic memory management, based on reference counting rather than garbage collection (GC), eliminated latency spikes caused by GC pauses. This consistency proved critical for a low-latency system at scale. After tuning, Apple reported sub-millisecond 99.9th percentile latencies and a dramatic drop in memory usage: Swift instances consumed hundreds of megabytes, compared to tens of gigabytes with Java.

Startup times also improved. Without JVM initialization overhead or JIT warm-up, Swift services could cold-start more quickly, supporting Apple’s global autoscaling requirements.

Apple’s migration reflects a broader trend: the shift toward performance-oriented languages for services operating at extreme scale. Meta has a long history with Rust, from hyper-performant source control solutions to programming languages for the blockchain. Netflix introduced Rend, a high-performance proxy written in Go, to take over from a Java-based client interacting with Memcached. AWS increasingly relies on Rust in services where deterministic performance and low resource usage improve infrastructure efficiency.

While this isn’t a sign that Java and similar languages are in decline, there is growing evidence that at the uppermost end of performance requirements, some are finding that general-purpose runtimes no longer suffice.



Presentation: Rust: A Productive Language for Writing Database Applications

MMS Founder
MMS Carl Lerche

Article originally posted on InfoQ. Visit InfoQ

Transcript

Lerche: I’m Carl. I work on Tokio primarily, the open source async runtime. I probably started that about six, seven, eight years ago now, and I’m still working on it at Amazon, where I’m on the open-source team. I’m going to try to convince you that Rust can be a productive language for building higher level applications, like those web apps that sit on top of databases, like the apps that back mobile apps or web apps.

Even if I don’t convince you, I’m going to try to have you leave with something of value: I’m going to give you some tips and tricks that will be generally useful working with Rust. How many people here have already written Rust? You’ve heard of Rust, I assume. You’re not here for the Rust computer game. It’s ok. You don’t have to know Rust, but you know a little bit about it. Who already believes that Rust is generally useful for higher level use cases that are not performance sensitive? Hands up, you’re like, “Yes, I will use Rust for building a web app now. It’s the best language for everything”.

Overview of Rust

Rust is a programming language that has roughly the same runtime performance as C or C++, but does this while maintaining memory safety. What’s novel about Rust is its borrow checker, which does that enforcement of memory safety at compile time, not at runtime. Rust is still relatively young compared to other programming languages, but at this point it’s gotten to being established. The rate of growth over the past few years has been really quite stunning. It’s gained adoption at quite a number of companies, small and big: Google, Amazon, Dropbox, Microsoft, they all use Rust these days. At Amazon, where I’m at, we’re using Rust to deliver services like EC2, S3, CloudFront. You might have heard of them. Rust is being used more and more within these services to power them. It’s become an established language.

The vast majority of Rust’s adoption these days is at that infrastructure level. For networking stuff, I’m talking about databases, proxies, routers. It’s definitely less common today to see Rust being used for higher level applications like those web applications. I’m not saying no one does it. Some people definitely have, I’ve spoken with them. Mostly Rust is used at that infrastructure level. I’ve been using Rust for 10, 11 years now, which, when I actually thought about it, is about a quarter of my life. Kind of scary.

When I started using Rust, when I got involved with Rust, I also thought, ok, Rust is a systems level programming language. It’s used for those lower-level cases. I myself did not really think Rust was a good fit for those web apps. That’s not something I considered. Over the past couple years, personally, my mind’s been changing on that. I started asking myself, is that really an inherent truth? Is Rust really only a systems language? I’m not a Rust maximalist by any means. I know I probably sound like one, who’s like, use Rust for everything. I don’t actually believe that. I believe you should just use the best language for the job. When people ask me, what language should I use? Oftentimes I’ll say something else. We should pick the best tool for the job.

That said, what the best tool for the job is, is not necessarily a black and white answer. You really want to pick the language that’s going to be most productive for you for that use case, but productivity has many aspects. There’s the obvious one: how quickly can you ship your features? How quickly can developers on a team work together? How quickly can developers ramp up? There’s also the context of what developers know coming in, because if you take a bunch of, it doesn’t matter, JavaScript developers, Java developers, and you put them on a Rust project solo, they’re not going to do very well. The reverse is also true. Throw me on a JavaScript project, I’m like, I don’t know. I probably wrote JavaScript a while ago. I forgot everything.

Then, besides just shipping features, there’s just actually getting the level of quality that’s required by the project. By that, I mean not all software projects have the same quality requirements. I’m sure we all believe we ship great software all the time. Realistically, sometimes you just got to ship. Bugs are ok. When you’re building a car, hopefully that’s not true. Different levels of quality depending on what you’re actually working on. Lots of aspects to consider.

How Rust Fits in Different Dimensions

Let’s talk about how Rust fits within those different dimensions. The first one is quality; that’s where Rust really shines. It’s the entire value proposition. Rust is a really good language for writing high-quality code, both from a performance point of view and from the point of view of minimizing defects and bugs. The performance side of things is what you hear about the most. Rust is really fast. It’s compiled. There’s no garbage collector. That’s not new; C and C++ do that, and those have been around for a while. Why haven’t those gained as much adoption as Java? Because there is that quality side of things. With C or C++, about 70% of all high-severity security issues are memory-related. If quality is an issue, maybe C and C++ aren’t the right choice, which is probably why there are languages like Java. Less obvious: you’ve probably heard of stuff like fearless concurrency.

Rust’s type system can prevent a whole bunch of other bug categories, like data races. Rust’s really good for writing high-quality code. Now for some less good things. If all things were equal, Rust would be a pretty slam dunk sell for high-level and infrastructure cases alike, but all things are not equal. Most of the complaints I hear about Rust when talking with developers can be summarized as: Rust is not as productive. Usually, that’s not literally what people tell me. It’ll be things like, when I tried to use Rust, I ended up fighting with the borrow checker. Or maybe you hear things like, Rust is great: when it compiles, my code just works, but getting it to compile can be challenging. These are the kinds of things I hear. That really does boil down to that question of productivity.

Right now, the choice developers are making when picking Rust is to trade development time for quality: longer development times for higher-quality code, but less development time than if you were to use a different language to reach that same level of quality. If you have a software project where the performance bar is high and the quality bar is high, you’re actually going to be able to reach that goal quicker with Rust than with other languages. If you’re willing to sacrifice a bit of quality for faster development time, maybe it doesn’t make sense. That’s the general sentiment you hear around online discussions of Rust for those higher-level use cases: the borrow checker, it’s all just unnecessary overhead.

Is that actually true? So far, maybe it doesn’t sound like I’m making a great pitch. Are the type system and the borrow checker fundamental overhead that comes with Rust? I do think there’s a kernel of truth to it, but reality is a bit more subtle. Again, in my Rust journey, I started with that same belief that Rust is not as productive as other languages, that it really is only good for systems-level programming. After talking with a whole bunch of teams that have been adopting Rust within their organization, that’s not really always what I heard. More often than not, the stories I heard started with: a team had some performance requirement for some feature, and they decided to look at Rust for that project, so their team learned Rust. They were able to ship their code and meet those performance requirements, oftentimes with minimal tuning.

Then they started noticing, over time, their software ran more reliably. They got paged less. They also noticed, as their team got more familiar with Rust, because they had to keep working on that software over the lifetime of that project, that they didn’t actually see their productivity drop as they might have expected going in. They still maintained that productivity. They also found lots of other advantages as they started adopting Rust in other, higher-level cases themselves, like getting more code reuse and other benefits like that. I started hearing this story over and over again, and I started to reevaluate my own assumption that Rust is not as productive.

Yes, it’s true, Rust is maybe not the best language for prototyping. First, that type system really does push you to write correct code. When you’re prototyping, you want to get your code running fast, even if it’s mostly broken, and Rust’s type system might get in the way of that. What Rust lacks for prototyping, it makes up for in the long run by speeding up development over the entire lifespan of a software project. Code tends to live a lot longer than you might originally expect. How many times have you written something thinking it wouldn’t last long, and then 5, 10 years later, it’s still there? That happens more often than we’d like to believe. The type system, yes, it can add friction when prototyping. The other side of the coin is that it makes Rust more explicit. If you’re just looking at a piece of Rust code in isolation, you know a lot more about it than with other languages.

For example, if you have a mutable reference, you know that value can’t be mutated anywhere else. That matters a lot because we’re going to be reading code a lot more than writing it over the lifetime of the project. Besides just references, in general, Rust tends to prevent a lot more of the hidden dependencies or magic. Just looking at Rust code, you can tell a lot more about what’s going on, and that has benefits for the maintenance aspect: things like code reviews, debugging, all the stuff that you have to do to maintain a software project over its lifetime. Rust helps speed that up. It helps improve the productivity of the team over that lifespan. If you’re spending less time on that maintenance aspect, it also means you’re spending more time building new features. Anecdotally, this is what people tell me as they’ve used Rust over a couple of years. That is where they’re seeing that tradeoff happen, and part of why they are seeing their productivity with Rust stay just as high as with other languages.

Rust’s Learning Curve

You may have noticed that up until now, I’ve been qualifying things with “once they have successfully adopted Rust”. What I think is true today is that Rust is harder to learn than other programming languages. There are a number of reasons. While it’s true Rust as a language isn’t trivial, I think a bigger reason why Rust is harder to learn is that it’s a pretty different language. It looks like an object-oriented language if you squint a lot, but it’s not. It’s not at all an object-oriented language. One big pitfall I see when people come to Rust and learn it, especially coming from object-oriented languages, is that they take their patterns and try to apply them to Rust, and that just goes poorly. What if we could make Rust easier to learn? I think that would be a big step towards making it a compelling language for that higher level, because the learning curve is a big part of that initial productivity friction that teams see.

Second, and this applies more to Rust at that higher level, which is what we’re talking about now, the Rust ecosystem is a lot less developed than something like JavaScript’s. The JavaScript ecosystem has tons of libraries, off-the-shelf components. Other languages do too. Rust, less so for the higher-level use case, and that’s in part because of Rust’s history coming up as a systems level language. If you’re building something at the systems level, the ecosystem is actually really developed there. There are libraries for a lot of different things and they’re all really nice. That’s part of a self-reinforcing cycle: Rust is seen as a great language for systems level programming, developers come, they build the stuff they need, more developers come. That cycle hasn’t really happened at that higher level yet. The second big thing we need for Rust to become a great language for higher level work is a more developed ecosystem there.

There’s not nothing. What libraries are there today for building those higher-level web apps, database-backed apps? At a very high level, to build a database application, you’ll need some HTTP router that takes inbound requests; you, as the developer, handle those requests by using a database client, an ORM or something, and then you send the result back over the HTTP response. What is there in the ecosystem? There are libraries for the router side, definitely a lot of good options there. There’s Axum, there’s Warp, there’s Actix, and probably others.

That website, arewewebyet.org, is definitely something you should go to if you want a more comprehensive list. I’m personally partial to Axum, so if you go look at one, I recommend Axum. The state there is pretty strong. For the database client side of things, I think there are fewer options. There’s Diesel, the original ORM for Rust. If you’ve tried to use Diesel, it works, but I’ve heard that it can be harder to use. The main other option is something like SQLx. It’s a nice little library if you like writing your SQL queries by hand, but personally, I think there’s really a need, for those higher-level use cases, for a nice high level ORM.

Personally, over the past year, I’ve been working on that. Toasty, it’s open on GitHub, but be warned, it’s still in the very early days. It’s more of a preview, it’s not released on crates.io. The examples work. Lots of panics. Again, very early days. I’m hoping by sometime next year, hopefully mid, probably later, it’ll be ready for real-world apps, but I really want to get it out there and get people looking at it and providing feedback early. Goals for Toasty: Toasty doesn’t just target SQL. Toasty does not abstract away the datastore, so you can’t use Toasty assuming SQL, then swap out the backend transparently to a datastore like DynamoDB or Cassandra. I don’t think that’s something a library can reasonably do.

However, what’s bugged me when I’ve looked at ORM libraries in the past is that there tends to be a full ecosystem split between ORMs and libraries that support other types of databases, when there really is 90% overlap. The majority of the work that these libraries do is the mapping between structs and your database, and doing create, read, update, the basics, so basic queries. Having complete splits between those two ecosystems always bugged me. Toasty starts with the basic features that are common across all of these different flavors, and then lets you opt in to database-specific capabilities, whether that’s SQL or DynamoDB or Cassandra, but also prevents you from doing things that wouldn’t work on each target database. You obviously don’t want to do a three-way join on Cassandra.

Second and more importantly, I think, I really wanted to build a library that prioritized ease of use over maximizing performance. That isn’t to say that Toasty doesn’t care about performance; if you’re using Rust, you are coming here for things to be pretty fast. But when designing the flow of Toasty, the happy path specifically, I’m focusing on ease of use. If there’s a design tradeoff that I have to make between ease of use and getting that last bit of performance, I’m going to pick ease of use. That brings me, again, back to learnability. I do think Rust can become easier to learn. Yes, Rust has features that can be complicated and harder to use.

If you’ve looked at Rust, you’ve probably hit these and you probably know what I’m talking about. I believe you don’t need to use these features to be productive with Rust. The basic Rust language is not that hard and you can be very productive with it. For Rust to really get to the point where it can be a productive language for that higher level case, learning materials need to focus on that core, easy part of the language, and libraries need to do the same, not bring in all the hard features.

Hard Parts: Traits and Lifetimes

What are the hard parts? When talking with developers who say Rust is hard, it really comes down to either traits or lifetimes, somehow. Both of these topics are not trivial. If you’re new to Rust and you structure your code wrong with these two features, it’s really easy to dig yourself into a hole that’s hard to get back out of. I think that’s the biggest contributor to the feeling that Rust is hard to use or not as productive. Once you become more experienced with Rust, and that experience comes over a non-trivial amount of time, you know how to use these features, you know how to avoid the pitfalls, but that’s not really something helpful to tell a new developer that has to ship something next month.

It’s like, don’t worry, in six months you’ll be an expert at these things. What do we do about it? If traits and lifetimes are hard, maybe the answer is as simple as: avoid using them. Maybe it’s a little controversial, but I think that most developers using Rust can become very productive while hardly touching these. The problem is that, again, learning materials will introduce these early, and a lot of libraries use traits and lifetimes heavily as part of their core APIs. You pick up some of these beginner libraries and you’re like, ok, there are five lifetimes stuck in this basic API that I’m supposed to call. I’m like, why?

Tips and Tricks for Using Rust

Personally, I started compiling a set of tips and tricks for using Rust. At Amazon, we’ve got a lot of new developers onboarding to Rust, and I’ve had to compile the things I tell them on their learning journey. They’re not just for beginners; I find myself following these as well when writing Rust. I’m not going to give you a tutorial on writing web apps with Rust. I don’t think that’s super useful. You can go and look at the guides, like the Axum and Toasty guides, if that’s what you want. Instead, I’m going to go over some of the Rust features that I like and try to put those in the context of building web apps. Hopefully those tips and tricks will be helpful if you go and read the guides and learn Rust, and maybe even talk to other developers within your org and teach them Rust. The first tip is: really try to prefer using enums when possible. A trait is a way of defining generic code. You should use traits if you don’t know all the possible types that are going to be passed in. This is especially true for libraries.

You might want to write a generic function where you don’t know all the possible types ahead of time. That is true: you probably need to use a trait there. But when building the application, the end product, we do know all the types that are going to get passed in. We don’t need to use a trait. We can use an enum instead. This is going to greatly simplify our code. This principle applies in many cases. One place I see it come up often is that question of mocking.

This comes up a lot: how do I mock in Rust? It’s so hard. I get these questions a lot because, again, at Amazon, I’m on the Rust team and we get all of the questions like, how do we do this in Rust? I know this question comes up a lot when building these apps. Let’s look at a quick example. Imagine you’re building a very basic payment processing routine. You have your billing client that issues network calls, and you want to test this by mocking the billing client. This is almost always what I see people try first: they define a Bill trait and then they make their billing logic generic over that trait. The problem is going to be that this trait bound is going to leak everywhere in your application. Not just that, it’s going to start at that billing call at a very high level and then propagate everywhere.

Then you have all of these different traits. If you keep adding more traits for every single thing you want to mock out, this is going to compound and become super complicated. Again, this is an application where you control all the types that come in. You know there are only going to be two implementations, the real billing client and the mock one. The easier option is going to be to just use an enum here and list out all the billing clients as variants. You avoid the traits. There are no more trait bounds. It adds a little bit of boilerplate to define that enum, but there are crates out there that can help get rid of all that boilerplate, and you’ll now have no more traits in your payment handler, so nothing cascades everywhere.
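
The slides aren't reproduced in this transcript, so here is a minimal sketch of the enum pattern being described; the names (BillingClient, handle_payment, BillingError) are illustrative stand-ins rather than the talk's actual code.

```rust
use std::cell::RefCell;

// Illustrative error type for the sketch.
#[derive(Debug)]
struct BillingError;

struct RealBillingClient { /* network handles, credentials, ... */ }

struct MockBillingClient {
    // Record charges so tests can assert on them later.
    charges: RefCell<Vec<(u64, u64)>>,
}

// Every implementation is known at build time, so an enum works:
// no trait, and no generic bound to leak through the call stack.
enum BillingClient {
    Real(RealBillingClient),
    Mock(MockBillingClient),
}

impl BillingClient {
    fn charge(&self, user_id: u64, cents: u64) -> Result<(), BillingError> {
        match self {
            // A real implementation would issue the network call here.
            BillingClient::Real(_) => Ok(()),
            BillingClient::Mock(mock) => {
                mock.charges.borrow_mut().push((user_id, cents));
                Ok(())
            }
        }
    }
}

// No generic bound: the signature stays concrete and nothing cascades.
fn handle_payment(billing: &BillingClient, user_id: u64, cents: u64) -> Result<(), BillingError> {
    billing.charge(user_id, cents)
}
```

Tests construct BillingClient::Mock(...), production code constructs BillingClient::Real(...), and the handle_payment signature never changes either way.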

A nice segue to procedural macros. Procedural macros let you write a Rust function that generates code for the user at compile time. That enables a lot. I do think it’s one of Rust’s superpowers that can unlock a lot of productivity, and it really is one of the reasons why Rust can be competitive for productivity at that higher level. Let’s look at it a bit. Here’s a Hello World example with the Axum library. That json! call, the contents of it are clearly not Rust syntax. It’s JSON syntax. Rust has no support in the language for JSON syntax, but this compiles. The way it works is that there’s the Serde JSON library.

If you use Rust, you’ve probably already heard of Serde. It provides the implementation for that macro call. It’s implemented as a Rust function that takes a syntax tree and transforms that syntax tree into something else, in this case, an instantiation of a Rust value representing that JSON. I’m not going to belabor it too much. Again, you probably know Serde. Here’s a derive attribute macro, and it works in a similar way. That struct definition is passed to Serde as an AST. Serde takes that and then transparently generates all the code needed to serialize that struct. Now you can use it as an Axum response type, and that’s really powerful.
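
For readers following along without the slides, here is a minimal sketch of both macro uses; it assumes an axum 0.7-style serve API, and the handler names are invented for illustration.

```rust
use axum::{routing::get, Json, Router};
use serde::Serialize;
use serde_json::{json, Value};

// json! expands JSON-like syntax at compile time into code that
// builds a serde_json::Value; the contents of the braces are not Rust.
async fn hello() -> Json<Value> {
    Json(json!({ "message": "Hello, world!" }))
}

// The derive macro receives this struct definition as a syntax tree
// and transparently generates the Serialize implementation, which is
// what lets the struct be returned as a JSON response.
#[derive(Serialize)]
struct Greeting {
    message: String,
}

async fn hello_typed() -> Json<Greeting> {
    Json(Greeting { message: "Hello, world!".to_string() })
}

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/", get(hello))
        .route("/typed", get(hello_typed));
    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```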

Applying this to Toasty, the ORM library I’m working on: I think the obvious way to design the library would be to use a procedural macro on structs that define the database schema, something like this. I decided not to do this, at least initially, and I’m going to tell you why. Procedural macros are one of Rust’s superpowers, and as you use Rust more, you’re probably going to start writing some. But they do come at some amount of cognitive cost. You’ll even notice this: there’s definitely an undercurrent of pushback to procedural macros within the Rust community. I don’t think it’s because procedural macros are bad. They’re great, I love them. You just need to be aware of this cognitive cost. They generate all of their output transparently at compile time, and if you need to debug or look at that output, that is where some of the problem comes from. Just ask anyone who’s tried to debug proc_macro output. It can be challenging.

For Toasty, I instead took inspiration from Prisma, which is a JavaScript ORM client. They use a separate schema file, and code is generated from there. In a lot of ways that’s similar to procedural macros, in that there’s a program that generates code for you. The difference is that it generates real files that you can open up to see all the generated code. I think specifically for Toasty that’s pretty useful, because Toasty generates a lot of structs and methods that the developer is supposed to use. For example, this user find_by_email method is generated by Toasty. If you can just open a file, read it, find all of those methods, and explore it like real code, I think that’s useful. Does that mean this code generation strategy is superior to proc_macros? Not at all. They’re different, and I think it depends on the context. The reason I’m bringing it up is, again, if you get to the point where you’re starting to write some libraries and introduce proc_macros, this is going to be something to keep in mind.
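
For illustration, a Prisma-style external schema file might look roughly like the sketch below. This is a plausible reconstruction, not Toasty's exact syntax, which may differ (check the repository for the real format); the point is that from a definition like this, the generator emits ordinary Rust source files, including methods like that find_by_email, which you can open and read.

```
model User {
    #[key]
    #[auto]
    id: Id,

    name: String,

    #[unique]
    email: String,

    todos: [Todo],
}
```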

How do you decide between these two strategies? For me, there are two factors that I consider. First: how much context from the surrounding Rust code is required by the macro? If the answer is any, I think odds are that you’ll be better off with a procedural macro instead of that external code generation strategy. A quick example, again revisiting that json! macro: you could see that the contents of that macro referenced variables, so that’s highly contextual. At the conceptual level, the response struct is very tied to that specific request handler. It would be a bit jarring to have to jump to a different file to see how each is defined. It’s a highly contextual case, and I think this is a really good use case for procedural macros.

Then, the second factor is: how important is it for the user to discover the details of that generated code? Just how important is it to be able to read the generated code? Consider the Serde derive example again. The procedural macro here generates an implementation of the Serialize trait. The trait definition itself is public. The specifics of the implementation don’t really matter as much, because it’s just an implementation. It’s a lot less important for the user to open up that generated code and read that implementation. Again, I think this is a great use case for procedural macros. Toasty, on the other hand, is going to generate a lot of bespoke methods, which is why I decided, initially, to go with a code generation strategy. In short, code generation is a great strategy to reduce boilerplate, and proc_macros are one of Rust’s superpowers and a super-helpful way to do that. Just be sensitive to how much code is generated and how the user is supposed to learn to use that proc_macro.

Back to traits. Yes, you should prefer enums over traits, but there will be times when a trait is appropriate; especially when building libraries, traits are a necessity, like I said. I still think, even in the library case, preferring enums over traits applies. When a trait is necessary, just try to keep it as simple as possible. Doing something like this is probably ok: it lets the caller pass in any type that can be converted to a string. This helps avoid boilerplate. It can be good. This, on the other hand, is what I’m calling a second-order trait bound. The more complicated the trait bound, the harder it becomes for the user to reason about what types they can pass in, and the harder the compiler messages get. Now you can start to see, this is hard to reason about. This is why people new to Rust say it’s so hard. It’s stuff like this. To have a trait bound like this, there has to be a ton of value to that trait bound, so that the value outweighs the complexity. I think, historically, Rust libraries have leaned too much on traits.

Over the years, I’ve definitely been a big offender of overusing traits. This example comes from Tower, in a simplified version. It’s a library I worked on that uses traits heavily. There is an argument for it, but the short of it is, I think it’s not worth it. The theme of this talk is really: as you get familiar with Rust, you’re going to be lured in by the power of Rust’s advanced features. Try to push back, and really focus on how newcomers to your code, like a new developer coming into the organization, are going to be able to read and understand it.
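
The slide code isn't included in the transcript; a plausible contrast between the two shapes being described might look like this, with invented function names.

```rust
use std::future::Future;

// First-order bound: easy to reason about. Callers can pass &str,
// String, Cow<str>, anything convertible into a String.
fn set_title(title: impl Into<String>) -> String {
    title.into()
}

// A "second-order" bound: the generic parameter's bound itself
// involves another generic parameter with its own bound. Callers now
// reason about two levels of generics, and compiler errors get
// correspondingly harder to read.
fn on_request<F, Fut>(handler: F)
where
    F: Fn(String) -> Fut,
    Fut: Future<Output = String>,
{
    // A real implementation would drive the returned future on an
    // executor; this sketch only demonstrates the shape of the bound.
    let _ = handler;
}
```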

For Toasty, this is the generated find_by_email method I mentioned earlier. The argument is a trait, and it’s a first-order bound. I’m hoping that this is the most complicated usage of traits that 95% or more of Toasty users will experience. I did include a lifetime, which is one of the hard parts as well. There’s a similar theme of trying to avoid lifetimes and instead passing owned values. Here, I’m including a lifetime, and I’m not 100% sure it carries its weight yet, which is why I’m hoping you try Toasty and tell me what kind of experience you have. I may or may not end up getting rid of this lifetime.

Result vs. Panic

Let’s talk a bit about Result versus panic. Result is a type, typically used as a return type to make it explicit to the caller of a function that the function could encounter an error. Languages like Java would usually handle this with an exception. The advantage of making error handling an explicit part of the return type is that it forces the caller to be aware of that error and handle it, or their program will not compile. That is a big part of what leads to fewer bugs with Rust, because you can’t forget about handling the edge cases the way you can with exceptions. Rust also has panic, which is a different way of modeling errors. Panics are a lot like exceptions in that, if you panic, it stops the execution flow and starts unwinding the stack. A panic is pretty harsh and really is for when something goes quite wrong.

Two ways of handling errors; which one do you choose? It really comes down to whether the caller is expected to handle the error case or not. Let’s say you have a socket. You’re reading data from the socket, and the socket unexpectedly closes. That is an error case that will happen in real life. You, as the programmer using the socket, should gracefully handle that case somehow. Socket operations in Rust all return Result. Now, when to panic: this is for error cases that have no sane way to be handled at runtime. What I mean by that is, oftentimes it’s a bug in the caller’s code that ends up in an unexpected error case. If you have a bug in your code, you are now in an unexpected state that is hard to recover from, so a hard stop makes sense there. A panic is usually for when there’s a bug in the code.
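
Before the Toasty-specific illustration that follows, here is a generic sketch of that rule of thumb, with invented function names: recoverable conditions return Result, caller bugs panic.

```rust
use std::io::{self, Read};
use std::net::TcpStream;

// Recoverable: a peer can close the connection at any moment, so the
// possibility of failure is part of the contract, and the signature
// forces the caller to deal with it.
fn read_frame(stream: &mut TcpStream, buf: &mut [u8]) -> io::Result<usize> {
    stream.read(buf)
}

// Caller bug: asking for the first byte of an empty buffer has no
// sane recovery at runtime, so panicking (as slice indexing itself
// would) is the honest response.
fn first_byte(buf: &[u8]) -> u8 {
    assert!(!buf.is_empty(), "caller bug: empty buffer");
    buf[0]
}
```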

To illustrate what I mean by a bug in the code, let’s look a bit at Toasty. I want to talk about how Toasty handles the n plus 1 problem, which is a textbook ORM problem. Here, when we’re loading a user, we’re iterating over the todos and printing each todo’s category. The n plus 1 problem is that if the ORM implicitly and lazily loads associations, issuing queries as it loads them, there’s going to be a database query issued for every iteration of that loop. That’s bad. What you actually want to do is load all the necessary data up front. In this code example, when you’re calling find_by_email, Toasty doesn’t know that you’re going to want the category. With async Rust, it’s not actually possible for Toasty to implicitly load that data on demand, because every place there might be a network call, there needs to be a .await. That’s actually pretty nice, because now you can look at this code sample and immediately know where the database queries might happen.

Again, recall I mentioned hidden dependencies or magic earlier. This is another illustration of where Rust, the language, can prevent that. Here, the only database query that gets issued is to load that user up front. If only the user is loaded, what happens when you call user.todos right there? Toasty panics. I specifically didn’t want that todos method to return a Result, because as a caller, what would you do with that Result? You’d have to add boilerplate every place to handle it, and probably the only way to really handle a Result in that case would be a .unwrap. That’s going to add a whole bunch of unnecessary friction when using the Toasty library. To avoid that panic, the caller instead specifies which associations they want to eagerly load at query time. If you try to access an association without eagerly loading it, that’s a runtime bug, so a panic.
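
Here is a self-contained sketch of that panic-on-unloaded-association design; the types are stand-ins for illustration, not Toasty's generated code.

```rust
// Stand-in model types; a real ORM would generate these.
struct Todo {
    category: String,
}

struct User {
    // None until the association is eagerly loaded at query time.
    todos: Option<Vec<Todo>>,
}

impl User {
    // Returning Result here would force .unwrap() boilerplate onto a
    // condition that is always a caller bug, hence the panic.
    fn todos(&self) -> &[Todo] {
        self.todos
            .as_deref()
            .expect("caller bug: `todos` was not eagerly loaded in the query")
    }
}

fn print_categories(user: &User) {
    // Because the query eagerly loaded the todos up front, this loop
    // issues no further database calls: no n plus 1.
    for todo in user.todos() {
        println!("{}", todo.category);
    }
}
```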

Using Indices for Complex Relationships

One quick tip for the road. You may have heard it said that you can’t implement a doubly-linked list in Rust, as an example of the borrow checker’s limitations. That’s not actually true. You can implement a doubly-linked list in Rust without using any unsafe code if you store the nodes in a vec and then use indices to represent the links. That’s a pattern that’s super useful once you get to modeling more complex data, and it can be scaled up to more complex data relationships. If you want to watch another video, this one covers it in great depth: youtube.com/watch?v=aKLntZcp27M. I highly recommend it to everyone as almost required viewing.
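
Since the transcript doesn't include the code, here is a minimal sketch of that pattern: nodes live in a Vec, links are plain indices, and no unsafe appears anywhere. Removal works the same way, by splicing the neighbors' prev/next indices.

```rust
// A doubly-linked list without unsafe: links are Vec indices rather
// than references, so the borrow checker never sees a cycle.
struct Node<T> {
    value: T,
    prev: Option<usize>,
    next: Option<usize>,
}

struct LinkedList<T> {
    nodes: Vec<Node<T>>,
    head: Option<usize>,
    tail: Option<usize>,
}

impl<T> LinkedList<T> {
    fn new() -> Self {
        LinkedList { nodes: Vec::new(), head: None, tail: None }
    }

    // Append at the tail and return the new node's index.
    fn push_back(&mut self, value: T) -> usize {
        let idx = self.nodes.len();
        self.nodes.push(Node { value, prev: self.tail, next: None });
        match self.tail {
            Some(old_tail) => self.nodes[old_tail].next = Some(idx),
            None => self.head = Some(idx),
        }
        self.tail = Some(idx);
        idx
    }

    // Walk the links from head to tail.
    fn iter(&self) -> impl Iterator<Item = &T> + '_ {
        let mut cur = self.head;
        std::iter::from_fn(move || {
            let idx = cur?;
            cur = self.nodes[idx].next;
            Some(&self.nodes[idx].value)
        })
    }
}
```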

Summary

What’s my point with all this? I do think Rust could be a great general-purpose language, including for those higher-level use cases like database applications where productivity is more important than performance. There’s still some work to get there. Primarily, we need a more fleshed out ecosystem of those higher-level libraries. I’m trying to do my part, and part of why I’m giving this talk is to convince you that Rust has that potential. If we can reach that potential, growing that ecosystem and making Rust easier to learn, with libraries that are easier to learn and learning materials that focus on ease of use, we can get to the point where we have a language that is really fast, lets you write more reliable code, lets you get paged less often, and is just as productive.

At the end of the day, building these web apps is almost just taking all these components and gluing them together. I don’t think you need to be an expert in lifetimes and traits and all that to do that kind of work. That is my main point. Try out Toasty. Give me feedback. I’m still trying to figure out exactly what the API for that ease of use looks like. Feedback is super useful.

Questions and Answers

Participant: In your argument of enums versus traits, you show an example where it’s pretty messy if you use traits, and you have a very clean implementation, but the trick is in the dispatch function. When you argue for enums, you intentionally or unintentionally skip the implementation part, which I think is where the messy part will be. How do you then argue for enums if your code ends up with match branches everywhere?

My argument is, if you have enums, then you have to write code twice to handle enums.

Lerche: Yes, if you have enums, you have to write code twice to handle enums. There are proc_macros to handle that for you. Look at enum_dispatch, a proc_macro that lets you use that enum-style pattern just like you’d use a trait. If you have an enum, the only place you’re going to have duplication is in the implementation.
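
As a small illustration of that answer, with invented trait and type names: the match-based forwarding impl below is the duplication in question, it lives in exactly one place, and enum_dispatch can generate this kind of impl for you.

```rust
trait Greet {
    fn greet(&self) -> String;
}

struct English;
struct French;

impl Greet for English {
    fn greet(&self) -> String { "hello".into() }
}

impl Greet for French {
    fn greet(&self) -> String { "bonjour".into() }
}

// The enum lists every implementation known at build time.
enum Greeter {
    English(English),
    French(French),
}

// The forwarding impl is the only duplicated code; crates like
// enum_dispatch derive it from the enum definition instead.
impl Greet for Greeter {
    fn greet(&self) -> String {
        match self {
            Greeter::English(g) => g.greet(),
            Greeter::French(g) => g.greet(),
        }
    }
}
```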

See more presentations with transcripts
