Month: July 2025

MMS • Bruno Rey

Transcript
Rey: I started coding in 1999. I was 12 years old at the time. That was the first programming book that I read cover to cover. That was my first programming language (Visual Basic). It’s not very popular today, but today I have no shame in saying it. I’ve been working professionally in tech for 17 years, working since I had my first full-time job at a serious company. It took me 15 years to become a staff engineer, which is my current role at Eventbrite. You might think 15 years to become a staff engineer is a long time, and you’re right, that’s a problem. That’s hopefully what will not happen to you if you listen to this talk, and how you can help others grow faster as well.
This talk is for those who want to grow in their career, who want to climb that ladder, but it’s also for those who want to help others grow. We’re going to discuss growth, basically. We’re going to discuss the ladder. I expect most of the people to be on the second part, but there might also be some on the first part. Again, the topics are the same, so there’s really no difference.
The Plan – You Trace Your Own Path at Your Own Pace
I thought a lot about what drives growth. I also did a lot of research. I also talked to a lot of people preparing for this talk. What I came up with in order is that, the first factor, and probably the most important one, is ambition. It’s really hard to grow in your career if you don’t have that ambition. In second place, I find that capacity is important. Being able to perform, being able to learn new things, all of that. In third place, you might be very good at both. You might be excellent at your job, but you need an external factor. You need that opportunity. There’s nowhere for you to grow. It’s really hard to do that. The reason why I put ambition first instead of capacity or curiosity, or whatever, becomes really clear when you think about the definition of ambition. The one I like from the Oxford Dictionary says that ambition is a strong desire to achieve something, typically requiring determination, effort, and drive.
If you read between the lines, what this is telling you is that a person with ambition will be able to develop that capacity as needed. They will do that through determination, effort, and drive. They will learn whatever they need to learn. I want you to put yourselves, for a moment, in the shoes of a person who’s very early on their career, just starting, just accepting their new job. Probably the company offered them a position and told them, “Yes, this company is growing. You’re going to grow really fast. It’s going to be great for you”. Things don’t always go that way. Sometimes we find that the first years of our career look more like this. This is very typical. It’s not terrible, but if you want to accelerate growth, you should try to avoid this. You should start by identifying the problems. What is causing this type of growth in your career that’s not really fast because it’s not really directed?
Typically, it’s one of two things. It’s either a lack of focus on execution or a lack of direction. When we go to high school and college, they tell us that we should strive for perfection. You want to have that perfect grade. You want to have that perfect opinion from your teachers about your work. As soon as you get into business, you need to completely change your mindset. Perfection is the killer of execution. This is very well put in the Amazon Leadership Principles as bias for action. That’s one that I really like. In the early years of someone’s career, it’s very important to help them make this shift of mind, make this shift of paradigm, and to start focusing on execution over perfection, without dropping quality, of course.
Secondly, if someone is having trouble finding their direction, there are ways to help them. The best thing is for them to do research first on their own. Then, the second turn, to seek for mentoring. When I talk about research, you can read a book and you can read a blog, that’s good. You should also do your own research. If you think, I want to be a staff engineer in five years, or I want to be a manager, you should look probably within your company and ask yourself, what does a manager’s life look like? What does a staff engineer do? How do these people work? How did they get there? Is that really what I want for my life? That’s the type of research that will bring things a bit more down to earth and make them more achievable.
You should also look at blogs and books to get the theory behind that. Again, when someone is coming out of college, seeing things in the real world makes a real difference. It’s important to do that part as well.
Again, put yourself in the shoes of someone who’s early in their career and they’re having trouble growing. Typically, they ask themselves, why are we not growing? What’s happening? Many people will come up with these answers. The market is terrible right now. I miss the golden era. We all did. There’s always some nostalgia to some golden era that never existed, really. That doesn’t matter. This can be real, but they can also be seen as just excuses. There are also people who will come up with these types of responses, “I haven’t been directing my efforts efficiently, or, I have poor time management skills”. If you look at it, these ones on the right hint much more introspection and sound much more personal. The ones that I put here are real life examples from people close to me, from people I’ve mentored, and even some of my own. This is a very popular framework. Some of you might have recognized it already. This is known as the victim-player framework.
Essentially, what it says is that what you cannot control makes you weak, but what you can control makes you powerful. We have the idea, mostly when we’re young, to think about showing weaknesses is what makes me weak. In real life, it’s quite the opposite, showing your weaknesses is what makes you strong. I’m not going to go into a lot of detail on the victim-player framework. I’m going to give the quintessential example of it, which is how two different people might react to a traffic jam. One employee can say, “It’s a very big problem. It was totally unpredictable. There is nothing I can do now. I’m just stuck here”. An ideal employee would probably call their boss and tell them, “Boss, I’m sorry. I had a big problem today. I’m at a traffic jam. I’m going to have to take the first call from my car. I hope that’s not a problem. I can stay up and make some extra time today if necessary.
Also, I’ve realized that this is a recurring pattern, and I’ve already made a decision to start taking the train instead, because it’s becoming unacceptable”. This is someone taking ownership of that situation. Of course, this is a very typical example, but in real life, you know how it goes. This is also very similar to other concepts. They’re all the same thing. It’s also called being proactive or taking agency, not being the victim. It’s the same thing. It changes over time. The concepts are slightly different, but they’re all aimed at taking control of what you can, essentially.
I mentioned I included a couple of my own here. Today, I have no shame in saying that. I’m not ashamed of my first programming language, which you already know. I started working when I was 20. Early on in my career, I noticed that I was having issues with my work. Mostly, it was very hard for me to have a meeting with people outside of my company because my social anxiety was terrible. It’s really hard for me to speak publicly here today. Also, I couldn’t execute anything over a long period of time.
Obviously, I was job hopping a lot, which is sometimes normal in the early stage of your career. When I was 23, I was doing a lot of introspection, and I had the good luck of having a great manager at the time who provided me with a safe place where I could feel confident talking to him. It took me a lot of time. I didn’t have the tools at the time to identify this. It took me three years. When I did, I came up to him one day and told him, “I think my depression is setting me back”. He told me, “Yes”. However, his response was great.
After that, we came up with a plan together. He spent months working with me in order for me to get better. I’m better today. He continued to make sure that I felt safe at work and that there wouldn’t be any retaliation or anything. He took care of the projects. He paired me with some other people and everything. He was a great guy, Gustavo. Shortly after this, there was a meeting at a bar with work colleagues. Someone else got a promotion. Of course, I didn’t because I was doing terrible. This was back in the day when we worked at offices and we had to wear suits and everything. This was a very traditional company. This person had bought a new suit and came to the bar. Someone told him, you look great. Someone else said, a man who takes care of himself is a man who can take care of others.
That phrase at that time really resonated with me because that’s when it hit me, like, how am I expected to lead a team if I cannot put my own life back together? I talked about that a lot with my boss at the time. I got my shit together, basically.
That started working better. If you think about my manager’s reaction at the time and why he helped me, it wasn’t just because he’s a great guy, which he is. It was also in the best interest of the company. When people don’t grow, many people will blame their company for their bad career path. They will say, the company is holding me back and things like that. Truth is, companies usually don’t do that. It’s very rare for a company to do that. It’s in the best interest of a company for their employees to do well, to thrive. They will help you.
Companies don’t want to deal with low-performing employees. Companies don’t want to put you on a performance improvement plan or anything like that. It’s very expensive for the company to do that. It’s more profitable to have employees who are doing well. Managers are expected to help them do well in order to achieve that, in order to have a well-functioning team that starts providing value, at a good pace.
Once I made that click, that’s when my career really took off. That’s when I really started growing. I had good mentoring. I was lucky to have that. The best advice I got was to keep it at your own pace and keep it consistent.
A very typical example for this is comparing the workplace to physical exercise. It’s easy to go out running for 10 minutes. Most people can do that. It’s really hard to do that consistently two or three times a week, or once a day for one year, two years, or five years. Will you go out every day? Will you go out even when you have a congestion? Experts say that it’s totally fine. Will you go out even on the cold days, when it’s raining, when it’s snowing? I don’t run as much. I cycle a lot. That day was cold, my friends. I can tell you that. This sounds very obvious when we talk about exercise.
For many people, it’s not so obvious when we talk about our professional career, and it should be. It should be just the same. It’s really obvious when you really think about it. When you plan for your career growth, you should acknowledge that you will have highs and lows, and that you will have good days and bad days. That your ambition is important only up to a level, and you should not overdo it.
I talked a lot about me during the time from the perspective of someone who is growing. From the perspective of a manager or a leader who’s helping someone, we should also identify that. That person who tries to run a marathon on day one. That person who attempts to lift 100 pounds on day one. That’s totally impossible, of course. In the work environment, it’s sometimes quite obvious as well. Think about that person who only cares about the title specifically or their salary.
That person who doesn’t care about the project, doesn’t care about the company. Especially the person who doesn’t care about the team, who doesn’t empathize with the human beings around them. That person is a ticking time bomb to any team. They might be good in the short term, they might put out good work or whatever, but that will be very bad for your team and for the company in the longer term. There’s a great talk on this topic by Simon Sinek called, “Performance vs Trust”. The way he puts it basically, if you put trust in one axis and performance on the other one, performance being how much work they output and trust being how much trust they inspire to their colleagues, to their teammates, of course, nobody wants these folks, and of course, everybody wants these ones.
The way he frames it is that you should choose always people from this category over people from this category. It sounds obvious, but sadly in the workspace, it’s not so obvious all the time. There’s a reason. He goes on to saying that promotion cycles typically tend to favor these individuals. The reason for that being that promotion cycles should be objective and they should be based on real hard data. It’s easy to measure performance, but it’s really hard to measure trust. How much trust does a colleague inspire? How much do they empower their team to be better just by feeling safe? It’s hard to do that. This was called out on the keynote by Lizzie. TAPPs is one framework to do that, there are many others. It’s one way to measure it.
The truth is, this can be measured and this can be measured as well. It’s not impossible. It’s more costly and you should do that investment in order to find these folks. The people from this category can be grown into being here in the short-term or in the mid-term. If you find someone here, it’s almost impossible to get them here. That was also mentioned by Charlotte. It’s really hard to change the character of a person. You can change their capacity. You can help them grow. You can teach them things, but you cannot change who they are.
Growth Strategies
We’re going to talk about a few more concrete growth strategies. The first thing I’d like to mention for this is that, any employee attempting to grow should understand the roles and expectations of their company. There’s a lot of resources online to do this. Levels.fyi is a great page. There are public career ladders that you can look up. Eventbrite has given me the permission to make our own career ladder public for the purpose of this talk. You can look it up later. There will be links at the end. I’m not going to go through the career ladders here. It’s a very big topic. If you’re interested in growing, please do your research. There’s a lot of interesting resources on this topic.
On strategies, I think one of the most important ones is to keep perspective in mind. When you’re working on your day-to-day, you should be asking yourself, what is best for me? What is best for my team? What is best for our customers? Also, what is best for my company? This is ambitious for someone who’s just starting on their career. They’re not going to change their company on day one, of course, but having that perspective is important, understanding how their work connects to their company goals. You should also ask, how and why? How does my company’s business model operate? Why is my team focusing on this project in this quarter? Things like that. Why am I being asked to prioritize this task over this other one, which I feel is more important? If you have that ambitious perspective, if you understand your leaders, you will be able to grow faster in your career. This was also mentioned by Dan, on aligning your goals with your leaders’ goals.
Another strategy, of course, is to find your role models. This is probably the most popular ones. There should always be people you look up to, people you admire. You should try to work with them. Maybe observe how they work. See what they do that you cannot do as well yet. You can approach them and ask them how they learned it, how they built the skill, and how you can build it. They might recommend that you read a book or that you pick up a habit. Maybe you can just ask them to mentor you. Ideally, these are people inside your company, but they can also be people from outside as well.
Another thing you should do, of course, is to identify your strengths and weaknesses. We all have strengths and weaknesses. We should do that introspection to try to identify them, but that should always be validated externally because we all have blind spots. That external input is super valuable. As mentors, as leaders, we should make sure that those are clear as well. We should have those uncomfortable conversations with someone. Tell them, your tickets are terrible, you never put enough information. Or, you’re not testing your changes and the team has to cover you all the time with tests. Things like that. Sometimes people don’t do this knowingly because they’re lazy. Or sometimes they’re just not aware of it because it’s one of their blind spots.
If you don’t discuss it with them, it’s quite unfair to judge them on it. You should make sure that those are clear and discussed always. Of course, you don’t have to be good at everything. I really like this type of chart. This was made popular by soccer, particularly soccer video games. It’s great at showing how different people have different strengths and weaknesses, and they all come together to form a team that does well. In this case, they don’t play together. That’s not the point. That’s actual feedback given to me in 2020. I remember really being appreciative of the manager who gave me this feedback. It was so visible, so tangible. Like saying, “You’re having great productive impact. Your throughput is awesome, but you’re doing a lot of rework. You need to work on that. You need to improve that so that I can promote you”. This type of feedback is really valuable. As we grow in our career also, things like the playground opens a bit and things become more vague. This was mentioned in the ambiguous role of a principal as well.
As a staff, you can choose where you want to put your effort. You can choose to be an architect, but you don’t really need to choose only one archetype, as he phrases it. You can choose to be the solver who sometimes acts as a tech lead, or you can shift your time 50-50 between being a tech lead and the right hand. Those are all valuable. There are no rules on how to act. You should find your own place by finding your strengths and finding what your team needs and what your company needs, and how you can apply those strengths.
On the next topic, you should always know yourself and acknowledge that you will have highs and lows. Nobody is perfect. It’s important that even at your lows, you find ways in which you can stay productive and in which you can stay valuable to your team and to the company. Coming back to strengths and weaknesses, it’s important to act on improving your weaknesses, but also, you should maximize your strengths. If you focus on what you’re strong in, what comes naturally to you, that’s what you should do when you’re going through a low. “This work is easy for me, I’m going to focus on this while I’m here, because that’s how I can be valuable to the team. Once I’m better, I’m going to focus on fixing my weaknesses and everything, and that’ll be ok. Right now, I’m going to focus on what’s easy”. You should discuss it with your manager, of course, to make sure that he’s aligned. You should be open if you’re going through a low.
Another thing is what energizes you and what drains you. A typical example is social interactions. Human beings are social beings by nature. We need that oxytocin. We need that serotonin. Different brains need them at different levels. Some people get them at work, and some people don’t. Some people, when they go through a technical topic, a really difficult challenge, they need to discuss that challenge with someone else. After discussing it, they come out energized off the meeting and are more productive for the following hours. A bunch of other people come out of the meeting totally drained and need some time to recuperate. That’s perfectly fine.
Another example is how people act under pressure. Typically, this is during incidents. I am the type of person that’s energized by an incident. I’m fueled by it. I love the team that comes together, the strong dedication and collaboration. Some people just loathe it. They don’t like working under pressure, and they like to pace their work more slowly. As a recent example, a short time ago, me and a colleague, another staff engineer, we were both transferred from one team to another. That basically creates a lull. The whole team starts forming, storming, norming, performing again, and we were both out of our domain. We didn’t know what we were doing, basically.
The other guy goes to our manager and says, “I’m going to take it slow. I’m going to take my time to learn. I’m not going to be productive during the time that I’m learning, but that’s going to make me more productive in the future, and I think it’s a good investment”. The manager says, “That’s great. Thank you”. I come up to the manager and I tell him, “I need that feedback. I need this action. I’m going to take the on-call for a while. Let me deal with the incidents. That’s how I can be valuable to the team fast. That’s how I can provide value with what I know. Also, that’s a way for me to learn fast”. The manager says, “That’s perfect. Thank you”. He was very appreciative that we were both open about our strengths and weaknesses and that we found ways in which to apply them to the team. Two different persons, two different approaches, both were perfectly valid, the right one for each person.
Let’s talk a bit about opportunities. It’s very important to recognize opportunities and to know how to act on them. The typical opportunity, the ideal one, like the Holy Grail, is company growth. Of course, if a company is growing, they will shape up new teams. There will be opportunity for you to grow within them.
Another opportunity is an open position above you. If a tech lead or a manager leaves the team or leaves a company, that position is open, and you can offer yourself to fill it. That’s sometimes a double-blade sword because there’s typically a reason why they left the company. You should make sure that whichever challenges they were facing at the time are challenges that you can deal with better than them, if those were the reasons why he left. It’s typically good to discuss that with the person who’s leaving instead of with the people who remain around you. Another opportunity is challenging times. This is the most difficult one, maybe. A lot of companies have been through layoffs lately, and that can create opportunity for growth, even as challenging as it is. There is the concept of anti-fragile.
The typical example is in the film Superman vs. Batman, where we have Doomsday: each time you hit him, he becomes stronger. A flower vase is fragile, if you hit it, it breaks. Doomsday, you hit him and he becomes stronger. As an employee, some people, when they get hit through going through a challenging time, they come out stronger. Those people are super valuable to a company, and it’s really hard to find them. It’s one way in which you can change a challenging time into being an opportunity.
Finally, if you don’t have opportunities, you should try to create them. You can propose initiatives for your company or for your team. You can maybe ask to be transferred to another team or choose to leave the company. Or you can do things outside of your day job, like coming to a conference, for example. This topic of creating your own opportunities was perfectly covered by Pablo, on finding your dinosaurs. I suggest you look that up as well.
Another thing you should do if you want growth is to find ways to make your work be visible. The first thing, you should always keep a track of your work. When you’re beginning your career, this might sound unnecessary, but once you’ve worked 5, 10, 15 years, it becomes really hard to remember what you were doing 2, 3 years ago. Sometimes it’s good to keep a document. Julia Evans has a great article on this. It’s called the brag document, where you brag about your own work. This is also very valuable when companies are going through change.
Typically, your manager can change, sometimes in the same year, you have two or three managers, and they lose that context of what you’ve done. It’s very hard to transfer that from one manager to another. It’s much easier for them if you provide them with a brag document, where they can track that work with a paper trail. Another thing is to publish your work. This topic was also covered by Pablo. Make sure that your contributions are seen. Aaron Francis has a great article on this, publishing your work, and how it increases your luck.
Leaders – You’ve Grown, Now What?
Finally, let’s talk about leaders for a bit. Once you’ve grown, what can you expect? When we think about staff engineers and principal engineers, one way to define them is how they can impact a large group, like broader than a team, and sometimes how they can impact an entire company. One way we typically do that is setting guidelines on how the company should work, or how the team should work, or the pillar, or whatever you call it in your company. You can work in establishing team boundaries and structures. You can work to define the software development life cycle: how we do PRs, how we do testing. Giuliano from Mercado Libre mentioned at QCon, absolute freedom is not good for companies. We should give the teams those guidelines, and we as leaders should be the ones doing that. Also, we should strive to identify when there are new patterns or new necessities. Once we have set those guidelines, something can fall through the cracks between them, and some team might choose to go on their own path.
At Eventbrite, the way we approach this is by setting a golden path. We don’t have a strict guideline on what teams can do. We give them a golden path, and we say, if you use these technologies and this software development life cycle, your life will be easy. We will take care of your problems for you. If you choose to strive out of that, you’re on your own. You’re a lone wolf. You need to take care of your own infra, your own platform, and everything. It’s not forbidden, but it’ll be more work for your team. One thing that we need to do is to maintain that. One team might be a lone wolf, but if we have two, three, four, five teams doing the same thing, it’s not lone wolves, it’s now a new pack. We need to put them back into the golden path by saying, this pattern that you’re using is now part of a golden path because we see it becoming a new necessity that our company will have.
Another thing is that when we grow to become leaders, you remember that person looking up to their leaders? Once you become this person, you are the person they look up to. Your work must not only be good anymore. Your work must be exemplary because people will look up to you. They will imitate you. They will copy how you work. Sometimes we feel that given our growth, we have great judgment, and we know when to cut corners, which changes we can push without testing, or whatever, which risks we can take, which corners we can cut. We do have that judgment, and we can make those calls, but we should be very cautious. It might be best not to make them only for the sake of being exemplary.
If you choose to do that, some people might imitate you and choose to start cutting corners at their own judgment, and their judgment might be not as good as yours. That creates a challenge for the company. It’s best if you choose to never cut corners so that you instill that way of working in your company, because it permeates into company culture without you knowing it. You should always be aware of that when you’re a leader.
Finally, and most importantly, we must stand for our peers and subordinates when we feel like they’re being treated unfairly. Sometimes, as team leads or tech leads, we see how things operate within a team, and someone from above might not have that visibility, and unfairly blame someone for something that wasn’t really their fault. It’s our job to come up to them and tell them, “You’re misunderstanding the situation. This person was not at fault here. There was a fault at the process. We should look into that instead of blaming the person”. Remember, be ambitious, stay focused, be efficient, find your opportunities, and most importantly, take care of yourselves.
Questions and Answers
Participant 1: I feel like when you grow in your career, you take on tasks which you didn’t really do when you were more junior. It’s like novel tasks. Have you seen opportunities, if you look back at your career, on how to maybe develop those skills when you don’t necessarily have a chance to put them in practice?
Rey: Yes, it’s hard. It’s good to try to jump the gun, and be ready before those necessities come. It’s good to try to develop those necessities for yourself. I think it comes back to the early part of my talk, not aim for perfection, and aim for execution instead. I will not focus on what I will need in the future, I will focus on what I need today, and start being great at that. Once you have that covered, you can start looking into that. Without having the actual necessity and without having the opportunity to apply that in a productive environment, it becomes really hard to develop that on your own.
The obvious way to do it is for tech skills, try to have some pet projects or whatever on the side that you can test new technologies or whatever. I wouldn’t put too much effort into developing something that you might or might not need. If you were to do that, I would frame it like looking at your role models and looking at how they work. Because you can confirm that those are skills that are actually needed in your company. How do you develop them? The best thing to do is to ask them directly how you can develop those skills.
Participant 2: As you mentioned, someone got hit, but then they got stronger. Do you think you’re that kind of person, or what traits does this person have?
Rey: I cannot tell you which kind of traits this person has. I sometimes can identify them, but I cannot put it into words yet. It’s not a skill that I have developed. I wouldn’t say that I’m that type of person all the time, but I have had opportunities where I came out stronger off a challenging situation. My whole team was let go at once and I was the only member who remained there. That was very challenging, but it was also very accelerating to my career growth. I think anybody has the capacity to do that, given the right environment. Environments, when they’re challenging, people around us can help us to mitigate that challenge and come out stronger.
Participant 3: What made you change teams and what advice would you give to someone considering that?
Rey: Sometimes it’s your choice and sometimes it isn’t. If there’s a topic that’s of your interest, you should discuss that with your managers, your leaders, and tell them, I would like to work in this team in the future. It doesn’t need to be this week, it doesn’t need to be this quarter, but I would like to aim my career in that direction. Sometimes companies go through big reorgs and it’s not your choice to end up in one team or another, but it is up to you to make the most out of it. It comes back to the big team player framework. How can I make the most out of this situation? It’s challenging because you don’t have the domain expertise, you don’t have the knowledge, but you need to acknowledge that it’s going to be a rough patch in your career where you need more time to learn, but that it can also be an opportunity if you can make this team succeed by now having you, and how you can do that.
Participant 4: When you’re just a junior engineer and you’ve been in your company for a good bit and you start to recognize some inefficiencies across the organization and the processes, do you think at that level you can already start to plan these initiatives and try to influence direction, or is that something you really have to wait for once you can achieve that level and have that influence? I’m curious on your perspective on that.
Rey: I think it really depends on the context. It’s not the same thing to do that at a company with 10,000 employees against doing that at a startup with 20 employees. If you’re a junior at a startup, you’re very well empowered to make those initiatives. Large companies tend to favor diversity of voices lately and tend to try to listen to anyone. There’s the opportunity to do that, but it’s much harder to come up with a real idea that will actually change the company if it’s so big. I think you can always do it, but it’s going to be much easier if the company is smaller.
See more presentations with transcripts

MMS • Anthony Alford

Google DeepMind introduced Gemini Robotics On-Device, a vision-language-action (VLA) foundation model designed to run locally on robot hardware. The model features low-latency inference and can be fine-tuned for specific tasks with as few as 50 demonstrations.
Gemini Robotics On-Device is the latest iteration of the Gemini Robotics family and the first that can be fine-tuned. It is intended for applications that need to run locally on the robot hardware for low latency or because of a lack of networking. The model follows natural language instructions and uses vision to find and reason about objects in its environment. DeepMind trained the model on dual-armed Aloha robots but also evaluated it on several other robotic platforms, showing that it could handle complex tasks on new hardware. According to DeepMind:
Gemini Robotics On-Device marks a step forward in making powerful robotics models more accessible and adaptable — and our on-device solution will help the robotics community tackle important latency and connectivity challenges. The Gemini Robotics SDK will further accelerate innovation by allowing developers to adapt the model to their specific needs. Sign up for model and SDK access via our trusted tester program. We’re excited to see what the robotics community will build with these new tools as we continue to explore the future of bringing AI into the physical world.
DeepMind first announced the Gemini Robotics family earlier this year. Based on Google’s Gemini 2.0 LLMs, Gemini Robotics includes an output modality for physical action. Along with the models, DeepMind released several benchmarks, including the ASIMOV Benchmark for evaluating robot safety mechanisms and the Embodied Reasoning QA (ERQA) evaluation dataset for measuring visual reasoning ability.
DeepMind tested their model’s ability to adapt rapidly to new tasks. For seven different tasks, such as preparing food and playing with cards, they fine-tuned the model with at most 100 demonstrations; on average, using their model the robot successfully completed the tasks over 60% time, beating the “current, best on-device VLA.” However, the off-device version of the Gemini Robotics model performed even better at nearly 80%.
In a Hacker News discussion about Gemini Robotics On-Device, one user wrote:
I’ve spent the last few months looking into VLAs and I’m convinced that they’re gonna be a big deal, i.e. they very well might be the “chatgpt moment for robotics” that everyone’s been anticipating. Multimodal LLMs already have a ton of built-in understanding of images and text, so VLAs are just regular MMLLMs that are fine-tuned to output a specific sequence of instructions that can be fed to a robot….The neat part is that although everyone is focusing on robot arms manipulating objects at the moment, there’s no reason this method can’t be applied to any task. Want a smart lawnmower? It already understands “lawn,” “mow”, “don’t destroy toys in path” etc, just needs a finetune on how to correctly operate a lawnmower.
Gemini Robotics On-Device is not generally available but interested developers can sign up for the waitlist. There is also an interactive demo of a related model, Gemini Robotics-ER, available on the web. The Gemini Robotics SDK is available on GitHub.

MMS • Daniel Dominguez

Hugging Face has launched its Reachy Mini robots, now available for order. Designed for AI developers, researchers, and enthusiasts, the robots offer an exciting opportunity to experiment with human-robot interaction and AI applications.
The Reachy Mini is compact, measuring 11 inches in height and weighing just 3.3 pounds. It comes as a kit that users can assemble themselves, fostering a deeper understanding of the robot’s mechanics. The robot features motorized head and body rotations, animated antennas for expressiveness, and multimodal sensing capabilities, including a camera, microphones, and speakers. These features enable rich AI-powered audio-visual interactions, making Reachy Mini suitable for a wide range of AI development and research tasks.
Reachy Mini is fully programmable in Python, with future support for JavaScript and Scratch. The robot integrates with the Hugging Face Hub, which gives users access to over 1.7 million AI models and more than 400,000 datasets. This integration allows users to build, test, and deploy custom AI applications on the robot, making it a versatile tool for AI development.
Both versions of Reachy Mini offer a range of capabilities, but the Wireless version includes onboard computing, wireless connectivity, and a battery, while the Lite version requires an external computing source. Regardless of the version, Reachy Mini is designed for accessibility and ease of use, making it ideal for AI enthusiasts, students, and researchers of all skill levels.
Hugging Face’s approach to Reachy Mini aligns with its commitment to open-source technology. The robot’s hardware, software, and simulation environments are all open-source, which means that users can extend, modify, and share their own robot behaviors. The community-driven approach encourages innovation and collaboration, with users able to contribute to the growing library of robot behaviors and features.
The community feedback reflects enthusiasm, curiosity, and constructive critique, with a focus on its affordability, open-source nature, and potential for AI and robotics development.
System design & AI architect Marcel Butucea commented:
Reachy Mini robot ships as a DIY kit & integrates w/ their AI model hub! Could this open-source approach, like Linux for robots, democratize robotics dev?
Meanwhile Clement Delangue, CEO of Hugging Face posted:
Everyone will be able to build all sorts of apps thanks to the integrations with Lerobot & Hugging Face.
The Reachy Mini Lite is expected to begin shipping in late summer 2025, with the Wireless version rolling out in batches later in the year. Hugging Face is focused on getting the robots into the hands of users quickly to gather feedback and continuously improve the product.

MMS • RSS

Sono risultati vani i tentativi di rianimare un bagnante,
castelvetranese di 47 anni, colpito da malore mentre si trovava nel mare di
Triscina per un bagno ristoratore.
La tragedia è avvenuta nel tardo pomeriggio di oggi, nella
spiaggia all’altezza della strada 77: l’uomo, autotrasportatore noto a
Castelvetrano, stava trascorrendo qualche ora al mare con moglie e figli quando
improvvisamente si è accasciato in acqua.
Sono stati chiamati subito i soccorsi, mentre l’uomo veniva portato
a riva.
I sanitari del 118, intervenuti sul posto (nella foto), hanno provato con
le tecniche rianimatorie, in presenza anche di un medico, ma non c’è stato
nulla da fare, ed è stato dichiarato infine il decesso.
Sul posto sono intervenuti anche i Carabinieri della Compagnia
di Castelvetrano, che hanno atteso le disposizioni del magistrato di turno a
Marsala, prima di restituire il corpo alla famiglia.

MMS • RSS

Amazon DocumentDB (with MongoDB compatibility) Global Clusters now supports adding up to 10 secondary Regions, further enhancing scalability and availability for globally distributed applications.
Amazon DocumentDB Global Clusters spans multiple AWS Regions, providing disaster recovery from Region-wide outages and enabling fast local reads for globally distributed applications. This launch increases the number of secondary Regions that can be added to a global cluster from the previously supported limit of up to 5 secondary Regions to up to 10 secondary Regions, providing a larger global footprint for operating your applications. See documentation to learn more about global clusters.
Amazon DocumentDB (with MongoDB compatibility) is a fully managed, native JSON database that makes it simple and cost-effective to operate critical document workloads at virtually any scale without managing infrastructure. To learn more about Amazon DocumentDB, please visit the Amazon DocumentDB product page, pricing page and documentation.

MMS • RSS
UNICOM Systems Inc. boosted its holdings in MongoDB, Inc. (NASDAQ:MDB – Free Report) by 116.7% during the first quarter, according to the company in its most recent filing with the Securities and Exchange Commission. The fund owned 45,500 shares of the company’s stock after buying an additional 24,500 shares during the period. MongoDB accounts for approximately 1.6% of UNICOM Systems Inc.’s holdings, making the stock its 13th largest holding. UNICOM Systems Inc. owned approximately 0.06% of MongoDB worth $7,981,000 as of its most recent SEC filing.
Other institutional investors and hedge funds also recently modified their holdings of the company. Peregrine Investment Management Inc. increased its holdings in MongoDB by 17.1% in the 1st quarter. Peregrine Investment Management Inc. now owns 18,940 shares of the company’s stock valued at $3,322,000 after purchasing an additional 2,760 shares in the last quarter. CWM LLC lifted its position in shares of MongoDB by 8.7% during the first quarter. CWM LLC now owns 3,607 shares of the company’s stock worth $633,000 after purchasing an additional 289 shares in the last quarter. Stephens Investment Management Group LLC boosted its stake in shares of MongoDB by 16.4% in the first quarter. Stephens Investment Management Group LLC now owns 47,926 shares of the company’s stock valued at $8,406,000 after purchasing an additional 6,743 shares during the period. New York State Common Retirement Fund grew its holdings in MongoDB by 20.4% during the 1st quarter. New York State Common Retirement Fund now owns 113,134 shares of the company’s stock valued at $19,844,000 after purchasing an additional 19,174 shares in the last quarter. Finally, Arizona State Retirement System lifted its holdings in MongoDB by 0.4% in the 1st quarter. Arizona State Retirement System now owns 21,504 shares of the company’s stock worth $3,772,000 after buying an additional 83 shares in the last quarter. Institutional investors and hedge funds own 89.29% of the company’s stock.
Wall Street Analysts Forecast Growth
A number of brokerages have weighed in on MDB. Monness Crespi & Hardt upgraded MongoDB from a “neutral” rating to a “buy” rating and set a $295.00 price objective for the company in a research note on Thursday, June 5th. DA Davidson reiterated a “buy” rating and set a $275.00 target price on shares of MongoDB in a research note on Thursday, June 5th. Wolfe Research began coverage on shares of MongoDB in a research note on Wednesday, July 9th. They issued an “outperform” rating and a $280.00 price target for the company. Rosenblatt Securities reduced their price target on shares of MongoDB from $305.00 to $290.00 and set a “buy” rating on the stock in a research report on Thursday, June 5th. Finally, William Blair reaffirmed an “outperform” rating on shares of MongoDB in a report on Thursday, June 26th. Eight equities research analysts have rated the stock with a hold rating, twenty-six have issued a buy rating and one has assigned a strong buy rating to the stock. Based on data from MarketBeat.com, MongoDB has an average rating of “Moderate Buy” and an average target price of $282.39.
Read Our Latest Analysis on MongoDB
MongoDB Trading Down 0.5%
NASDAQ:MDB traded down $0.98 during trading hours on Monday, hitting $201.53. 1,482,490 shares of the stock traded hands, compared to its average volume of 2,310,411. The firm has a 50 day moving average price of $199.07 and a 200 day moving average price of $213.69. The firm has a market cap of $16.47 billion, a price-to-earnings ratio of -176.78 and a beta of 1.41. MongoDB, Inc. has a fifty-two week low of $140.78 and a fifty-two week high of $370.00.
MongoDB (NASDAQ:MDB – Get Free Report) last released its quarterly earnings data on Wednesday, June 4th. The company reported $1.00 earnings per share (EPS) for the quarter, beating analysts’ consensus estimates of $0.65 by $0.35. MongoDB had a negative return on equity of 3.16% and a negative net margin of 4.09%. The firm had revenue of $549.01 million during the quarter, compared to analysts’ expectations of $527.49 million. During the same quarter last year, the firm earned $0.51 earnings per share. The business’s revenue for the quarter was up 21.8% on a year-over-year basis. Research analysts predict that MongoDB, Inc. will post -1.78 EPS for the current year.
Insider Activity
In other MongoDB news, Director Hope F. Cochran sold 1,174 shares of the stock in a transaction on Tuesday, June 17th. The shares were sold at an average price of $201.08, for a total transaction of $236,067.92. Following the transaction, the director directly owned 21,096 shares of the company’s stock, valued at $4,241,983.68. This represents a 5.27% decrease in their ownership of the stock. The transaction was disclosed in a filing with the SEC, which is available through the SEC website. Also, Director Dwight A. Merriman sold 2,000 shares of the business’s stock in a transaction dated Thursday, June 5th. The stock was sold at an average price of $234.00, for a total value of $468,000.00. Following the completion of the sale, the director directly owned 1,107,006 shares of the company’s stock, valued at $259,039,404. This represents a 0.18% decrease in their position. The disclosure for this sale can be found here. In the last 90 days, insiders have sold 32,746 shares of company stock worth $7,500,196. 3.10% of the stock is owned by corporate insiders.
MongoDB Company Profile
MongoDB, Inc, together with its subsidiaries, provides general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
Read More
Before you consider MongoDB, you’ll want to hear this.
MarketBeat keeps track of Wall Street’s top-rated and best performing research analysts and the stocks they recommend to their clients on a daily basis. MarketBeat has identified the five stocks that top analysts are quietly whispering to their clients to buy now before the broader market catches on… and MongoDB wasn’t on the list.
While MongoDB currently has a Moderate Buy rating among analysts, top-rated analysts believe these five stocks are better buys.

Enter your email address and we’ll send you MarketBeat’s list of seven best retirement stocks and why they should be in your portfolio.
Podcast: The Java Ecosystem Remains Ever-Green By Continuously Adapting to Developers’ Needs

MMS • Kevin Dubois Thomas Vitale

Transcript
Olimpiu Pop: Hello, everybody. I’m Olimpiu Pop, an InfoQ editor, and I have in front of me, Kevin Dubois and Thomas Vitale, to discuss the shortest path these days in the cloud-native ecosystem from code to production. So, guys, let’s take it alphabetically and introduce ourselves. Kevin?
Kevin Dubois: Hello, I’m Kevin Dubois. I’m a developer advocate at Red Hat for the moment, at least. Our Java team is moving to IBM. So, by July, I’ll be mentioning that I’m from IBM, even though we’ll still be representing Red Hat as well. However, my background is in software development, and my passion has always been developer experience, specifically how we can make a smooth transition from local development to production. That’s what we’re covering today, too.
Thomas Vitale: Yes. Hi, everyone. I’m Thomas Vitale. I am a software engineer. I work at a software company called Systematic in Denmark. I am passionate about anything cloud-native and Java. I got sidetracked with all this interest of mine a while ago, writing a book about Cloud Native Spring, and now I’m writing a book about developer experience with my friend Mauricio Salatino. I am particularly passionate about the path to production and ensuring that application developers have the best possible experience when working with cloud-native technologies, such as Kubernetes.
Java, the de facto enterprise language, evolves to meet developers’ needs [01:49]
Olimpiu Pop: Great, thank you for this, guys. I was watching your presentation at KubeCon, and it was nice to see all those things; that took me back to a couple of conversations. One of them was the one I had with Max, whom I won’t pronounce by his middle name, Andersen, the team lead for Quarkus, because the last time I pronounced his name, it went horizontal. I didn’t pronounce it correctly, so I’ll skip that. The other one is at the point where the biggest contender, and probably the only contender, in the serverless space is Node.js. And I know that Quarkus originated from the question: What can we learn from other ecosystems to make Java better? So, Kevin, how do you feel about it? Are we closer to regaining some momentum for the Java language in the serverless space?
Kevin Dubois: I mean, I think so. I’m writing a book on the topic of serverless Java, so, of course, there’s some merit to it, but I think we’re seeing a lot of advancement in the entire Java ecosystem, right? That’s the nice thing about Java: it keeps evolving with time, and it is the de facto standard for enterprise development. So, if there’s something that enterprise users can use, then Java tries to answer that call. Now, regarding Quarkus specifically, that’s one of the reasons we started this project. We saw this little gap in the Java space in terms of a lot of people were using the Spring framework, and there’s a lot of great stuff about Spring, but one thing it wasn’t very good at was the startup speed and the memory usage because it wasn’t designed for that, right?
And so, with Quarkus, the main goal, at least at the start, was to figure out how to make Java startup faster and use less memory to utilize these cloud-native deployment targets. Initially, it was Kubernetes, but also for serverless, because with serverless, we want something that starts up quickly, as otherwise we can’t scale in the way we want to use serverless. We want something that, if there’s a lot of demand, we have a lot of instances that can handle that burst of traffic. And then, if the demand stops, we scale down, allowing us to use these systems flexibly.
And the billing system of serverless is also based on resource usage. So, we wanted something that’s much more compact. That was a pretty good success with Quarkus, as it moved a lot of the tasks that used to happen during startup time to the build time. And then that had the added benefit of making it also really easy to perform native compilation of these Java applications with GraalVM, which allowed for significant performance improvements. And then we see that, actually, now that these natively compiled Quarkus applications typically start up faster than Node.js. So, we’ve come a long way.
Outside of Quarkus. There are also several advancements in terms of startup time and memory. So, we see that there’s a lot of good stuff coming from the Java world to keep up with the demand.
Olimpiu Pop: Great. Thomas, you’re not actively working, at least from what I know about Spring, but you did mention that you wrote a book about Spring. So, what’s your feeling? How is this gap being filled? Because if you look at that, we can say that Kevin is now the competition, the Quarkus side of things. So, how do you see the other angle?
Thomas Vitale: In general, I am pleased to see the advancements in the Java ecosystem because, as Kevin said, whenever innovations or new technologies emerge around Java, Java continues to evolve and advance to support new use cases. I’m actively contributing to the Spring AI project, which integrates AI capabilities into Java applications. It’s great to see the vast Java ecosystem, including Semantic Kernel and LangChain4j, offering extensive support from both frameworks and the Java platform itself. Like the last few versions of Java, which have introduced many useful features that really bring Java to a whole new level when it comes to AI use cases.
From that perspective, being a Java developer is exciting. Regarding serverless in general, and specifically the Spring Boot side, there is support for GraalVM, which enables native compilation, which is awesome. You get instant startup time, reduced memory consumption, and reduced surface attack. That’s really important—that security aspect. But on top of that, the Java platform itself is working towards making startup time faster and reducing memory consumption.
So, overall, there’s a lot of innovation happening in the Java world, so I am excited to be a Java developer at this moment in time. I feel like I’m not missing out on much.
Production deployments are not a big event anymore, as the focus is on quick feedback loops [07:10]
Olimpiu Pop: Yes, that’s my feeling overall as well. And during KubeCon, one of the points that everybody was discussing was green, specifically having the ability to start faster, shut down gracefully, and then have a snapshot from which to start, allowing you to unplug some of the machines. We can explore ephemeral environments more thoroughly these days and ensure that our code is, well, making everybody happy.
The green side of the equation involves people putting money into the operational side, and last but not least, we have been speaking a lot lately about developer experience. And obviously, one of the points that is important for everybody is a shorter path to production. You both mentioned that during your introductions, those points were important. What do you think are the most important aspects when building something and achieving a fast feedback loop? How would you look at that, Kevin?
Kevin Dubois: I think it’s super important. Thinking back, I’ve been developing professionally for, oh, I think about 20 years. So, a while ago, and I remember when I started going to production, it was this big event because we knew stuff was going to go wrong. There were a lot of manual things that we needed to do, and the environment that we were developing on locally was utterly different from our production environment. So, even though we tried to test stuff on our local machine, we would have all these nasty surprises go into production. And now, fast-forward 20 years ago, yes, some stuff still happens, but I feel like we have a lot more control over that whole cycle and being able to reproduce a lot of the stuff that’s going to happen in production on our local machine, or at least in the … let’s say the inner loop or the close outer loop where we can also automate a lot of stuff with CI/CD and GitOps and stuff like that.
But I think in terms of local development, the advent of containers has been super important. I mean, I think when I started working with containers about 10 years ago with Docker, I containerized our environment that we had, kind of these different services running in our testing cloud environment. Yes, we already had EC2s and other resources, but it was still very clunky. And so when I containerized all these different components, we were able to run those on our local machine, reproduce all these different test cases that we were never able to do, and then build that into pipelines.
The amount of time that we spent debugging on production has gone down quite a bit thanks to that. And then also thanks to all these observability stacks that we have now that we can also reproduce, it makes it super nice. And then of course, adding the developer experience of stacks like Quarkus that allow you to also deploy to these environments much more easily is the cherry on top, I think.
Olimpiu Pop: How about you, Thomas? What do you have in your developer toolbox?
Thomas Vitale: Yes, totally agree with Kevin about containers. They have been a real game changer. On top of that, I think the introduction of this project called Testcontainers improve things even further. Testcontainers is a polyglot framework, so it’s not just Java-specific, even if it started in the Java world, but it gives you the possibility to programmatically provision these containers as part of your application. So, you have the same application lifecycle when you start your application. You don’t have to worry about starting up separate containers maybe for a PostgreSQL database, or maybe you want observability, you want to spin up Grafana, or maybe you’re working with an AI application and you want to run an Ollama instance to serve some models. Testcontainers integrated with frameworks like Quarkus or Spring Boot basically gives a very nice developer experience because you don’t even have to think about it.
You start your application and immediately all these services are provisioned for you. So, you can immediately go into the development flow where you are implementing the business logic, maybe a new feature, bug-fixing and all those services are part of the lifecycle.
Then you shut down the application and all those services are also shut down automatically. That’s great, both at development time but also testing time. I remember having to test integrations with databases, using some in-memory test-only databases or with some mocks. And then you go to production with the real database and you find out some incompatibilities because, of course, it’s a different system, but now you can actually use the same exact database across the entire software lifecycle. And that’s really, really important. It’s not only a matter of producing higher-quality software, but also getting a better experience as a developer. So, I enjoy my work more because I don’t have to spend too much time on all these extra activities that distract me from the core of my task, but also that frustrate me.
I think a core property of a good developer experience setup is you don’t get frustrated when you use it. How many times has it happen that we tried out a tool and we got frustrated because it was impeding, it was creating friction in our workflow? I think today we have lots of available tools and strategies in order to prevent it. Unfortunately, it’s not always prioritized or maybe some tools are not always known. And that’s also why it’s really important to talk about this topic, and I really enjoyed talking about it with Kevin at KubeCon.
Olimpiu Pop: Great. So, there is no more H2 SQL to be used, so we can just use Testcontainers and then we are getting a better experience in close to what the production environment would look like probably, right?
Kevin Dubois: Yes-
Thomas Vitale: Yes, exactly.
Kevin Dubois: What’s also nice is that as a new developer coming onto a project that was, in the past, also a big pain to figure out how I need to set up everything and how everything is configured? And these days the pain is also … I mean, I wouldn’t say it’s completely gone, but it’s also quite a bit less. I remember in the past it was sometimes weeks to set up your environment to be usable even to start developing. I think these days, it’s … and maybe I’m a little bit insulated with working on very kind of specific open source projects, but that lead time has gone away a lot too, and I think that’s a big win too.
By shifting left capabilities, not only responsibilities, you enable developers to improve the system’s security [13:51]
Olimpiu Pop: Great. Thomas, you mentioned before the attack surface, and I cannot touch upon that because I spoke so many times about it, and my feeling is that nowadays everything is shifting left. If you attend conferences where you’re discussing security, it’s clear that security is moving left, shifting closer to the developer. Suppose you speak about data that’s shifting left as well. Pretty much now we move a lot of the things that used to be in the outer loop closer to the inner loop so that you can move faster. From all the tools that you brought together in your presentation, what do you think it helps a lot into improving the security of the applications?
Thomas Vitale: Yes, the whole shift left movement, of course, it’s really important. I feel like in the past few years we realized that too often we applied it wrong, meaning that we shift left the responsibility of lots of critical shared cross-cutting concern to each application developer, and that proved to be unsuccessful in many cases. So, we realized that we don’t want to push left responsibility, but we want to push left capabilities. And a term that recently got popular is shifting down.
So, basically we want to have a platform that provides these services to application development teams. So, sort of shifting down the platform that provides these capabilities, hopefully in a self-service manner so that developers are enabled to do better in terms of security, in terms of observability, but without the additional burden of becoming experts in all those fields and learning too many tools that are not part of a normal application development task.
So, in terms of security, I feel like getting more easy access to services, for example, in terms of SBOMs, like keeping track of all the dependencies in a project, and I know both Quarkus and Spring Boot integrates with CycloneDX projects, so whenever you work with an application, you can auto-generate a software bill of materials so you can keep track of all the licenses and the dependencies that you have.
That’s key for vulnerability scanning. But if you have a platform that provides the services, you can also automatically, whenever you push and you change, get a report, find out right away if maybe you’re using a dependency with the wrong license or a license that is not allowed by your organization. There’s lots of attention these days around compliance and security, especially in the public sector. So, getting these capabilities already inside the inner loop … again as services, not as something that developers are responsible for … I think that would really improve the overall security posture of the software.
Olimpiu Pop: Okay.
Kevin Dubois: Yes, I definitely agree with that. I think also integrations into your development cycle itself, right? Into your IDE when you’re importing, when you’re adding a new dependency to your code that you immediately get a notification like, “Hey, this dependency is not up-to-date”. Or you open a project or you start writing code and it is able to scan and say like, “Hey, you’re doing something that you aren’t supposed to do”. Those are the things that we mean by shift lift, right? It’s not like the developer gets all this extra stuff that they need to now work on. It’s just like, here’s a solution for you, instead of here’s a problem for you that you now need to solve as well, right?
Olimpiu Pop: This sounds more like … somehow, a lot of things appeared in the last couple of years. We just dropped things like flux, and we discussed the developer experience, the platform, and quick feedback loops and all the other stuff. But now, somehow listening to you guys, there is more togetherness in software development because people are working together as a team, but also as companies. And this is pretty much a depiction of that, what you mentioned, that probably you have a platform team that is focusing on all these kinds of things that are being built. And then on the other side, you have the tool that allows you to act quickly at the developer level.
Now, given that Kevin is in Brussels, how will the Cyber Resilience Act play with this? Because for a long period of time, people are terrified of what that might bring, but now, as Thomas was mentioning as well, you can generate your BOMs quickly directly from the toolset that you have, and that will allow you to act quickly in terms of other issues as well. So, any insights on those?
Kevin Dubois: Yes, so I’ve worked with customers here in Brussels, but it’s, of course, just not just in Brussels. It’s across Europe. It’s also in the US, depending on what kind of organization in the US. But they need to provide the origin of what kind of software. So, the software bill of material. They also need to make sure that in terms of the container images, what you’re using in those images comes from reputable sources, or that, at least, you can trace it back also that you signed those SBOMs in those containers so that you know during your build process it was really you who provided those things.
It’s not somebody who then snuck into your build pipeline and started doing some malicious stuff. And when we’ve started talking about this a few years ago, it was this huge kind of hurdle for organizations, like, how are we going to do all this? It’s like maybe there’s a tool to create SBOMs and maybe there’s a tool to sign images. There’s the project six store and different projects in that space. But to bring that all together was quite the challenge.
So, what we’ve seen, especially the last year, year and a half, that there’s different solutions. Red Hat provides a solution, of course, but other organizations and different products provide those solutions too to kind of automate that during your pipeline. So, really, it’s a relatively straightforward process, and I think we kind of have these regulations to thank for that because this was really needed, especially as everything becomes more automated and systems become more used for critical data that we’re sending around our countries and to different people, it’s important that we do really think about what is being used and how is it being used, and if we find that there’s an issue that we can quickly trace it back and then we can fix it. Not like a couple of years ago with Log4j, where it took organizations months, if not more, to find out where is it actually being used and then being able to finally track it back and fix it. So, I think that was also a good wake-up call for organizations. We need to automate security as much as everything else.
Olimpiu Pop: My take on this is that the industry is coming of age. We kind of realize that software is eating the world, but we have to take the responsibility that we need to, especially in these days. And, yes, I think it’s a good step forward that we have some legislation in place that forces a couple of the things to be done, especially that it’s really a lot of things are happening out there. I know, I think the last statistic was that around 90% of your application is actually built by somebody else because it’s open source. So, you are just importing stuff that you don’t actually know.
Kevin Dubois: And with AI probably even more so, right?
Olimpiu Pop: Exactly. Yes.
Kevin Dubois: You’re even having our-
Olimpiu Pop: You read-
Kevin Dubois: Yes.
How to integrate generative AI in your Java application and still play by the rules [21:32]
Olimpiu Pop: You read my mind. So, given that Thomas mentioned that he’s putting a lot of effort nowadays into getting AI into Spring and more, let’s say, traditional type of building applications, somehow it’s against the current because AI is big. And then we are discussing about frugality and we’re discussing about faster startup times and the quick loops, and then we are discussing about transparency. And AI is not always transparent
Kevin Dubois: Or green.
Olimpiu Pop: Exactly. So, how can you still have the AI experience put in the small Java box or virtual machine or whatever and still be efficient, Thomas?
Thomas Vitale: Yes, I guess most of the time these days when we talk about artificial intelligence, we talk about large language models and generative AI. That, of course, is just a part of AI. But what we’re doing these days, I guess we have two areas. One is AI assisted development, so using AI tools to help us develop software. But what frameworks like Spring AI or LangChain4j do is allowing us to integrate Java application with AI capabilities.
A great thing that I like about this most recent wave of AI is this consolidation of HTTP APIs in front of the model inference services, meaning that all of a sudden you don’t need to use Python anymore, but you can use your favorite programming language, the one you’re the most comfortable with or the most productive with, and just by coding an HTTP API, you can consume these services. I feel like that was one of the key thing that enabled, at scale, all developers suddenly to be able to consume these machine learning services. And I think that’s really a powerful concept.
So, from that point of view, considering Java is already established as an enterprise tech stack, it’s yet another integration. We have many other types of integration, but of course that comes with lots of challenges. The main one is we cannot rely on large language models 100%. They’re not deterministic. So, one thing is the integration part.
From a technical point of view, Java is really well-equipped to solve that. But the other part is if we want to really use this in production, how can we validate what it’s doing? As you said, it’s a black box. So, there’s a lot of research going on right now about explainable AI, like, okay, we have this machine learning model, we don’t know how it works. We don’t know what’s inside. And there’s lots of research happening just to introduce some more determinism as much as possible into explaining why a certain result is outputted from the model.
So, I feel like that’s the main challenge if we consider a production perspective to reach something that is production-ready, especially in industries that are highly regulated or highly sensitive. I’m thinking about healthcare, for example. We really have to be careful when we start thinking about production use cases.
Olimpiu Pop: Okay. And you had an interesting point here. Obviously, it’s not deterministic and it just gives you a broad range of results. At some point, it can mean the same thing, but in other words. And then we are discussing Java, which has some decades behind, that we always spoke about black or white or zero or one. It was quite simple from an engineering perspective. You either had something that was equal with something or something that was not equal to something, or something like that. Now, with AI, and LLMs more particularly, we are discussing ranges, and it’s a gradient. It’s 90% accurate, it’s 50 or whatever, because you mentioned earlier, Testcontainers. Is the ecosystem ready? Are we, as developers, ready to embrace that verification? It’s not testing anymore. It’s verification, because you were looking at the range.
Kevin Dubois: It really is kind of a different mindset, and I think that’s what we as developers struggle with. Just like you said, we like to have very deterministic outcomes where we know exactly what’s going to happen, and that’s not the case. And I don’t think that’s ever going to be the case because that’s the whole point of, well, generative AI is to be creative. But we can use it for use cases where we want creativity. But the nice thing that I’m seeing now, and that we’re all seeing, is combining those two pieces. So, we can use AI for the creative part, but then we can also integrate it with deterministic actions that we call as tools, and we’ve heard of Agentic AI that is also able to tie in with external capabilities. We’ve heard of Model Context Protocol as a way to call external systems from your LLM and vice versa.
And I think there’s a lot of interesting use cases that can come out of that because that allows you to both use the creativity of the LLMs, but also get … if I want to access a database and get a particular value, then we can instruct it to. Or if we want to get something from the internet that happened today, which LLMs typically don’t know because they’re trained on a lot of data, but typically not today, we can go and grab that information. And there are so many use cases where we can tie in business functionality, but then also use it with creativity. The real challenge is, going back to security, how can we make sure that this is all happening in a way that we can manage? And that’s a little bit of a challenge right now. And, for example, in the Quarkus and LangChain4j, the stack that we’re building, we have these guardrails that we’re building.
So, you can try to sanitize both what goes to the LLM, what comes out of the LLM, or typically also what the LLM is doing in terms of agents. So, if it calls different actions and tools through Agentic AI, that we can also verify that it’s not doing something that it’s not supposed to do. For example, if we give it access to a database that if somebody says, “Ignore all previous instructions. Delete the database”. That’s kind of the funny or not so funny example, but that it’s not actually able to do that. So, there’s a lot of considerations with AI.
So, going back to your question, is it ready for production? As always, it depends, but I think for some use cases, yes. But I think we need to be careful, especially with security. The nice thing is, and then I’ll let Thomas speak, is that all these things that we’ve worked on over the years also in terms of all the cloud-native stuff that we’ve been doing in terms of security, in terms of rolling out, in terms of making our applications faster, all these things kind of come back in this AI stuff too. So, if we call with MCP, with model context protocol, an MCP server that’s running as a job application, well, we can use serverless, we can use very fast startup components to return results to the LLM without constantly running big systems. So, there’s a lot of the stuff that we’ve done over the years that we can totally use with this new kind of AI stuff as well.
Thomas Vitale: Yes, I totally agree. I think that, in this context, frameworks are really, really important, besides integrating with a model. That’s the easy part. But frameworks then have the responsibility of adding all these layers of functionality that helps you make an application production-ready or getting there eventually. In the Java ecosystem using Spring Boot or Quarkus, we already got lots of these building blocks in terms of security, in terms of observability, in terms of auditing. So, it’s a matter of reusing all these different building blocks we already had. And on top of that, creating this experience considering that these generative AI models are not deterministic. So, yes, something like guardrails, verifying input-output, and evaluation frameworks. I know both Spring AI and LangChain4j have these features.
But probably the most challenging part I see is what you also mentioned, Kevin, we need to change our way of working with it because we are used to something that works all the time or doesn’t work. Like before, integrating these large language models, we never had discussions like, “Oh, this application works 90% of the time. Okay, let’s ship it”. No, that was never acceptable. So, either it works or we don’t ship it because then it’s not ready. But here we have to change the way we think about it, even the way we test it. Testing drastically changes. So, how do we test these applications? And that’s also why, since we cannot rely on these models, we need these additional strategies that also influence the user experience, keeping humans in the loop. We cannot automate everything, so we need to design these steps where we ask the end user to validate some of the content or some of the answers from the models before we move on, to bring a bit of safety and more determinism into the whole AI workflow.
Olimpiu Pop: Interesting enough, I had a similar conversation, or more targeted on the generative AI space and what that means, with Jade Abbott. She’s the CTO of Lelapa AI, and they are focusing a lot on generative AI into spaces that are not exactly that powerful in terms of infrastructure, mainly in Africa. And what she was mentioning is that she had two parts of her brain. One of them is an engineering part of the brain that’s zero is zero, and one is one, and then it’s the machine learning expert where you have a spectrum, a statistic. It’s zero point something and something like that. And then she was mentioning the definition of the bug. What’s an engineering bug? Where one should be equal to one, while in the machine learning space, the AI space, it’s more of a gradient. It’s 50% a bug, it’s 55% of the bug. And I think that’s something that we have to understand.
Do we have solutions for running LLMs locally? [32:11]
And going back to the point that we had earlier, Thomas, when we were saying that long are the days when we had to use … I don’t know, HSQL or H2 SQL for our testing purposes. The L in LLM, the first L in LLM stands for large. Most our machines are not able to host a large language model for the inner loop. Do we have an option or how should we proceed with that?
Thomas Vitale: That’s a very good question. I think maybe two years ago when this whole new trend started, it was indeed a problem. But I feel like right now it’s less of a problem. More and more models are getting smaller, while providing maybe half the size they were last year, providing the same performance though. So, it’s becoming easier and easier to run models on your local machine. And I really think that that’s going to be an ever bigger trend in the future. Running models on your machine, that solves several issues. One is the developer experience, of course, but also in terms of being green and the energy consumption. We are not relying on Cloud resources explicitly for that. I’m already using my machine, I’m already using the resources, so let’s use it also for that. So, I don’t need extra computation power, extra infrastructure deployed somewhere.
I was on a plane recently, no internet connection, and I was just coding. I could use local models thanks to Ollama, both for AI assisted development, but also integrating my application with AI capabilities. So, I feel like we are already there, but I think it is only going to get better from here on. That would be my forecast. I don’t know if you agree, Kevin?
Kevin Dubois: Yes, no, absolutely. I had the exact same experience a couple of weeks ago when I was on a long flight and I really was counting on there being internet. And then you get on the plane and the first thing they say like, “Oh, sorry, the WiFi doesn’t work this time”. And it’s like, ugh. And just like you, I was like, “Oh, wait, I can just use an LLM on my local machine”. And I was able to do most of my work. Even with the code editor, I think I was using Continue or something, and I think I was using Podman AI Lab, but kind of the same inference, running a small model on my local machine. And it worked relatively well. The only kind of funny thing is that I was testing it out and I was like, “Who’s Kevin Dubois?” And it answered saying like, “Oh, he’s a famous soccer player from Belgium”. So, at least it got the Belgium part.
But going back to what you were saying, Thomas, about verifying what the LLM is actually producing. So, some models you can actually ask them, where did you get that source? And so I asked that and it was very convincingly gave me a URL for this famous Belgian soccer player called Kevin Dubois. And I’m like, wow, okay, maybe this person does exist, but I don’t have internet so I can’t really verify. And then when I landed, I looked up the link and it didn’t work. So, it’s very convincingly trying to tell you about facts that sometimes aren’t. But aside from that, no, it was working really well to generate some stuff. And then, of course, in terms of code, you need to hold its hand because it’s still kind of a very junior developer, but it is fascinating to see it all work. Yes.
Olimpiu Pop: I didn’t intend to ask you this, but given that the vibe led us there, I have to ask-
Kevin Dubois: Did you say vibe on purpose here?
How to ensure the quality of the code merged in the Gen AI frenzy era [35:51]
Olimpiu Pop: Yes, yes, of course. You caught me. So, we spoke already about AI and the way how we integrate it in the application and that we have to be careful with. The other side is that, as you mentioned, generative AI is very useful in terms of generating code. And now it’s so easy to just be led and just let it go, especially when you have hard deadlines and the things that you have to make sure that they are getting to production. But how can we make sure that we use all the tools that we have built until now as an industry to make sure that what we put together is really at the quality standard that we would like, so that we avoid all the things that we mentioned before, besides the quality issues, the supply chain attacks, because that’s quite easy to push with generative AI.
Thomas Vitale: Oh, that’s a really good question. You want to go first, Kevin?
Kevin Dubois: Sure. I mean, I think the role of the more senior developer is even more important than it already was. I think it already was important, as a mentor. In this case, the senior developers have lived the pre-generative AI world, so they know what to look for and they know … well, ideally, at least what the gotchas are, and they’ve seen our evolution towards better automation and making sure that all the boxes are checked for deploying to production.
What I am a little bit worried about is the junior developers that are coming on, like, oh, yes, they’re very good at using these tools, which is great. I think that’s a really big asset, but I think it’s up to the senior developers to also take them under their wing to make sure that they’re still doing it the right way and that they don’t just rely on what the code is that’s being generated by these tools because they tend to be … and maybe it’ll get better, but they still tend to be quite verbose and aren’t necessarily using the latest kind of tools or programming paradigms that are out there because they’ve been trained on previous data, and then they kind of train themselves on that data that they’re also producing.
So, there’s a significant role for the senior developers, and I think there’s maybe a gap between the junior developers and then getting to that senior level that we might see. But, yes, we’ll see
Thomas Vitale: Yes, in terms of tooling, but also in practices, I’m a bit worried that we tend to forget about them too much. Also, from a organization management perspective, lots of money has been assigned now to AI. Everybody’s using AI. More and more people are encouraged or even forced sometimes to use AI. But we have a wide variety of tools already and practices established over years and years of software engineering that I feel like sometimes we are forgetting.
For example, we know about test-driven development, starting from a small test in order to validate the design and then move to the implementation. And now what we’re using AI for is I implement the business logic and then I generate the tests because writing test is boring. And then, of course, they all eventually pass after a few rounds. But then are we testing the requirements? So, are we just testing that implementation the model came up with is correct?
I see that happening more and more. And I feel like we should talk more about doing the opposite. I write a test, and then AI writes the implementation, maybe? But, in general, I hope we rediscover the existing tools and we don’t forget about them.
For example, in the cloud-native space, we talk a lot about platforms and one feature, a very convenient feature about platform … very good platform at least … is this capability of bootstrapping new project. Think about something like Backstage, a portal where a developer can go. Maybe I want to build a new web application and then I choose a certain template and I bootstrap a new project with all the guidelines, all the compliance, all the rules that that specific organization has decided to be compliant with, and I can immediately run the project. Maybe I’m using Quarkus or Spring Boot, so I just do git clone and run. Don’t have to install anything else.
Instead of doing that and maybe becoming better at leveraging those tools, those cloud-native integrations … so, also to auto-generate pipelines based on something deterministic … there’s this trend of let’s just have the AI model generate everything from scratch every time. Okay. It might work.
At least for demos, it looks quite impressive. But when we consider enterprises, when you consider production-ready scenarios, I’m a strongly supporter of still maintaining a certain focus and also financing the advancement of all the existing tools and practices that we know of that result in good software engineering practices because we have lots of tools available for doing … even security checking, for example, for migrating a project, upgrading to new version, there are tools like OpenRewrite, there’s Konveyor. We have tools that can deterministically, and in an auditable way, do lots of these tasks. And sometimes I feel like we’re skipping … like, we know about automation, we’ve been doing automation for so many years, but now we just forget all about it. And any automation is using large language models, but that’s not always a good answer.
So, let’s not forget about all the great things that happened in the past few years. Let’s just use large language models as an additional tool in our toolbox, not just as the only tool in our toolbox.
Olimpiu Pop: Yes, that’s my feeling as well is that now we have the tendency of just going and using whatever large language model you have close by to do basic stuff. That until yesterday, you did it by just thinking about it. For instance, I was just, one example is I know I had the phrase and I was curious about how many words are in that one. So, I just dropped it in ChatGPT, and obviously that’s too big a hammer to do it, while you can do it with wc and just have it in a very plain terminal window and just get all the information that you need. So, I think that’s important. And my feeling is it also, it’s pretty much how the search engines, or Google, used to be when I stopped using bookmarks because I had a couple of words, keywords in my mind, and I was just easily finding that thing. But it’s very expensive because it’s pointless to just go all the way there.
Does the sovereign cloud tendency change anything? [42:33]
But basically what you both are saying is you have to be focused on best practices that we learned through the experience of other generations and make sure that our focus is close to that one. So, rather than treating the generative AI as a new colleague in the team that has a different status, it has to go to the same quality gateways as normally, like proper code review, all the checks that you normally have when just pushing the code through production. And again, there is another topic that I have in mind that I wasn’t about to ask you, but, well, it just pops, so I have to, because you’re discussing about the cloud native. The tendency nowadays in cloud is sovereign cloud. So, how do you think all the practices that we develop in the cloud-native ecosystem will change, if they’ll change in any way, with this new tendency?
Kevin Dubois: Yes. Do you want to explain sovereign cloud for maybe the listeners that don’t know it? I mean-
Olimpiu Pop: Well, go ahead. You’re closer to the source than I am, so go ahead.
Kevin Dubois: I mean, hopefully I get it right. But it is basically meaning that we want to make sure that our clouds aren’t running on some cloud provider, that if something happens, Lord forbid, especially in the US, that we as European consumers of these clouds, it doesn’t mean that our data is going to be accessible or, even worse, not accessible by us anymore. I mean, I think that’s kind of the idea.
And I think that’s exactly where our work in the cloud-native space is so great because it’s all open source, or at least most of it is. And it’s sharing these tools so that if something happens, I mean, let’s say Kubernetes is still there, right? The same deployment techniques that we’re using to deploy to one cloud provider we can use for another cloud provider. So, if we build our systems the right way without lock-in to specific providers, it means that we can move our systems around. And then hopefully we also are keeping that in mind for our data and not just go all in with one provider where if something happens, whether it is for regulation purposes or for any kind of other reason, that still remains possible. Right?
Thomas Vitale: Yes, I agree. I feel like all these cloud-native technologies that over the past few years became more and more popular, they are actually key enabler for this strategy here because you can run Kubernetes across many different providers, infrastructure, on-premises, on Edge, in the public cloud from whatever country, and lots of tooling around Kubernetes also working that way. They’re open source. They are not vendor-specific. So, of course, each cloud provider adds lots of value in terms of convenience, in terms of services.
But all the core infrastructure and platform and tools around development, delivery, deployment, observability, they are all open source projects. There are open specs. I’m thinking of, for example, about OpenTelemetry where you can use these common APIs supported by all observability vendors, so you’re not locked on a specific vendor, but you can move. So, the impact on the business investing in cloud-native technologies, it provides lots and lots of benefits for many different reasons, but not specifically for reaching this digital sovereignty. I think it’s a key enabler.
Olimpiu Pop: Okay, thank you.
Kevin Dubois: Yes.
Olimpiu Pop: So, basically what we are saying is everything that we put together as an industry, basically how the Java ecosystem worked for a long period of time where you had a definition of an interface and then you had the implementation behind the scenes that allowed you to move from one side to another is happening now in the cloud-native space and all the things that would build around the CNCF, the Cloud-Native Community Foundation, allows you to be quite independent from a particular vendor that will allow you to move from one side to another and remain in an arm’s reach, so to say, with these points. Okay. Is there anything else that I should have asked, but I didn’t?
Thomas Vitale: And maybe just a comment on what we just talked about. I think that whole conversation around cloud-native technology also include the new developments in AI. So, lots of machine learning models are available as open source. I’m thinking about Mistral AI from France. They publish their models as open source, Apache to license. So, you can run them on your local machine on premises, or you can use the cloud platform, but also from that perspective, that ensures portability. So, you’re not locked on a specific platform or infrastructure vendor, but the combination of open source and open specs, I believe, is a key combination for this entire discussion.
Kevin Dubois: Yes, exactly. And there’s a variation of open source in the AI space right now too, right?
Thomas Vitale: Oh, yes.
Kevin Dubois: Because there’s some models that are like, “Oh, yes, it’s an open source model. You just need to agree to our license and then make sure that you agree to where you can use it and how you can deploy it”. Yes, of course, we need to be a little bit careful with that. And then open source, in my opinion, also means that we have visibility into the sources. So, what was it trained on? What were the weights that it was using? So, that’s where we see some questionable definitions of open source. And I think that’s, especially in terms of the sovereignty, it’s going to be a really important topic.
So, a little shout-out to IBM because they’re really working on models that are completely open, where they disclose the sources that they’re trained on and the weights with their Granite models. And I know that there’s quite a few organizations that are really looking for that kind of real open source models, because, otherwise, well, if we use our models and it generates some stuff that it shouldn’t, because it’s trained on proprietary data, an organization doesn’t want that to happen. They don’t want to be sued because they’re now disclosing information that was trained on, let’s say, Thomas’s books that … that it shouldn’t do that, right? So, I think we’re going to see a lot of-
Thomas Vitale: Well, that happened but it’s fine.
Kevin Dubois: Yes, exactly.
Thomas Vitale: But actually that lead us back to … we were talking about supply chain security, and this is part of that discussion. For example, CycloneDX has a spec also for generating this or providing bill of materials, also, for machine learning models, because it’s really important to know we need transparency. We need to know the data set that has been used for training. We need to know all the ways … It’s not easy right now. There’s lots of … well, let’s call them discussions and arguments around it, but for organizations, it’s not easy at the moment to find the specific models that give you that same safety that you rely on a standard open source project because it’s different. It’s not just the software. There are so many moving parts and there’s still not enough clarity. So, I see that as a bit of a challenge we need to overcome soon if we want to move forward in production use cases.
Olimpiu Pop: Yes, that’s true. And what I was thinking … well, closing the gap with what we discussed earlier, that we have more than the BOMs. And in terms of what you actually get, we have also six store when you have a signing for your libraries, and then there also the reproducibility part of it. And I think that’s probably the next step with the open source AI. It’s now getting a lot of momentum. You see Hugging Face has a growing number of millions of models available, but what’s actually missing are these parts. What data was it trained on? What are the parameters and all the other details? That’s probably the next step. And somehow I’m calmer now with the European legislation that is coming together because it creates some safety nets from that angle.
Thank you, guys. Thank you for your time.
Thomas Vitale: Thank you.
Kevin Dubois: Thank you.
Mentioned:
.
From this page you also have access to our recorded show notes. They all have clickable links that will take you directly to that part of the audio.

MMS • Sergio De Simone

Docker launched a new feature to let developers define, build, and run agents using Docker Compose, with the aim of streamlining the agent development process and reducing repetitive tasks. Additionally, Docker Offload, now in beta, provides a way to seamlessly offload building and running models to remote GPU compute.
Adding support for defining agents using Docker Compose is a further step in Docker’s strategy to position itself as a key tool provider for agent development, much like it did for container-based development. As Docker’s Mark Cavage and Tushar Jain note, this means simplifying repetitive and tedious tasks that agent development typically involves, such as iterating with different models, securely connecting to MCP tools, and packaging everything so teammates can easily run the same workflow.
The new feature allows developers to declare open models, agents, and MCP tools in a compose.yaml
file, then build and run them using docker compose up
. Docker Compose integrates with many current agentic frameworks, including LangGraph, Embabel, Vercel AI, Spring AI, CrewAI, Google ADK, and Agno.
To help developers get started with using Docker Compose for agent development, Docker has created a GitHub repository with sample projects for all supported frameworks. For instance, one example shows how to build a collaborative multi-agent fact checker using Google ADK:
The Critic agent gathers evidence via live internet searches using DuckDuckGo through the Model Context Protocol (MCP), while the Reviser agent analyzes and refines the conclusion using internal reasoning alone. The system showcases how agents with distinct roles and tools can collaborate under orchestration.
The corresponding compose.yaml
file defines two services, adk
and mcp-gateway
, and includes a models
section listing used models along with their arguments. Docker introduced the possibility to package and run local models with Model Runner in Docker Desktop 4.40, but you can also use remote or cloud-based models by providing the appropriate credentials.
Docker Compose modular architecture makes it easy to create compose overrides for multiple agent configurations. For example, you can define a variant that uses OpenAI instead of a local model, or one that targets Google Cloud Run. This lets you combine multiple compose files to easily switch your agent setup:
docker compose -f compose.yaml -f compose.openai.yaml up --build
Another new feature for agent development supported in the latest Docker Desktop is Docker Offload. This fully managed service can be used as a drop-in replacement for Docker Model Runner when local resources are not sufficient, allowing developers to run models and containers on a cloud GPU transparently using the same workflow as for local deployment.
Docker Offload frees you from infrastructure constraints by offloading compute-intensive workloads, like large language models and multi-agent orchestration, to high-performance cloud environments. No complex setup, no GPU shortages, no configuration headaches.
The service is currently available in beta, and Docker is providing 300 minutes of free usage to help developers get started.

MMS • RSS
Shares of MongoDB, Inc. (NASDAQ:MDB – Get Free Report) have been given an average recommendation of “Moderate Buy” by the thirty-five brokerages that are currently covering the company, MarketBeat.com reports. Eight research analysts have rated the stock with a hold rating, twenty-six have assigned a buy rating and one has assigned a strong buy rating to the company. The average twelve-month target price among analysts that have covered the stock in the last year is $282.39.
Several equities analysts recently weighed in on the company. UBS Group raised their price objective on MongoDB from $213.00 to $240.00 and gave the company a “neutral” rating in a research note on Thursday, June 5th. Daiwa America raised MongoDB to a “strong-buy” rating in a research report on Tuesday, April 1st. William Blair reaffirmed an “outperform” rating on shares of MongoDB in a research report on Thursday, June 26th. Guggenheim lifted their price objective on MongoDB from $235.00 to $260.00 and gave the company a “buy” rating in a research report on Thursday, June 5th. Finally, Loop Capital downgraded MongoDB from a “buy” rating to a “hold” rating and lowered their price objective for the company from $350.00 to $190.00 in a research report on Tuesday, May 20th.
View Our Latest Stock Report on MongoDB
MongoDB Stock Down 0.5%
Shares of MongoDB stock traded down $0.98 on Monday, reaching $201.53. The company’s stock had a trading volume of 1,482,272 shares, compared to its average volume of 2,310,108. The company has a market capitalization of $16.47 billion, a P/E ratio of -176.78 and a beta of 1.41. MongoDB has a 52-week low of $140.78 and a 52-week high of $370.00. The stock has a 50-day moving average price of $199.07 and a 200-day moving average price of $213.69.
MongoDB (NASDAQ:MDB – Get Free Report) last announced its quarterly earnings results on Wednesday, June 4th. The company reported $1.00 EPS for the quarter, beating the consensus estimate of $0.65 by $0.35. MongoDB had a negative return on equity of 3.16% and a negative net margin of 4.09%. The business had revenue of $549.01 million for the quarter, compared to analyst estimates of $527.49 million. During the same quarter in the prior year, the company posted $0.51 EPS. The company’s quarterly revenue was up 21.8% on a year-over-year basis. Sell-side analysts forecast that MongoDB will post -1.78 earnings per share for the current fiscal year.
Insider Activity
In related news, Director Hope F. Cochran sold 1,174 shares of the business’s stock in a transaction that occurred on Tuesday, June 17th. The shares were sold at an average price of $201.08, for a total value of $236,067.92. Following the completion of the transaction, the director owned 21,096 shares in the company, valued at $4,241,983.68. The trade was a 5.27% decrease in their position. The transaction was disclosed in a document filed with the SEC, which is available at the SEC website. Also, CEO Dev Ittycheria sold 25,005 shares of the business’s stock in a transaction that occurred on Thursday, June 5th. The stock was sold at an average price of $234.00, for a total transaction of $5,851,170.00. Following the completion of the transaction, the chief executive officer owned 256,974 shares of the company’s stock, valued at $60,131,916. This trade represents a 8.87% decrease in their position. The disclosure for this sale can be found here. Over the last 90 days, insiders sold 32,746 shares of company stock worth $7,500,196. Insiders own 3.10% of the company’s stock.
Institutional Investors Weigh In On MongoDB
A number of institutional investors have recently made changes to their positions in MDB. Jericho Capital Asset Management L.P. bought a new position in MongoDB during the 1st quarter worth about $161,543,000. Norges Bank bought a new position in MongoDB during the 4th quarter worth about $189,584,000. Primecap Management Co. CA increased its holdings in MongoDB by 863.5% during the 1st quarter. Primecap Management Co. CA now owns 870,550 shares of the company’s stock worth $152,694,000 after purchasing an additional 780,200 shares in the last quarter. Westfield Capital Management Co. LP bought a new position in MongoDB during the 1st quarter worth about $128,706,000. Finally, Vanguard Group Inc. increased its holdings in MongoDB by 6.6% during the 1st quarter. Vanguard Group Inc. now owns 7,809,768 shares of the company’s stock worth $1,369,833,000 after purchasing an additional 481,023 shares in the last quarter. 89.29% of the stock is owned by institutional investors and hedge funds.
About MongoDB
MongoDB, Inc, together with its subsidiaries, provides general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
Read More
Before you consider MongoDB, you’ll want to hear this.
MarketBeat keeps track of Wall Street’s top-rated and best performing research analysts and the stocks they recommend to their clients on a daily basis. MarketBeat has identified the five stocks that top analysts are quietly whispering to their clients to buy now before the broader market catches on… and MongoDB wasn’t on the list.
While MongoDB currently has a Moderate Buy rating among analysts, top-rated analysts believe these five stocks are better buys.

Looking for the next FAANG stock before everyone has heard about it? Enter your email address to see which stocks MarketBeat analysts think might become the next trillion dollar tech company.

MMS • RSS

In a recent in-depth interview, we had the privilege of speaking with Deepak Dastrala, CTO of Intellect Design. Deepak provided a comprehensive overview of Intellect AI’s mission, its innovative use of Artificial Intelligence in financial services, and the strategic partnership with MongoDB.
MongoDB’s Role in AI-Powered Full-Stack Modernisation
This is an important point. We developed this data platform, commencing its journey approximately eight years ago. Initially, our use case focused on commercial insurance, specifically underwriting. In North America, the underwriting of commercial insurance involves some of the most extensive use of data to make critical decisions, such as whether to underwrite particular corporate clients across various lines of business. This could involve property policies or worker compensation policies for employees.
When we began this endeavour, we sought a suitable database for managing unstructured data. Our collaboration with Mongo commenced eight years ago for this purpose. Given our unstructured use cases, MongoDB’s document database was an appropriate fit. Throughout our AI journey, MongoDB has become an integral component of our data platform.
In the last couple of years, while continuing my work with AI, I also assumed responsibility for our wealth business and capital markets business. This provided an opportunity to identify friction points within our wealth platform. Our wealth platform is utilised by some of the largest banks and sovereign wealth funds. The trigger point for this particular use case emerged from challenges with our analytics, where we experienced delays in completing data loads and encountered performance bottlenecks. Having previously worked with unstructured data, I understood that the complexities and challenges associated with it are considerably greater than those with structured data use cases.
It then became clear why we were addressing this problem in a siloed manner. Historically, MongoDB has been primarily recognised as a document database, without widespread understanding of its relevance for OLTP (online transaction processing) and OLAP (online analytical processing) use cases. This led to my collaboration with MongoDB. We aimed to demonstrate how MongoDB is equally, if not more, relevant for structured data use cases, transaction use cases, and analytics use cases. My objective was not merely to resolve the immediate analytics problem, but also to future-proof our data strategy for the wealth platform and for future AI initiatives.
Therefore, I began working with MongoDB on the modernisation journey of our wealth platform. We determined how it not only addresses current concerns regarding transaction and analytics use cases, but also how it can be made relevant for AI use cases. In this manner, I perceived an opportunity to advance significantly in the AI journey. The goal was to establish our wealth platform as a system of record and a system of engagement. With this unified data platform approach, we can also transform it into a system of intelligence. This outlines the strategy.
Development Level After MongoDB Collaboration
Deepak Dastrala: Regarding the analytics use case that I previously referenced, upon completing the modernisation, the immediate observation was a performance enhancement of over 50 to 70%. Furthermore, end-of-day loads, which previously required more than four hours, can now be completed in less than 50 minutes. Third, the total cost of ownership, compared to our prior choice, represents only 30% of the annual expenditure for that infrastructure. These three metrics represent a direct, like-for-like comparison. Additionally, we are now able to leverage the platform for AI use cases, a capability not present with our previous database.
Improvements in Batch Processing Framework and Architecture
The initial problem was a purely ETL (Extract, Transform, Load) approach, necessitating processing at a specific time of day. This presented a fundamental constraint. Secondly, the prior use of a SQL loader imposed a very rigid schema. Thirdly, PL/SQL was entirely coupled and primarily executed sequential processing. All of these components have been replaced with MongoDB aggregations. This means we no longer need to await the end of the day. We are capable of near real-time streaming, encompassing both data change capture and aggregations. This has resulted in a 70% increase in speed, for a direct comparison. More significantly, we can perform these operations during daytime hours, as a key advantage of MongoDB, which many underestimate, is its optimal performance for real-time dashboards.
Scalability and Operational Efficiency for Customers
From our perspective, many of our customers are cloud-based and distributed across various regions. The expectation is that we can independently scale OLTP (Online Transaction Processing) or vector use cases without concern for the entire system. Particularly with vector use cases, given the volume of incoming data, extensive prior planning is often not feasible. We must operate in near real-time. That, in my assessment, constitutes a significant differentiation.
Future-Proofing AI Solutions and Data Trends
That is an excellent question. The true power of MongoDB resides in its flexible schema. This eliminates concerns about specific data types or rigid structures. In conventional systems, the data model is paramount, and any modification to it can trigger a ripple effect throughout the application. In this scenario, however, the flexibility to even alter the schema at runtime provides significant advantages.
Allow me to offer another illustrative example. Consider a structured data model where specific entities can be tagged for an AI use case. This implies that you can embed the document for that entity and also treat that particular entity as unstructured data, thereby making it more relevant and useful for an AI application. This enables a hybrid nature of unstructured data within a structured framework.
Partnership Benefits Beyond Technicalities
With AI, every business, whether technology-oriented or otherwise, undergoes a fundamental re-evaluation. The critical aspect is speed to value—the ability to evolve rapidly and address market needs. If speed to value is the primary driver for all, then a partner capable of innovating at the same pace is essential. For instance, the concept of MCP (Model Context Protocol) in AI did not exist a few months ago, yet MongoDB is already supporting it.
Secondly, one must begin to consider what an AI-native database truly entails, because, unlike before, embeddings are now central. In the future, everything will likely be stored as embeddings. MongoDB’s acquisition of OHAI and its integration of native embedding models and re-ranking models help us reduce the cost associated with utilising third-party solutions. For me, to maintain a competitive business advantage, my partner must also remain at the forefront of innovation. Failure to do so would be detrimental. That is one key area.
A second area pertains to our global business, with clients worldwide. In my personal view, operating in AI without a cloud model presents significant challenges in managing data scale and scaling requirements. Each of our clients maintains preferred cloud providers due to long-term commitments. Therefore, my choice of database must coexist with their priorities. Whether they operate on Azure, GCP, or AWS, my database must support all these environments and adhere to region-specific compliance. This is where MongoDB assists us in capitalising on global opportunities within our clients’ preferred cloud environments.
Security Layers in BFSI Data Handling
It is important to note that a key differentiation lies in query capabilities. Primarily, almost everything is encrypted to a significant extent, and MongoDB’s ability to query on encrypted data is, in my assessment, unparalleled among competitors. Typically, data is encrypted, then decrypted for querying. However, the ability to query directly on encrypted data without decryption is a notable feature. This is one example I provide.
Generally, security in transit and security at rest are expected. The crucial question, however, is how to meet every compliance requirement. As you rightly stated, operating in the UK and EU necessitates compliance with GDPR. Two aspects are critical: first, what we, as an application or platform, implement; and second, if equivalent compliance, from both a data sovereignty perspective and other considerations, is not inherent in MongoDB, then it would present a challenge for us. MongoDB, by default, provides all of these capabilities. This is why some of the largest banks are their customers; it is considerably challenging in a cloud era to comply with and meet the expectations of banks and financial services, particularly given that regulations in Europe and the UK are far more stringent than elsewhere in the world. I believe this is where MongoDB’s default offerings and native innovations in encryption, compliance, and overall data sovereignty significantly assist us.
Adhering to the Zero Trust Model and Evolving Compliance
Yes, it does. Adherence is one aspect, but the ability to audit and demonstrate compliance to any third party is also crucial. We perform all necessary procedures for this, as the certification process itself is quite rigorous. For example, while much of this discussion pertains to structured data, consider the application to unstructured data.
Currently, as an organisation, we are pursuing ISO 42001 certification. This ensures that our AI system complies with the expected regulations for any AI system. Therefore, we are actively adopting new compliances as they evolve due to AI. Our differentiation lies not in merely restricting ourselves to past compliances but in adapting to the evolving regulatory landscape, which is critical, as every system will, in some form, become an AI system due to the inherent nature of the changes occurring. Technology is evolving. We cannot rely solely on a single compliance framework.
Intellect AI’s Future Perspective and AI Expansion
Firstly, having initiated the development of AI products significantly earlier than others and having operationalised them at scale, we possess a profound understanding of what change management entails in an AI-driven environment, as opposed to a typical digital one. The transformation is significant because, historically, especially in highly regulated industries like BFSA, the digital journey was expected to be deterministic. This means if you anticipated a value of one thousand rupees, it should be precisely one thousand rupees. With AI, however, it might indicate “with 99% confidence, it is one thousand.”
Therefore, how to foster business acceptance of this probabilistic journey is a key focus. We ensure that change management is an integral, thoroughly considered process for businesses, particularly enterprises, to adopt AI. This is an area where we possess understanding and continuously strive for improvement. Without this focus, everything risks remaining in the Proof of Concept (POC) phase, never reaching production. That is our primary priority.
Secondly, we maintain an active partnership with Columbia University. Together, we established a lab named DAP lab, which stands for Data Agents and Process. The objective of this lab is significant because currently, there is an excessive focus solely on models, often overlooking the broader picture of what is required to build an AI-native enterprise. For example, I believe we are currently at a stage comparable to 1996 or 1997 in the internet’s evolution, where we had Hotmail and Yahoo Mail. However, the evolution into internet banking, then mobile banking, and subsequently cloud computing and other advancements, occurred when fundamental business models transformed, not merely with the advent of Hotmail or Yahoo Mail.
To illustrate, while the internet gained popularity in 1995, internet banking only emerged around 2000-2001, taking approximately five to six years. This delay was necessary to establish trust in conducting financial transactions over the internet. Considering this, reflect on the extent of innovation required across the infrastructure space. Currently, all infrastructure, including cloud, is designed for applications, not for agents. In the future, there will be billions of agents. Consider how an agent from an SBA bank can securely and safely communicate with an agent from an Axis Bank to facilitate a transaction. What does trust signify in such a scenario?
All of these aspects are central to our focus. We are entirely concentrated on defining what a trusted infrastructure looks like across data, application, and security domains, particularly for highly regulated industries, to enable trust in AI and operationalise AI at scale. Essentially, our goal is to identify the innovations necessary for any enterprise to become an AI-native enterprise. That is our core focus.