Docker Desktop 4.42 Launches with Native IPv6, Integrated MCP Toolkit, and AI Model Packaging

MMS Founder
MMS Craig Risi

Docker Inc. released Docker Desktop 4.42 on June 10, 2025, enhancing networking flexibility, AI workflow integration, and model distribution. Native IPv6 support now enables users to choose between dual-stack, IPv4-only, or IPv6-only modes with intelligent DNS resolution. Docker claims that the improved connectivity options make Docker Desktop more adaptable to diverse enterprise network environments.

The release incorporates the Model Context Protocol (MCP) Toolkit directly into Docker Desktop, enabling users to discover and manage over 100 MCP servers, such as GitHub, MongoDB, and HashiCorp, without requiring the installation of extensions.

These tools run in isolated containers with signed images and built-in secret management, ensuring a secure and sandboxed environment. Developers can start or stop these servers with a single click or use the new Docker MCP CLI to list, start, or manage services programmatically. For example, commands like docker mcp list, docker mcp start github, or docker mcp stop github offer quick access and integration into automated workflows. Docker’s AI assistant, Gordon, is also integrated directly with MCP, enabling developers to interact with these services using natural language or command-driven prompts, simplifying DevOps setup and infrastructure troubleshooting.
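As an illustration, a typical session with the new CLI might look like the following. This is a minimal sketch based on the commands named above; available server names and exact output will vary by catalog:

# Show the MCP servers available in the catalog and their status
docker mcp list

# Start the GitHub MCP server in its isolated, signed container
docker mcp start github

# Stop it again once the workflow is finished
docker mcp stop github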

AI workflows receive further enhancements as Model Runner gains support for Qualcomm-based Windows devices, integration with Docker Engine on Linux, and an upgraded GUI featuring Local, Docker Hub, and Logs tabs. Developers can now package GGUF-format AI models into OCI-compliant images using the new docker model package CLI command for secure distribution to Docker Hub or private registries.

To do this, a model (such as mistral.gguf) is placed into a directory and packaged using the docker model package command. This creates an image tagged for reuse, such as username/mistral-model:1.0, which can then be pushed to Docker Hub or another container registry. Once stored, the model can be run locally using standard Docker commands or managed through Docker’s Model Runner GUI, which now includes tabs for local models, Docker Hub integration, and real-time logs. These features allow developers to securely build, run, and distribute AI workloads while maintaining portability and compliance across environments.
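Put together, the flow might look roughly like this. It is a hedged sketch: the image name is an example, and the exact flags accepted by docker model package may differ from what is shown:

# Package a local GGUF file into an OCI-compliant model image (flag shown is illustrative)
docker model package --gguf ./mistral.gguf username/mistral-model:1.0

# Push the packaged model to Docker Hub or a private registry
docker model push username/mistral-model:1.0

# Pull and run the model locally through Model Runner
docker model pull username/mistral-model:1.0
docker model run username/mistral-model:1.0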

On social media, Docker advocate Ajeet Singh Raina highlighted the release as delivering “powerful new capabilities with native IPv6 support, a fully integrated MCP Toolkit, and major upgrades to Docker Model Runner and our AI agent Gordon”.

However, some challenges with the new release have also been reported. Since the release of Docker Desktop 4.42, many macOS users have reported significant instability. On Reddit, one user reported that containers became unresponsive and failed with networking errors, including frequent cURL error 35: OpenSSL SSL_connect: SSL_ERROR_SYSCALL messages. They traced the problem to automatic proxy settings; containers worked when proxy detection was disabled, but macOS kept re-enabling it under corporate policies. Reverting to Docker Desktop 4.41.2 reportedly resolved the issue.

On GitHub issue #7698, several users reported that Docker Desktop simply fails to launch after upgrading to version 4.42.0. One user observed that after installation, the Docker daemon wouldn’t start, and attempts to do so returned errors about the daemon socket. Additional open issues reference failures to expose container ports when running behind reverse proxies such as Traefik, suggesting deeper integration problems with Docker’s network layer.

Some feedback on Reddit suggests that Docker Desktop has fundamental performance issues on macOS. One Reddit user noted that Docker Desktop’s memory usage exceeds its assigned limits even when only a single container is running, and that macOS swap usage has ballooned over time. While this behavior is partly due to Docker’s VM-based architecture, community members suggest alternatives like Colima or OrbStack to bypass Docker Desktop’s overhead.

A Docker team member acknowledged the proxy-related errors and requested diagnostic logs to expedite troubleshooting. Meanwhile, GitHub threads remain active, tracking widespread issues with startup failures, window hangs, and networking faults under versions 4.42.x. The volume and breadth of these reports suggest that broader regression testing and swift patches may be needed to restore confidence in Docker Desktop on macOS.

Docker appears to have since largely addressed these errors, so most new users should be able to take full advantage of the update without running into many of these issues.


Anthropic Introduces Economic Futures Program to Address the Economic Impact of AI

MMS Founder
MMS Daniel Dominguez

Anthropic has announced the launch of its Economic Futures Program, an initiative designed to address the economic impact of AI. With the growing influence of AI on global labor markets and productivity, the program aims to provide valuable insights and contribute to the development of strategies for managing AI’s economic shifts. This program extends Anthropic’s existing Economic Index, focusing on empirical research, data-driven policy development, and expanding economic measurement tools to better understand AI’s evolving role in the economy.

The program is structured around three core pillars. The first pillar, Research Grants, provides funding and resources to independent researchers studying AI’s economic effects. Grants will support investigations into areas such as labor market dynamics, productivity shifts, and new forms of value creation enabled by AI. 

The second pillar, Evidence-Based Policy Development, focuses on creating opportunities for researchers, policymakers, and industry professionals to collaborate and evaluate policy proposals. Topics include labor transitions, fiscal policies, and innovation creation, with a focus on data-driven strategies to address AI’s impact on the workforce and broader economy. 

The third pillar, Economic Measurement and Data, aims to expand the Anthropic Economic Index by creating one of the first longitudinal datasets on AI’s economic usage and its long-term effects. This will help track how AI adoption is transforming industries, job markets, and productivity levels. The goal is to create a robust data infrastructure that will support ongoing efforts to understand AI’s economic impact and help inform future research initiatives.

The program also plans to foster strategic partnerships with independent research institutions, providing resources such as API credits to assist in research. These partnerships will expand the ecosystem of research and policy analysis on AI’s economic implications. Institutions interested in collaborating with Anthropic are encouraged to submit proposals detailing their research efforts.

The need for such research is growing as AI’s role in the economy continues to expand. Policymakers and industry leaders are seeking reliable, real-time data to understand how AI is affecting the workforce, creating new job categories, and altering traditional productivity measures. The Economic Futures Program aims to fill this gap by supporting research that can help guide the development of policies that address the challenges and opportunities presented by AI.

The community comments reflect a mix of curiosity, concern, and cautious optimism about AI’s impact on the workforce. AI Educator Andres Franco commented on X:

Well, at least someone is looking into it. Most people don’t realize how dangerous this AI boom is for the job market in its current state.

Meanwhile user @bryanstrummer shared:

AI’s already reshaping jobs – hope this program delivers actual workforce solutions, not just another think tank writing reports about the ‘future of work’. We’ve seen how that movie ends.

Looking ahead, the program seeks to drive an ongoing conversation about AI’s role in the economy, ensuring that society can effectively manage its economic impacts. As AI continues to change the way we work and interact with the world, initiatives like the Economic Futures Program will play a key role in shaping a sustainable, AI-enabled economy.


Podcast: GitHub Next: how their research and prototyping team operates

MMS Founder
MMS Idan Gazit Eddie Aftandilian

Transcript

Shane Hastie: Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture podcast and we are having great fun getting started today, but I’m sitting down with Idan Gazit and Eddie Aftandilian. Did I get that close enough?

Idan Gazit: Success.

Eddie Aftandilian: That was good.

Shane Hastie: Gentlemen, welcome. Thank you for taking the time to talk to us today.

Introductions [00:58]

Idan Gazit: Thank you. Thank you so much for having us.

Shane Hastie: You’re both from GitHub Next. What’s GitHub Next?

Idan Gazit: That’s an excellent question. We’re what would normally be called a long bets team or an innovation team or a labs team. I like to describe us as the department of fool around and find out, because we’re there to try things. It says research on the door, but the reality is that we’re a prototyping team.

Eddie Aftandilian: I mean, when we first started we did a lot of things, but we pretty quickly focused down on AI. And as Idan said, our job is to prototype new ideas and test viability. And if we come across something that looks like it might be viable, help it graduate out and become a real GitHub product.

Shane Hastie: And before we go deeper into the team and the products, who’s Eddie? Who’s Idan?

Eddie Aftandilian: Okay, I’ll go first.

Idan Gazit: Go for it.

Eddie Aftandilian: I’m Eddie. I’m a principal researcher in Next. I lead and manage about half of Next, and Idan manages the other half. My half is sort of the more research-y focused people in Next. So we have people who have backgrounds in machine learning and programming languages. We bring, I guess, the rigor to Next projects. I’m going to step on your toes here, but Idan’s half of the team is sort of more front-end and developer-experience focused. They build really cool prototypes of things and we demo those things and sometimes people get excited about the demos, but in this AI world, it can be very easy to build something that’s cool and actually really hard to make it work for real as a tool. And I see the goal of my wing of the team as taking these cool ideas and making them work reliably.

Sometimes that involves things like building evaluations that we can use to measure how reliably these things work and then we can drive them up. Sometimes it involves taking results from the research literature and then applying them to Next projects. So making this match between things that are already known in the research literature with actual problems that we have. And then sometimes it involves actually coming up with new techniques to solve some problem that we have in a prototype or a real GitHub product. I’ve been in GitHub Next for about four years. I joined at the very beginning of the Copilot project, so I joined to work on Copilot. I was one of the original Copilot team members.

I helped take it from this original hacky prototype all the way to general availability. I worked on it for about a year and a half. I ended up for a while leading quality evaluation and improvement for Copilot for about a year. And then I came back to Next to work on new projects. Before that, I spent 10 years at Google working on internal developer tools. And before that I did my PhD in programming languages at Tufts University in Boston. And so the sort of through line of my career has always been about developer tools, building tools to help developers be happier and more productive.

Idan Gazit: Hi, my name’s Idan. That’s a tough act to follow, Eddie. I currently lead GitHub Next. I joined about five years ago, so just a little bit before Eddie, and for most of the time that I’ve been at Next, he and I have sort of been peer managers running it together. Now I run the group, but we still sort of have our domains of expertise. And like Eddie says, my entire career has also been about developer tools. Prior to GitHub, I was at Heroku and I was a core contributor to the Django web framework and spent a lot of my life in and around open source. Generally speaking, web technologies are sort of my home base, but interaction design, interfaces, user experiences, cognition, perception, these are the things that are generally my job.

And nowadays in the context of GitHub, it’s exactly what Eddie said. It’s bringing both halves of the house to bear on these problems. And I think something that’s interesting about Next is that we don’t execute along reporting lines. It’s not that Eddie’s folks are working on Eddie’s projects and my folks are working on mine and never the twain shall meet. Instead, for most challenges, to overcome them or to really advance the state of the art, we need both halves.

We have to bring in some folks that have the hard background in machine learning, and maybe we need to train a model or build custom techniques into the models, but then we also need to create the right interface to that and use it to elicit information from the humans using it that is going to make the AI succeed at their thing. So it’s this very interdisciplinary squad. It’s a real joy. It’s one of the most unique teams I’ve ever seen anywhere and a privilege to work alongside everyone. I think that’s all the stuff that matters to me. You’ll hear me rant a lot about nouns and interfaces and I’m really into typography and color and all of those things. But yes, I’m a straight-up nerd.

Shane Hastie: What does it take to hold a team like that together?

Leading an innovation team [06:18]

Idan Gazit: We have a cycle that we’ve now gone through a number of times and not all parts of the cycle are fun. It sounds really great. It’s the team of permanent hack week, right? Nobody’s saying, “Listen, here’s the Gantt chart, go execute”. In fact, generally speaking, Next, it’s an undirected exploration function of the business. If GitHub’s leadership wants something, they can direct parts of the business to do something. But in our case, what I think is expected of us is that we need to create options for them. We need to roll up to them with here’s a bet that the business could make and here’s the evidence that supports this bet, right? We can’t roll up to them with a doc and a philosophy. We have to roll up with something qualified that’s going to persuade them to spend the resources, not the 15 people that are Next, but the much bigger, broader GitHub.

And so we have this cycle and we’ve now been through it enough times that there’s some pattern. The first part of the cycle is this blank canvas dread phase of we don’t know what we’re making and we’re trying to find something like which spot do we dig in? Where do we see glints of gold? And so I’d say in that phase, the thing to inspire and rally the team is setting, maybe it’s direction setting is the word that I’m looking for. It’s saying these are the strategic directions that are valuable for us to be digging in and figuring out what those are is not easy. It’s quite a bit of prognostication about where the tech is going and where the market is likely to be in a year or two. Because at the end of the day, if what we want to do is make things that graduate and escape the lab, then they need to have a business future.

It can’t just be like, “Oh, this is a technically beautiful thing, but I don’t know what application it has that people are willing to trade money for”. So that’s the hard phase. And then the rest, once we’ve sort of selected projects, that’s the happy phase. There it’s like I think folks are internally driven because they’re excited for the making. And ironically, that doesn’t require much in the way of pushing it all because we hire people that have this zero to one mindset. It makes hiring very difficult because it’s this very intangible quality. If I go up to my recruiter, they’ll be like, “What do you want? You want a front end engineer, you want a back end engineer?”

Hiring people with the right fit [08:52]

I want people who make, and they’re like, “I don’t know how to filter that in a resume”. And I’m like, “I know, that’s really hard, but that’s what I mean”. And so by hiring correctly, that second phase becomes a lot easier. And then the third phase is the storytelling, the going to market, the evaluation and the research on the things. We field a prototype and then we’ve got to go and study how people use it, what value are they getting out of it because that’s the evidence that I need to produce in order to roll up to leadership and say, “Here’s what”. So it’s different things that we need at different phases of this cycle.

Eddie Aftandilian: I mean, the thing about hiring the people with this zero to one mindset is important. And we make a point when we hire to communicate what this is really like to people, because it can sound like it’s all sunshine and rainbows, that it’s always exploration, but actually the exploration, like you said, is often the hardest part, where you’re trying a bunch of stuff. Most of it doesn’t work because that’s the nature of what we do. We try to set ambitious goals and then, I don’t know, I think we say 80% of the stuff we try doesn’t actually work. And that part can be really demotivating. And if you’re not willing to sort of stick it out and have trust in the process, I don’t know, you can get pretty down, and it’s important to know what you’re actually getting yourself into when you join Next.

Idan Gazit: Yes, absolutely. I mean I might add something onto that, is that maybe the hardest kind of classification that we have to do in that early phase is not even when it’s clearly not working. That’s an easy classification to make. The hard part is what if it’s cool and it might be good, but we don’t know if it’s really going to be good. And then how do you distinguish the things that are the potentials from the real deal? And I wish I had a flowchart for that, but I don’t, every experiment has its own parameters.

Eddie Aftandilian: You can get distracted by the coolness of something, right? You said in the end, we need to produce something that people want to trade money for. And often the cool thing is not exactly that. So it’s a hard process to go through.

Idan Gazit: Also, you have to let go of everything that you make. Another thing that I’ve learned over the years of hiring folks into Next is that in the hiring process, I’m like, “Listen, every time you succeed, you’re going to give your baby up for adoption”. And that’s what success looks like. You hand it off and you hope that it has a good life, but you can’t guarantee it. And on occasion, we’ve had at least one or two folks that have gone with, they’re like, “No, I want to keep working on this in perpetuity”. And this is maybe the happiest form of attrition we experience at a group like Next is when somebody is just like, “Nope, I want to stick around for the lifetime of this baby and see it grow up”. Then they depart with a thing into engineering, and that’s a win all around for the business. We’re sad for the loss to our team, but it is strictly in the good column for the overall business.

But that’s something that I have to be upfront about because for the things that we try and don’t work out is like we hand it off in the good case or we shelve it, which is the other core activity that we engage in the team is not fooling ourselves when something isn’t working because opportunity cost. If it’s not working, maybe it’ll work in a year or two when the models get better, when we have another brainstorm and come up with a better technique, I don’t know. But right now it’s not working, instead of us throwing “good money after bad”, let’s stick it on the shelf and turn our attention to something else. And that kind of honesty is really hard, particularly when you’ve spent the last month really scrabbling with your fingernails to find purchase on a topic to then be like, “You know what? I give up on this”, can feel like a punch to the gut, but that’s the job.

Shane Hastie: Can you tell us a story of something that looked really interesting but then just didn’t make it?

Examples of successes and failures [13:05]

Idan Gazit: I have one, which is Blocks. This one was very personal to me. I was the one who led this thing and I was the one who had to kill it when it didn’t take off. And so Blocks was, this was not even really an AI specific thing. It was this notion of what if we can make GitHub itself extensible beyond what everybody’s seen now. It’s like if you have a markdown file in GitHub, you don’t see the raw markdown, you see the rendered markdown, and if you have a mermaid diagram in GitHub, you see the mermaid chart, you don’t see the source of it. So we were like, okay, let’s take this concept way, way further and be able to have small applications that you could publish, so that you could then view your repository another way instead of seeing a list of files, or have small applications in your repository. So it’s not a full-on platform for deploying applications like a Firebase or Vercel or a Heroku or whatever.

But miniature apps specifically tied to your code bases. And we had a lot of great early signal from the earliest of early adopters. We have a discord and the kinds of folks who self-select into like I want to play with tomorrow’s tools today and I’m willing to endure a lot of stuff being broken for the privilege. And so they came along and they made a lot of really cool blocks. We call them these miniature applications, but at the end, it never crossed the chasm. It never achieved that sort of status, even though there were definitely customers that were asking us like, “Hey, we’d really like to extend GitHub. Is there ways of doing that?”

And there are extension points for GitHub, but nothing that lets you really affect the user experience of GitHub itself. And so we shut it down with a heavy heart and everything. I sent out the shutdown email and that was awful and I’m glad that we did it, but I think that’s a good example of killing your darlings in exchange for being able to pursue new darlings. So that’s my example of something that didn’t pan out.

Shane Hastie: And what about something that did? What happened with it?

Eddie Aftandilian: I mean, Copilot is the big one. So Copilot was created within our team. It was the first big success for Next. It did turn out to be huge. We didn’t know that at the time, but it started within the team. It started out as a collaboration with OpenAI where we got access to their code-generation models, which were distinct from their other models at the time. And we were sort of tasked with figuring out, well, how do we make this thing actually useful for people? And so we built different interfaces.

The one that was obviously the right interface for it was this code completion in your IDE, in VS Code. Once we landed on that, it turned into sort of a product, like a productionization march. And for that project, we did most of that within Next. So for all Next projects that succeed, we take them to technical preview, which means it’s a closed preview, usually behind a waitlist; people have to apply and we select them as we have capacity, and we usually do that within the team.

And as Idan mentioned, we start at that point doing user studies and collecting data about what users do, collecting survey feedback in order to make a case about whether to take this to full production or not. So we usually do that within the team. We did that for Copilot, and then once we hit the technical preview phase, it became pretty clear to everyone that this was going to graduate and become a real project. And at that point, the team started growing. About halfway through that cycle, we were moved into GitHub engineering. So those of us who started in Next on Copilot moved into engineering; we went on loan. I was on loan for nine months to engineering. We helped take it all the way to general availability, and we also helped build the team there. We did the hiring and such that would make it a sustainable project within GitHub. And then we came back to Next for the next round of projects.

Shane Hastie: You made the point, there’s no crystal ball. How do you choose the bets?

Decision making and bet selection [17:29]

Idan Gazit: Well, first of all, I think when you say a statement, how do you choose? It implies that there’s this one checkpoint. It’s like here’s the do or die line beyond it, we got to do it. And the reality is that it’s almost never that. It’s almost always a ramp. The moments where you look at something and you’re like, “Oh, snap, that is definitely, definitely going to be a thing”. Those are few and far between. I think original Copilot was the strongest I’ve ever felt that about anything where it’s just you look at it and you’re like, “This is the future. I’m not going to work without this in the future”. The rest of the time you have mixed signals. And so the way that we structure our explorations are that effectively every exploration kind of has a dead man switch on it in the sense that it’s always about to run out of its lease on life.

And at any moment, we’re always asking ourselves, what is the strongest small piece of evidence that we can produce in this iteration that’s going to persuade us to extend this project’s lease on life? So right now, can we get to a chewing gum and duct tape prototype that we could kick the tires on ourselves? And then we’re using it heavily and being like, do we believe in it or not? And then, when we’re done with the kind of validation that we can do just ourselves, it’s time to expose it to more people. Well, there’s quite a large business of developers at GitHub, some of whom are like, “Yes, I want to play around with something that may or may not ever be good”. So we’ll go to them and we’ll be like, “Hey, who wants to kick the tires on this and give us feedback?”

We have external developers that are close to GitHub, like the GitHub Stars program. These tend to be prominent open source developers and personalities and folks like that. And so we can go to them and show off things that are not ready for the light of day, but because they’re already under NDA, like with GitHub, we’re able to show them, give them early access to things. And so we’ll try it out on them and get signal from them and make sure that they find it valuable, that they find that it does something that they want it to do. Then the apex of this curve is the public technical preview that all projects need to get to, where we’ll open up a waitlist and enroll folks, anyone from the general public who wants to play around with this, and then we’ll conduct user research on them.

And so it’s not like how do we decide in the early phases, it’s maybe that’s where the decision making around direction happens. And in the later phases, it’s more evidence-based around what we’re hearing back. But in those early phases, I’d say part of it is, again, it’s hiring. It’s hiring people that have a lot of experience. Next is a very unusual team in the sense that everybody at Next is at the top end of the experience scale. Our most junior member is a senior, which is highly aberrant. There are no other teams at GitHub that I’ve ever seen that look this way, but I think that it’s fair and correct because we’re sticking on the backs of these individual contributors. We’re telling them, “Go advance the state of the art, don’t mess it up. Bye”. And we’re not giving them a whole lot of guidance. And so because they’re experienced, I trust them to execute.

But beyond that, I’m also asking them to bring their experience and their opinions with them to work. And then inside our house, in our private Slack channel, we’re arguing with one another. That’s a good idea, that’s a terrible idea. Or what if we tried it this way? And we have to have that trust and that candor internally in order to be able to really wrestle with these ideas. And so people are making these small spikes and these innovations can come from anyone on the team. It’s not like I put together some PowerPoint. I show it to the team and I say, “We’re going to do this”. It usually doesn’t look like that. It usually looks more like individuals on the team are putting forward miniature things that they’ve put together. Like yesterday, I played around with the models. I was trying to get them to do something and I did this, what do you think?

The power of demos and internal validation [21:59]

And then we’ll do demo day every Thursday where anybody from the team will do demos. And then I guess it feels a little bit like a jazz band. Everybody’s trying out things and suddenly somebody plays a really cool lick and then everybody’s just like, “Whoa, whoa, whoa. That was really good. Do that again”. And so the first signal that we’re looking for is our own excitement because sometimes that’s the only signal that we can possibly have when it’s novel things. There is no prior art to compare it to. So I don’t know, that’s messy. But I think that that’s the reality on the ground.

Eddie Aftandilian: There’s something very magical that happens when someone shows a demo and everyone else gets excited. And as Idan said, Next is pretty undirected. Especially in this exploration phase, people can wander around and work on what they want to work on. And when you start to see people sort of gravitating towards someone else’s demo or a project, I think we both take that as a super positive sign that there’s something there. These people sort of voting with their feet for what they want to work on and what they want to invest their time in.

Idan Gazit: Yes, the first customer we have to persuade is ourselves. If we’re excited about it, that doesn’t guarantee that it’s going to be a thing, but it’s a healthy early signal. And I can’t underscore enough the currency of demos at Next: demos are everything. It’s like nobody is going to sell anybody else on a notion just with a deck. You have to actually make something, show something, let other people use it, even if it’s messy and terrible, and it doesn’t matter. We’re all comfortable with messy and terrible, but that’s the genesis of great ideas. It always starts out as something messy and terrible.

Shane Hastie: So in that creative space, how are you using generative AI tools?

The benefits of using AI tools [23:50]

Idan Gazit: I feel like I spend most of my time interacting with the APIs, through building things to try out whether the model can do something, or using our own tools. I can’t say that I’m a super intense user of every generative AI tool out there like everybody else. I use quite a bit of the straight-up ChatGPT or Claude or Gemini. And also in the context of my job, I feel like I need to stay on top of what the models are good at, which is a horse race, right? Today, this model seems to be the best at producing code, but that one seems to be good at, I don’t know, Gemini for example, has famously sort of the longest context window of all of the models. But if you fill up that context window to the brim, does it actually behave well or does it start to forget later in the context window?

Well, that’s something that you can’t read on a spec sheet. You have to get a feel for it by working with it. So I feel like most of my use of generative AI ends up being these kinds of testing out the vibes of the models to see what they’re good at or what they appear to be strong at, or are they chatty? I remember there was one of the releases of OpenAI’s models, we were like, “Wow, this version is really chatty. It seems to really want to add a lot of comments to code”, and we have to actually alter the prompts that we give it to tell it to, “Shut up with the comments, please”.

So things like that I feel like end up being my primary uses for AI. But otherwise, yes, chat like everybody else, LLM is informational alchemist converting from any format to any other format. Take small text and make it big. Take big texts and make it small. Take a diagram, make a text, or the other way around. Those are the natural applications of AI to my mind. And so I spent a lot of time thinking about that.

Eddie Aftandilian: I think for me, the most interesting use that I have that is fairly new is this bespoke tutor use of chat models. Let’s say I have a research paper or something and I want to deeply understand it. I can drop that into ChatGPT or Claude and then start asking it pretty difficult questions about the paper. And it’s pretty good at explaining stuff to me. And I find this especially helpful for when I’m going out of my domain. So like I mentioned, my background is in programming languages. Maybe I want to understand deeply something from a machine learning paper. I find the models really helpful for understanding stuff like that. And then the other thing that’s been true actually for a long time is when you’re programming in a new framework or a new programming language, the models are super helpful for that. When I joined GitHub, I was really a Java developer.

I hadn’t written TypeScript. I’d written a little Python, but not at a day-job level of proficiency in Python. And I come here and most of our projects are in TypeScript or Python, and I found even very early Copilot super helpful for teaching me basic syntax, teaching me idioms of the language. So what’s the idiomatic way to do this? I know one way to do it, but if I write it that way, it’s going to look like I’ve just transcribed Java into Python, which I know is not what a Python programmer would do. So for a long time I’ve found them super useful for this: “I’m well grounded in this language or this framework, now help me translate the concepts from that language into this new framework or language that I’m trying to work in”.

Idan Gazit: Yes, big plus one to the new languages, new frameworks thing. I like making weird keyboards and I like it enough that I fell down the rabbit hole and, okay, none of the existing firmwares for the microcontrollers for custom keyboards are exactly what I want. I know I’m going to write my own one, right? This is a perennial sort of side project that is still going on two years later. But I was like, “But I don’t want to write C. It’s been years since I’ve written C. I want to do what the cool kids are doing. And the cool kids are doing a lot of Rust”. And it turns out that you can write Rust for bare metal nowadays. And without Copilot, man, it would’ve been really hard. But with this suggestion tool in my pocket, even if I don’t use the suggestion as it is verbatim, just having it shine a flashlight or show off idioms or tell me this is how I would use this library or things like that is incredibly valuable for me as a working developer, 100%.

Shane Hastie: Gents, we’ve covered a lot of ground and it’s getting really, really interesting. If people want to continue the conversation, where can they find you?

Idan Gazit: The Next Discord, if you go to gh.io/next-discord, that’s definitely sort of our community of early adopters. People who love to injure themselves on the sharp edges of whatever it is that we’re making. We love it both as a community of folks who are really interested in that sort of thing, a community of futurists, a community of folks who are interested in tools for thought, because at the end of the day, software development is thought made real, and folks that we tend to come to first when we want feedback, when we want to field something. And so that’s probably the best way. We’re also at githubnext.com; we try to work as much as possible in the open so folks can see all of our past projects and write-ups there. And of course, we have a Twitter and a Bluesky account. They’re all linked from the githubnext.com website. So yes, I think those are the good spots to meet us.

Shane Hastie: Well, thank you so much for taking the time to talk to us today.

Idan Gazit: Thank you. We really appreciate you for having us on.

Eddie Aftandilian: This was fun.


Docker Desktop 4.43 Expands Model Runner and Brings New Compose-Kubernetes Bridge

MMS Founder
MMS Sergio De Simone

Following the introduction of Model Runner a few months ago, Docker Desktop 4.43 expands its capabilities with improved model management and broader OpenAI compatibility. The release also debuts a new Compose Bridge to simplify the generation of Kubernetes configurations and upgrades the Gordon AI agent.

Docker Model Runner in 4.43 introduces a new user interface for inspecting models through model cards. These cards summarize all available variants within a model family, detailing their features such as number of parameters, quantization, format, size, and architecture.

For developers preferring to work from the command line, the docker model command now supports inspecting, monitoring, and unloading models. At the Docker Compose level, developers can now specify the context size to use for a given model as well as the llama.cpp runtime flags. Furthermore, Model Runner adds support for several OpenAI API options, including tool support when streaming ({"stream": true}), and improved compatibility and security with custom CORS configuration.
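As a rough illustration, a Compose file might declare a model alongside the service that consumes it. The attribute names below (context_size and runtime_flags) follow the description above but are assumptions that should be checked against the current Compose documentation, and the model reference is only an example:

models:
  llama:
    model: ai/llama3.2:1B-Q4_0   # example model reference
    context_size: 8192           # context window to allocate for the model
    runtime_flags:               # passed through to the llama.cpp runtime
      - "--threads"
      - "4"

services:
  chat-api:
    image: example/chat-api:latest
    models:
      - llama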

Docker Desktop 4.43 also upgrades the Gordon AI agent, adding support for multi-threaded conversations and delivering a 5x performance improvement.

You can now run multiple distinct conversations in parallel and switch between topics like debugging a container issue in one thread and refining a Docker Compose setup in another, without losing context.

Compose Bridge is a new feature that enables converting compose.yaml files to Kubernetes configurations using a single command:

docker compose bridge convert

This innovation automatically generates comprehensive Kubernetes resources, ensuring that local development environments can be quickly and accurately mirrored in production-like Kubernetes clusters.
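In practice, converting and deploying might look like the following. This is a minimal sketch; the output flag and directory are illustrative, so check docker compose bridge convert --help for the exact options:

# Generate Kubernetes manifests from the compose.yaml in the current directory
docker compose bridge convert -o ./k8s

# Apply the generated resources to a cluster
kubectl apply -f ./k8s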

Compose Bridge is able to automatically create namespaces, configuration maps, deployments, services, secrets, network policies, and persistent volumes based on Compose file declarations. Developers can adjust how Compose Bridge creates Kubernetes resources by customizing a set of template files. To this end, you can either export the template files used by the default transformation and modify them, or build your own templates for resources not managed by the default transformation.

The compose.yaml model may not offer all the configuration attributes required to populate the target manifest. If this is the case, you can then rely on Compose custom extensions to better describe the application, and offer an agnostic transformation.

For example, this lets developers add the x-virtual-host metadata to a Compose file and define how it should be translated into Kubernetes configuration by setting a custom ingress attribute in a custom template file. Using custom template files requires re-packaging the Docker image used by Compose Bridge.
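A minimal sketch of such an extension is shown below; x-virtual-host is the attribute named above, while the service, image, and hostname are hypothetical, and the corresponding ingress template is assumed to be supplied separately:

services:
  web:
    image: nginx
    ports:
      - "80:80"
    x-virtual-host: myapp.example.com   # consumed by a custom ingress template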

As a final note on Docker Desktop 4.43, the MCP Toolkit now supports OAuth and offers improved integration with GitHub and Visual Studio Code.


Storybook Releases Storybook v9 with Improved Testing Support

MMS Founder
MMS Daniel Curtis

Storybook, the front-end workshop for UI development, has officially released version 9, bringing improvements to testing through a collaboration with Vitest and other core upgrades such as a flatter dependency structure to optimize performance and improve the overall developer experience.

Storybook version 9 builds on top of some of the testing features that were implemented in version 8, bringing new features such as built-in visual testing and a partnership with Vitest to introduce a new feature set called ‘Storybook Test’.

Storybook Test allows you to kick off tests across all stories at once from a new test widget, and also has a watch mode that runs tests automatically once you save a file to enhance the local development feedback loop. It primarily focuses on component tests and implements three categories to test components: Interaction, Accessibility, and Visual tests. It can run these tests both within local development (via the Storybook UI) and within delivery pipelines.

Interaction tests have been part of Storybook for a while, but previously they were only available as a tab against individual stories. It is now possible to run these tests for all stories in one go, as well as to see the status of the full run in the sidebar.

Accessibility tests can be run via the Storybook accessibility addon, which checks against the industry-standard axe-core tooling. Violations of WCAG standards are caught directly within Storybook and shown against the component in an accessibility tab.

Visual tests are powered by Chromatic to run visual snapshot testing against components. These can also be run via the new test widget in the sidebar.

The new test widget appears in the left-hand navigation of Storybook, and it provides a holistic view of Interaction, Accessibility, and Visual tests and enables developers to run a full test suite for all components in a single click. In addition, it provides an easy route into test coverage reports for test runs.

Storybook 9 comes with a number of additional features on top of the testing improvements, such as Svelte 5 support, and with that comes support for language functionality like Runes and Snippets directly within the stories. It has also added React Native support, which for some users is seen as one of the biggest features of the release.

A new Vite-powered NextJS plugin comes shipped alongside Storybook 9. It supports the same features as the Webpack-based version, but provides a version in Vite that is fully compatible with Storybook Test and Vitest.

Core upgrades as part of this release include a flatter dependency structure, which the authors claim makes it 48% leaner and results in faster install times. A question on Bluesky raised a concern, asking whether this was achieved through ‘pre-bundling’, which can prevent dependencies from being patched by the consumer application. Jeppe Reinhold, a contributor to Storybook, acknowledged this by confirming the pre-bundling in some cases, but explained that most of the improvements come from the removal or replacement of dependencies.

Storybook is an open-source tool used for the development, testing, and documentation of UI components. It supports React, Vue, Angular, Svelte, and many more common UI libraries. The full release notes, setup guides, and migration tools are available on storybook.js.org.


Presentation: One Network: Cloud-Agnostic Service and Policy-Oriented Network Architecture

MMS Founder
MMS Anna Berenberg

Transcript

Berenberg: One Network sounds very unique in an era when everybody is building more things, and we were not an exception. As Google, we were latecomers to the cloud computing game. With that, our team started building, and within a few years, by 2020, we had organically grown to more than 300 products with a lot of infrastructure and multiple network paths. Our customers noticed that the products were not integrated, and we noticed that our developer velocity was actually low, because every time we needed to release a new feature, we needed to release it n times, on every network path, on a different infrastructure.

The most scary part and important part was the policy. Policy is something that cloud providers worried about day and night because all the cloud infrastructure is controlled by policy. How do you make sure that policies are being controlled on every path without any exception, without needing to modify these 300 products?

Why Is Networking Complicated?

Let’s look at why networking is complicated. On the left, you see the Google Prod Network. Google actually has its own network, as you know, and they’re running Search and YouTube on it. On that we built some cloud products; for example, Borg is a container orchestration system, and on it we built Cloud Run. On top of Cloud Run there is its own networking. Then there’s virtual networking on GCP itself, which is called Andromeda. On top of it we build different runtimes like Kubernetes (GKE) and GCE, which is the compute engine with VMs.

Over there, there’s, again, networking of its own, and GKE, as you know, Kubernetes has its own layer of networking. On top of it there are service meshes. Then the same thing applies on multi-cloud or the customer premises or distributed cloud, where the layers happen again. Then, what happened? In each environment we build applications, and these applications, again, run on different runtimes and have different properties. This combination of infrastructure on different network paths and different runtimes created an n-squared problem of what I usually call Swiss cheese: something works here, something doesn’t work there. What’s the solution? The solution is what we called One Network. It’s a unified service networking overlay.

One Network (Overview)

What is the goal of One Network? We say we want to define policy uniformly across services within the constraints of what I just explained: heterogeneous compute and networking, different language runtimes, and the coexistence of monolith services and microservices. It’s all across different environments, including multi-cloud, other clouds, public and private clouds. The solution is One Network, which comes out of that frustration. You can think about it: why do I need so many networks? Can I have one network? One Network it is. Policy is managed at the network level. We iterate towards One Network because it’s such a huge architectural change, to manage cost and risks. It’s very much open source focused, so all the innovation went into open source and some went into Google Cloud itself.

How do you explain One Network? We build on one proxy. Before that, every team had its own proxy, and basically there were a lot of proxies floating around. One control plane to manage these proxies. One load balancer that is wrapped around the proxy to manage all runtimes, so GKE, GCE, Borg, managed services, and multi-cloud. Universal data plane APIs to extend the ecosystem, so we can extend with both first-party and third-party services. Uniform policies.

Again, it’s across all environments. When I presented this particular slide in 2020, everybody said it just sounds too good to be true. It was at that time. Who is going to benefit from One Network? Everybody, actually. These are the roles that we put together who benefit from One Network. They vary from people who care about security policy, to DevOps and networking folks who care about network policy, to SREs who care about provisioning large numbers of microservices or services, to application developers who actually want to worry about their own policy without needing to interact with the platform admins or platform engineering folks. There’s the width, the depth, and the isolation at the same time, so it’s a partition, as well as universal. Everybody cares about orchestration of large environments and everybody cares about observability.

One Network Principles – How?

What are the principles we built One Network on? We build on five principles. We’re going to build on a common foundation. Everything as a service. We will unify all paths and support all environments. Then we create this open ecosystem of what we call service extensions, which are basically pluggable policies. We then apply and enforce these policies on all paths uniformly.

1. Common Foundation

Let’s start with the first one. This is the One Network pyramid I put together because I was thinking about how to explain the narrowing scope of the layers. We start with the common-purpose Envoy proxy, and we’ll talk more about it. It’s an open-source proxy available both on GCP and on-prem, anywhere. Then we wrap it and build around it GCP-based load balancers, which actually work for VMs, containers, and serverless. You can build a GKE controller on top of it, and now you have the GKE gateway that uses the same underlying infrastructure but now only serves GKE services and workloads, and it understands the behavior of GKE deployments.

The top of the pyramid is where you actually don’t see the gateway at all because it’s fully integrated into Vertex AI, which is our AI platform, for example. It’s just an implementation detail. All of that is using the same infrastructure across the products and across the paths. All of these layers are controlled by a single control plane which we call Traffic Director. It has a formal API and everything. When I say single, it actually doesn’t mean a single deployment; it’s the same control plane that could be run regionally, or globally, or could be specialized per product if there is a need for isolation. It’s the same binary that runs all over, and so you can control it and orchestrate it the same way.

This is the One Network architecture, the North Star. I want to walk you a little bit from the left to the right; you can see different environments. It starts from mobile, going to the edge, then to the cloud data center, then on to multi-cloud or on-prem. There are three common building blocks. There is a control plane, Traffic Director, that controls all of these deployments. There are open-source APIs, called xDS APIs, between Traffic Director and the data planes. Then there are data planes. The data planes are all open source based. They’re either Envoy or they’re gRPC, both of which are open-source projects. Having this open-source data plane allows us to extend both to multi-cloud, on to mobile, and basically anywhere outside of GCP, because it’s no longer proprietary. Talking a little bit about the Envoy proxy, it came out in 2016, and we really like it.

The reason we like it is because it was a great, new, modern proxy with all the advanced routing and observability as first class. It got immediate adoption. Google heavily invested in it. The reason we like it is not because it’s a great proxy but because it’s a platform with these amazing APIs. It has three sets of APIs. It has configuration APIs between the control plane and the data plane in the proxy itself; these are eventually consistent APIs that configure it, and they provide both management plane and control plane functionality. There are generic data plane APIs. There is an external AuthZ API that does allow and deny, and so you can easily plug in any AuthZ-related systems. There is an API that’s called external proc, so basically you can plug in anything behind it that can modify the body and then return it back. It’s very powerful. Then there are WebAssembly binary APIs for proxy-Wasm.

There are also specific APIs. Right away, Envoy had RLS, which is the rate limiting service. It’s interesting to see that it could have been achieved via external AuthZ, which is more generic because it’s also allow and deny, but yet over here it was specialized. We’re seeing that in the future we’re going to have more of these specialized APIs. For example, we’re thinking of an LLM firewall API where, for incoming traffic, you can classify it as AI traffic and apply rules that are specific to AI traffic. You can do safety checks. You can do block lists. You can do DLP. The data plane itself also has filters.

The Envoy proxy has filters, both L4, which is TCP, and L7 HTTP filters. There are two types of them. One of them is linked into Envoy, and that determines ownership: if as a cloud provider we link them, then it means they can only be our filters, and if a customer runs it on their own, then they’re theirs. We cannot mix and match. WebAssembly filters are a runtime where you can actually have both first-party and third-party code loaded into the data plane. Google heavily invests in the open-source proxy-Wasm project, and we actually released a product. These filters can be chained, and they can be either request based, response based, or request-response, depending on how you need to process it. All of that is configured by Traffic Director.

Talking about Traffic Director, it’s an xDS server. What it does is combine two things: very quickly changing dynamic configuration, for example the weights and the health, with static configuration, such as how you provision a particular piece of networking equipment. The magic we put behind Traffic Director, we call GSLB, the Google Global Service Load Balancer. It’s a globally optimized control plane. It’s the same algorithm that Google uses to send traffic for Search, YouTube, Gmail; anything you do with Google uses this load balancer behind it.

It globally optimizes RTTs and the capacity of the backends. It finds the best path and the best weight to send to. It also has centralized health checking, so you don’t need to do n-squared health checking from the data plane, because one time we actually noticed that if you do n-squared health checking you’re going to end up with 80% of the throughput through the data center being just health checks, leaving only 20% for actual traffic. Removing that 80% overhead is great. Also, it’s integrated with autoscaling, so when a traffic burst occurs you don’t scale up step by step, you can just scale up in a single step because you know how much traffic is coming, and in the meantime, traffic is being redirected to the closest. Traffic Director also handles policy orchestration here: when an administrator creates a policy, it is delivered to Traffic Director, and then Traffic Director provisions all the data planes with this policy, where it is enforced.

2. Everything as a Service

The second principle is everything as a service. This is actually a diagram of a real service; it’s an internal cloud service. Think about how we manage such a service. There are different colors, and they mean something. There are different boxes, and they mean something. The lines are all over. How do you reason about such an application? How do you apply governance? How do you orchestrate policy across this? How do you manage these small independent services, or how do you group them into different groups? One Network helps here.

Each of these microservices is imagined as a service endpoint, and this enables policy to group these service endpoints and orchestrate over them without actually touching the services themselves, so everything is done on the network. There are three types of service endpoints. There’s the classic service endpoint of a customer workload, where you take a load balancer, you put it in front, and you’ve got a service endpoint. You hide, for example, a shopping cart service in two different regions behind it. That’s typically how a service is materialized.

The second is a newer one where there is a relationship between a producer and a consumer. For example, you have a SaaS provider who builds a SaaS on GCP and then exposes it to the consumer via a single service endpoint that is materialized via this product that Google Cloud has, called Private Service Connect. There’s a separation of ownership, and the producer doesn’t have to expose their architecture out to the consumer. The consumer doesn’t know about all this stuff that the producer is running. The only thing they see is a service endpoint, and they can operate on it. In this case you can think about the producer being a third party that is outside of your company, or even a shared service.

If you have a shared service within your company and you want multiple teams to use it, this is the type of architecture you want, because you want to separate your implementation from its consumption and then allow every customer or consumer to put their policy on the service endpoint. You can expose a single service through a service endpoint, or as many as you want. There are also headless services. These are typically defined by service meshes, where everything is within a single trust domain.

In this case, services are materialized purely as abstractions, because there is no gateway and no load balancer; each service is just a set of IP:port backends. An obvious example of this is AI. We look at a model as a service endpoint: the producers are model creators, and the consumers are GenAI application developers. The producer's inference stack, for example, is hidden behind Private Service Connect, so nobody even knows what it's doing there, and a different application connects to that particular service endpoint.

3. Unify All Paths and Support All Environments

The third principle is to unify paths and environments. Why would we want to do that? To apply uniform policies across services. To unify paths, we first have to identify them. You can see here the eight paths we identified. This is a generalization; there are many more, but we generalized them down to eight.

Then, for each of them, we identify the network infrastructure that implements the path and applies the policy. You can see there is an external load balancer for internet traffic, an internal load balancer for internal traffic, service meshes, an egress proxy, even mobile. Let's look at them one at a time. GKE Gateway and the load balancer: that's typically how services are materialized. What we did is evolve Envoy, which was the original deployment, into a managed load balancer, and we spent more than a year hardening it in open source to make it suitable for internet traffic. We also have global and regional deployments.

Global deployments are for customers who have a global audience, who care about cross-regional capacity reuse, or who in general need traffic to move around, versus regional deployments for customers who care about data residency, especially data in transit, or who treat regionalization as an isolation and reliability boundary. We provide both, and it's all connected to all runtimes.

The second deployment here is the service mesh. Istio is probably the most used service mesh now. The most interesting part about service meshes is that they very clearly define what service-to-service communication needs: service discovery, traffic management, security, and observability. Once you separate these into independent areas, it's easy to plug each of them in independently. As a Google product we have Cloud Service Mesh, which is Istio based but backed by Traffic Director, and it supports the Gateway APIs as well as the Istio APIs. It works for VMs, containers, and serverless. That is already out.

Google has had a service mesh for more than 20 years, since forever, before service meshes were a thing. The difference between Google's service mesh and Istio or any other service mesh is that ours was proxyless. We had a proprietary protocol called Stubby, and a control plane that talks to Stubby and provisions it with configuration and everything else. It basically was a service mesh in the same way you see it now. We exposed this proxyless service mesh notion to our customers and to open source, where gRPC uses the same APIs to the control plane as Envoy does.

Obviously, that reduces resource consumption, because there is no extra proxy. It's basically super flat, with no maintenance: you don't need to install a proxy, there is no lifecycle management, and there is no overhead. A similar but slightly different deployment architecture is GKE Dataplane V2, which is based on Cilium and eBPF in the kernel. It simplifies GKE deployment networking. Scalability is improved because there is no sidecar, security is always on, and observability is built in. For L7 features, it automatically redirects traffic to an L7 load balancer in the middle.
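To make the proxyless model concrete, here is a minimal sketch of a gRPC client whose endpoints, weights, and routing policy come from an xDS control plane such as Traffic Director rather than from a sidecar. The wallet-service target name and the hello-world stub are assumptions for illustration; the client also expects a bootstrap file, referenced by the GRPC_XDS_BOOTSTRAP environment variable, that names the control plane.

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	_ "google.golang.org/grpc/xds" // registers the xds:/// resolver and balancer

	pb "google.golang.org/grpc/examples/helloworld/helloworld" // stand-in stub for illustration
)

func main() {
	// With the xds:/// scheme, the gRPC client itself talks to the xDS control
	// plane and applies load balancing and routing policy in-process: there is
	// no sidecar proxy and no extra network hop.
	conn, err := grpc.Dial("xds:///wallet-service",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	if _, err := pb.NewGreeterClient(conn).SayHello(ctx, &pb.HelloRequest{Name: "one-network"}); err != nil {
		log.Fatalf("call failed: %v", err)
	}
	log.Println("request routed by the proxyless mesh")
}

The same xDS APIs that configure Envoy configure this client, which is what allows Envoy-based and proxyless data planes to be managed by one control plane.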

Mobile we didn't actually productize. It's a concept, but we ran it for a couple of years, and it's very interesting. It extends One Network all the way to mobile devices, and it brings interesting behaviors: first, unlike workload- or computer-centric deployments, a mobile device cannot keep a persistent connection to the control plane because of power consumption, so the handshake is a little different. It also requires Traffic Director to build a cache to support it. We tried it on a simulation of 100 million devices, and it worked very nicely. It uses Envoy Mobile, an evolution of the usual Envoy proxy into a library that is linked into the mobile application.

One of the interesting use cases here: if you have 100 million devices and one of them goes rogue, having a control plane lets you identify that particular device, deliver configuration to it, get the observability you need, or shut it down. The value is there. The second project that is also a work in progress is control plane federation. Think about multi-cloud or on-premises, where a customer outside of GCP is running a pretty similar deployment, but it's not GCP managed: you're running your own Envoy, or your own gRPC proxyless mesh, with a local Istio control plane. In this architecture, we use the local Istio control plane to do the dynamic configuration and make sure the health of the backends is propagated, so if the connection between Traffic Director and the local xDS control plane breaks, the deployment on-prem or in the other cloud can function just fine until they reconnect.

You still have a single pane of glass, and you can manage many of these on-prem, multi-cloud, or point-of-sale deployments; you can imagine thousands of them managed from a single place. Bringing it all together, this is what it looks like. We already looked at this picture: you have all the load balancers and the mesh, and they go across environments, including mobile and multi-cloud.

4. Service Extension Open Ecosystem

That was the backbone. How do we use the backbone to enable a policy-driven architecture? We introduced the notion of service extensions. For each API we discussed before, whether it's external AuthZ to allow and deny, or the external processor, there is a possibility of plugging these policies in at every point. For example, a customer wants its own AuthZ; they don't like the AuthZ that we provide, so they can plug theirs in. Another example is Apigee, which is our product for API management. Service extensions change the paradigm of how API management is done, because previously you would need a dedicated API gateway to do API management.

Here, API management becomes ambient. Because One Network is so big and you can plug in at any point, the same API management is available everywhere, whether at the edge, in service-to-service communication, on egress, or on the mesh. You get a change from a point solution to an ambient presence of the policies or value-added services. Another example is a third-party WAF. We have our own WAF, but our competitors can bring their own WAF. It's an open ecosystem: the customer can pick and choose which WAF to use on the same infrastructure. They don't have to plug additional things in and then try to tie it together; it's all available.

The One Network architecture is all there. Before, we discussed how it looks at a single point, and now you can see how you can plug this in at any point, everywhere, whether it's routing policies, security services, API management, or traffic services. How does it actually work? We have three types of service extensions. The first is service plugins, which are Wasm-based and just went into public preview. They are essentially serverless: you give us the code, and we run it for you. Typically, people like to have them at the edge, where you can immediately do header manipulation and other small functions. The second is service callouts, which are essentially SaaS services that have been plugged in; there is no restriction on size or ownership, it's just a callout to a service. The third is what we call the PDP proxy, an architecture that allows plugging multiple policies behind a proxy, for caching and not just caching: you can manipulate this policy and that policy and operate on multiple policies together.
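As an illustration of what a bring-your-own-AuthZ extension could look like, here is a minimal sketch of a callout service implementing Envoy's external authorization (ext_authz) gRPC contract in Go. The header name, the port, and the allow/deny rule are assumptions for illustration, and the exact wiring into Google's service extensions product may differ from a plain Envoy-style callout.

package main

import (
	"context"
	"log"
	"net"

	authv3 "github.com/envoyproxy/go-control-plane/envoy/service/auth/v3"
	rpcstatus "google.golang.org/genproto/googleapis/rpc/status"
	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
)

// authServer allows requests carrying an API-key header and denies everything else.
type authServer struct {
	authv3.UnimplementedAuthorizationServer
}

func (authServer) Check(_ context.Context, req *authv3.CheckRequest) (*authv3.CheckResponse, error) {
	headers := req.GetAttributes().GetRequest().GetHttp().GetHeaders()
	if headers["x-api-key"] != "" { // hypothetical policy, for illustration only
		return &authv3.CheckResponse{Status: &rpcstatus.Status{Code: int32(codes.OK)}}, nil
	}
	return &authv3.CheckResponse{Status: &rpcstatus.Status{Code: int32(codes.PermissionDenied)}}, nil
}

func main() {
	lis, err := net.Listen("tcp", ":9001")
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	authv3.RegisterAuthorizationServer(s, authServer{})
	log.Printf("ext_authz callout listening on %s", lis.Addr())
	log.Fatal(s.Serve(lis))
}

Because the data planes all speak the same extension APIs, the same callout can be attached at the edge, on the mesh, or on egress without being rewritten.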

Each of these policies is represented as a service too, and they're managed by AppHub, which is our service directory. Going into the future, we're looking at managing all of these through the marketplace, with lifecycle management, because there are going to be a lot of them. The question will be how one picks this WAF versus that WAF; you need a recommendation system, and you need to think about which one to choose.

5. Apply and Enforce Uniform Policies

Last, but not least, is how you apply and enforce uniform policies. We came up with a notion of four types of orchestration policies. The first is the creation of segmentation. If you have a flat network and you want to segment it, you can segment it by exposing one service and not publishing the others. That causes traffic to go through a chokepoint, and there you can apply policies. It's easy to do because everything is a service, so it's easy to control which services are visible and which are not.

The second one is to apply a policy to all paths. Every application has multiple paths leading to it. For example, it's a very common deployment to have internet traffic and internal traffic going to the same service, to the same endpoint. When something happens, how do you quickly apply a policy to all paths? You're protecting the application and the workload behind it, and you shouldn't have to worry about whether you covered all paths or not; you should hand that knowledge to a programmatic system that can go and orchestrate over the paths.

The third one is to apply policy across an application. An application defines a perimeter, a boundary in which all of its services and workloads reside, typically a business boundary. For example, an e-commerce application contains a frontend, a middle tier, and a database, while a different application contains the catalog and other things, and one application can call into a service of another application. Within a given application, a security administrator can say: on the boundary, these policies apply.

For example, nobody but this one service can talk to the internet; everything else inside the application cannot talk to the internet. That means the policy needs to be applied to each of the workloads, for example by not giving them public IPs or not allowing egress traffic. The fourth one is to deliver policy at the service level. That is management at scale. Imagine that you need to provision every VM with a firewall rule or some configuration, and you have a thousand of them. Instead, you can group those VMs into a single service, set up the policy on the service, and then have it orchestrated onto each individual backend.

Policy enforcement and policy administration are done through the concepts of the policy administration point, the policy decision point, and the policy enforcement point. The One Network data planes we spoke about are the policy enforcement points; policy providers supply service extensions, and One Network provides the administration points. Basically, this allows customers to express policy at a larger granularity, for example for an application or a group of workloads, and have it orchestrated. Let's take a couple of examples of how it works. There's the notion of service drain at Google.

Imagine it's 3:00 at night and you got paged. What are you going to do? You drain first and think second. That is how Google SREs operate. What does drain mean? You administratively move traffic away from the service you got paged about. What can you drain? You can drain a microservice, or drain a zone or a region. The microservice could be VMs, containers, serverless, or a set of microservices. The traffic just moves away administratively; nothing gets brought down. The service is still up, it just no longer receives traffic. Your mitigation is done, because the traffic has actually moved, and hopefully the other regions are not affected. You're free to debug whatever happened with the running deployment. Once you've found the problem, you undrain, slowly trickle traffic back, more and more, get back to a working setup, and wait until the next 3 a.m. page happens.

How does it work with One Network? You can see traffic coming in through different data planes: through the application load balancer, and through the service mesh, whether the gRPC proxyless mesh or the Envoy mesh. It's all going to region 1 right now. Then we apply the drain via the xDS API, and traffic moves across all of them at the same time. Here we showed it fully moved, but you can imagine moving 10% of the traffic, or 20%, or whatever percentage you need to move, or you can drain all at once.

Another example is CI/CD canary releases, where we want to direct traffic to a new version. You can see different clients here: actual humans going through the application load balancer via some website, a call center going through the internal load balancer, point of sale going through the Envoy sidecar service mesh, and even multi-cloud and on-prem coming in, for example, through the proxyless mesh. There are two versions of the wallet service, v1 and v2. We provision the policy at the top, it delivers the configuration, and off we go: the traffic moves to v2.
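To show what such a traffic split looks like at the xDS level, here is a minimal sketch that builds an Envoy route configuration sending 90% of traffic to a hypothetical wallet-v1 cluster and 10% to wallet-v2, using the go-control-plane types; the names, domain, and weights are illustrative assumptions. In the architecture described here, Traffic Director would compute and push an equivalent configuration to every data plane at once, and the same mechanism taken to 100/0 implements a drain.

package main

import (
	"fmt"

	routev3 "github.com/envoyproxy/go-control-plane/envoy/config/route/v3"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

// canaryRoute splits traffic between two versions of the wallet service.
func canaryRoute() *routev3.RouteConfiguration {
	return &routev3.RouteConfiguration{
		Name: "wallet-routes",
		VirtualHosts: []*routev3.VirtualHost{{
			Name:    "wallet",
			Domains: []string{"wallet.internal"}, // hypothetical domain
			Routes: []*routev3.Route{{
				Match: &routev3.RouteMatch{
					PathSpecifier: &routev3.RouteMatch_Prefix{Prefix: "/"},
				},
				Action: &routev3.Route_Route{Route: &routev3.RouteAction{
					ClusterSpecifier: &routev3.RouteAction_WeightedClusters{
						WeightedClusters: &routev3.WeightedCluster{
							Clusters: []*routev3.WeightedCluster_ClusterWeight{
								{Name: "wallet-v1", Weight: wrapperspb.UInt32(90)},
								{Name: "wallet-v2", Weight: wrapperspb.UInt32(10)},
							},
						},
					},
				}},
			}},
		}},
	}
}

func main() {
	// A control plane would place this resource in its xDS snapshot; printing
	// it here just shows the generated configuration.
	fmt.Println(canaryRoute())
}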

One Network of Tomorrow

One Network of tomorrow, and where we are today, bringing it all together. It's the same picture. We're basically done with the central part. Multi-cloud: we have connectivity, and we extended it to multi-cloud; we are working on the federation. We have edge, obviously, and we are working on mobile. It is a multi-year investment. The Envoy One Proxy project started in 2017; One Network started in 2020. Our senior executives committed to a long-term vision. The effort spans more than 12 teams, and we have delivered 125 individual projects so far. The majority of Google Cloud network infrastructure supports One Network, and because it's all open-source based, open-source systems can be plugged into it.

Is One Network Right for You?

Is One Network right for you? The most important thing to consider is whether you have executive support to do something like this; I wouldn't recommend anybody do it on their own. Organizational goals also need to be considered. Is policy something your company worries about a lot? Is compliance super important? Is there a multi-cloud strategy, or developer-efficiency goals related to the infrastructure? Those are the things to consider when embarking on such a huge project. Plan for the long-term vision, but execute on short-term wins.

That turned out to be a success story, because we didn't go for one big outcome; we were improving networking one project at a time, closing the holes in the Swiss cheese. We didn't talk much about generative AI here, so we decided to ask Gemini to write a poem about One Network and draw the image. Here it is. It's actually a pretty nice poem, I like it. Feel free to read it.

Questions and Answers

Participant: With all these sidecar containers, Envoy running on all these services, what kind of latency is introduced as a result of adding all these little hops?

Berenberg: They are under a microsecond when they're local. That's how the network operates; we didn't introduce anything new.

Participant: Today there are firewalls and load balancers, but you're now also adding an additional layer of proxy beside each service, which doesn't exist today.

Berenberg: No, we didn't. What we did was normalize all the load balancers to be Envoy-based, and normalize all the service meshes to be Envoy-based plus gRPC-based. Whatever people were running, we continued to run; we just normalized the kind of network equipment we run.

Participant: For organizations that don't already have or use a service mesh, before introducing this, communication from service A to service B is just service A to service B. Now it's service A, proxy, proxy, service B.

Berenberg: That's correct. With Envoy as a sidecar proxy, it does introduce latency. That's why there is the proxyless gRPC option: it doesn't introduce any latency, because it only has a side channel to the control plane and you don't have to context-switch into anything. It's under 1 microsecond, I believe.


Allspring Global Investments Holdings LLC Sells 187919 Shares of MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Allspring Global Investments Holdings LLC trimmed its holdings in shares of MongoDB, Inc. (NASDAQ:MDB) by 98.3% in the 1st quarter, according to its most recent disclosure with the SEC. The fund owned 3,196 shares of the company’s stock after selling 187,919 shares during the period. Allspring Global Investments Holdings LLC’s holdings in MongoDB were worth $564,000 as of its most recent SEC filing.

Other hedge funds and other institutional investors also recently made changes to their positions in the company. Norges Bank purchased a new position in MongoDB in the 4th quarter worth about $189,584,000. Marshall Wace LLP purchased a new stake in MongoDB in the fourth quarter worth approximately $110,356,000. D1 Capital Partners L.P. bought a new stake in MongoDB in the 4th quarter worth approximately $76,129,000. Franklin Resources Inc. raised its stake in MongoDB by 9.7% during the 4th quarter. Franklin Resources Inc. now owns 2,054,888 shares of the company’s stock valued at $478,398,000 after buying an additional 181,962 shares during the last quarter. Finally, Pictet Asset Management Holding SA lifted its holdings in shares of MongoDB by 69.1% during the 4th quarter. Pictet Asset Management Holding SA now owns 356,964 shares of the company’s stock valued at $83,105,000 after buying an additional 145,854 shares during the period. Hedge funds and other institutional investors own 89.29% of the company’s stock.

MongoDB Price Performance

MDB opened at $217.12 on Thursday. The firm has a market capitalization of $17.74 billion, a PE ratio of -190.46 and a beta of 1.41. The business’s 50-day moving average is $197.71 and its two-hundred day moving average is $214.60. MongoDB, Inc. has a 1 year low of $140.78 and a 1 year high of $370.00.


MongoDB (NASDAQ:MDB) last released its quarterly earnings results on Wednesday, June 4th. The company reported $1.00 earnings per share (EPS) for the quarter, topping the consensus estimate of $0.65 by $0.35. MongoDB had a negative net margin of 4.09% and a negative return on equity of 3.16%. The business had revenue of $549.01 million for the quarter, compared to analyst estimates of $527.49 million. During the same period in the previous year, the company earned $0.51 earnings per share. The business’s revenue for the quarter was up 21.8% on a year-over-year basis. As a group, sell-side analysts expect that MongoDB, Inc. will post -1.78 EPS for the current year.

Insider Activity

In other MongoDB news, Director Hope F. Cochran sold 1,174 shares of the company’s stock in a transaction on Tuesday, June 17th. The shares were sold at an average price of $201.08, for a total value of $236,067.92. Following the sale, the director directly owned 21,096 shares of the company’s stock, valued at approximately $4,241,983.68. This represents a 5.27% decrease in their ownership of the stock. The transaction was disclosed in a document filed with the Securities & Exchange Commission, which can be accessed through this link. Also, Director Dwight A. Merriman sold 2,000 shares of the firm’s stock in a transaction dated Thursday, June 5th. The stock was sold at an average price of $234.00, for a total value of $468,000.00. Following the sale, the director directly owned 1,107,006 shares in the company, valued at approximately $259,039,404. This trade represents a 0.18% decrease in their ownership of the stock. The disclosure for this sale can be found here. Over the last three months, insiders have sold 32,746 shares of company stock valued at $7,500,196. Company insiders own 3.10% of the company’s stock.

Wall Street Analysts Forecast Growth

Several research analysts have recently weighed in on MDB shares. Barclays lifted their price target on shares of MongoDB from $252.00 to $270.00 and gave the stock an “overweight” rating in a research note on Thursday, June 5th. Monness Crespi & Hardt raised MongoDB from a “neutral” rating to a “buy” rating and set a $295.00 price target for the company in a report on Thursday, June 5th. Citigroup reduced their price target on MongoDB from $430.00 to $330.00 and set a “buy” rating for the company in a research report on Tuesday, April 1st. UBS Group lifted their price objective on MongoDB from $213.00 to $240.00 and gave the company a “neutral” rating in a research report on Thursday, June 5th. Finally, Cantor Fitzgerald boosted their price objective on MongoDB from $252.00 to $271.00 and gave the company an “overweight” rating in a research note on Thursday, June 5th. Eight analysts have rated the stock with a hold rating, twenty-six have assigned a buy rating and one has issued a strong buy rating to the stock. According to data from MarketBeat, MongoDB currently has an average rating of “Moderate Buy” and a consensus target price of $282.39.

Check Out Our Latest Report on MongoDB

MongoDB Profile


MongoDB, Inc., together with its subsidiaries, provides a general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

See Also

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)




How Pair Programming Enhanced Development Speed, Focus, and Flow

MMS Founder
MMS Ben Linders

Ola Hast and Asgaut Mjølne Söderbom gave a talk about continuous delivery with pair programming at QCon London. Their team uses pair and mob programming with TDD; there are no solo tasks or separate code reviews. This approach boosts code quality, reduces waste, and enables the sharing of knowledge. Frequent breaks help to maintain focus and flow.

The team does code reviews together, rather than sending pull requests back and forth, Mjølne Söderbom explained:

When Ola and I started on the same team in 2021, we decided to work together on everything. Not all agreed, so some of the team still worked alone, but if Ola or I were involved, they had no option – then it was pair programming.

Mjølne Söderbom mentioned that it is very powerful to have at least two team members who want to do pair programming. It can be difficult to try to convince a whole team by yourself. They also use pair programming heavily in the onboarding of team members. After a while, everyone understood this was the way to go, he said.

All tasks are suitable for working on together, and no one ever sits alone with responsibility for a given task, Mjølne Söderbom said. A task always has at least two people involved. If one is unavailable (for instance, because they have to go to a meeting), the other can code alone, and then they sync up again afterwards.

There are four developers in his team, a perfect size according to Mjølne Söderbom. If all four are available, they split into two pairs. Sometimes they do a mob with all four if they are working on something new or need to make bigger decisions. This way, they can spread knowledge before splitting up. If there are three, they always work as a mob, he said.

Mjølne Söderbom explained that they switch driver and navigator every 7 minutes when mobbing, and every 10 minutes when in pairs:

When we are in the office, we use a cheap kitchen timer to keep track of the time. A few other teams on our floor have bought the same timer now, so it’s kind of funny when you hear the timer go off all the time! We also pair program when someone is remote, usually just with screen sharing in MS Teams.

If they have to switch machines when changing driver, they have a few aliases to quickly commit and push to git, Mjølne Söderbom said. They all have different keyboards and keymap setups, and sometimes it is just easier to move to another desk when switching. Aliases over the GitHub CLI also help them quickly create and approve or merge a pull request when they are done, he mentioned.

They do TDD on everything, and they love it, Mjølne Söderbom said. They spend no additional time on review since it is all done as part of the process. Since they pair and do TDD on absolutely all code, they handle all reviews and architecture decisions as they go, he explained:

Some still claim that pair programming is only suitable for certain, often complex tasks. We learned that this is not the case; all tasks are suitable for working together on. In the long run, this drives speed and knowledge sharing in a completely different manner than before.

He has always been interested in clean code and code quality, and pair programming goes hand in hand with this, Mjølne Söderbom said. He cannot think of any other methods or tools that provide higher quality than working together, he concluded.

InfoQ interviewed Ola Hast and Asgaut Mjølne Söderbom about how they work in their team.

InfoQ: What’s your approach for measuring and reducing waste?

Ola Hast: The thing about working together with pairing is that when things like builds or processes take time, one starts talking about it. The first time is often fine, but when something takes time several times in a row, then it becomes a problem. We then start talking about solutions and workarounds.

Very often, what takes too long is a gut feeling, and handovers are often not noticed until you experience them firsthand. If you cannot do a repeating task without involving someone outside of the team, or if a specific task in the team requires a specific person, then this is a problem that causes delays and waste.

When people are working alone, we see a much higher acceptance of waiting, slow builds, and so forth. Working together really pushes us naturally to reduce this waste.

InfoQ: Does code need to be approved before it goes into production?

Asgaut Mjølne Söderbom: All forms of working in isolation (alone) mean that you need someone else to review and approve your code. Most companies have reviews as a compliance requirement. When pair programming, this is already done as part of development.

InfoQ: How important is it to take breaks?

Hast: Working together, especially when you are really focused, is quite intense. So taking proper breaks where you are away from screens and keyboards is important.

Go for a walk around the block, get some fresh air. Whatever you do, don’t try to do other stuff, like checking email or Slack.


MongoDB Target of Unusually High Options Trading (NASDAQ:MDB) – MarketBeat

MMS Founder
MMS RSS

MongoDB, Inc. (NASDAQ:MDB) saw unusually large options trading on Wednesday. Stock investors acquired 23,831 put options on the stock. This represents an increase of 2,157% compared to the average daily volume of 1,056 put options.

MongoDB Stock Performance

MDB stock opened at $217.12 on Thursday. MongoDB has a 52 week low of $140.78 and a 52 week high of $370.00. The firm’s fifty day moving average is $197.71 and its 200 day moving average is $214.60. The firm has a market cap of $17.74 billion, a P/E ratio of -190.46 and a beta of 1.41.

MongoDB (NASDAQ:MDB) last released its quarterly earnings data on Wednesday, June 4th. The company reported $1.00 earnings per share for the quarter, beating analysts’ consensus estimates of $0.65 by $0.35. The firm had revenue of $549.01 million during the quarter, compared to analyst estimates of $527.49 million. MongoDB had a negative return on equity of 3.16% and a negative net margin of 4.09%. The business’s quarterly revenue was up 21.8% compared to the same quarter last year. During the same quarter in the prior year, the firm posted $0.51 earnings per share. As a group, analysts forecast that MongoDB will post -1.78 EPS for the current year.

Insider Buying and Selling

In related news, Director Dwight A. Merriman sold 2,000 shares of the business’s stock in a transaction on Thursday, June 5th. The stock was sold at an average price of $234.00, for a total transaction of $468,000.00. Following the completion of the transaction, the director directly owned 1,107,006 shares of the company’s stock, valued at $259,039,404. The trade was a 0.18% decrease in their position. The transaction was disclosed in a document filed with the SEC, which is available through this hyperlink. Also, CEO Dev Ittycheria sold 25,005 shares of the business’s stock in a transaction on Thursday, June 5th. The stock was sold at an average price of $234.00, for a total value of $5,851,170.00. Following the transaction, the chief executive officer directly owned 256,974 shares of the company’s stock, valued at $60,131,916. The trade was a 8.87% decrease in their ownership of the stock. The disclosure for this sale can be found here. Over the last three months, insiders sold 32,746 shares of company stock worth $7,500,196. Company insiders own 3.10% of the company’s stock.

Institutional Investors Weigh In On MongoDB

Hedge funds and other institutional investors have recently added to or reduced their stakes in the company. Vanguard Group Inc. lifted its position in MongoDB by 6.6% in the 1st quarter. Vanguard Group Inc. now owns 7,809,768 shares of the company’s stock valued at $1,369,833,000 after acquiring an additional 481,023 shares in the last quarter. Franklin Resources Inc. raised its position in MongoDB by 9.7% in the 4th quarter. Franklin Resources Inc. now owns 2,054,888 shares of the company’s stock worth $478,398,000 after buying an additional 181,962 shares during the last quarter. UBS AM A Distinct Business Unit of UBS Asset Management Americas LLC raised its position in MongoDB by 11.3% in the 1st quarter. UBS AM A Distinct Business Unit of UBS Asset Management Americas LLC now owns 1,271,444 shares of the company’s stock worth $223,011,000 after buying an additional 129,451 shares during the last quarter. Geode Capital Management LLC lifted its stake in MongoDB by 1.8% during the 4th quarter. Geode Capital Management LLC now owns 1,252,142 shares of the company’s stock valued at $290,987,000 after acquiring an additional 22,106 shares during the period. Finally, Amundi lifted its stake in MongoDB by 53.0% during the 1st quarter. Amundi now owns 1,061,457 shares of the company’s stock valued at $173,378,000 after acquiring an additional 367,717 shares during the period. 89.29% of the stock is owned by hedge funds and other institutional investors.

Analysts Set New Price Targets

A number of equities research analysts have recently issued reports on the stock. Truist Financial reduced their price target on shares of MongoDB from $300.00 to $275.00 and set a “buy” rating for the company in a research report on Monday, March 31st. JMP Securities reiterated a “market outperform” rating and set a $345.00 price target on shares of MongoDB in a research report on Thursday, June 5th. Rosenblatt Securities dropped their target price on shares of MongoDB from $305.00 to $290.00 and set a “buy” rating for the company in a report on Thursday, June 5th. Scotiabank raised their target price on shares of MongoDB from $160.00 to $230.00 and gave the company a “sector perform” rating in a report on Thursday, June 5th. Finally, Wedbush restated an “outperform” rating and set a $300.00 target price on shares of MongoDB in a report on Thursday, June 5th. Eight equities research analysts have rated the stock with a hold rating, twenty-six have issued a buy rating and one has given a strong buy rating to the company. Based on data from MarketBeat, the stock presently has a consensus rating of “Moderate Buy” and an average price target of $282.39.

Check Out Our Latest Report on MDB

About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

See Also

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.


Traders Buy High Volume of Call Options on MongoDB (NASDAQ:MDB) – MarketBeat

MMS Founder
MMS RSS

MongoDB, Inc. (NASDAQ:MDB) saw unusually large options trading on Wednesday. Stock investors acquired 36,130 call options on the stock. This is an increase of 2,077% compared to the average volume of 1,660 call options.

MongoDB Trading Up 3.9%

MongoDB stock opened at $217.12 on Thursday. The firm has a 50-day moving average of $197.71 and a 200-day moving average of $214.60. The firm has a market cap of $17.74 billion, a P/E ratio of -190.46 and a beta of 1.41. MongoDB has a 1-year low of $140.78 and a 1-year high of $370.00.

MongoDB (NASDAQ:MDB) last announced its earnings results on Wednesday, June 4th. The company reported $1.00 EPS for the quarter, beating analysts’ consensus estimates of $0.65 by $0.35. MongoDB had a negative net margin of 4.09% and a negative return on equity of 3.16%. The firm had revenue of $549.01 million for the quarter, compared to analysts’ expectations of $527.49 million. During the same quarter in the previous year, the business posted $0.51 EPS. The company’s quarterly revenue was up 21.8% on a year-over-year basis. Research analysts forecast that MongoDB will post -1.78 earnings per share for the current fiscal year.

Insiders Place Their Bets

In related news, Director Hope F. Cochran sold 1,174 shares of the company’s stock in a transaction dated Tuesday, June 17th. The shares were sold at an average price of $201.08, for a total transaction of $236,067.92. Following the transaction, the director directly owned 21,096 shares in the company, valued at approximately $4,241,983.68. This represents a 5.27% decrease in their position. The sale was disclosed in a document filed with the SEC, which is accessible through this hyperlink. Also, Director Dwight A. Merriman sold 820 shares of the company’s stock in a transaction dated Wednesday, June 25th. The shares were sold at an average price of $210.84, for a total transaction of $172,888.80. Following the transaction, the director owned 1,106,186 shares in the company, valued at $233,228,256.24. The trade was a 0.07% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders have sold 32,746 shares of company stock valued at $7,500,196 over the last ninety days. 3.10% of the stock is owned by insiders.

Hedge Funds Weigh In On MongoDB

Several institutional investors and hedge funds have recently bought and sold shares of MDB. HighTower Advisors LLC boosted its holdings in MongoDB by 2.0% during the 4th quarter. HighTower Advisors LLC now owns 18,773 shares of the company’s stock valued at $4,371,000 after acquiring an additional 372 shares during the period. Jones Financial Companies Lllp raised its stake in shares of MongoDB by 68.0% during the fourth quarter. Jones Financial Companies Lllp now owns 1,020 shares of the company’s stock valued at $237,000 after acquiring an additional 413 shares in the last quarter. 111 Capital purchased a new position in MongoDB during the 4th quarter valued at about $390,000. Park Avenue Securities LLC increased its position in MongoDB by 52.6% during the 1st quarter. Park Avenue Securities LLC now owns 2,630 shares of the company’s stock valued at $461,000 after purchasing an additional 907 shares during the period. Finally, Cambridge Investment Research Advisors Inc. increased its position in MongoDB by 4.0% during the 1st quarter. Cambridge Investment Research Advisors Inc. now owns 7,748 shares of the company’s stock valued at $1,359,000 after purchasing an additional 298 shares during the period. 89.29% of the stock is currently owned by hedge funds and other institutional investors.

Analysts Set New Price Targets

MDB has been the topic of several research reports. William Blair reissued an “outperform” rating on shares of MongoDB in a report on Thursday, June 26th. Wedbush restated an “outperform” rating and issued a $300.00 price target on shares of MongoDB in a report on Thursday, June 5th. Citigroup lowered their price target on shares of MongoDB from $430.00 to $330.00 and set a “buy” rating on the stock in a report on Tuesday, April 1st. Macquarie reiterated a “neutral” rating and issued a $230.00 price target (up from $215.00) on shares of MongoDB in a research note on Friday, June 6th. Finally, Mizuho lowered their price target on MongoDB from $250.00 to $190.00 and set a “neutral” rating for the company in a report on Tuesday, April 15th. Eight equities research analysts have rated the stock with a hold rating, twenty-six have assigned a buy rating and one has issued a strong buy rating to the company’s stock. Based on data from MarketBeat, the stock currently has an average rating of “Moderate Buy” and a consensus target price of $282.39.

Read Our Latest Stock Report on MDB

About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Further Reading

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.
