Mobile Monitoring Solutions


Google Open Sources 27B Parameter Gemma 2 Language Model

MMS Founder
MMS Anthony Alford

Article originally posted on InfoQ. Visit InfoQ

Google DeepMind recently open-sourced Gemma 2, the next generation of their family of small language models. Google made several improvements to the Gemma architecture and used knowledge distillation to give the models state-of-the-art performance: Gemma 2 outperforms other models of comparable size and is competitive with models 2x larger.

Gemma 2 improves on the first-generation Gemma architecture by incorporating ideas from Google’s flagship model Gemini, including a Grouped-Query Attention (GQA) mechanism and a mix of global attention and local sliding window attention. Google trained Gemma 2 in three sizes: two billion, nine billion, and 27 billion parameters. The two smaller models were trained using knowledge distillation, with a larger language model used as a teacher. When evaluated on LLM benchmarks such as MMLU, GSM8K, and Winogrande, the 27B parameter Gemma 2 model outperformed the baseline Qwen1.5 32B model and was “only a few percent below” the much larger 70B parameter Llama 3. According to Google,

We show that distillation is an effective method for training these models, and the benefits distillation confers over raw text training. Specifically, we show how training over output probabilities can produce superior results over purely next token prediction. We hope that releasing these models to the community will unlock access to capabilities previously only seen in large-scale LLMs and fuel future waves of research and development.
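To make the distinction concrete, here is a minimal sketch of a distillation objective of the kind Google describes, written in PyTorch. The function name, tensor shapes, and temperature handling are illustrative assumptions, not code from the Gemma 2 training stack; the point is that the student is trained against the teacher’s full output distribution rather than only the hard next-token labels.

```python
# A minimal sketch of a knowledge-distillation loss; shapes and scaling are
# illustrative assumptions, not the actual Gemma 2 training code.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """KL divergence between the teacher's and the student's next-token distributions.

    Both logits tensors have shape [batch, seq_len, vocab_size]. Rather than
    training only on hard next-token labels, the student is pushed toward the
    teacher's full output probabilities.
    """
    t = temperature
    vocab = student_logits.size(-1)
    # Flatten to [batch * seq_len, vocab] so "batchmean" averages per token.
    student_log_probs = F.log_softmax(student_logits.reshape(-1, vocab) / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits.reshape(-1, vocab) / t, dim=-1)
    # KL(teacher || student); the t**2 factor is the conventional scaling.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t ** 2)
```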

The Gemma 2 release continues the industry trend of small, openly-available language model families, such as Microsoft’s Phi and Meta’s Llama. These models have incorporated architecture improvements like GQA as well as high-quality training data to achieve better performance than would be expected for a small model.

Besides evaluating Gemma 2 against common benchmarks, Google also submitted instruction-tuned versions of the 27B and the 9B model to the Chatbot Arena, where models are pitted against each other in “blind side by side evaluations” by human judges. Gemma 2 27B is currently the highest ranked open model, edging out Llama 3 70B. The 9B version is also doing well, and according to Google, “strongly outperforms all other models in the same range of parameters.”

AI researcher Sebastian Raschka commented on Google’s Gemma 2 research paper in a thread on X. Raschka highlighted several noteworthy features, but also said, “It would be interesting to see a comparison with the more recent Qwen 2 model.” In a discussion about Gemma 2 on Hacker News, several praised the model’s performance. One noted:

It’s multilingual. Genuinely. Compared my results with some people on reddit and the consensus is that the 27B is near perfect in a few obscure languages and likely perfect in most common ones. The 9B is not as good but it’s still coherent enough to use in a pinch. It’s literally the first omni-translation tool that actually works that you can run offline at home. I’m amazed that Google mentioned absolutely nothing about this in their paper.

Users can access Gemma 2 models over the web via Google’s AI Studio or in Google Cloud Platform’s Vertex AI. The 9B and 27B Gemma 2 models are available for download from Hugging Face and Kaggle, and Google says the 2B model will be available soon. The models are released under a “commercially-friendly” license. Google also published a cookbook with “guides and examples” for using Gemma 2.
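For readers who want to try the open weights, the sketch below shows one way to load and prompt the instruction-tuned 9B checkpoint with the Hugging Face Transformers library. It assumes the `transformers`, `torch`, and `accelerate` packages are installed and that you have accepted the model license and authenticated with a Hugging Face access token; the prompt is only an example.

```python
# A minimal sketch of loading the instruction-tuned 9B Gemma 2 checkpoint with
# Hugging Face Transformers; requires transformers, torch, and accelerate, plus
# access to the gated model (license acceptance and an access token).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain knowledge distillation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```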

Presentation: Developer Experience in the Age of Generative AI

MMS Founder
MMS Asanka Abeysinghe, Glenn Engstrand, Jemma Hussein Allen, Eric Minick

Article originally posted on InfoQ. Visit InfoQ

Transcript

Losio: In this session we are going to be chatting about developer experience in the age of generative AI.

I would really like just to give a couple of words about the topic of this roundtable, and what we mean by developer experience. What’s different now, in the age of what we call generative AI? The panelists will discuss some common challenges that all of us, as developers and software engineers, face: things that interrupt our development flow, slow things down, and drag on our productivity. We’ll also discuss the tools and methodologies available to us, things that can help us on a day-to-day basis, and how generative AI influences that. We have seen many new tools, options, and trends in the last few years; I will let the panelists discuss them. We’ll see which tools, knowledge, and resources can help in daily development work, and how they can improve our developer experience.

Background, and Professional Journey

My name is Renato Losio. I’m a principal cloud architect. I’m a practitioner myself; I’m not an expert in generative AI. I’m an AWS Data Hero, and I’m also an editor at InfoQ. I’m joined by four experts coming from different companies, sectors, and backgrounds. They will help us understand the best practices and challenges of improving software development in a world being really disrupted at the moment by new technologies.

Minick: Eric Minick. I’m at Harness today. I’ve really spent most of my career in the continuous integration, continuous delivery, and now the DevOps space. A lot of that’s been focused on working with a lot of different companies on how they streamline how code gets out the door. Worked with a lot of companies, seen a lot of things from the outside. That’ll be the perspective I bring.

Allen: I’m Jemma Hussein Allen. My background is in software engineering. My career has been mainly focused on software and platform engineering, the CI/CD side of things and, of course, DevOps. I’ve worked in a range of industries: financial services, publishing, media. These days, I’m a technical lead with a focus on automation and platform engineering.

Engstrand: I’m Glenn Engstrand, currently at EarnIn. The title is usually engineer. The role is usually coding architect or mentor. My focus is MarTech, B2B, B2C, healthcare, and fintech. What I’m passionate about is cloud native, 12 factor, SOLID, CI/CD, and DevSecOps as it relates to DevEx. The reason why I’m here is because I wrote an article for InfoQ called, “Experimenting with LLMs for Developer Productivity.”

Abeysinghe: I’m Asanka Abeysinghe, CTO at WSO2, a technology company mainly focusing on developer tooling. I come from an application architecture and application engineering background. I’m the author of cell-based architecture and the platformless manifesto. I still do coding, so I know the developer pain and what developers require at the moment.

What is Developer Experience?

Losio: We want to talk about developer experience in the age of AI. Actually, what is developer experience for you?

Abeysinghe: The answer depends on what the developer is looking for, and what kind of environment that developer is working in. I will use an analogy: if you take a developer as a driver, then whatever vehicle you get, you should be able to drive it. It can be an experienced driver or a new driver, it doesn’t matter, you should be able to move your vehicle. Not every driver has an off-road vehicle; most of them have a standard vehicle. There should be a proper environment for them to drive in as well. It’s similar in the developer space: developers should be able to do their day-to-day work based on the environment that they have, and have a frictionless flow, because as a developer, the most important thing is that you design, you code, you test, then you push the stuff, you run, and you go through that iterative process. That is the fun part of development. Then again, if you look at the development environment, there are a lot of blockers. For me, developer experience is how we can make that experience smooth, and make developers productive in delivering what is expected of them. It can be an individual developer, or it can be a team of developers focusing on some delivery or an objective that they want to achieve.

Minick: We talk about interrupting flow, and it gets down to even how are meetings scheduled and everything else surrounding it. Maybe the meetings are stoplights in our car scenario.

Developer Experience, and Generative AI (GenAI) Trends

Losio: I wonder, since you mentioned generative AI in your article: do you see any major trend in developer experience where generative AI is not the main focus, or do you think that generative AI is going to change everything, so that it’s the only focus at the moment?

Engstrand: At the risk of sounding like an AI cheerleader, I can certainly talk about what I would call pre-LLM trends that support developer experience. Frankly, AI has blown up so hard that every company I am familiar with, either directly or indirectly through my network, is focusing on how they can sprinkle AI into whatever it is they offer. Any modern tech company that tells you, “AI, we don’t do that,” is lying. They’re all doing it. The pre-LLM trend for me that really supports the developer experience is what I would call cloud native. That’s a very broad brush, that’s a lot of things. Cloud native is basically all the things I think that you guys have already been talking about: automating in a way that increases developer productivity as they go from requirements to release to production. Cloud native does a lot of that. Then the other thing is monorepo. At the risk of picking a fight in this room, I won’t be pro monorepo or anti monorepo. What I will say is that monorepo became a thing in order to address some DevEx pain points. We can leave it for an open debate whether it introduced more DevEx pain points or actually netted a decrease in pain points.

Losio: Do you see anything that is not generative AI related as a trend at the moment, or do you tend to see it as just generative AI?

Allen: I think generative AI is going to be a big thing, as it already is, as everyone’s said. You’ve got Copilot. You’ve got coding assistants, which you can use to help speed up development. If you’re learning a new language, for example, you can ask GPT: can you recommend some tools to help me learn this language? Or, what processes do you think I should follow to learn this language, and things like that. Learning new things, and supporting existing development. Of course, you’ve got the code generation side, say, for example, generating variable declarations and things like that, which is basic; it’s pretty boring, and no one really wants to do it. If the code assistant can do that, great. You save 5 minutes and it’s less boredom for you. I think that GenAI is really going to support developer experience moving forward.

Coding Assistants Powered by GenAI

Losio: Homing in on this theme now: yes, I think we all tend to agree that generative AI is changing things, but how is it actually changing developers? For example, I see that Jemma already mentioned coding assistants. Are coding assistants based on generative AI now a must for a developer? Should any practitioner attending this panel think, either I have one already, or I should go out of this panel and, first thing, buy a license or get one some other way? Or is there still room for improvement there?

Abeysinghe: I’ll continue from where I stopped earlier. It’s basically what we spoke about: developer experience is about reducing this cognitive load. I think Glenn clearly explained the complexity with cloud native computing and the technologies associated with that. It’s really complicated. You can run something on your PC, but then again, how will you run a production-grade system with all this complexity? Reducing that cognitive load is the main focus. To do that, I think generative AI is providing a lot of flexibility and support. The way I look at it, generative AI is your dancing partner: where you want to dance, you can control the partner and then get the help. That’s how I think generative AI can help developers. I see it from two angles, especially when it comes to enterprise development. One is using generative AI for AI-augmented software engineering: from designing to coding to testing, and in the operational aspects, how you can use generative AI for support. The second part, for developers, is how you can build AI-driven applications for end users. Again, it’s not simple, because you need to look at a lot of stuff, like security, data security, privacy; all these things need to be considered, as well as how you can create a competitive advantage with the experience that you’re providing for your end user, which is another thing the application developer should focus on. That is where I think generative AI can play a bigger role for modern developers, as a supportive thing that takes away some of the cognitive load.

Minick: I think that makes a lot of sense. One of the tensions that I’m seeing though, particularly as you move into code generation, is that we’ve already seen security exploits based on hallucinations coming out of coding assistants. A coding assistant imagines a library exists. Someone notices that, creates a poisoned library, and posts it. I think at this point it’s just some proof-of-concept work. You know it’s going to have the same hallucination for someone else. They’ll download it. They’ll use it. We’ve seen this. In that world, where you’ve got this dancing partner, you’ve got your Copilot, you’ve got your pair programmer who doesn’t work for your company and didn’t take any of the security training, how, as a large organization, as an enterprise, can you adopt this technology safely? That is a huge topic that I’m seeing out there. We know it’s great for the developer experience, but we also can get hacked. How do we adopt this without getting ourselves in a lot of trouble is a tension I’m seeing a lot in the executive world.

Is GenAI Augmenting Developers?

Losio: That’s actually a very good point, because I had that experience as well, as a cloud architect. Coming from the AWS world, I started to play with the Amazon Qs of this world thinking they could help a lot with many things. Then I realized that on the security side, of course, they put in so many guardrails, and rightly so, that it’s almost frustrating: for any question that might have a security implication, the answer now is basically, “I cannot help advise you on that.” I’m really curious to see what direction we’re going in. In that sense, it’s quite an interesting world to be in. My experience as a practitioner is that, on one side, it’s really a friend that can help me in development. I’m curious to see how developers adapt, because I haven’t used them that much; there’s a bit of fear of missing out. I was wondering if you have any feeling about whether software developers, the community, are adapting somehow to this. Am I changing as a software developer, apart from maybe offloading the most boring tasks, or is anything else changing?

Allen: I think it’s a case of learning to use the new tools. Obviously, people get into a pattern day-to-day. In the past, it would obviously be one way; now it’s actually, I’m going to go and ask this AI tool or whatever. It’s weaving those into the day-to-day to reduce developer cognitive load. On the security side of things, that means that humans will definitely still be needed in the future. I see some concerns that AI is going to take over all software development, that it will write the code. As you pointed out, on the security side of things, unless there’s a sandbox or something that already has that logic baked in, it will always need human review.

Engstrand: I’ve got to agree with Jemma 100% on that. Actually, I think the way she frames it is very smart. I encourage all developers who are starting to approach LLMs and AI to frame it the same way. These LLMs are not your dancing partner. It’s not really a separate intellect that you’re going to work with. It’s a tool, a very sophisticated but soulless tool that you can use to improve your productivity. If you think for a minute that you can trust its output, you’re sadly mistaken. LLMs are great when it comes to being contextually aware, consistent, and relevant, but they know nothing about correctness, and never will. Please do not think of it as something you can trust. It’s definitely a tool to help you, the end.

Abeysinghe: I don’t completely agree with that, Glenn, because even if you Google something, it’s up to you to verify what you find. It’s about productivity. As an example, take a standard integration pattern. The pattern doesn’t change whether you work for company A or company B. Rather than figuring it out yourself, you can get the basic stuff done with that help, and it can create a code snippet for you. It’s up to you to productize it by bringing your knowledge as well as the enterprise architecture guidelines, and then make it more production-ready code. That’s where I use the analogy of a helper: for certain things you can increase your productivity, most of the repetitive tasks. The creativity stays with the developer, and the developer is environment- and enterprise-aware, so they can bring that to the work that they do. Again, verifying is up to the developer, but I think there are a lot of things that you can offload to be more productive if you’re smart. That’s how I see it.

Pitfalls of AI Coding Assistants

Losio: That’s an interesting point as well, comparing it to a Google Search, or any search you did before. I think the main difference is that it’s much easier now to just click and accept a piece of code. It’s more tempting because it looks nicer, it looks almost done. Before, you still had to maybe copy something, and it took more than milliseconds to get it from Google Search results into your own deployment. Now it might be a really short journey to get something done. That, of course, is the power, but it’s also one of the limitations. I don’t want to just talk about limitations. We already talked about hallucination. There are different tools out there; I don’t want to focus only on Copilot, Amazon Q Developer, or whatever else is available. Apart from hallucination, what are common pitfalls from the developer point of view? When I use them, we said already, you shouldn’t trust them. On a day-to-day basis, do they improve my developer experience just because I’m faster at getting some code drafted, or is there more to it than that?

Minick: I think it’s great to be able to generate boilerplate, or if you’re stuck, to get a suggestion going, to get something. We’ve got a question saying, am I just doing code review now? Doing a little prompt engineering, and then I do code review, and then I move on? That’s never been the funnest part of the job for me. I don’t trust developers to just do great code review all the time. That seems a little dicey. I do think there’s something on platform teams, the cloud architects, everyone else, to build in some safety nets. If I go, it looks good to me, it runs, that’s all good, but it happened to pull in some evil package, then something should be doing some supply chain analysis, some security scan, something to keep me safe, because the worst developer experience is getting dragged in front of the security team. No one likes that. I do think it’s incumbent on the rest of us to support the developers with a really robust safety net, and probably the stuff we should have been doing in the last 10 years. As the act of just cranking out some code gets a little easier here, particularly on days where we’re tired and probably not doing the reviews we ought to, we’ve got to give these folks really good safety nets, so that we can take advantage of it and go fast.

Abeysinghe: I think it’s up to the developer to decide whether you become the code reviewer or you remain the coder or the architect, because the complexity we see inside enterprises is too high. You can’t just ask an AI engine to generate everything. It’s a helper. The differentiator for the organization comes from the software that you build, and that’s where the innovation comes in. That’s where developers can be really innovative, and use AI as a way to expedite that, not depend on it completely.

Allen: I was just thinking in terms of the tools to make sure things are secure: things like internal developer platforms and solid CI/CD pipelines that make sure everything is code scanned. They can pick up any issues in a Terraform configuration or an application library. As long as those safety nets are there, that obviously helps people onboard the coding assistant, because it means there’s a safer way of working when you’re trying things out.

The Role of Human Devs vs. Coding Assistants

Losio: One of the risks is that there’s no code review and the code is just generated, whatever it is. I was wondering whether companies at the moment take a different, more conservative approach, where you basically use it for, for example, unit tests, or tasks that, I’m not saying are tedious, but that the assistant can help you handle, while the core part of your code, the logic, is designed by human beings.

Engstrand: The way I see it, there are six use cases that people use LLMs for. They use it for code search; I think Asanka has already talked about that, as a replacement for Google Search to learn how to use an API or something like that. They use it for debugging, where they say, here’s a block of code, I know it has a certain bug in it, here’s the symptom or the diagnostic, help me find and fix it. They use it for code review. There’s a cool open source GitHub Action out there called AI Code Reviewer, very interesting. Check it out. They use it for code analysis, of course: here’s a block of code, tell me what it’s doing. They use it for code generation. We’re going to talk about that a lot, where you say, please write a block of code that does the following. I think folks have already talked about Copilot and Amazon Q. That’s code completion, a tighter integration with your IDE, where the LLM is prompted: given what I’ve typed so far, finish the line or add the next block, or whatever. Those are typically how developers use LLMs today.

We’ve already talked about review fatigue as a common pitfall. The other common pitfall I’ve seen is prompt leaking. We’ve talked about security a lot already. Let me just go ahead and remind the developers: if you’re just using the free account, or you’ve just got an individual API key for ChatGPT or Gemini, don’t use it for work. Because if you start sharing code with it, code that maybe you wrote for your company, they see that code as proprietary. In the public models, your prompting is used to fine-tune the foundation model, which is shared by everybody. There’s a technique called prompt leaking where information about that prompt can get exposed to people outside of the company. If you want to use it for corporate work, that’s fine; there are ways to get corporate licenses, where all of that is securely handled. Don’t use the free versions for corporate work. That would be another common pitfall that I would advise against.

Losio: Actually, a very good point, in the sense that, yes, it’s cool to test these tools. Of course, there are free licenses, free options on every single one of those platforms, but those are more intended for personal tests than for real corporate use.

GenAI’s Role in Reducing the Learning Curve

One thing that I’m quite curious about, and have always wondered as a developer, is the number of languages, the number of new features, the number of versions, and how generative AI is going to change that. In the sense that, will a new language be part of it, and how quickly and how hard? Because, of course, it becomes much easier to use Java, Rust, or whatever language you prefer, as long as it is popular enough that a model is well trained on it. What does that do to the attraction of jumping on something new, simpler, better, worse, or whatever, which has always been constrained by people not being that familiar with a new language, or a new feature of any language? I wonder how generative AI is going to change that.

Abeysinghe: I think we can cut down the learning curve, because of the samples and documentation that you can refer to through these AI tools. As an example, Copilot is helping a lot for a developer who’s new to a language. There will always be new languages introduced, as we see. I don’t think there will be specific languages introduced because of generative AI. There’s another change happening, especially for the semi-technical and non-technical users who can use natural language to generate some of this stuff. As an example, the citizen-developer type of integrators, who just like to do simple integrations, used to use graphical tooling. That kind of thing can change with natural language: they can say exactly what they need to connect, and the tooling can support it. That is what I see in the market. I think Jemma brought up a really good point that we should focus on at some point, about internal developer platforms.

Concerns with GenAI

Losio: One of the points was that we need to review the code that the generative AI brings us. Part of making a better developer experience, I’m not saying it makes it simple to code something, but the process of writing a simple piece of code is definitely much easier than it used to be. Asanka, you mentioned as well how, with a new language, I have to start from scratch; it’s a bit easier if someone is helping me with the basic syntax, as long as I know the fundamentals of programming. At the same time, 10 or 15 years ago we were discussing, for example, that one of the major problems with managed databases was that they gave people like myself, who was not a database administrator, the option to play with a database in production. The problem was not the tool; the problem was that I was not expert enough to manage a production database, but the tool somehow helped me go in that direction. I wonder if part of the problem with generative AI in this space is not just that we need to check the code we get. It’s more that it’s opening up doors, which in one sense is great, but it’s also opening up doors to people who maybe don’t have the experience to review the code, and they can still go ahead. I don’t know if that’s just part of life, or if it’s a challenge we have to handle, or how we can handle it.

Engstrand: There have already been a lot of academic papers published about the use of AI or LLMs in education. I’m concerned about it, because in that case, the student is using the AI to learn about the topic, but is the AI really a knowledgeable teacher? Who are you learning from if you’re learning from the AI? It remains to be seen what a couple of generations of students coming out of an LLM-enhanced education are going to be able to do, or what they’re going to look like. I too have concerns about that. If the LLM is the expert in the room, I don’t see how that’s going to work well.

Allen: In terms of education, yes, I can definitely support it. There are some big challenges around that. Certain things like the actual exams would have to change. If people are using code generation tools and then go into an exam, or an interview, without a code generation tool, it can be very difficult. At the moment, in interviews you can’t use code generation tools; you actually have to know it from your head. The assessment criteria would have to change as well. One other point, going back to code generation: people mentioned using natural language to generate code. There are also certain tools that can translate code from one coding language into another, which helps adoption of new tools as well. If a new tool is only available in one language and not another, then you can use existing code as a starting point, once you’ve translated it into the new coding language.

Losio: That’s an interesting area of work. I’ve been playing as well with just updating a major version, or even really translating from one language to another. I don’t know if that’s the direction things are going to take.

Minick: One thing that I think is really risky is, as we start saying, this is my Copilot, does it become the autopilot? Do I lean on this for most of the code that I write? I think we can look to the lessons from aircraft on autopilot. If autopilot is used too much, the skills of the pilots to react to bad situations decline, and we get disasters out of that. They’ve learned that one of the things you have to do is set aside a certain percentage of your time to fly manually; even when autopilot would be perfectly appropriate, you’ve got to do it yourself, or your skills will atrophy. I don’t think we’ve begun to think through a lot of these sorts of challenges. Is that something you guys are wrestling with?

Abeysinghe: I think, again, it’s about finding the balance, finding the middle part. Yes, you have to keep practicing certain things, otherwise you will lose that skill. If we go back to the same analogy of driving, it can happen there as well: if you’re always using a self-driving car, then you might lose your driving skills. I think you have to practice, and I think that applies everywhere. Then again, when something new comes along, it is cool. I’ll give you an example. These AI-generated graphics, when they came out, they were pretty cool. Now, for some reason, even if I see a slide deck with AI-generated images, I feel it’s really boring, because you find how inaccurate some of it is, and it doesn’t have the creativity that we see when somebody draws a diagram or a picture; that feeling doesn’t come out of some of these things. I think it’s like that: when it’s new, everybody jumps into it. I think we should be smart enough to identify that middle part.

Coding Assistants: Coworker or Junior Devs?

Losio: Actually, I would like to move a bit away from the coding assistant. As of today, do you see Copilot, or whatever tool you want to use, as a coworker that you brainstorm with, or as a junior developer that you give the most boring or the simplest parts of the task?

Minick: I like thinking of it as a tool. It’s hard to think of it as a junior developer, because it can do some really clever stuff. It can also do the most idiotic stuff. It is spectacular on both ends of the spectrum in a way that a person is not. It’s an odd one.

Allen: Both, depending on what I want to do. If I don’t want to write a whole set of variable declarations, then, great, get it to do that. If I want to find out the fastest way of sorting something, or I need to do a particular type of thing, then it can give firm suggestions for that. It wouldn’t necessarily do all of the work, but it could help give some suggestions for solutions.

Engstrand: The term AI, artificial intelligence, was coined in the 1950s, I think by John McCarthy, the inventor of Lisp. At about that same time, a different group of people advocated for something called cybernetics. You’ve probably never heard of it, because cybernetics has been relegated to the historical dustbin of obscurity. One of the terms they coined was IA, intelligence amplification. Believe it or not, that’s more the light in which I see LLMs. It’s not so much an intelligence that will one day ask for freedom and strike for higher wages, but rather a tool that I can use to amplify my intelligence. I loved how Eric said it. How can this thing be a human intelligence if one request it’s brilliant, and the next request it’s severely retarded? It’s not a relatable intellect at all. It’s still very good. I totally agree with what Eric said. I love his analogy with flight pilots. Also, what Asanka said: turn it off, don’t use it all the time. Just use it maybe for the dull stuff, like unit tests, and then turn it off when you have to do the central core of your new platform goodness.

What is Platformless?

Losio: I’d like to shift the topic now to platforms, to platformless. What do we mean by that? Are we going in that direction to help developer experience?

Abeysinghe: Jemma already brought up internal developer platforms. Platformless is actually a term that we coined, because we saw there’s a vacuum in the market: people are really worried about the platform, and the focus is on the platform. Because of that, they are not getting the correct output. People are not building stuff on top of the platform; rather, they are building the platform. How can we change that? Basically, how can we have an efficient platform engineering practice which delivers a platformless experience for the user? That’s where platformless comes in. It’s not that the platform is disappearing; it’s that it loses the focus. In tech, “less” works like serverless, where there are servers but the user doesn’t see them, or wireless, where there are wires at some point but the end user doesn’t see them. Similarly, what we see within an enterprise is that the platform should be invisible to the end users. There should be a platform team, or they should have a prebuilt platform that they’re using. They should focus on the stuff that they build: the APIs, services, business services, and applications. That’s where the platformless concept comes from. To deliver that you need a platform, and that’s where the term internal developer platform, very popular these days, comes in. Unfortunately, the definition of internal developer platform, and what is delivered through internal developer platforms, is not exactly what developers are looking for, because what most of the frameworks and internal developer platforms focus on is the operational and delivery aspect of the software development lifecycle. That’s very important, but on top of that you need the software engineering practices as well: what type of middleware they can use, what best practices they can follow, how you will control the communication. Whatever is required for them to do application development should be supported by the internal developer platform. That’s where you can provide a platformless experience for your end users; the end users are the developers in this case. That’s how our platformless concept and platforms are connected.

Minick: I think there’s the complementary notion of the internal developer portal. Trying to say, here are a bunch of very simple-to-consume services. Here’s how I search through my APIs. If there are standard middleware, standard Terraform templates, standard environments that I can generate, standard actions, standard automations, let’s have those really easy to access. I wanted this 6, 7 years ago at a large tech company; it was lovely. It feels like those are spreading through the industry now, becoming more common out there. Having a really simple, straightforward portal on top of the platform, so I don’t have 97 pages to go to, is really nice. I think this notion that I shouldn’t be building my platform all the time, I love that. If we get to a world where generating some code is relatively easy, and developers are spending all their time fiddling around with platforms, then we automated away the best part of people’s jobs, not the worst. What did we do there? I think having that lovely platform to build on is exactly where central groups, platform teams, and cloud architecture teams should be focused.

Abeysinghe: I think a platform should be a product, not a project. This is a mistake most enterprises are making: they think it’s a fixed-budget, fixed-time thing. It has to be a product that iterates as the technology changes and as developer needs change. That’s how the central cloud platform engineering team should treat it: as a product they keep on improving.

Minick: If you have a product manager on that team, their customer is the developer.

Allen: I definitely agree with the platform-as-a-product way of working, and especially the feedback, which I think is really key in terms of developer experience: making sure you get feedback from developers on how they feel using your platform, and whether there are any improvements that can be made. Obviously, it’s a big thing for the developer experience, as they’re using the platform every day; that’s what they need to deliver their work. It’s really important to get that feedback side of things.

What is a Good Developer Experience?

Losio: I always wonder, when I think about developer experience, how it has changed over the years. If you went back 10 or 20 years and asked someone, would developer experience have been significantly worse or significantly better? Who knows? What is a good developer experience today? Not the ultimate goal, but more in an enterprise scenario: what do you see as the goal for a company that wants to provide a good developer experience?

Abeysinghe: I think it’s really hard to define that. As we started the conversation, everybody contributed and explained that it depends. From the organization’s point of view, one way to look at it is flow efficiency. Flow efficiency is computed from the productive time versus the wait time: if the wait time is low and the productive time is high, that is one way of looking at it. Then, mean time to repair, mean time to debug, those kinds of metrics can be used to see whether the developers are productive. At the end of the day, you can look at productivity, how happy the developers are, how often you are meeting deadlines, and how often you push code into production, how short those iterative cycles are. As well as how quickly, as a technical team, you can respond to the business, because the business can come up with different types of marketing plans and sales activities, but to do that you need to deliver the products and the software that you build, so how quickly can you react to those business changes? I think those are the things that you can use. I don’t think there’s a cookie-cutter approach or a silver bullet here.
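As a rough illustration of the flow-efficiency metric Abeysinghe mentions, the sketch below computes the share of a work item's elapsed time spent on active, value-adding work. The function and the example numbers are illustrative assumptions, not a prescribed formula from any specific tool.

```python
# A rough illustration of flow efficiency: active (value-adding) time divided
# by total elapsed time for a work item. Numbers below are made up.
def flow_efficiency(active_hours: float, wait_hours: float) -> float:
    total = active_hours + wait_hours
    return active_hours / total if total else 0.0

# Example: 6 hours of active work against 18 hours of waiting -> 25%.
print(f"{flow_efficiency(6, 18):.0%}")
```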

Engstrand: First, you talked about whether things have improved. Fifteen years ago, I was the lead engineer over a real-time communications infrastructure for a certain company. If I wanted to do a deployment to production, I would have to SSH into every box, copy the JAR over, and restart the process. That’s right. Because of how disruptive that was, I had to start doing that at 11 p.m. It would usually take me about 2 hours. Consequently, the invention of Kubernetes has been the greatest thing that’s ever happened in my entire life. The tooling, Terraform, Helm, maybe a little Ansible if you have to, is just amazing. I also agree completely, 110%, with what I think both Asanka and Jemma talked about. You’re going to be very tempted, and I’ve been in some shops that did this, where the cloud ops team built their own little microservice that ran all that Terraform and Helm for you. It’s a product now that they have to support, but they don’t view it that way. The developers who coded it moved on. Now nobody knows how it works. There’s not a lot of empowerment at that point. It’s all kind of, we’re too scared to touch it. Maybe it makes more sense to go to a third-party platform where that’s their job, that’s where they get their money, that’s where their revenue comes from. Of course they’re going to keep supporting it. That’s probably a good idea in the long run.

Minick: You remind me of the Kelsey Hightower quote, “Everyone just wants a PaaS, but they want to build it themselves.”

DevEx in the Future, with GenAI

Losio: Extrapolating from the rapid development of generative AI, where do you see the developer experience a year from now? Where do you see the developer experience 5 years from now?

Allen: The more that generative AI is used, the more humans and developers will become the supervisors and the guides of that AI, basically. Developers are obviously always going to be needed. The actual role may change into writing less of the lower-level stuff and moving to a more high-level supervisory and guidance role.

Minick: That sounds about right. I’m fairly jaded. I’m trying to imagine the ways we’re going to take steps back, which I’ve seen us do over and over again. At the same time, I know we’re going to deliver innovation faster in 5 years than we do today. You go back 20 years, it was annual releases; 15 years, it was quarterly, or every 6 months or something. Now we’ve got the largest banks in the world out there deploying to production every couple of minutes. It’s beautiful. I think we’ll keep moving in that direction. Everyone will move a little bit faster. GenAI will be a part of that. We just have to make sure that developers are spending their time doing fun, innovative work, and not doing all the glue. That’s the challenge.

Abeysinghe: I think it’s a little hard to predict in this space, because things are moving really fast. Innovation is happening. I think it’s up to us to adapt quickly. Then look at everything from the value creation point of view. What’s the value that you can add to your team, and to your product, and to your organization? We need to figure it out. It’s really hard to predict about the experience, because things are moving really fast.

Losio: I understand; for something like generative AI, predicting where we are going to be in 5 years’ time is hard. If we just look back 1 or 2 years, it’s crazy how much things have changed. I’m not saying it’s hype or anything, but things can shift, and maybe the priorities are going to be very different.

Measuring How GenAI will Increase Developer Productivity

It also sounds like we all agree that generative AI has the potential to improve productivity. Have you found tangible ways to measure that?

We always want to measure how we increase productivity. I think it impacts anything related to developer experience. We like to say that we improve developer experience, but we want to measure how we improve developer experience, as well.

Engstrand: In the article I wrote for InfoQ, I did an experiment. I used the same prompt, which is: given this code, service code that exposes three routable endpoints, and given this unit test for one of those, write the unit tests for the other two. I submitted it to all the usual ones, ChatGPT, Gemini, Code Llama 70B, CodeWhisperer, and so on. Then what I did was take the output and save it. Then I fixed it, because it had bugs in it. Then I ran a Myers diff algorithm on what it generated versus what it actually took to get the unit tests running correctly. This was a Java source base, so I could use the JaCoCo tooling to look at code coverage. In the article, you can see it, I compare those numbers: the Myers diff in terms of what you had to do to correct it, and the change in code coverage. Those are two metrics you can use. Are they perfect? Of course not. When is any kind of quantitative metric going to 100% accurately capture something that’s qualitative in nature? You do have to be data-driven. You do have to work from metrics. Try those two.
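As a rough sketch of the first metric Engstrand describes, the snippet below counts how many lines differ between an LLM-generated test file and the hand-corrected version. Note that Python's difflib uses its own matching algorithm rather than the Myers algorithm he mentions, and the file names are hypothetical; the changed-line count serves the same purpose of quantifying how much fixing was needed.

```python
# A rough sketch of the "how much did I have to fix it" metric: count lines
# that differ between the generated test file and the corrected one.
# difflib is not the Myers algorithm, but yields a comparable edit count.
import difflib

def changed_line_count(generated_path: str, corrected_path: str) -> int:
    with open(generated_path) as f:
        generated = f.readlines()
    with open(corrected_path) as f:
        corrected = f.readlines()
    diff = difflib.unified_diff(generated, corrected, lineterm="")
    # Count added/removed lines, skipping the ---/+++ file headers.
    return sum(
        1 for line in diff
        if (line.startswith("+") or line.startswith("-"))
        and not line.startswith(("+++", "---"))
    )

# Example usage (hypothetical file names):
# print(changed_line_count("GeneratedUserTest.java", "CorrectedUserTest.java"))
```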

Abeysinghe: I think I mentioned that roughly in a previous answer as well: flow efficiency, mean time to repair, mean time to detect. If you can embed these things into the internal developer platform, then it will be really easy. You can capture this data and have a proper dashboard to measure productivity. I think that will be one way of looking at how these tools are helping.

Minick: Dr. Forsgren and team had a nice paper out of Microsoft on the SPACE framework. The paper is worth a read. It talks about a lot of different things you can measure to try to get a cohesive picture of developer productivity and satisfaction. That thing’s fantastic. There’s a breed of tools out there; if you Google software engineering insights, you’ll find a bunch. We have one. They’ll help measure a lot of these kinds of flow trends and some of the SPACE framework metrics, to try to get some quantitative views on that. I think you want to pair some of what’s coming out of the tools with some survey data, if you want to get really serious about developer satisfaction and productivity.

Ethical Integrity of AI Generated Code

Losio: One topic we haven’t mentioned that much is ethical integrity and reliability. We did mention the reliability of AI-generated code; another part is the ethical integrity of the code. How can we validate that in a rigorous way? What direction should we go in that sense?

Allen: I’m not an expert on the ethical implications of AI. In terms of code generation, we spoke about security issues, and that does speak to the ethical side. Then there’s actually generating content. One thing Asanka mentioned was documentation; I can’t see how documentation would be ethically ambiguous. If you’re generating a story, for example, or social media content, then of course it could include content that’s inappropriate for certain audiences, and that sort of thing. Having the proper frameworks in place to make sure that the content is solid and appropriate for the audience is really important.

Engstrand: A previous employer of mine was actually out there first talking about algorithmic bias; they were one of the first companies talking about it. It’s very serious. The current Biden administration is focused on that as well. This is less about writing code and more about the use of AI in general. We use AI to shape our customers’ experience. Sometimes that means certain types of customers might have more access than others. We didn’t intend to do that; that’s just an unconscious bias that can happen. Especially if you use AI in something like a recommendation engine, or something that decides whether or not you get a loan, or what kind of healthcare you can have access to, it’s pretty serious. It’s hard to measure. I’m less worried about some Terminator scenario where a super robot launches the nukes, and way more worried about certain groups of people who perhaps don’t get the access they should, while other groups do, because of some unconscious bias that’s in the algorithm.


Amazon WorkSpaces Pools: Flexible and Tailored Virtual Desktop Environments

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

Amazon has announced a new feature for Amazon WorkSpaces called WorkSpaces Pools. This feature provides non-persistent virtual desktops across a group of users. Administrators can manage a portfolio of persistent and non-persistent desktops through a graphical user interface, the command line, or API-powered tools.

With the feature, users will receive a fresh desktop based on the latest configuration each time they log in, ensuring a consistent experience, according to the company. It supports various use cases, including remote work, shared service centers, and educational settings. Moreover, administrators can customize desktop configurations, control compute resources, and scale pools to match user needs, providing a flexible and tailored virtual desktop environment.

Jeff Barr, a Chief Evangelist for AWS, writes:

As the pool administrator, you have full control over the compute resources (bundle type) and the initial configuration of the pool’s desktops, including the set of applications available to the users. In addition, you can configure the pool to accommodate the size and working hours of your user base, and you can optionally join the pool to your organization’s domain and active directory.
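For administrators who prefer the API route Amazon mentions, the sketch below shows what creating a pool might look like with boto3. The method name, parameter names, and response fields assume the SDK exposes the CreateWorkspacesPool action in this form, and the bundle and directory IDs are placeholders; check the current AWS SDK documentation before relying on this.

```python
# A hedged sketch of creating a WorkSpaces pool with boto3. Method, parameter,
# and response names are assumptions based on the CreateWorkspacesPool action;
# verify against the current AWS SDK documentation. IDs are placeholders.
import boto3

workspaces = boto3.client("workspaces", region_name="us-east-1")

response = workspaces.create_workspaces_pool(
    PoolName="shared-service-desktops",
    Description="Non-persistent desktops for the shared service center",
    BundleId="wsb-EXAMPLE12345",      # placeholder bundle (compute and image choice)
    DirectoryId="wsd-EXAMPLE12345",   # placeholder directory registration
    Capacity={"DesiredUserSessions": 25},
)
print(response["WorkspacesPool"]["PoolId"])  # assumed response field names
```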

Application settings persistence saves application customizations and Windows settings on a per-user basis between sessions (Source: AWS News blog post).

In a Reddit thread, a question was raised about whether WorkSpaces Pools is the same as another AWS offering, AppStream 2.0, a fully managed AWS End-User Computing (EUC) service designed to stream software-as-a-service (SaaS) applications and convert desktop applications to SaaS without rewriting code or refactoring the application. A respondent commented:

Pools and AppStream 2.0 in desktop mode are the exact same thing. Biggest advantage is with pools, you can run M365 apps.

Barr wrote in the AWS news blog post:

You can use an existing custom WorkSpaces image, create a new one, or use one of the standard ones. You can also include Microsoft 365 Apps for Enterprise on the image. 

Other hyperscalers like Microsoft offer an equivalent service, Azure Virtual Desktop (formerly Windows Virtual Desktop), which anyone can access anywhere. There isn’t a direct equivalent product on Google Cloud Platform (GCP); however, organizations can use various third-party products to configure Desktop-as-a-Service, such as itopia, Citrix Virtual Apps and Desktops, and Nutanix Frame. These services can provide functionalities similar to those of Amazon WorkSpaces Pools.

The company states that WorkSpaces Pools is available in all commercial AWS Regions where WorkSpaces Personal is available, except Israel (Tel Aviv), Africa (Cape Town), and China (Ningxia). In addition, the pricing details of WorkSpaces Pools are available on the pricing page.


Shareholder Alert: Robbins LLP Informs Stockholders of the Class Action Filed Against MongoDB

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

SAN DIEGO, July 15, 2024 (GLOBE NEWSWIRE) — Robbins LLP  informs investors that a shareholder filed a class action on behalf of all investors who purchased or otherwise acquired MongoDB, Inc. (NASDAQ: MDB) securities between August 31, 2023 and May 30, 2024. MongoDB is an American software company that designs, develops, manufactures, and sells developer data platforms and integrated services systems through its document-oriented database program.


Article originally posted on mongodb google news. Visit mongodb google news


MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

The photography exhibition “Macelleria Palermo” by Franco Lannino and Michele Naccari offers a raw, realistic look at the Mafia violence that marked Palermo in the 1980s and 1990s. Through 44 black-and-white shots, the exhibition documents scenes of Mafia crimes, presenting a visual and textual journey through the city’s bloody history.

Macelleria Palermo will be on display at the Castello di Carini from July 18 to August 18, 2024, enriched with multimedia effects for a highly engaging immersive experience. The exhibition is not only a tribute to the tragic events of our recent past, but also an appeal to Sicily’s deepest and most painful collective memory, so that new generations may understand the scale of the horrors Palermo endured. The photographs, raw, stark, and brutal, show the immediate aftermath of the most notorious murders committed in Palermo, capturing moments of frenzy and blood, but also of tenderness and humanity.

Valentina Mignano, who conceived the exhibition together with the Jonathan Livingston Odv, describes the images as an unfiltered portrait of organized crime’s methods, a visual journey into the details of Cosa Nostra, from the “incaprettamento” to the most atrocious executions. It is a document that shows Palermo, in Franco Lannino’s words, as a “butcher’s shop” marked by violence and omertà. The exhibition also offers a free audio guide providing further detail on the moments depicted. Particular prominence will be given to the photograph of a Carabiniere carrying Paolo Borsellino’s famous bag toward the unknown, taken by photojournalist Lannino on Via D’Amelio exactly 32 years ago.

The exhibition has recently been staged in several Italian regions, from Friuli to Lombardy to Calabria, attracting keen interest from visitors across the country. The Castello di Carini now serves as an exceptional venue for Lannino and Naccari’s photographs, not only for its beauty and the poignant history that has marked it for centuries, but also as a place that creates continuity with the exhibition’s theme: the Arab-Norman castle houses a wing containing the “Pupi antimafia” of puppeteer Angelo Sicilia, from Pio La Torre to Peppino Impastato to Judge Livatino. During the exhibition’s run, visitors will have the opportunity to reflect on a painful chapter of Italian history, remembering the victims of the Mafia and honoring the memory of those who fought against this social scourge.

The “Macelleria Palermo” exhibition is an important opportunity to reflect and keep historical memory alive, and it represents a powerful educational and awareness-raising tool for the public (viewing is recommended for adult audiences only).

The opening, on July 18 at 5:30 p.m., will be attended, alongside photojournalist Franco Lannino, by Prof. Michele Cometa, professor of Cultural History and Visual Culture at the University of Palermo; Antonello Cracolici, president of the Antimafia Commission; Alessandra Dino, scholar of organized crime (University of Palermo); the writer Gemma Mannino Contin; Prof. Giovì Monteleone, Mayor of Carini; and Salvatore Badalamenti, Councillor for Cultural Heritage and Activities and Social Policies of the Municipality of Carini.

Thanks to the work of the Jonathan Livingston organization, the opening will also return a new space to the city: the meeting room in the castle’s atrium, renovated, expanded, and ready for a cultural relaunch as a new venue for high-profile exhibitions.

Article originally posted on mongodb google news. Visit mongodb google news



Research Analysts Offer Predictions for MongoDB, Inc.’s Q3 2025 Earnings (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) – Equities research analysts at Capital One Financial cut their Q3 2025 earnings per share (EPS) estimates for MongoDB in a research note issued on Thursday, July 11th. Capital One Financial analyst C. Murphy now expects that the company will earn ($0.61) per share for the quarter, down from their previous estimate of ($0.49). The consensus estimate for MongoDB’s current full-year earnings is ($2.67) per share. Capital One Financial also issued estimates for MongoDB’s Q4 2025 earnings at ($0.61) EPS and FY2026 earnings at ($2.44) EPS.

MongoDB (NASDAQ:MDB) last issued its earnings results on Thursday, May 30th. The company reported ($0.80) earnings per share (EPS) for the quarter, meeting analysts’ consensus estimates of ($0.80). MongoDB had a negative net margin of 11.50% and a negative return on equity of 14.88%. The business had revenue of $450.56 million for the quarter, compared to analysts’ expectations of $438.44 million.

A number of other research firms have also recently commented on MDB. Guggenheim raised MongoDB from a “sell” rating to a “neutral” rating in a research note on Monday, June 3rd. Morgan Stanley reduced their target price on MongoDB from $455.00 to $320.00 and set an “overweight” rating on the stock in a research note on Friday, May 31st. JMP Securities cut their price target on shares of MongoDB from $440.00 to $380.00 and set a “market outperform” rating for the company in a research report on Friday, May 31st. KeyCorp cut their price target on shares of MongoDB from $490.00 to $440.00 and set an “overweight” rating for the company in a research report on Thursday, April 18th. Finally, Loop Capital cut their price target on shares of MongoDB from $415.00 to $315.00 and set a “buy” rating for the company in a research report on Friday, May 31st. One investment analyst has rated the stock with a sell rating, five have assigned a hold rating, nineteen have assigned a buy rating and one has issued a strong buy rating to the company’s stock. Based on data from MarketBeat.com, the company currently has a consensus rating of “Moderate Buy” and a consensus target price of $355.74.
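For readers wondering how a consensus figure like the $355.74 target is typically produced, the sketch below shows one common aggregation approach: a simple average of price targets plus a score-weighted rating. This is an illustrative assumption, not MarketBeat’s published methodology, and it uses only the handful of targets quoted above, so the numbers will not match the reported consensus exactly.

```python
# Illustrative sketch of one common way to aggregate analyst estimates into a
# consensus view. The 1-4 rating scale and simple averaging are assumptions;
# the article does not describe how MarketBeat computes its figures.
price_targets = [320.00, 380.00, 440.00, 315.00]   # only the targets quoted above
rating_counts = {"sell": 1, "hold": 5, "buy": 19, "strong buy": 1}
rating_scores = {"sell": 1, "hold": 2, "buy": 3, "strong buy": 4}

avg_target = sum(price_targets) / len(price_targets)
avg_score = sum(rating_scores[r] * n for r, n in rating_counts.items()) / sum(rating_counts.values())

print(f"average of quoted targets: ${avg_target:,.2f}")         # $363.75 (subset only)
print(f"weighted rating score: {avg_score:.2f} on a 1-4 scale")  # ~2.77, i.e. a moderate buy
```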


MongoDB Price Performance

Shares of MongoDB traded down $0.78 on Monday, hitting $252.40. The stock had a trading volume of 781,930 shares, compared to its average volume of 1,541,593. The firm has a market cap of $18.51 billion, a price-to-earnings ratio of -90.44 and a beta of 1.13. MongoDB has a 12-month low of $214.74 and a 12-month high of $509.62. The business’s 50-day moving average is $282.93 and its 200-day moving average is $355.06. The company has a current ratio of 4.93, a quick ratio of 4.93 and a debt-to-equity ratio of 0.90.
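The valuation metrics above are mechanically related; as a purely illustrative exercise (not data from the article), the snippet below back-solves the trailing EPS implied by the quoted price and P/E ratio, and the share count implied by the market cap.

```python
# Purely illustrative arithmetic: back out figures implied by the quoted metrics.
price = 252.40        # closing price quoted above
pe_ratio = -90.44     # trailing price-to-earnings ratio quoted above
market_cap = 18.51e9  # quoted market capitalization

implied_ttm_eps = price / pe_ratio   # P/E = price / trailing twelve-month EPS
implied_shares = market_cap / price  # market cap = price * shares outstanding

print(f"implied trailing EPS: {implied_ttm_eps:.2f}")                     # about -2.79
print(f"implied shares outstanding: {implied_shares / 1e6:.1f} million")  # about 73.3 million
```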

Insider Buying and Selling at MongoDB

In other MongoDB news, Director Hope F. Cochran sold 1,174 shares of the firm’s stock in a transaction that occurred on Monday, June 17th. The shares were sold at an average price of $224.38, for a total transaction of $263,422.12. Following the completion of the transaction, the director now owns 13,011 shares of the company’s stock, valued at $2,919,408.18. The sale was disclosed in a legal filing with the Securities & Exchange Commission, which is available through the SEC website. Also, Director John Dennis Mcmahon sold 10,000 shares of the business’s stock in a transaction on Monday, June 24th. The shares were sold at an average price of $228.00, for a total transaction of $2,280,000.00. Following the sale, the director now directly owns 20,020 shares of the company’s stock, valued at approximately $4,564,560. The disclosure for this sale is also available through the SEC website. Over the last quarter, insiders sold 35,179 shares of company stock valued at $9,535,839. Corporate insiders own 3.60% of the company’s stock.
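The dollar figures in insider-sale disclosures like these are simply shares multiplied by the average sale price; the short sketch below reproduces the reported arithmetic as a sanity check, using only the numbers quoted above.

```python
# Sanity check of the reported figures: transaction value = shares sold * average
# price; remaining stake value = shares still held * the same average price.
sales = [
    ("Hope F. Cochran", 1_174, 224.38, 13_011),
    ("John Dennis Mcmahon", 10_000, 228.00, 20_020),
]
for name, sold, avg_price, still_held in sales:
    print(f"{name}: sold ${sold * avg_price:,.2f}, "
          f"remaining stake ${still_held * avg_price:,.2f}")
# Hope F. Cochran: sold $263,422.12, remaining stake $2,919,408.18
# John Dennis Mcmahon: sold $2,280,000.00, remaining stake $4,564,560.00
```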

Institutional Investors Weigh In On MongoDB

A number of hedge funds have recently modified their holdings of MDB. DNB Asset Management AS boosted its holdings in MongoDB by 26.2% in the 4th quarter. DNB Asset Management AS now owns 19,503 shares of the company’s stock worth $7,974,000 after buying an additional 4,050 shares during the period. Exchange Traded Concepts LLC boosted its holdings in MongoDB by 36.3% in the 4th quarter. Exchange Traded Concepts LLC now owns 4,593 shares of the company’s stock worth $1,878,000 after buying an additional 1,223 shares during the period. Diversified Trust Co boosted its holdings in MongoDB by 3.9% in the 4th quarter. Diversified Trust Co now owns 3,454 shares of the company’s stock worth $1,412,000 after buying an additional 130 shares during the period. Wedmont Private Capital bought a new position in MongoDB in the 4th quarter worth $234,000. Finally, Beacon Capital Management LLC boosted its holdings in MongoDB by 1,111.1% in the 4th quarter. Beacon Capital Management LLC now owns 109 shares of the company’s stock worth $45,000 after buying an additional 100 shares during the period. Institutional investors own 89.29% of the company’s stock.
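The quarter-over-quarter percentage changes quoted above can be reproduced from the current share counts and the shares added during the period; a minimal sketch of that arithmetic, again using only figures from the article:

```python
# The percentage changes follow from shares added versus the prior position
# (current shares minus shares added). Figures are those quoted above.
positions = [
    ("DNB Asset Management AS", 19_503, 4_050),
    ("Beacon Capital Management LLC", 109, 100),
]
for name, current, added in positions:
    prior = current - added
    print(f"{name}: +{added / prior * 100:.1f}% ({prior} -> {current} shares)")
# DNB Asset Management AS: +26.2% (15453 -> 19503 shares)
# Beacon Capital Management LLC: +1111.1% (9 -> 109 shares)
```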

About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Article originally posted on mongodb google news. Visit mongodb google news



Shareholder Alert: Robbins LLP Informs Stockholders of the Class Action Filed Against … – Sina Hong Kong

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. Class Action Lawsuit

Shareholder sues MongoDB, Inc. for false statements about its business prospects

SAN DIEGO, July 15, 2024 (GLOBE NEWSWIRE) — Robbins LLP informs investors that a shareholder filed a class action on behalf of all investors who purchased or otherwise acquired MongoDB, Inc. (NASDAQ: MDB) securities between August 31, 2023 and May 30, 2024. MongoDB is an American software company that designs, develops, manufactures, and sells developer data platforms and integrated services systems through its document-oriented database program.

For more information, submit a form, email attorney Aaron Dumas, Jr., or give us a call at (800) 350-6003.

The Allegations: Robbins LLP is Investigating Allegations that MongoDB, Inc. (MDB) Misled Investors Regarding its Business Prospects

The complaint alleges that during the class period defendants disseminated materially false and misleading statements and/or concealed material adverse facts related to MongoDB’s sales force incentive restructure, including: a significant reduction in the information gathered by their sales force as to the trajectory for the new Atlas enrollments without upfront commitments; reduced pressure on new enrollments to grow; and a significant loss of revenue from unused commitments. Such statements absent these material facts caused investors to purchase MongoDB’s securities at artificially inflated prices.

The complaint continues that investors began to question the veracity of defendants’ public statements on March 7, 2024, during MongoDB’s earnings call following a same-day press release announcing its fiscal year 2024 earnings. Defendants purportedly announced anticipated near-zero revenue from unused Atlas commitments in fiscal year 2025, a decrease of approximately $40 million in revenue, attributed to the Company’s decision to change its sales incentive structure to reduce enrollment friction. Additionally, MongoDB estimated growth of just 14% for fiscal year 2025, compared to the 16% projected for the previous year, which had resulted in actualized growth of 31%. On this news, the price of MongoDB’s common stock declined from a closing market price of $412.01 per share on March 7, 2024, to $383.42 per share on March 8, 2024.

Plaintiff alleges that notwithstanding the March 7 disclosures, defendants continued to mislead investors as they continued to create the false impression that they possessed reliable information pertaining to the Company’s projected revenue outlook and anticipated growth while also minimizing risk from seasonality and macroeconomic fluctuations.

The truth finally emerged on May 30, 2024, when MongoDB again announced significantly reduced growth expectations, this time cutting fiscal year 2025 growth projections further, again attributing the losses to the Company’s decision to change their sales incentive structure to reduce enrollment frictions, along with some allegedly unanticipated macro headwinds. On this news, the price of MongoDB’s common stock declined from $310.00 per share on May 30, 2024, to $236.06 per share on May 31, 2024, a decline of nearly 24%.
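The percentage declines described in the complaint follow directly from the closing prices it cites; a minimal sketch of that arithmetic:

```python
# Percentage decline between the closing prices cited for each disclosure date.
drops = [
    ("Mar 7 -> Mar 8, 2024", 412.01, 383.42),
    ("May 30 -> May 31, 2024", 310.00, 236.06),
]
for label, before, after in drops:
    print(f"{label}: -{(before - after) / before * 100:.1f}%")
# Mar 7 -> Mar 8, 2024: -6.9%
# May 30 -> May 31, 2024: -23.9%  (the "nearly 24%" cited above)
```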

What Now: You may be eligible to participate in the class action against MongoDB, Inc. Shareholders who want to serve as lead plaintiff for the class must file their motions with the court by September 9, 2024. A lead plaintiff is a representative party who acts on behalf of other class members in directing the litigation. You do not have to participate in the case to be eligible for a recovery. If you choose to take no action, you can remain an absent class member. For more information, click here.

All representation is on a contingency fee basis. Shareholders pay no fees or expenses.  

About Robbins LLP: Some law firms issuing releases about this matter do not actually litigate securities class actions; Robbins LLP does. A recognized leader in shareholder rights litigation, the attorneys and staff of Robbins LLP have been dedicated to helping shareholders recover losses, improve corporate governance structures, and hold company executives accountable for their wrongdoing since 2002. Since our inception, we have obtained over $1 billion for shareholders.

To be notified if a class action against MongoDB, Inc. settles or to receive free alerts when corporate executives engage in wrongdoing, sign up for Stock Watch today.

Attorney Advertising. Past results do not guarantee a similar outcome.

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/301d3128-eb31-4e62-ad71-82fc624dfb5e


Article originally posted on mongodb google news. Visit mongodb google news



MONGODB, INC. (NASDAQ: MDB) INVESTOR ALERT – The Bakersfield Californian

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

NEW YORK, July 15, 2024 (GLOBE NEWSWIRE) — Bernstein Liebhard LLP announces that a shareholder has filed a securities class action lawsuit on behalf of investors (the “Class”) who purchased or acquired the securities of MongoDB, Inc. (“MongoDB”) (NASDAQ: MDB) between August 31, 2023 and May 30, 2024, inclusive (the “Class Period”).


Article originally posted on mongodb google news. Visit mongodb google news



MongoDB (MDB) Faces Securities Class Action Following Investor Scrutiny Over Sales …

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

SAN FRANCISCO, July 15, 2024 (GLOBE NEWSWIRE) — Hagens Berman urges MongoDB, Inc. (NASDAQ: MDB) investors who suffered substantial losses to submit their losses now.

Class Period: Aug. 31, 2023 – May 30, 2024
Lead Plaintiff Deadline: Sept. 9, 2024
Visit: www.hbsslaw.com/investor-fraud/mdb
Contact the Firm Now: MDB@hbsslaw.com 
  844-916-0895


Class Action Lawsuit Against MongoDB, Inc. (MDB):

MongoDB Inc., a provider of document-database software, now faces an investor class action after revising its fiscal year 2025 revenue guidance twice in recent months.

Previously, MongoDB management expressed confidence in its ability to achieve its FY 2025 revenue targets. This confidence stemmed from initiatives such as restructuring sales force incentives and reducing pressure on upfront customer commitments. Additionally, management emphasized efforts to mitigate risks associated with seasonal trends and broader economic factors.

However, on March 7, 2024, MongoDB surprised investors with a downward revision of its full-year guidance. The company attributed the decrease to a restructuring of its sales force compensation model, which it said led to a decline in upfront customer commitments and revenue from multi-year licensing deals. Notably, management acknowledged this change could have resulted in higher guidance if sales capacity had not been impacted. This news led to a significant decline in MongoDB’s stock price.

Despite these revisions, management maintained its view that the overall business environment for FY 2025 would be similar to the previous year.

Then, on May 30, 2024, MongoDB further revised its FY 2025 guidance downward. This time, the company cited macroeconomic factors affecting customer adoption of its MongoDB Atlas product, along with the ongoing impact of the sales force restructuring. These disclosures resulted in another substantial drop in the company’s share price.

The cumulative effect of these revisions has eroded shareholder value by an estimated $7.5 billion.

Investors have now brought suit against MongoDB and its most senior executives, alleging they gave the false impression that they possessed reliable information pertaining to the Company’s projected revenue outlook and anticipated growth, while also minimizing risk from seasonality and macroeconomic fluctuations. The complaint alleges that when making these statements, executives knew that macro headwinds were worsening and that MongoDB’s sales force incentive restructure was reducing new enrollments.

“We’re investigating whether MongoDB may have downplayed known risks posed by its sales incentive restructure and macroeconomic factors,” said Reed Kathrein, the Hagens Berman partner leading the investigation.

If you invested in MongoDB and have substantial losses, submit your losses now »

If you’d like more information and answers to frequently asked questions about the MongoDB case and our investigation, read more »

About Hagens Berman
Hagens Berman is a global plaintiffs’ rights complex litigation firm focusing on corporate accountability. The firm is home to a robust practice and represents investors as well as whistleblowers, workers, consumers and others in cases achieving real results for those harmed by corporate negligence and other wrongdoings. Hagens Berman’s team has secured more than $2.9 billion in this area of law. More about the firm and its successes can be found at hbsslaw.com. Follow the firm for updates and news at @ClassActionLaw

Contact:
Reed Kathrein, 844-916-0895



Article originally posted on mongodb google news. Visit mongodb google news



MONGODB, INC. (NASDAQ: MDB) INVESTOR ALERT: Investors With Large Losses in …

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

NEW YORK, July 15, 2024 (GLOBE NEWSWIRE) — Bernstein Liebhard LLP announces that a shareholder has filed a securities class action lawsuit on behalf of investors (the “Class”) who purchased or acquired the securities of MongoDB, Inc. (“MongoDB”) (NASDAQ: MDB) between August 31, 2023 and May 30, 2024, inclusive (the “Class Period”).

For more information, submit a form at MongoDB, Inc. Shareholder Class Action Lawsuit, email Investor Relations Manager Peter Allocco at pallocco@bernlieb.com, or call us at (212) 951-2030.

According to the lawsuit, MongoDB made false and misleading statements related to MongoDB’s sales force incentive restructure.

If you wish to serve as lead plaintiff for the Class, you must file papers by September 9, 2024. A lead plaintiff is a representative party acting on other class members’ behalf in directing the litigation. Your ability to share in any recovery doesn’t require that you serve as lead plaintiff. If you choose to take no action, you may remain an absent class member.

All representation is on a contingency fee basis. Shareholders pay no fees or expenses.

Since 1993, Bernstein Liebhard LLP has recovered over $3.5 billion for its clients. In addition to representing individual investors, the Firm has been retained by some of the largest public and private pension funds in the country to monitor their assets and pursue litigation on their behalf. As a result of its success litigating hundreds of class actions, the Firm has been named to The National Law Journal’s “Plaintiffs’ Hot List” thirteen times and listed in The Legal 500 for sixteen consecutive years.

ATTORNEY ADVERTISING. © 2024 Bernstein Liebhard LLP. The law firm responsible for this advertisement is Bernstein Liebhard LLP, 10 East 40th Street, New York, New York 10016, (212) 779-1414. Prior results do not guarantee or predict a similar outcome with respect to any future matter.

Contact Information:

Peter Allocco
Investor Relations Manager
Bernstein Liebhard LLP
https://www.bernlieb.com
(212) 951-2030
pallocco@bernlieb.com



Article originally posted on mongodb google news. Visit mongodb google news
