
Panelists: Supratip Banerjee, Soumyadip Chowdhury, Ana Medina, Uma Mukkara
Article originally posted on InfoQ.

Transcript
Losio: In this session, we’ll chat about resilience and more so chaos engineering in what we call a Kubernetes world.
Just a couple of words about the topic, what resilience and chaos engineering mean in a Kubernetes world. Chaos engineering is not something new. It has been popular for many years, but container orchestration presents specific challenges. That's why we brought together five experts from different backgrounds, different industries, and different continents, to understand and discuss together what best practice looks like for chaos engineering and chaos testing.
My name is Renato Losio. I’m a cloud architect. Chaos engineering is a topic I’m super interested in, but it’s not my area. I’m really keen to listen to the amazing panelists we have to know more about this topic. I’m an editor here at InfoQ. We are joined by five experts coming from different industries, different companies, different backgrounds.
Medina: My name is Ana Margarita Medina. I’m a Senior Staff Developer Advocate with ServiceNow Cloud Observability, formerly known as Lightstep. I got started in the topic of chaos engineering in 2016. I had the pleasure of working at Uber as a site reliability engineer, and I got a chance to work on the tooling that they had for chaos engineering, being on-call, maintaining it, and educating folks on how to use that. Then I went on to work for a chaos engineering vendor for four years, Gremlin. Getting folks started with chaos engineering, giving them material, getting them their first chaos game days and stuff like that, which was a lot of fun. Now I focus more on the world of observability, but it’s always really nice to stay close to the world of reliability and tell folks like, you got to validate things with something like chaos engineering.
Banerjee: This is Supratip Banerjee. I currently work with EXFO as a senior architect. I have around two decades of industry experience. My specializations are designing large-scale enterprise applications, the DevOps ecosystem, and cloud computing. I'm also an AWS Community Builder. I love exploring new technologies and tools. I came across chaos engineering at one of my previous employers, where I was facing downtime issues with cloud-based and also on-premise applications. I was looking for tools and technologies for resilience, which is one of the very important non-functional requirements. That's where I was introduced to the chaos engineering concept. Afterwards, I tried Chaos Monkey, Chaos Toolkit, AWS FIS, all these different tools, to take care of that.
Mukkara: I work as the head of chaos engineering at Harness. I came to Harness through the acquisition of my company, ChaosNative, which I created to build an enterprise version of LitmusChaos. My entry into the world of chaos engineering was back in 2017, when we were trying to test the resilience of some storage products, especially on Kubernetes. That was something new at the time, how to test the resilience of Kubernetes workloads. I ended up creating a new tool called LitmusChaos, open sourcing it, and donating it to CNCF; it is now at incubating level. It's been about 6 to 7 years of a journey in the world of chaos engineering, and I'm still learning from large enterprises about the real challenges in adopting and scaling it, and really getting something out of chaos engineering at the end of the day.
Losio: I’m sure we’ll come back to open-source tools we can use to address all the challenges we’re going to discuss.
Roa: I am Yury Niño. I work as an application and monetization engineer at Google Cloud. About my professional journey: I have a bachelor's in systems engineering and a master's in computer science. I have worked as a software engineer, solutions architect, and SRE for probably 12 years. My first discovery of chaos engineering was 8 years ago, when I was working as a software engineer at Scotiabank. I had a challenge with the resilience and performance of [inaudible 00:07:08], which is a common architecture in our banks in Colombia. It was deployed on on-premise infrastructure, and I used a circuit breaker to solve it. Chaos engineering was key in that process; that is the moment I discovered it. Since then, I have been involved in many initiatives related to it in my country and in other places. Since I am also a professor at a university in Colombia, I've been researching how chaos engineering, human factors, and observability can be useful in solving challenging issues with the performance and reliability of applications.
Chowdhury: My name is Soumyadip. I've been working at Red Hat for the last three years as a senior software engineer. I mostly work on the backend, cloud native, Kubernetes, and OpenShift. I came to know about chaos engineering through one requirement of our project: to test the resilience of the microservices, and what the hypothesis of the system could be. Then I explored a few other tools, like Chaos Monkey and Chaos Mesh. I think Chaos Mesh is now a CNCF project. I have used that tool to build the ecosystem. There are a few other tools I was also exploring, like Kraken. This is how I started my chaos journey. It's been three years in the chaos domain. Yes, still learning.
What is Chaos Engineering?
Losio: I think I already heard a few words coming up multiple times, tools that have been used, Chaos Monkey, and so on. We would like to focus first on Kubernetes and what the specific challenges are in the Kubernetes world for chaos engineering. Maybe first, would any of you like to give a couple of words on what we mean by chaos engineering, and what we don't mean by it? I've heard so many definitions that I've never brought myself to define it. Do you want to give just a very short intro on what we usually mean by chaos engineering, without really focusing on Kubernetes yet?
Mukkara: I'll probably just start by saying what is not real chaos engineering; that automatically shows what chaos engineering is. Chaos engineering is not just introducing faults. Chaos engineering is really about introducing faults and observing the system against your expectations. You need to define the steady-state hypothesis, what it means for the system to be resilient. Once you have a clear definition of it, then you start breaking the system, or introducing some faults into it, start scoring against your expectation, and start noting down your observations. Then use them however you want. Use them to take a decision, whether I move my deployment from left to right, or just file a bug and stop everything, or give some feedback. You can use it in many ways, but chaos engineering is really about introducing faults and observing against your expectations. That's really what it is.
Medina: It's very much doing fault injection, but in a very thoughtful manner. You don't want to do it in a random way, without communicating, or anything like that. I think that's one of the biggest misconceptions sometimes. And of course, observing it is probably the best bet you have. You want to know what the baseline of your system is, and you want to see how the chaos affects it as it goes on. Some folks forget that part: it goes through the entire process of seeing how it would actually behave in a real-life system.
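The loop the panelists describe, verify the steady-state hypothesis, inject a fault, observe, and score against expectations, can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: a real harness would, for example, delete a pod and query an observability backend for the live error rate.

```python
def steady_state_ok(error_rate, threshold=0.01):
    """Steady-state hypothesis: the error rate stays below 1%."""
    return error_rate < threshold

def run_experiment(inject_fault, measure_error_rate, threshold=0.01):
    """Minimal chaos loop: check steady state, inject, observe, score."""
    if not steady_state_ok(measure_error_rate(), threshold):
        raise RuntimeError("Not in steady state; abort before injecting.")
    inject_fault()
    observed = measure_error_rate()
    return {"passed": steady_state_ok(observed, threshold),
            "observed_error_rate": observed}

# Hypothetical stand-ins for the fault injector and the metric source.
result = run_experiment(inject_fault=lambda: None,
                        measure_error_rate=lambda: 0.002)
```

The point of the sketch is the ordering: the fault is only injected once the system demonstrably meets its own definition of "resilient", and the result is a score against that definition, not just "did it break".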
Common Pitfalls in Chaos Engineering
Losio: Before going really deep in which tools, which technology, and how to really address in the Kubernetes world, I had one more question I’m really interested in. What are the common pitfalls developers should be aware of when dealing with chaos engineering? Until now, we say what we mean by chaos engineering, it’s like, for someone like myself that I’m familiar with the topic, but I’ve never really put myself into it. I’ve never played with Kubernetes and chaos engineering. What are common pitfalls we should really be aware of?
Roa: I would like to clarify that this is precisely the difference between chaos engineering and classic testing: in chaos engineering, we have the possibility to observe the results and provide a resilience strategy.
Regarding the pitfalls, in my journey I have made a lot of mistakes with chaos engineering, and I could summarize them in three things. The first is inadequate planning and preparation, because it is really important to have the proper tools, to know the steady state of your systems, and to have observability, of course. The second is starting with unrealistic or complex experiments. In my experience, it's better to start with simple experiments, probably manual scripts or another simple strategy, and as you progress, move to more complex or sophisticated tools. The third is misunderstanding the results. That is the reason you need observability here: it is really important to understand the results you are getting from the experiments. Regarding planning, I have seen that a lack of clear objectives, insufficient monitoring and observability, and neglected communication and collaboration form the perfect recipe for failure in real chaos. Other pitfalls include ignoring negative results and not iterating or improving, for example. Those, in my experience, are the common pitfalls I have seen in my implementations with customers.
Banerjee: I just wanted to expand a little on the clear objective part. I would actually tell the developers, or whoever is strategizing it, to first understand the business need, whether for the application or something else. There are different ways to understand what the customer or the business needs. There are customer contracts, SLAs and SLOs, service level agreements and service level objectives. These documents clearly state how much availability the customer expects from the service provider. Maybe it's 99%, or maybe it's 99.5%. That is the first stage: we know from there that 0.5% or 1% of downtime is probably ok, at least as far as the contract is concerned, even if in practice it is not.
From there, we can take the next step of understanding the customer better: whether the customer gets frustrated, or whether they are ok with a little bit of downtime. Then we can step into the application. We can get a high-level understanding of what that downtime means to that customer, whether it means a loss of millions or a loss of reputation. That is understanding the objective. Then go to the next step of getting technical: understanding the baseline metrics, planning it properly, and before going to production, maybe doing it in a QA environment.
Chowdhury: When we start as developers, when we get the task to introduce some chaos into our cloud-native domain, the temptation is not to start with a small component. Say we have 10 microservices: without measuring what the blast radius can be, without measuring the boundaries, we just inject faults into most of the system. In that scenario we end up with weaker hypotheses and weaker analysis, because we are breaking everything at once. We should be very cautious, very deliberate, when we are injecting something into a system, and think about what the consequences can be. Otherwise it becomes just like normal testing, and it may violate some key constraint. In chaos engineering, we have metrics and a hypothesis for a specific microservice or a specific application, and those can get violated when we test the entire system rather than going component by component.
Examples When Chaos Engineering Saved the Day
Losio: We just mentioned the importance for a company of maintaining a certain SLO. Apart from the very famous ones, what examples from your experience can help an attendee understand? Usually, when I consider a new tool, I want an example of how a company managed to save their infrastructure: how chaos engineering revealed a significant vulnerability or led to unexpected outcomes in a deployment. A Kubernetes one would be lovely, but not necessarily.
Mukkara: I'll actually take an example from Kubernetes, since that's the topic here. Kubernetes chaos scenarios are slightly different, sometimes a lot different, compared to traditional chaos engineering. For everyone who has not been practicing chaos engineering in recent times, chaos engineering used to mean Chaos Monkey: breaking some deep infrastructure in large systems and seeing whether performance stays up. That's the old method. Why chaos engineering is so important these days is the way Kubernetes is designed and the way cloud native workloads operate. Kubernetes is a horizontally scalable, distributed system, and developers are at the center of building and maintaining it.
One example that I saw in the very early days is what can happen to your large deployment when a pod is deleted. Pod deletes are very common; developers know pods are always getting deleted. What happened in this case, on a hyperscaler system, was that autoscaling was enabled, which means that when traffic comes in and you need more resources, new nodes are spun up. The system was not configured properly, and a pod got deleted; there are multiple reasons why a pod can get deleted. The pod got deleted, traffic kept coming in, more resources were needed, so a new node was spun up. But that was just not enough. We then saw the system spin up tens of nodes, sometimes all the way to 100 nodes, just because one pod got deleted. That was a resilience issue, and the traffic was still not being served.
What exactly happened in that system is that the readiness probes were not configured. It's a simple configuration issue. The system thinks the pod is there but not ready, so it asks for more pods, and the traffic is still coming in; it needs to serve that traffic, and the autoscaler is enabled. On one side, your system is not ready. On the other side, you are paying for 100 nodes. It's a loss on both sides. This kind of test teaches a developer not to assume that only code can cause a problem: a configuration can cause a major problem as well. Developers have to keep an eye on how their deployment is going to be configured. Writing some tests around deployment misconfigurations, and then adding chaos tests on top, can make your solution really sturdy. There's a saying in Kubernetes: build once, run anywhere. If you run it anywhere, you need to really test what can go wrong when it runs somewhere else. These are some of the examples. Chaos testing can really add value for the most common cases as well.
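The missing piece in that story is a readiness probe on the workload's containers. A minimal sketch of one, with hypothetical image, path, and port values, looks like this:

```yaml
# Fragment of a Deployment's container spec (names and values are
# illustrative; tune them to the application's actual health endpoint).
containers:
  - name: web
    image: example/web:1.0
    ports:
      - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /healthz      # endpoint that reports "ready to serve"
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3   # mark unready after 3 consecutive failures
```

With a probe like this in place, Kubernetes knows when a replacement pod is actually ready to receive traffic, instead of the cluster endlessly scaling out against pods it believes exist but cannot serve.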
Losio: Do you want to add anything specific? Do you have any suggestion to share or common example where basically chaos testing in a Kubernetes deployment can raise some vulnerability?
Roa: In my experience, for example with e-commerce applications deployed on clusters, chaos engineering has been key in the identification of vulnerabilities. As Uma mentioned, these types of architectures need resilience all the time. There are a lot of components that make up these architectures, and we have challenges related to, for example, single points of failure, or even providing observability. I see architectures based on microservices and Kubernetes all the time, and in these cases customers implement patterns like retries and circuit breakers to provide resilience, which do not always work well. Although the literature and the theory are clear about these patterns, in practice we have a lot of issues with the implementations. Having tools and methodologies related to chaos engineering, and to testing the infrastructure, is key to testing these patterns and providing the proper solutions.
On the other side, I really value that chaos engineering provides knowledge about the architecture that is useful for writing, for example, playbooks or runbooks for our operations engineers. I would like to mention that in relation to the value of chaos engineering and tools like Litmus, Gremlin, and others in the market providing resilience. As a specific example, I remember an issue with an Envoy in an Apigee architecture with a customer in the past. The Envoy was creating a single point of failure, and we used chaos engineering to bombard the services with a lot of requests, overwhelming the system and simulating a failure. With the information collected, with the observability provided by that exercise, we were able to determine the configuration parameters for the circuit breakers and the other components in the resilience architecture.
Losio: Anyone else wants to share examples that they had from real-life scenarios?
Medina: I actually had a few cases of doing chaos engineering on Kubernetes where you ended up finding out data loss would happen. One of the main ones I vaguely remember was an out-of-the-box implementation of Redis, where you had your primary pod serving the database, and you also had your secondary pod. Just from the way the architecture looks, you pretty much form the hypothesis that no matter what happens to my primary pod, I'm still going to have complete consistency. We ran a shutdown chaos engineering experiment on the primary, and all of a sudden you see that your secondary pod shows the database is completely empty. This was just a straight out-of-the-box configuration that the documentation said was going to be reliable.
When you look under the hood, with observability and more debugging, you notice that your primary pod had the data, and as it shut down, the secondary looked at the primary and said, that's becoming empty, so I'm going to become empty too. All of a sudden you have complete data loss. We ended up learning that there was a lot more configuration you need to do with Redis, setting up Redis Sentinel, in order to not have issues like that. You wouldn't necessarily know that until you go ahead and run an experiment like this, which could be as simple as, what happens if my pod goes missing?
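For reference, a minimal Redis Sentinel configuration that monitors a primary and coordinates failover looks roughly like the fragment below; the host name, master name, and timeouts are illustrative, not from the deployment Medina describes.

```conf
# sentinel.conf fragment: monitor a primary named "mymaster" at
# redis-primary:6379, requiring 2 sentinels to agree it is down.
sentinel monitor mymaster redis-primary 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
```

With Sentinel in place, failover is an explicit, quorum-based promotion rather than a replica blindly mirroring whatever state, including emptiness, it last saw on the primary.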
Unique Challenges of Chaos Engineering in a Kubernetes Environment
Losio: I was thinking, as a developer, what are the unique challenges of chaos engineering in a Kubernetes environment? Let's say I'm coming with experience from a Chaos Monkey scenario, autoscaling EC2 instances, and I'm now moving everything to Kubernetes. What's unique, what's different, assuming I already have previous experience in chaos engineering?
Banerjee: I think Kubernetes acts in a very strong and intelligent way, and it has different functionalities; the resilience is very strong there. It has dynamic and ephemeral infrastructure. If a pod goes down, the Kubernetes scheduler automatically restores it, and that creates a problem for chaos testing. As Uma just mentioned, observability is very important: we run a test and then try to understand how the system is behaving. Kubernetes is so fast in that way that it is difficult to understand how much impact the failure had, because, to take that example, the pod got quickly restored. That is one. Another example is that microservices have a very complex communication system. One microservice calls another, and that one probably calls two more. The problem is this very complex interdependence: if one service fails, it can cascade to others in a very unpredictable way.
Again, that makes it very difficult to understand the impact of chaos testing. Another example is autoscaling, like you were mentioning, and the self-healing mechanisms Kubernetes has. Kubernetes tends to scale with demand: if the number of API calls goes up, it will scale up immediately, or scale down, based on the load on the APIs being called. That makes it very difficult to observe or understand how the chaos test is actually performing. These are some of the examples.
Chowdhury: I just want to add one thing I have personally faced. Say we have multiple environments: QA, dev, stage, and prod. There are different configurations, different workloads, different types of deployments. In some we go for a multi-tenant solution; in others we don't, because of resource limitations. What we have faced is that how your chaos experiment behaves in your staging environment is not how it behaves in your production environment. The hypothesis you arrive at, the analysis you do in the stage or QA environment, will not carry over 100% to production. It might be the complete opposite; it might be somewhere close. These are some limitations of chaos testing.
Mukkara: The difference with Kubernetes, in my observation, compared to legacy systems: when there is a change in a legacy system, it's local, and you go and test that local system. But Kubernetes itself is changing all the time. There is an upgrade to the Kubernetes system, so as an application you have to assume you are affected. The question becomes what tests to run, and the number of tests you end up running is almost everything you have. The number of chaos tests you run for a Kubernetes upgrade is vastly different on Kubernetes versus a legacy system. That's one change we've been observing. A lot of resilience coverage needs to be given on Kubernetes. Don't assume that a certain system is not affected; it may be affected. Kubernetes is complex.
Designing Meaningful Chaos Experiments
Losio: We just mentioned the challenges of Kubernetes itself, and the challenge of running tests across different scenarios, dev, staging, production, with different hypotheses and different expectations. One thing I always wonder, and it's not specific to Kubernetes, is this: when designing chaos experiments, what factors should you consider to ensure the tests are actually meaningful, in the sense that they reflect your real-world scenario? It's a bit like the problem I have when I build any kind of test: how do I make sure that what I'm doing reflects a real scenario? Maybe because I'm quite new to the Kubernetes world, I wonder how you address that. I can write my list of test scenarios I want to check; how can I make sure they are meaningful, that they make sense, and that I'm not just writing random tests to cover scenarios? What are the challenges in chaos engineering in that sense?
Roa: The challenges related to that? I think, first, the dynamic nature of the architectures. In Kubernetes, the pods constantly change; as Ana mentioned, a pod can be scheduled or terminated at any moment. That dynamism is the first challenge. Then, distributed systems: as Supratip mentioned, Kubernetes applications are often composed of multiple microservices spread across different nodes and clusters. In real scenarios there are a lot of components connected, sending messages. That is really challenging, because with chaos engineering we have the possibility to inject a failure into a machine, a node, or a database, but when you have to test the whole architecture, that, for me, is the hard part. There are also the complex interactions, because Kubernetes involves numerous components, like pods and services, as I just mentioned. Injecting failures into a specific component can have cascading effects. That is where the importance of observability comes in: considering the cascading effects on other components, it is difficult to understand what happens at those specific points.
Finally, I think the ephemeral resources, because in Kubernetes resources are designed to be ephemeral, meaning the cluster creates and recreates infrastructure when you have, for example, an overload in the workload in the cloud. It's really difficult to test that, because you have to run the test in a specific context, and after an overload you have a new, recreated infrastructure underneath. Those are the challenges, the things you have to consider if you want to run chaos engineering in a real environment. In Google, we use disaster recovery testing, which is a practice similar to chaos engineering. In my other experiments at the university, for example, I have to create simulated exercises. For me, the most challenging thing is trying to simulate these interactions and this dynamism in real scenarios.
Losio: What are the factors you consider to ensure that your tests are meaningful?
Banerjee: Be very careful while testing in production; understand the impact. It's very important, otherwise you may just break the application that live users are using. To understand it, we need to do a few things. We have to understand the relationships within the infrastructure we have. We have to start very small, not go in a huge way and break everything. We also need to observe, document, and analyze the results, so we know what is happening: observability, like everyone is saying. Then go to the next iteration and try to improve from there.
Simulating I/O Faults in K8s
Losio: Yury just mentioned injecting different types of faults, and in particular the topic of I/O and ephemeral resources. How can you simulate I/O faults on Kubernetes when playing with chaos engineering? Is that something you can do?
Chowdhury: In one of our architectures, we had SSE, server-sent events, in place. That was basically our SOT, our source of truth. Once we started injecting chaos into that main component, where we had an operator in place, we didn't expect that it would cause a few other microservices to break as well. It ended up breaking a lot of microservices. You are not sure every time, because SSE is something very lightweight and not resource-hungry, and we didn't even assume it could go to the extent of breaking the entire system. We had taken measures for Mongo and all the database things, but we didn't think in terms of SSE, because once you crash any pod, you have to establish that connection with another pod.
If that pod does not start, then the pods associated with the source of truth, the operator where we have the SSE producer, cannot establish the connection. That's why it ended up breaking all the microservices consuming that SSE on the I/O input side. This is something I have faced, where your connection does not get re-established every time. If it's long polling or a constant connection, that can be a difficult scenario for the users.
Where to Start, with Chaos Engineering
Losio: I would like to talk from the point of view of a software developer. I have my Kubernetes cluster on a cloud provider, using EKS or whatever else, on AWS, or Microsoft, or Google. I'm new to chaos testing and I want to start from scratch. In some ways I'm lucky that I don't have to manage everything myself. Earlier we mentioned starting small, with an iterative approach, not taking on the biggest thing first. Is there any tool, anything I can use, open source or not? If I go out of this call and I want to start, where should I start? I haven't done chaos engineering before, and that's my Kubernetes setup.
Medina: One of the first things I would ask is: what budget and support do you have? Sometimes you have no budget and you're doing this as a team of one, in which case you might need to go to open-source tools. From open-source tools, we've mentioned LitmusChaos and Chaos Mesh; those are the two I'm most familiar with. Folks can get started with those. If you're in a non-Kubernetes environment, there are the concepts of Chaos Monkey, where you can just do things manually. Yury mentioned circuit breakers.
Coming from a place where I've worked with vendors, I also really like that vendors provide a lot of education, a lot of handholding, as you're getting started with chaos engineering. Something like Harness, something like Gremlin, where folks get a suite of tests they can actually start with, really is helpful. From there, of course, it very much varies with what type of environment you have. Start out with something simple, like some resource-limit experiments.
Losio: I was really thinking of a developer starting from scratch. I fully understand that an enterprise is a very different scenario: I'm probably not going to do it alone, and I probably need some support and training as well.
I/O Faults
Mukkara: I/O faults are definitely not easy, for two reasons. One is that you won't get permission to introduce an I/O fault, because most likely you're not the owner of the application, or of the infrastructure either. But I/O faults are important, and they are the ones that usually cause the maximum damage. For example, a database does not come up on the other side, or the database is corrupted, or disaster recovery is not working as expected or not coming up in time, or migrations are not happening. There are so many scenarios you want to test using these faults. One way is to work on the receiving side of the data. You have a database, I am the consumer of the data, and the data comes through the file system on my node. The database exists elsewhere, but the data has to come through my file system, so there are faults available such as making the file attribute read-only.
The database is all good, but on your file system you can mark it as read-only. Then the volume mounted for the database becomes read-only, and that triggers either a move to another node, or sometimes it can trigger disaster recovery as well. There are systems, for example Harness Chaos, with I/O faults that many people use. When it comes to open-source tooling, I think LitmusChaos is a good tool; please do use it. There are free enterprise tools available as well, just like GitHub is free for everyone, to some extent, for open-source projects.
Something similar: for example, Harness has a free plan, no questions asked, with all enterprise features available, made super easy. The only catch is that you can run a limited number of experiments in a month. To get started, you have everything at your fingertips, made easy. Those are some of the choices. Chaos practice is becoming common; it's almost becoming a need, not an option. You see multiple vendors giving you choices, and it's not just a freemium model, it's a real free model as well. We have offered a completely free service, no questions asked, no sales page. You can just go and get everything for free for a certain number of runs a month.
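As one concrete way to declare this kind of fault, Chaos Mesh (mentioned elsewhere in this discussion) exposes an IOChaos resource. The sketch below, with illustrative labels and paths, makes half of the I/O calls under a mounted volume fail with EIO, roughly the "disk goes bad" scenario Uma describes:

```yaml
apiVersion: chaos-mesh.org/v1alpha1
kind: IOChaos
metadata:
  name: io-fault-demo
spec:
  action: fault          # return an error instead of performing the I/O
  mode: one              # target one pod matching the selector
  selector:
    labelSelectors:
      app: demo-db       # hypothetical label of the consuming workload
  volumePath: /var/lib/data
  path: /var/lib/data/**/*
  errno: 5               # EIO: simulates a failing disk
  percent: 50            # affect half of the I/O calls
  duration: "60s"
```

The appeal of the declarative form is that the fault is scoped and time-boxed: it applies only to the selected pod and path, and it expires on its own after the stated duration.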
Chaos Engineering Tools
Roa: When you are starting with chaos engineering, it's really important to consider the chaos maturity model, because the first thing to consider is where you are. The maturity model matters because it provides criteria to determine what you need in order to progress in the adoption. The book published by Casey Rosenthal and Nora Jones includes a chaos maturity model, and recently I was reviewing a chaos maturity model published by Harness. It is a really good asset to assess whether you are a beginner, in the middle phase, or in a final phase. Regarding good tools, I like Litmus, because it provides a friendly console to run chaos exercises on your infrastructure. I would also mention that in Google we have custom tooling that is not open and not published in the market. Among open-source and commercial tools, I like Litmus and Gremlin. I think Gremlin includes a free tier with simple exercises, which is interesting and useful for starting the adoption.
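To give a flavor of what running a Litmus exercise looks like, a pod-delete experiment is typically declared through a ChaosEngine resource, roughly as below; the names, labels, and service account here are placeholders, not a ready-to-use manifest:

```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: demo-engine
  namespace: default
spec:
  engineState: active
  appinfo:
    appns: default
    applabel: app=demo        # label of the target deployment
    appkind: deployment
  chaosServiceAccount: pod-delete-sa
  experiments:
    - name: pod-delete
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION
              value: "30"     # run the chaos for 30 seconds
            - name: CHAOS_INTERVAL
              value: "10"     # delete a pod every 10 seconds
```

The experiment definitions themselves come from the LitmusChaos hub and need to be installed in the cluster, along with a service account with the right permissions, before an engine like this will run.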
Losio: I wanted to ask as well for an example of a paper, book, presentation, tool, whatever, that every developer should not miss.
Banerjee: I just wanted to add two more tools that I have personally used. The first one is Chaos Monkey. It is not necessarily Kubernetes-specific, but Chaos Monkey helps: it attacks applications, and I have personally used it with Spring Boot. Spring Boot is generally used to write microservices these days in modern applications. It attacks microservices running as Spring Boot applications and simulates these kinds of errors. There are public cloud managed tools as well. One I have used is AWS FIS, the Fault Injection Service. There, you can write your tests, and it will make your ECS, EC2, and EKS services, including Kubernetes, a little shaky, so you can test them as well.
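For readers who want to try the Spring Boot variant Banerjee mentions, the widely used codecentric Chaos Monkey for Spring Boot is switched on through application properties. A minimal sketch, assuming that library is on the classpath; the watcher and latency values are illustrative and should be checked against the version in use:

```yaml
spring:
  profiles:
    active: chaos-monkey      # activates the chaos-monkey profile
chaos:
  monkey:
    enabled: true
    watcher:
      rest-controller: true   # attack @RestController beans
    assaults:
      latency-active: true    # inject artificial latency into calls
      latency-range-start: 1000
      latency-range-end: 3000
```

With a configuration like this, requests to the application's REST controllers are randomly delayed, which is a low-risk way to observe how downstream callers handle slowness.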
Chowdhury: There are a lot of enterprise tools, and a lot of open-source tools, that you can use. If you have little or no idea about chaos engineering, then, as Uma just said, Litmus is a great tool. You can also go for Chaos Mesh, which is very easy for beginners to use. There you get a dashboard, CLI support, and everything. You can directly write all those scripts and run your chaos in a controlled environment. Chaos Mesh is something that you can try, and the community support is also good.
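As a concrete illustration of how approachable these tools are, a pod-kill experiment in Chaos Mesh is only a few lines of YAML. A sketch, where the target namespace and label are hypothetical:

```yaml
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: pod-kill-demo
  namespace: chaos-testing
spec:
  action: pod-kill
  mode: one                  # kill a single randomly chosen pod
  selector:
    namespaces:
      - demo                 # hypothetical target namespace
    labelSelectors:
      app: example-service   # hypothetical target label
```

Applying this with kubectl apply -f kills one randomly selected matching pod, a deliberately small blast radius for a first experiment.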
Resources and Advice on Getting Started with Chaos Testing
Losio: I'm thinking as a cloud architect, cloud developer, or Kubernetes expert: I joined this roundtable, and you convinced me that it's super important, actually a must now, to introduce chaos testing for my workloads. I cannot have my cluster running as I did until today. I kept my head in the sand until now, and I haven't really thought about that. That's cool. I want to leave this session and do something tomorrow, this iteration, something small. Where should I start? I'm not in the scenario of a big enterprise; I'm in the scenario of, I have my Kubernetes cluster, and I want to start to do something, to learn something. It can be as simple as reading one article, or trying to inject your first fault with a bash script. What is your advice, something that a developer can do in half a day, not in the next six months?
Mukkara: Tomorrow morning, don't be in a rush to inject a fault. The first thing you can do, if you really want to add to your resilience, is to spend some time observing what your top steady-state hypothesis points could be, and note them down. The second point is the service dependencies: what are my critical services, and how are they interdependent? The third point is: what went wrong recently in my system, and what do I, as a cloud architect, think can go wrong? Then, pick a test that causes the least blast radius; don't try to go and cause a disaster. You will face a lot of opposition. Winning over your fellow teammates to introduce the practice of chaos engineering is one of the biggest challenges. What I've been advocating is: don't try to prove that there is a problem. Try to prove that there is resilience. I go and break a system, break a component, and still prove that there is resilience, like, now you can be confident. Then add more resilience scenarios. In that process, you might find more support. Yes, good, 10 of them are working, the 11th one not so much. Let me go look at it.
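Mukkara's first step, writing the steady-state hypotheses down before touching anything, can be made concrete as a simple check that runs before and during an experiment. A minimal sketch; the metric names and thresholds are hypothetical, not from the discussion:

```python
# Minimal steady-state hypothesis check: compare observed metrics
# against thresholds written down *before* any fault is injected.

# Hypothetical steady-state hypotheses for a service.
HYPOTHESES = {
    "error_rate": lambda v: v < 0.01,      # under 1% errors
    "p95_latency_ms": lambda v: v < 300,   # p95 latency below 300 ms
    "healthy_replicas": lambda v: v >= 2,  # at least 2 replicas up
}

def steady_state_ok(observed: dict) -> dict:
    """Return a per-hypothesis pass/fail map for the observed metrics."""
    return {name: check(observed[name]) for name, check in HYPOTHESES.items()}

# Observations taken during a small-blast-radius experiment.
observed = {"error_rate": 0.004, "p95_latency_ms": 250, "healthy_replicas": 3}
results = steady_state_ok(observed)
print(results)               # every hypothesis should hold here
print(all(results.values())) # prints True
```

Running the same check while a fault is active turns "prove that there is resilience" into a concrete pass/fail signal rather than a gut feeling.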
Losio: The key message is to start small and keep your blast radius small; don't try to kill the entire workload in production to prove your point.
Medina: I think one of the first things: don't go and just break things the next day. You need to be really thoughtful about it. I actually really like the suggestion of coming up with those hypothesis points of where to start. If you have a chance, try to open up a test environment where you can actually run a chaos engineering experiment. Of course, set up some observability beforehand, and come up with a hypothesis to go about it. It's a really great way to have a learning point: let me come up with a hypothesis, let me see what happens as the chaos enters the system, and then think of ways that I can make that system better. You can try to replicate something that you have seen happen internally as an incident, or any incident that has happened in the industry.
I think that's also another starting point: you might not have something of your own that you can replicate, but you can look at other companies and see what failures they're having. If you're thinking of what type of experiment to start with, just killing a pod is a great way to start. You can do this with some of the open-source tools, and you can even do it yourself without having any of the tools available. Those are some great ways to really get started without too much of an overhaul.
Chowdhury: You can start with a pod fault in your own namespace, where you can inject your faults. If you are a backend developer, or a developer in general, you can also focus on stress testing, because in a conventional application we do performance stress testing anyway, so you can go for stress testing as well. That will also give you a better idea about the resilience of your application and how it performs in those scenarios.
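The stress-testing idea maps naturally onto a resource such as Chaos Mesh's StressChaos. A sketch, with an illustrative selector, load, and duration:

```yaml
apiVersion: chaos-mesh.org/v1alpha1
kind: StressChaos
metadata:
  name: cpu-stress-demo
  namespace: chaos-testing
spec:
  mode: one
  selector:
    labelSelectors:
      app: example-service   # hypothetical target label
  stressors:
    cpu:
      workers: 2             # two CPU-burning workers in the target pod
      load: 80               # target roughly 80% load per worker
  duration: "5m"             # bounded experiment, then auto-recover
```

Bounding the experiment with a duration keeps the blast radius controlled while you watch how latency and autoscaling respond.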
Banerjee: First, understand why you are doing chaos testing. Don't go in with half-baked knowledge. Understand the objective: why are we introducing chaos, and why are we failing things purposely? Understand why you are doing that, and then do it. The second point would be to have a baseline metric that is very specific to your application and your system: exactly how much CPU, how much memory, what the latency is, or how many requests your system can bear. Once you have that metric laid out, you know where to come back to, and you know what percentage on top of it you want to break. That is very important.
Roa: Another important thing: it's important to have metrics to show what the impact of chaos engineering is on the company. In my experience, it is really important to analyze past incidents, logs, and monitoring data; that analysis provides the basis for creating these metrics before you start. If you know your infrastructure, and you know which components are causing pain for your business, you have more tools to convince the executives and the committees in charge to provide budget. Analyze your past incidents and logs, know your infrastructure, and know how all the components work together, in order to show the real value to the company. Because, although all architectures are composed of microservices, Kubernetes, and a Redis database, each company is a different world, with different pains and different challenges. You need to know your company before you start to do that.

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
BUSINESSNEXT has entered into a partnership with MongoDB to enhance autonomous operations in the banking and insurance sectors through the use of predictive and generative AI.
MongoDB, based in the US, is known for its software and data solutions that support innovation across various industries. The partnership aims to provide banks and financial institutions with the technological tools necessary to improve customer service and streamline operations.
By integrating BUSINESSNEXT’s expertise in financial services with MongoDB’s scalable, secure database platform, the collaboration aims to meet the growing demands of the industry. BUSINESSNEXT offers a suite of AI-driven solutions designed for autonomous banking, including modern customer relationship management (CRM) systems, digital journeys, lending platforms, and customer chatbots.
AI and data synergy for financial services
MongoDB’s document-based data model, which offers a flexible schema, complements BUSINESSNEXT’s AI capabilities. In essence, this partnership will enable banks to make better use of their data for more personalised customer interactions, optimised lending processes, and data-informed decision-making.
In the company press release, representatives from BUSINESSNEXT emphasised that the collaboration with MongoDB aligns with their vision of providing a modern platform for banks, citing the database’s robust encryption and ability to handle complex data as key strengths. Similarly, officials from MongoDB noted that working with independent software vendors such as BUSINESSNEXT helps financial services firms accelerate modernisation and leverage AI to differentiate themselves in the market.
The most important outcomes expected from this partnership include improved banking operations, enhanced customer experiences, faster lending processes, better operational efficiency, and stronger risk management practices. Both companies view the collaboration as a major step forward in delivering value to financial institutions globally.
Other developments from BUSINESSNEXT
In August 2024, BUSINESSNEXT announced a partnership with Mannai InfoTech in order to drive digital transformation and development in the Qatar banking sector. Following this announcement, the partnership was expected to drive digital transformation and provide an improved customer experience platform for banking. The strategic deal was set to also transform the banking industry in the region of Qatar, while also delivering enhanced benefits, operational efficiencies, and business growth.
In addition, both Mannai InfoTech and BUSINESSNEXT continued to focus on meeting the needs, preferences, and demands of clients and customers in an ever-evolving market, while prioritising the process of remaining compliant with the regulatory requirements and laws of the local industry.
Article originally posted on mongodb google news. Visit mongodb google news

MMS • RSS
Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB), a leading player in the database solutions market, is navigating a challenging landscape as it grapples with slowing growth and market headwinds. Recent analyst reports and financial performance have sparked a reassessment of the company’s near-term prospects, even as its long-term potential remains strong.
MongoDB operates in the software sector, specializing in next-generation database solutions. With a market capitalization of $17.19 billion and an enterprise value of $18.94 billion, the company has established itself as a formidable presence in the rapidly evolving database management market.
The company’s flagship product, Atlas, has been a key driver of growth. However, recent financial results have indicated a deceleration in Atlas revenue growth, prompting concerns among investors and analysts. Despite these challenges, MongoDB continues to be highly regarded by developers and is increasingly adopted by enterprises, underscoring its strong market position.
MongoDB’s financial performance in the first quarter of fiscal year 2025 (F1Q25) has been a focal point for analysts. The company reported non-GAAP earnings per share (EPS) of $0.51 on revenue of $451 million, surpassing consensus estimates but showing a year-over-year growth deceleration.
For the full fiscal year 2024, revenue is projected at $1.68 billion with an EV/Sales ratio of 11.3x. The company’s EBIT for FY 2024 is estimated at $270.4 million, translating to an EV/EBIT ratio of 70.0x. These figures reflect the company’s continued growth, albeit at a slower pace than previously anticipated.
The database management market, particularly the cloud segment, is expected to experience significant growth in the coming years. However, MongoDB faces several challenges in maintaining its previously high growth rates. Analysts point to a broad-based slowdown in software spending as a sector-wide issue, not just specific to MongoDB.
The company has revised its guidance for fiscal year 2025, with total revenue now expected to be between $1.88 billion and $1.90 billion. This adjustment reflects weaker new business and consumption trends, which have led to a reassessment of MongoDB’s growth trajectory.
Some analysts suggest that MongoDB’s issues may be more related to self-inflicted and transitory go-to-market (GTM) headwinds rather than macroeconomic factors alone. The company has responded by adjusting its incentive plan to focus on larger, higher-quality deals, which could potentially improve its growth prospects in the long term.
Despite the current challenges, MongoDB’s product offerings continue to receive positive reception from developers and enterprises alike. The company’s operational database product is highly regarded in the industry, which could serve as a foundation for future growth.
Analysts note that product tailwinds could start benefiting the company in the second half of fiscal year 2025. This potential for product-driven acceleration, coupled with MongoDB’s strong market position, suggests that the company may have opportunities to exceed growth expectations if the IT spending environment improves.
The recent slowdown in growth has led to a reassessment of MongoDB’s valuation multiples. Some analysts suggest that a return to 10x multiples might be seen as the peak, reminiscent of earlier times in the company’s history. This adjustment reflects the expectation that MongoDB will not return to its previous 30%+ year-over-year revenue growth rates in the near future.
Despite these challenges, many analysts maintain a positive long-term outlook on MongoDB. The company’s leadership in the next-generation database market, coupled with the overall growth potential of the sector, continues to be viewed favorably. However, the near-term focus remains on how effectively MongoDB can navigate the current market conditions and return to higher growth rates.
MongoDB faces significant headwinds in achieving its revised growth targets. The company has lowered its guidance for fiscal year 2025, reflecting weaker new business and consumption trends. The broader slowdown in software spending across the sector adds another layer of complexity to MongoDB’s growth challenges.
Analysts point out that even if Atlas New ARR grows by 13% this year, total revenue growth may only reach nearly 20%, with further upside being challenging. The impact of Atlas New ARR outperformance on FY25 revenue is limited, making it difficult for MongoDB to exceed the 20% growth target for the fiscal year.
Moreover, the company is grappling with what some analysts describe as self-inflicted and transitory go-to-market headwinds. These internal challenges, combined with the uncertain macroeconomic environment, create a significant hurdle for MongoDB in meeting its revised growth expectations.
The current slowdown in new business acquisition poses a potential threat to MongoDB’s long-term growth trajectory. As the company adjusts its strategy to focus on larger, higher-quality deals, there is a risk that this shift could limit its ability to capture a broader range of market opportunities.
The deceleration in Atlas New ARR growth, while not as severe as initially suggested by management’s commentary, still indicates a cooling of what has been a key growth driver for MongoDB. If this trend continues, it could have lasting implications for the company’s market position and financial performance.
Furthermore, the adjustment in valuation multiples suggested by some analysts reflects a recalibration of growth expectations. This shift in market perception could potentially impact MongoDB’s ability to attract investment and fund future innovations, which are crucial for maintaining its competitive edge in the rapidly evolving database solutions market.
Despite the current challenges, MongoDB’s position as a leader in next-generation database solutions provides a solid foundation for potential recovery. The company’s products continue to be highly regarded by developers and are increasingly adopted by enterprises, indicating a strong market demand for its offerings.
MongoDB operates in a large and growing database management market, with the cloud segment expected to experience significant expansion. This market opportunity, combined with the company’s established reputation and product strength, could serve as catalysts for a faster-than-expected recovery once market conditions improve.
Additionally, MongoDB’s focus on larger, higher-quality deals could potentially lead to more stable and predictable revenue streams in the long term. If successful, this strategy could not only help the company weather the current storm but also position it for accelerated growth as the market rebounds.
While current projections indicate slower growth, MongoDB has several factors that could potentially drive outperformance. The company’s product tailwinds, expected to start benefiting in the second half of fiscal year 2025, could provide a significant boost to revenue growth.
MongoDB’s strong leadership under President & CEO Dev Ittycheria and COO & CFO Michael Gordon is viewed positively by analysts. Their experience and strategic vision could be instrumental in navigating the company through current challenges and capitalizing on emerging opportunities.
Furthermore, if the IT spending environment improves more rapidly than anticipated, MongoDB could be well-positioned to capture increased demand. The company’s continued investment in product development and innovation could also lead to new offerings that drive revenue acceleration, potentially allowing MongoDB to exceed current growth expectations.
This analysis is based on information available up to June 3rd, 2024.
This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.

MMS • Steef-Jan Wiggers
Article originally posted on InfoQ. Visit InfoQ
AWS recently announced that Amazon RDS for MySQL zero-ETL integration with Amazon Redshift is generally available. This feature enables near real-time analytics and machine learning on transactional data. It allows multiple integrations from a single RDS database and provides data filtering for customized replication.
The GA release of Amazon RDS for MySQL zero-ETL integration with Amazon Redshift follows the earlier releases of zero-ETL integration with Amazon Redshift for Amazon Aurora MySQL-Compatible Edition and preview releases of Aurora PostgreSQL-Compatible Edition, Amazon DynamoDB, and RDS for MySQL. With the GA release, users can expect features like configuring zero-ETL integrations with AWS CloudFormation, configuring multiple integrations from a source database to up to five Amazon Redshift data warehouses, and data filtering.
Matheus Guimaraes, a senior developer advocate at AWS, writes regarding the data filtering:
Most companies, no matter the size, can benefit from adding filtering to their ETL jobs. A typical use case is to reduce data processing and storage costs by selecting only the subset of data needed to replicate from their production databases. Another is to exclude personally identifiable information (PII) from a report’s dataset.
Users can create a zero-ETL integration to replicate data from an RDS database into Amazon Redshift, enabling near real-time analytics, ML, and AI workloads using Amazon Redshift’s built-in capabilities such as machine learning, materialized views, data sharing, federated access to multiple data stores and data lakes, and integrations with Amazon SageMaker, Amazon QuickSight, and other AWS services.
To create a zero-ETL integration by using the AWS Management Console, AWS Command Line Interface (AWS CLI), or an AWS SDK, users specify an RDS database as the source and an Amazon Redshift data warehouse as the target. The integration replicates data from the source database into the target data warehouse.
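Since the GA release adds CloudFormation support, the integration can also be declared as a template resource. A sketch using the AWS::RDS::Integration resource type; the ARNs, names, and filter are placeholders, and the exact property set should be verified against the current documentation:

```yaml
Resources:
  MySqlToRedshiftIntegration:
    Type: AWS::RDS::Integration
    Properties:
      IntegrationName: rds-mysql-zero-etl   # hypothetical name
      SourceArn: arn:aws:rds:us-east-1:123456789012:db:source-mysql-db
      TargetArn: arn:aws:redshift-serverless:us-east-1:123456789012:namespace/example-ns
      # Replicate only the application schema, skipping an audit table.
      DataFilter: "include: demodb.*, exclude: demodb.audit_log"
```

Declaring the integration in a template keeps the source database, target warehouse, and data filter under the same infrastructure-as-code review process as the rest of the stack.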
(Source: AWS Documentation)
In a medium blog post on Zero-ETL, Rajas Walavalkar, a technical architect at Quantiphi Analytics, explains why Zero-ETL Data pipelines can be beneficial to organizations:
- Real-Time Analytics: Businesses rely on real-time insights for timely decisions. Zero ETL enables near real-time analytics by transferring data directly from Aurora MySQL to Redshift, giving organizations a competitive edge.
- Data Freshness: Zero ETL maintains data freshness, which is crucial for accurate insights by ingesting data into Redshift without delay.
- Capturing Data History: Analyzing trends requires maintaining data history for the constant CRUD operations in operational databases.
- Scalability and Flexibility: Zero ETL architectures facilitate seamless scalability, allowing organizations to adapt to changing business needs without traditional ETL constraints.
Lastly, the zero-ETL integration is available for RDS for MySQL versions 8.0.32 and later, and for Amazon Redshift Serverless and Amazon Redshift RA3 instance types, in supported AWS Regions.

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
MBB Public Markets I LLC acquired a new stake in MongoDB, Inc. (NASDAQ:MDB – Free Report) in the second quarter, according to the company in its most recent 13F filing with the SEC. The institutional investor acquired 5,440 shares of the company’s stock, valued at approximately $1,360,000.
Other hedge funds have also added to or reduced their stakes in the company. Transcendent Capital Group LLC bought a new stake in MongoDB in the 4th quarter valued at $25,000. MFA Wealth Advisors LLC bought a new stake in shares of MongoDB in the second quarter valued at about $25,000. J.Safra Asset Management Corp boosted its position in shares of MongoDB by 682.4% during the second quarter. J.Safra Asset Management Corp now owns 133 shares of the company’s stock worth $33,000 after buying an additional 116 shares during the period. Hantz Financial Services Inc. bought a new position in shares of MongoDB during the second quarter worth about $35,000. Finally, YHB Investment Advisors Inc. acquired a new stake in MongoDB in the 1st quarter valued at approximately $41,000. Institutional investors and hedge funds own 89.29% of the company’s stock.
Wall Street Analysts Weigh In
Several brokerages have recently weighed in on MDB. Needham & Company LLC raised their price target on shares of MongoDB from $290.00 to $335.00 and gave the stock a “buy” rating in a research note on Friday, August 30th. Morgan Stanley boosted their price target on shares of MongoDB from $320.00 to $340.00 and gave the stock an “overweight” rating in a research report on Friday, August 30th. Citigroup lifted their target price on MongoDB from $350.00 to $400.00 and gave the company a “buy” rating in a research report on Tuesday, September 3rd. Sanford C. Bernstein raised their price target on MongoDB from $358.00 to $360.00 and gave the company an “outperform” rating in a research note on Friday, August 30th. Finally, Scotiabank upped their price objective on MongoDB from $250.00 to $295.00 and gave the stock a “sector perform” rating in a research note on Friday, August 30th. One investment analyst has rated the stock with a sell rating, five have assigned a hold rating and twenty have assigned a buy rating to the stock. Based on data from MarketBeat, MongoDB has a consensus rating of “Moderate Buy” and an average target price of $337.56.
Read Our Latest Research Report on MongoDB
MongoDB Stock Performance
MongoDB stock traded down $0.89 during midday trading on Monday, hitting $268.64. The company’s stock had a trading volume of 473,680 shares, compared to its average volume of 1,463,355. The company has a debt-to-equity ratio of 0.84, a current ratio of 5.03 and a quick ratio of 5.03. The company has a market cap of $19.71 billion, a price-to-earnings ratio of -95.01 and a beta of 1.15. MongoDB, Inc. has a 1-year low of $212.74 and a 1-year high of $509.62. The company has a 50-day moving average price of $261.22 and a two-hundred day moving average price of $293.46.
MongoDB (NASDAQ:MDB – Get Free Report) last issued its quarterly earnings results on Thursday, August 29th. The company reported $0.70 earnings per share (EPS) for the quarter, beating the consensus estimate of $0.49 by $0.21. MongoDB had a negative return on equity of 15.06% and a negative net margin of 12.08%. The business had revenue of $478.11 million during the quarter, compared to analysts’ expectations of $465.03 million. During the same period in the previous year, the company posted ($0.63) earnings per share. The firm’s revenue for the quarter was up 12.8% compared to the same quarter last year. As a group, sell-side analysts predict that MongoDB, Inc. will post -2.44 earnings per share for the current year.
Insider Buying and Selling
In other news, CRO Cedric Pech sold 273 shares of the business’s stock in a transaction that occurred on Tuesday, July 2nd. The shares were sold at an average price of $265.29, for a total value of $72,424.17. Following the completion of the transaction, the executive now directly owns 35,719 shares in the company, valued at approximately $9,475,893.51. The sale was disclosed in a filing with the Securities & Exchange Commission. In related news, CAO Thomas Bull sold 1,000 shares of the firm’s stock in a transaction on Monday, September 9th. The stock was sold at an average price of $282.89, for a total transaction of $282,890.00. Following the sale, the chief accounting officer now owns 16,222 shares of the company’s stock, valued at approximately $4,589,041.58. The transaction was also disclosed in a document filed with the SEC. Over the last quarter, insiders have sold 21,005 shares of company stock worth $5,557,746. Company insiders own 3.60% of the company’s stock.
MongoDB Company Profile
MongoDB, Inc, together with its subsidiaries, provides general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
‘Mia Khalifa Expert’ Claim on CV Leads Ex-Google Employee to 29 Interviews – Times Now

MMS • RSS
Posted on mongodb google news. Visit mongodb google news

Ex-Google Worker Lists ‘Mia Khalifa Expert’ on CV, Lands 29 Interviews
A former Google employee, Jerry Lee, carried out a unique experiment to see just how much attention recruiters actually pay to CV details. Based in New York, Lee inserted outrageous claims into his CV to see how far his Google experience would take him in the job market, despite the presence of clear red flags. Lee, who reportedly worked at Google for three years as a Strategy and Operations Manager, mixed ludicrous achievements into an otherwise typical resume. Some of the odd additions included “expert in Mia Khalifa” and “set the fraternity record for most vodka shots in one night.” He then submitted this altered CV to potential employers and waited to gauge their reactions.
The results of Lee’s experiment shocked many. Over a six-week period, despite his resume containing nonsensical and inappropriate achievements, he received 29 interview invitations. Well-known companies such as MongoDB and Robinhood even reached out to him for interviews, according to a video Lee shared on Instagram.
In the post, Lee documented his findings and shared three key lessons from his experiment. First, he stressed the importance of having a well-organised and concise resume, noting that it plays a crucial role in making a positive impression. “Focus on strong bullet points, clear job titles, and the impact you’ve made,” he advised job-seekers. “Periods and font sizes are fine details, but it’s the big stuff that gets you noticed.”
Next, he pointed out that while having experience at a well-known company like Google may draw attention, it’s equally important to show clear and measurable achievements. “Big names catch eyes, but don’t sweat it if you haven’t worked at a ‘big name’—just make sure your achievements pop with quantifiable results. It’s about the skills you bring to the table, not just where you polished them,” Lee explained in his Instagram post.
Lastly, he underlined the value of a simple, structured CV template, which recruiters prefer because it allows them to quickly identify the key information they need. He urged job seekers to adopt this approach to make the recruitment process smoother and more effective.
Java News Roundup: Proposed Schedule for JDK 24, SecurityManager Disabled, Commonhaus Foundation

MMS • Michael Redlich
Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for September 23rd, 2024 features news highlighting: the proposed release schedule for JDK 24; JEP 475, Late Barrier Expansion for G1, promoted from Candidate to Proposed to Target for JDK 24; JEP 486, Permanently Disable the Security Manager, promoted from its JEP Draft 8338625 to Candidate status; and Quarkus joining the Commonhaus Foundation.
OpenJDK
JEP 475, Late Barrier Expansion for G1, was promoted from Candidate to Proposed to Target for JDK 24. This JEP proposes to simplify the implementation of the G1 garbage collector’s barriers, which record information about application memory accesses, by shifting their expansion from early in the C2 JIT’s compilation pipeline to later. The goal is to reduce the execution time of C2 when using the G1 collector. The review is expected to conclude on October 2, 2024.
JEP 486, Permanently Disable the Security Manager, has been promoted from its JEP Draft 8338625 to Candidate status. This JEP proposes to permanently disable the SecurityManager class, which was deprecated by JEP 411, Deprecate the Security Manager for Removal, delivered in JDK 17. While developers could still explicitly enable the deprecated SecurityManager, that capability will now be removed as the next step toward the class’s ultimate removal.
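As a rough illustration (not part of the original roundup), the behavioral change can be observed with a small probe. On JDK 17 the call below still succeeds by default, on JDK 18 through 23 it throws unless the JVM is started with -Djava.security.manager=allow, and once JEP 486 is delivered it is expected to always throw; the class and method names here are hypothetical scaffolding around the real System.setSecurityManager API:

```java
public class SecurityManagerCheck {
    // Returns true if a SecurityManager could be installed at runtime,
    // false if the JDK refused (the default on JDK 18+ and the only
    // possible outcome once JEP 486 lands).
    @SuppressWarnings("removal")
    static boolean tryInstallSecurityManager() {
        try {
            System.setSecurityManager(new SecurityManager());
            return true;
        } catch (UnsupportedOperationException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(tryInstallSecurityManager()
            ? "SecurityManager installed (legacy behavior)"
            : "SecurityManager is disabled in this runtime");
    }
}
```

Because the outcome depends on the JDK version and launch flags, the probe deliberately reports rather than assumes a result.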
JDK 24
Build 17 of the JDK 24 early-access builds was made available this past week featuring updates from Build 16 that include fixes for various issues. More details on this release may be found in the release notes.
Mark Reinhold, Chief Architect, Java Platform Group at Oracle, formally proposed the release schedule for JDK 24 as follows:
- Rampdown Phase One (fork from main line): December 5, 2024
- Rampdown Phase Two: January 16, 2025
- Initial Release Candidate: February 6, 2025
- Final Release Candidate: February 20, 2025
- General Availability: March 18, 2025
The review period for this proposed schedule is expected to conclude on October 2, 2024.
For JDK 24, developers are encouraged to report bugs via the Java Bug Database.
Spring Framework
Versions 3.4.0-M2, 3.3.3 and 3.2.8 of Spring Shell have been released featuring support for JEP 454, Foreign Function & Memory API, delivered in JDK 22, via JLine, the Java library for handling console input. These releases build on Spring Boot versions 3.4.0-M3, 3.3.4 and 3.2.10, respectively. Further details on these releases may be found in the release notes for version 3.4.0-M2, version 3.3.3 and version 3.2.8.
Quarkus
Red Hat has released version 3.15 of Quarkus. This new long-term support release provides dependency upgrades and resolutions to notable issues such as: a class loading failure from the findFunctions() method defined in the AzureFunctionsProcessor class; and a bidirectional streaming failure in the Dev UI console. The Quarkus team has stated that new features will be delivered in Quarkus 3.16, scheduled for the end of October 2024. More details on this release may be found in the release notes.
Open Liberty
IBM has released version 24.0.0.10-beta of Open Liberty featuring: beta support for JDK 23; and improved handling of SameSite cookies by allowing SameSite=None on incompatible clients. Details on how to set a SameSite cookie with Open Liberty may be found on this website.
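For context on what the SameSite change governs, the sketch below (an illustration only, not Open Liberty configuration or API) shows the shape of the Set-Cookie response header involved; the class and helper names are hypothetical:

```java
public class SameSiteHeader {
    // Builds a Set-Cookie header value. Per the cookie specification
    // drafts, SameSite=None must be paired with the Secure attribute,
    // which is why some older clients mishandle it.
    static String setCookieHeader(String name, String value, String sameSite) {
        String header = name + "=" + value + "; SameSite=" + sameSite;
        if ("None".equals(sameSite)) {
            header += "; Secure";
        }
        return header;
    }

    public static void main(String[] args) {
        System.out.println("Set-Cookie: "
            + setCookieHeader("session", "abc123", "None"));
        // → Set-Cookie: session=abc123; SameSite=None; Secure
    }
}
```

Open Liberty’s beta option addresses the server side of this: deciding when the SameSite=None attribute is actually emitted to clients known to mishandle it.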
WildFly
The first beta release of WildFly 34 delivers bug fixes, dependency upgrades and enhancements such as: a relocation of dependency JARs from the OpenTelemetry module to their own respective modules to minimize the size of the OpenTelemetry module; and a simplification of installing a singleton service for a deployment, which was once very cumbersome. Further details on this release may be found in the release notes.
Apache Software Foundation
Maintaining alignment with Quarkus, the release of Camel Quarkus 3.15.0, composed of Camel 4.8.0 and Quarkus 3.15.0, provides resolutions to notable issues such as: a deprecation of the Kotlin and Kotlin DSL extensions because they only provide a Kotlin function wrapper around the configure() method defined in the RouteBuilder abstract class; and a ClassNotFoundException from the SmallRye FallbackFunction class due to Quarkus having upgraded to SmallRye 6.4.0. More details on this release may be found in the release notes.
LangChain4j
Version 0.35.0 of LangChain for Java (LangChain4j) features new integrations: chat and embedding models from GitHub Models; document loader from Google Cloud Storage; scoring model from Google Vertex AI Ranking API; scoring model from ONNX Reranker; embedding store from Tablestore; and embedding and scoring models from Voyage AI. Other notable changes include: support for embedding models, the ability to count tokens and enumerated structured outputs from Google AI; and support for observability in Ollama. Further details on this release may be found in the release notes.
JBang
Version 0.119.0 of JBang provides bug fixes and a new feature in which junctions can now be created on Windows, resolving an issue where executing the jbang jdk default {version} command would fail. More details on this release may be found in the release notes.
Java Operator SDK
The release of Java Operator SDK 4.9.5 features bug fixes and some refactoring that includes: a change of the package access of the asBoolean() method, defined in the BooleanWithUndefined enum, to public; a rename and deprecation of the defaultNonSSAResource() method, defined in the ConfigurationService interface, to defaultNonSSAResources(); and a change of the shouldUseSSA() method, also defined in the ConfigurationService interface, to use types as opposed to instances, along with corresponding tests. Further details on this release may be found in the release notes.
Commonhaus Foundation
The Commonhaus Foundation, a new non-profit organization dedicated to the sustainability of open source libraries and frameworks, has announced that Quarkus has joined the foundation this past week. In a blog post published in late July 2024, Max Rydahl Andersen, Distinguished Engineer at Red Hat, described their transition to the foundation, writing:
Quarkus will continue to innovate and evolve. We are dedicated to making Quarkus the best framework for Java development. This transition will enable us to welcome more contributions from a diverse range of developers and organisations. We are actively working on upcoming releases and are eager to hear your ideas and feedback.
They join notable projects such as: Hibernate, JReleaser, JBang, OpenRewrite, SDKMAN, EasyMock, Objenesis and Feign.
Introduced to the Java community at Devnexus in April 2024, the foundation provides succession planning and fiscal support for self-governing open-source projects.
RefactorFirst
Jim Bethancourt, principal software consultant at Improving, an IT services firm offering training, consulting, recruiting, and project services, has released version 0.5.0 of RefactorFirst, a utility that prioritizes the parts of an application that should be refactored. This release delivers: support for JDK 21; performance improvements on large codebases with a high number of commits; and the addition of a simple HTML report that may be used in GitHub Actions. More details on this release may be found in the release notes.
Gradle
Gradle 8.10.2, the second maintenance release, ships with resolutions to notable issues: a failure to update the Gradle wrapper in version 8.10.1; a failure using a build with the Kotlin Multiplatform plugin and a reused daemon; and an error in which the configureEach(Action) method, defined in the DefaultTaskCollection class, cannot be executed in the current context on a task set. Further details on this release may be found in the release notes.

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
As data management grows more complex and modern applications extend the capabilities of traditional approaches, AI is revolutionising application scaling.

In addition to freeing operators from outdated, inefficient methods that require careful supervision and extra resources, AI enables real-time, adaptive optimisation of application scaling. Ultimately, these benefits combine to enhance efficiency and reduce costs for targeted applications.
With its predictive capabilities, AI ensures that applications scale efficiently, improving performance and resource allocation—marking a major advance over conventional methods.
Ahead of AI & Big Data Expo Europe, Han Heloir, EMEA gen AI senior solutions architect at MongoDB, discusses the future of AI-powered applications and the role of scalable databases in supporting generative AI and enhancing business processes.
AI News: As AI-powered applications continue to grow in complexity and scale, what do you see as the most significant trends shaping the future of database technology?
Heloir: While enterprises are keen to leverage the transformational power of generative AI technologies, the reality is that building a robust, scalable technology foundation involves more than just choosing the right technologies. It’s about creating systems that can grow and adapt to the evolving demands of generative AI, demands that are changing quickly, some of which traditional IT infrastructure may not be able to support. That is the uncomfortable truth about the current situation.
Today’s IT architectures are being overwhelmed by unprecedented data volumes generated from increasingly interconnected data sets. Traditional systems, designed for less intensive data exchanges, are currently unable to handle the massive, continuous data streams required for real-time AI responsiveness. They are also unprepared to manage the variety of data being generated.
The generative AI ecosystem often comprises a complex set of technologies. Each layer of technology—from data sourcing to model deployment—increases functional depth and operational costs. Simplifying these technology stacks isn’t just about improving operational efficiency; it’s also a financial necessity.
AI News: What are some key considerations for businesses when selecting a scalable database for AI-powered applications, especially those involving generative AI?
Heloir: Businesses should prioritise flexibility, performance and future scalability. Here are a few key reasons:
- The variety and volume of data will continue to grow, requiring the database to handle diverse data types—structured, unstructured, and semi-structured—at scale. Selecting a database that can manage such variety without complex ETL processes is important.
- AI models often need access to real-time data for training and inference, so the database must offer low latency to enable real-time decision-making and responsiveness.
- As AI models grow and data volumes expand, databases must scale horizontally, to allow organisations to add capacity without significant downtime or performance degradation.
- Seamless integration with data science and machine learning tools is crucial, and native support for AI workflows—such as managing model data, training sets and inference data—can enhance operational efficiency.
AI News: What are the common challenges organisations face when integrating AI into their operations, and how can scalable databases help address these issues?
Heloir: There are a variety of challenges that organisations can run into when adopting AI. These include the massive amounts of data from a wide variety of sources that are required to build AI applications. Scaling these initiatives can also put strain on the existing IT infrastructure and once the models are built, they require continuous iteration and improvement.
To make this easier, a scalable database can simplify the management, storage and retrieval of diverse datasets. It offers elasticity, allowing businesses to handle fluctuating demands while sustaining performance and efficiency. It also accelerates time-to-market for AI-driven innovations by enabling rapid data ingestion and retrieval, facilitating faster experimentation.
AI News: Could you provide examples of how collaborations between database providers and AI-focused companies have driven innovation in AI solutions?
Heloir: Many businesses struggle to build generative AI applications because the technology evolves so quickly. Limited expertise and the increased complexity of integrating diverse components further complicate the process, slowing innovation and hindering the development of AI-driven solutions.
One way we address these challenges is through our MongoDB AI Applications Program (MAAP), which provides customers with resources to assist them in putting AI applications into production. This includes reference architectures and an end-to-end technology stack that integrates with leading technology providers, professional services and a unified support system.
MAAP categorises customers into four groups, ranging from those seeking advice and prototyping to those developing mission-critical AI applications and overcoming technical challenges. MongoDB’s MAAP enables faster, seamless development of generative AI applications, fostering creativity and reducing complexity.
AI News: How does MongoDB approach the challenges of supporting AI-powered applications, particularly in industries that are rapidly adopting AI?
Heloir: Ensuring you have the underlying infrastructure to build what you need is always one of the biggest challenges organisations face.
To build AI-powered applications, the underlying database must be capable of running queries against rich, flexible data structures. With AI, data structures can become very complex. This is one of the biggest challenges organisations face when building AI-powered applications, and it’s precisely what MongoDB is designed to handle. We unify source data, metadata, operational data, vector data and generated data—all in one platform.
AI News: What future developments in database technology do you anticipate, and how is MongoDB preparing to support the next generation of AI applications?
Heloir: Our key values are the same today as they were when MongoDB initially launched: we want to make developers’ lives easier and help them drive business ROI. This remains unchanged in the age of artificial intelligence. We will continue to listen to our customers, assist them in overcoming their biggest difficulties, and ensure that MongoDB has the features they require to develop the next [generation of] great applications.
(Photo by Caspar Camille Rubin)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Bank of Montreal Can grew its stake in shares of MongoDB, Inc. (NASDAQ:MDB – Free Report) by 13.9% in the 2nd quarter, according to its most recent filing with the Securities & Exchange Commission. The firm owned 76,073 shares of the company’s stock after buying an additional 9,270 shares during the period. Bank of Montreal Can owned 0.10% of MongoDB worth $19,028,000 as of its most recent SEC filing.
Several other institutional investors also recently made changes to their positions in the stock. Transcendent Capital Group LLC acquired a new position in MongoDB in the 4th quarter worth $25,000. MFA Wealth Advisors LLC bought a new stake in shares of MongoDB in the 2nd quarter worth about $25,000. J.Safra Asset Management Corp increased its position in shares of MongoDB by 682.4% in the 2nd quarter. J.Safra Asset Management Corp now owns 133 shares of the company’s stock worth $33,000 after purchasing an additional 116 shares during the last quarter. Hantz Financial Services Inc. bought a new stake in shares of MongoDB in the 2nd quarter worth about $35,000. Finally, YHB Investment Advisors Inc. bought a new stake in shares of MongoDB in the 1st quarter worth about $41,000. 89.29% of the stock is owned by hedge funds and other institutional investors.
Insider Activity at MongoDB
In related news, CRO Cedric Pech sold 273 shares of the company’s stock in a transaction that occurred on Tuesday, July 2nd. The shares were sold at an average price of $265.29, for a total value of $72,424.17. Following the completion of the sale, the executive now owns 35,719 shares in the company, valued at approximately $9,475,893.51. The sale was disclosed in a legal filing with the SEC, which can be accessed through this hyperlink. Also, CAO Thomas Bull sold 138 shares of the company’s stock in a transaction that occurred on Tuesday, July 2nd. The shares were sold at an average price of $265.29, for a total transaction of $36,610.02. Following the completion of the transaction, the chief accounting officer now owns 17,222 shares in the company, valued at approximately $4,568,824.38. The disclosure for this sale can be found here. In the last 90 days, insiders sold 21,005 shares of company stock worth $5,557,746. 3.60% of the stock is owned by corporate insiders.
Analysts Set New Price Targets
A number of brokerages recently weighed in on MDB. Mizuho boosted their price objective on shares of MongoDB from $250.00 to $275.00 and gave the company a “neutral” rating in a research report on Friday, August 30th. DA Davidson boosted their price objective on shares of MongoDB from $265.00 to $330.00 and gave the company a “buy” rating in a research report on Friday, August 30th. Stifel Nicolaus upped their target price on shares of MongoDB from $300.00 to $325.00 and gave the stock a “buy” rating in a research report on Friday, August 30th. Guggenheim upgraded shares of MongoDB from a “sell” rating to a “neutral” rating in a research report on Monday, June 3rd. Finally, Needham & Company LLC upped their target price on shares of MongoDB from $290.00 to $335.00 and gave the stock a “buy” rating in a research report on Friday, August 30th. One research analyst has rated the stock with a sell rating, five have issued a hold rating and twenty have issued a buy rating to the stock. According to data from MarketBeat.com, the company has an average rating of “Moderate Buy” and a consensus target price of $337.56.
View Our Latest Stock Analysis on MDB
MongoDB Stock Down 1.1 %
Shares of NASDAQ:MDB opened at $269.53 on Friday. The company has a debt-to-equity ratio of 0.84, a quick ratio of 5.03 and a current ratio of 5.03. The company has a market cap of $19.77 billion, a PE ratio of -95.92 and a beta of 1.15. MongoDB, Inc. has a 52 week low of $212.74 and a 52 week high of $509.62. The firm has a 50 day simple moving average of $261.22 and a 200 day simple moving average of $294.53.
MongoDB (NASDAQ:MDB – Get Free Report) last issued its quarterly earnings results on Thursday, August 29th. The company reported $0.70 EPS for the quarter, beating the consensus estimate of $0.49 by $0.21. The business had revenue of $478.11 million during the quarter, compared to the consensus estimate of $465.03 million. MongoDB had a negative net margin of 12.08% and a negative return on equity of 15.06%. The firm’s quarterly revenue was up 12.8% on a year-over-year basis. During the same quarter in the prior year, the firm earned ($0.63) EPS. As a group, research analysts expect that MongoDB, Inc. will post -2.44 earnings per share for the current year.
About MongoDB
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
Read More
Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB – Free Report).

Clearline Capital LP bought a new stake in MongoDB, Inc. (NASDAQ:MDB – Free Report) during the second quarter, according to the company in its most recent Form 13F filing with the Securities & Exchange Commission. The institutional investor bought 12,342 shares of the company’s stock, valued at approximately $3,085,000.
Several other institutional investors and hedge funds have also bought and sold shares of MDB. Transcendent Capital Group LLC bought a new position in MongoDB in the 4th quarter worth $25,000. MFA Wealth Advisors LLC purchased a new stake in MongoDB in the second quarter worth about $25,000. J.Safra Asset Management Corp raised its stake in shares of MongoDB by 682.4% in the second quarter. J.Safra Asset Management Corp now owns 133 shares of the company’s stock worth $33,000 after buying an additional 116 shares during the period. Hantz Financial Services Inc. purchased a new position in shares of MongoDB during the 2nd quarter valued at about $35,000. Finally, YHB Investment Advisors Inc. bought a new position in shares of MongoDB during the 1st quarter valued at approximately $41,000. 89.29% of the stock is owned by hedge funds and other institutional investors.
Insider Buying and Selling
In other MongoDB news, CAO Thomas Bull sold 1,000 shares of the business’s stock in a transaction dated Monday, September 9th. The shares were sold at an average price of $282.89, for a total value of $282,890.00. Following the completion of the sale, the chief accounting officer now directly owns 16,222 shares of the company’s stock, valued at approximately $4,589,041.58. The transaction was disclosed in a filing with the SEC, which is available through this link. Also, Director Dwight A. Merriman sold 2,000 shares of the company’s stock in a transaction on Friday, August 2nd. The stock was sold at an average price of $231.00, for a total value of $462,000.00. Following the completion of the sale, the director now owns 1,140,006 shares of the company’s stock, valued at $263,341,386. The disclosure for this sale can be found here. Insiders have sold 21,005 shares of company stock valued at $5,557,746 in the last 90 days. Company insiders own 3.60% of the company’s stock.
Analyst Upgrades and Downgrades
A number of equities research analysts have commented on the stock. JMP Securities restated a “market outperform” rating and set a $380.00 target price on shares of MongoDB in a research note on Friday, August 30th. Morgan Stanley upped their price objective on MongoDB from $320.00 to $340.00 and gave the stock an “overweight” rating in a research note on Friday, August 30th. Wells Fargo & Company lifted their target price on MongoDB from $300.00 to $350.00 and gave the company an “overweight” rating in a research note on Friday, August 30th. Mizuho upped their price target on MongoDB from $250.00 to $275.00 and gave the stock a “neutral” rating in a research note on Friday, August 30th. Finally, DA Davidson lifted their price objective on shares of MongoDB from $265.00 to $330.00 and gave the company a “buy” rating in a research report on Friday, August 30th. One equities research analyst has rated the stock with a sell rating, five have given a hold rating and twenty have issued a buy rating to the company. Based on data from MarketBeat, the company has a consensus rating of “Moderate Buy” and an average price target of $337.56.
Read Our Latest Stock Report on MongoDB
MongoDB Stock Down 1.1 %
Shares of NASDAQ MDB opened at $269.53 on Friday. The company has a market cap of $19.77 billion, a P/E ratio of -95.92 and a beta of 1.15. The firm has a 50-day simple moving average of $261.22 and a two-hundred day simple moving average of $294.53. MongoDB, Inc. has a 1 year low of $212.74 and a 1 year high of $509.62. The company has a debt-to-equity ratio of 0.84, a current ratio of 5.03 and a quick ratio of 5.03.
MongoDB (NASDAQ:MDB – Get Free Report) last announced its quarterly earnings data on Thursday, August 29th. The company reported $0.70 EPS for the quarter, beating the consensus estimate of $0.49 by $0.21. MongoDB had a negative return on equity of 15.06% and a negative net margin of 12.08%. The company had revenue of $478.11 million for the quarter, compared to the consensus estimate of $465.03 million. During the same quarter in the prior year, the company posted ($0.63) EPS. MongoDB’s quarterly revenue was up 12.8% on a year-over-year basis. Research analysts anticipate that MongoDB, Inc. will post -2.44 earnings per share for the current year.
MongoDB Company Profile
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
See Also
This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.