Month: April 2025

MMS • Aditya Kulkarni
Article originally posted on InfoQ. Visit InfoQ

Styrolite is an open-source, low-level container runtime designed to address security and usability issues in Linux containerization. Developed by Edera, Styrolite differentiates itself by offering a programmatic API that enables developers to create and manage containers in a controlled and secure manner.
Ariadne Conill, Founder and Distinguished Engineer at Edera, announced Styrolite in a blog post. Elaborating on the need for a new low-level container runtime, Conill stated that existing low-level container runtimes such as Bubblewrap and util-linux’s unshare are either too reliant on complex command-line interfaces or lack required programming control. This makes them error-prone and hard to integrate into modern, security-focused platforms.
On the other end, there are high-level solutions such as Kubernetes Container Runtime Interface (CRI), which are too abstract for low-level container management. To fill this gap, a new low-level runtime that allows engineers to spawn and manage containers with greater precision and reliability is needed.
While Linux namespaces are foundational to containers, they were never intended as hard security boundaries, which is why container-escape vulnerabilities keep surfacing across the ecosystem. Styrolite takes this limitation as its starting point and aims to provide a stronger security foundation for containerized workloads.
Under the hood, Styrolite leverages the Linux unshare(2) syscall to create isolated environments by disassociating processes from host namespaces. Using this approach, engineers can get granular control over which namespaces are unshared and how resources are exposed to containers. The API provides clear specification of root filesystems, executables, arguments, working directories, and namespaces, making container setup less error-prone than manual CLI scripting.
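Conill's post does not include Styrolite's actual API, but the namespace mechanics it builds on can be sketched directly. The following Python sketch is illustrative only (the `namespace_flags` and `unshare` helper names are not Styrolite's API); it composes an unshare(2) flag mask programmatically, the way a runtime library might, using the flag values from `<linux/sched.h>`:

```python
import ctypes
import ctypes.util

# Namespace flags from <linux/sched.h>; unshare(2) accepts any OR-ed subset.
CLONE_NEWNS   = 0x00020000  # mount namespace
CLONE_NEWUTS  = 0x04000000  # hostname/domainname
CLONE_NEWIPC  = 0x08000000  # System V IPC, POSIX message queues
CLONE_NEWUSER = 0x10000000  # user/group ID mappings
CLONE_NEWPID  = 0x20000000  # process IDs
CLONE_NEWNET  = 0x40000000  # network devices, stacks, ports

def namespace_flags(*names: str) -> int:
    """Compose an unshare(2) flag mask from human-readable namespace names."""
    table = {
        "mount": CLONE_NEWNS, "uts": CLONE_NEWUTS, "ipc": CLONE_NEWIPC,
        "user": CLONE_NEWUSER, "pid": CLONE_NEWPID, "net": CLONE_NEWNET,
    }
    flags = 0
    for name in names:
        flags |= table[name]
    return flags

def unshare(flags: int) -> None:
    """Disassociate the calling process from the selected host namespaces."""
    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    if libc.unshare(flags) != 0:
        err = ctypes.get_errno()
        raise OSError(err, "unshare failed (most namespaces need privileges)")

# Example: the mask a runtime might pass to isolate mount, PID, and network.
flags = namespace_flags("mount", "pid", "net")
```

Note that actually calling `unshare` with most of these flags requires CAP_SYS_ADMIN; expressing the selection as data rather than CLI strings is the kind of "granular control" a programmatic API provides.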
The tech community on Hacker News was quick to take note of this announcement. One of the HN users asked what Edera developers do differently with Styrolite, considering it still uses Linux namespaces. An Edera developer with an HN handle denhamparry responded,
…we use Styrolite to run containers with Edera Protect. Edera Protect creates Zones to isolate processes from other Zones so that if someone were to break out of a container, they’d only see the zone processes. Not the host operating system or the hardware on the machine. The key difference here between us and other isolation implementations is that there is no performance degradation, you don’t have to rebuild your container images, and that we don’t require specific hardware (e.g. you can run Edera Protect on bare metal or on public cloud instances and everything else in-between).
Another conversation thread in the same post compared gVisor and Edera Protect features.
Within Edera Protect, Styrolite is helpful in securing microservices, enabling fine-grained container isolation for security-sensitive workloads. Engineers can also use Styrolite to build isolated, resource-controlled environments for continuous integration and delivery pipelines.
For further information, interested readers can navigate to the Styrolite GitHub repository.

MMS • Sergio De Simone
Article originally posted on InfoQ. Visit InfoQ

Cloudflare has launched a managed service for using retrieval-augmented generation in LLM-based systems. Now in beta, Cloudflare AutoRAG aims to make it easier for developers to build pipelines that integrate rich context data into LLMs.
Retrieval-augmented generation can significantly improve how accurately LLMs answer questions involving proprietary or domain-specific knowledge. However, its implementation is far from trivial, explains Cloudflare product manager Anni Wang.
Building a RAG pipeline is a patchwork of moving parts. You have to stitch together multiple tools and services — your data storage, a vector database, an embedding model, LLMs, and custom indexing, retrieval, and generation logic — all just to get started.
To make matters worse, the whole process must be repeated each time your knowledge base changes.
To improve on this, Cloudflare AutoRAG automates all steps required for retrieval-augmented generation: it ingests the data, automatically chunks and embeds it, stores the resulting vectors in Cloudflare’s Vectorize database, performs semantic retrieval, and generates responses using Workers AI. It also monitors all data sources in the background and reruns the pipeline when needed.
The two main processes behind AutoRAG are indexing and querying, explains Wang. Indexing begins by connecting a data source, which is ingested, transformed, vectorized using an embeddings model, and optimized for queries. Currently, AutoRAG supports only Cloudflare R2-based sources and can process PDFs, images, text, HTML, CSV, and more. All files are converted into structured Markdown, including images, for which a combination of object detection and vision-to-language transformation is used.
The querying process starts when an end user makes a request through the AutoRAG API. The prompt is optionally rewritten to improve its effectiveness, then vectorized using the same embeddings model applied during indexing. The resulting vector is used to search the Vectorize database, returning the relevant chunks and metadata that help retrieve the original content from the R2 data source. Finally, the retrieved context is combined with the user prompt and passed to the LLM.
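As a rough illustration of the indexing and querying flow described above, here is a toy in-memory sketch. It is not the AutoRAG API: the bag-of-words `embed` function stands in for a real embeddings model, and the in-memory list stands in for Vectorize.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embeddings model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyRAGIndex:
    """Indexing: chunk, embed, store. Querying: embed the prompt, rank chunks."""
    def __init__(self):
        self.chunks = []  # (chunk_text, vector) pairs

    def index(self, document: str, chunk_size: int = 8):
        words = document.split()
        for i in range(0, len(words), chunk_size):
            chunk = " ".join(words[i:i + chunk_size])
            self.chunks.append((chunk, embed(chunk)))

    def query(self, prompt: str, k: int = 1):
        qvec = embed(prompt)  # same embedding model as during indexing
        ranked = sorted(self.chunks, key=lambda c: cosine(qvec, c[1]), reverse=True)
        return [chunk for chunk, _ in ranked[:k]]

index = ToyRAGIndex()
index.index("Cloudflare Workers run JavaScript at the edge. "
            "Vectorize stores embedding vectors for semantic retrieval. "
            "R2 is object storage compatible with the S3 API.")
context = index.query("where are embedding vectors stored")
# The retrieved chunks would then be combined with the prompt for the LLM.
```

The value of a managed service is precisely that each of these toy stages is replaced by a production component (chunker, embeddings model, vector database) that Cloudflare operates and re-runs when the source data changes.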
On LinkedIn, Stratus Cyber CEO Ajay Chandhok noted that “in most cases AutoRAG implementation requires just pointing to an existing R2 bucket. You drop your content in, and the system automatically handles everything else”.
Another benefit of AutoRAG, says BBC senior software engineer Nicholas Griffin, is that it “makes querying just a few lines of code”.
Some skepticism surfaced on X, where Poojan Dalal pointed out that “production grade scalable RAG systems for enterprises have much more requirements and components than just a single pipeline” adding that it’s not just about semantic search.
Engineer Pranit Bauva, who successfully used AutoRAG to create a RAG app, also pointed out several limitations in its current form: few options for embedding and chunking, slow query rewriting, and an AI Gateway that only works with Llama models—possibly due to an early-stage bug. He also noted that retrieval quality is lacking and emphasized that, for AutoRAG to be production-ready, it must offer a way to evaluate whether the correct context was retrieved to answer a given question.

MMS • RSS
Posted on mongodb google news. Visit mongodb google news

Investing.com — MongoDB (NASDAQ:MDB), once a high-flying software darling, is facing calls from investors for a shake-up, as shares remain down more than 50% from their 52-week high and full-year guidance reveals a sharp deceleration in growth. With pressure building, investors are increasingly hoping that a large activist investor will step in to force operational changes and potentially push the company to explore a sale of the business.
The database platform provider reported solid fiscal Q4 results, beating earnings and revenue expectations with EPS of $1.28 versus consensus estimates of $0.66 and revenue of $548.4 million against $520.5 million expected. However, the market’s focus quickly shifted to FY 2026 guidance, which disappointed on both the top and bottom line. Revenue is projected at $2.24 billion to $2.28 billion, below the $2.32 billion analysts had forecast, while full-year EPS is expected to come in between $2.44 and $2.62, well under the $3.39 consensus estimate.
The stock reaction has been stark. Despite MongoDB’s long record of innovation and substantial revenue growth since its 2017 IPO, the market is now digesting the reality that the company is transitioning from a high-growth narrative to a more mature, slower-growth business model. And with that transition, investors say, should come a reassessment of cost structure and strategy.
One area of particular focus is MongoDB’s operating expenses, which remain steep relative to its current cash flow profile. The company spent nearly $600 million on research and development in fiscal 2025, roughly four times the $150 million it generated in operating cash flow. General and administrative costs totaled an additional $220 million. Investors see an opportunity for significant margin improvement through more aggressive cost controls.
Randian Capital, an investor who has followed MongoDB since its early days as a public company, pointed to this misalignment between growth and expense as a key issue, in exclusive comments made to Investing.com. “MDB is spending almost $600mm per year on R&D, relative to a company that generated $150mm in cash from operations in 2025,” Randian wrote. The firm believes “the time is right for MDB to cut costs meaningfully across R&D and the $220mm in annual G&A costs.”
Beyond cost discipline, investors, such as Randian, believe MongoDB should consider strategic alternatives, including a possible sale. With a growing list of slowing software businesses becoming acquisition targets, some argue that MongoDB’s product and market position make it highly attractive to both strategic buyers and private equity. Large tech players such as Amazon (NASDAQ:AMZN), Oracle (NYSE:ORCL), IBM (NYSE:IBM), and SAP have been floated as potential suitors, and an LBO has also been seen as a viable option.
“MongoDB should explore a sale process,” Randian added, noting that “MDB presents a rare case of a business that has a large cost cut opportunity and clear visibility of many years of growth ahead.”
While MongoDB’s leadership under CEO Dev Ittycheria has garnered praise for guiding the company from a niche open-source project to a widely adopted enterprise platform, the business has entered a more mature phase. FY 2026 marks what may be the first year of consistent low double-digit revenue growth after years of 30%-plus expansion. For some investors, that inflection point makes the case for external involvement to reassess capital allocation and long-term positioning.
A properly executed turnaround, paired with a potential monetization event, could help rebuild investor confidence, many argue. MongoDB’s highly differentiated technology, particularly its appeal to developers working on flexible, scalable applications, remains valuable in a software market looking for durable platforms.
For now, no activist investor has taken a public stake, but the conditions — profitability potential, underperformance, and strategic interest — are increasingly aligned. With renewed scrutiny on costs and a growing call to evaluate all options, the company may soon be forced to respond to the pressures building from its investor base.
Presentation: Cloud Attack Emulation: Leveraging the Attacker’s Advantage for Effective Defense

MMS • Kennedy Torkura
Article originally posted on InfoQ. Visit InfoQ

Transcript
Torkura: I’m going to be talking on a very nice topic, cloud attack emulation, leveraging the attacker’s advantage for effective defense. I’m one of the founders and the CTO at Mitigant. We’re a cloud security company based in Potsdam. Potsdam is very close to Berlin. I’ve spent about 12 years in cybersecurity. I’ve done academic research, also worked in different companies. Also, I’m one of the pioneers of what we call security chaos engineering. A lot of the concepts are based on this idea. I’m also a five-time member of the AWS Community Builders program.
The agenda is pretty much straightforward. We’re going to be looking at the following points, what I refer to as the attacker’s perspective. Then different aspects of cloud attack emulation. We’ll just go through it and see how we can apply the idea of cloud attack emulation to these concepts, and also see the limitations in what we do today. Then we also look at threat-informed defense. Then we conclude.
I want to start this talk with a quote from Sun Tzu. A very popular philosopher. He said that if you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained, you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.
Essentially, I know we are not fighters here, we are not in the army or in the military, but over the years, a lot of people have used this saying and a lot of teachings from his writings, and they’ve used it to improve themselves. In terms of cybersecurity, there’s a lot that we can learn from here, because, essentially, the moment we deploy things on the internet, on the flip side, there are people whose job is to make sure or to try to attack these resources, this infrastructure, and essentially, at the end of the day, we find ourselves in a battle. We are trying to protect stuff, and on the other end, there are people who actually want to get into this stuff, they want to take over it, they want to compromise it. There’s a lot to learn from here. The central part which I want you to take away is, it’s very important for us, if we want to do a good job of defending our cyber resources, to take the time to understand the attacker, to always have the attacker at the center of what we are doing.
The Attacker’s Perspective
The attacker’s perspective, what is it? I think it’s very important to strive to understand your infrastructure, to view it from an attacker’s standpoint, because, usually, you have infrastructure, you deploy it, and obviously, most of the time, we are looking at it from a user’s standpoint. You want the users to be happy. You want them to have the best experience.
On the other hand, there are attackers who are on the opposite side, they want to deny this goal that you have in mind. There’s another thing I also want to talk about, the assume breach mindset. This mindset is very popular, a lot of people talk about it. It’s about you being able to assume, rather than saying that your infrastructure cannot be breached, rather than assuming that it’s rock solid, it’s bulletproof, people who have this mindset, they act from the belief that they want to prove that they have not been attacked. It’s an evidence-based approach, very similar to other physical sciences like medicine, so they want to get a proof that they are not attacked before they actually agree that they are not yet attacked.
When we look at what we’ve been doing in cybersecurity domain, somehow, we do these things, we look at it, we look at our infrastructure from an attacker’s perspective. Sometimes we do this mechanically, and so we don’t really do it efficiently. What I call passive is where we do threat modeling exercises where we come together, it’s like a game, you try to understand your infrastructure, you try to identify risks, gaps, and so forth. It’s very similar to tabletop exercises. In the end, a lot of it is very qualitative in nature. You’re biased. Most of the time people come out and say, yes, everything is great. On the other hand, we also have what I refer to as active security measures, things like security assessment, penetration testing, bug bounty programs, all of these are active, what I see, you’re knocking the door. You’re actually touching these resources and getting feedback.
Based on that feedback, you come to arrive at some assumptions, which you use to help you to understand what actions you have to take. Compliance is not security, because this is a big problem in the industry. I’m not against compliance, but compliance is all about checkboxes. You want to prove to some person out there that everything is fine. It’s like window dressing, you dress everything nicely. They come, they give you a certificate, they give you all the green check marks, and they go. That’s all. It’s not security because security is continuous. If after you get all these certificates, PCI DSS, you see every company, they have all this nice stuff on their homepages. That’s a snapshot in time. It’s not a reality. That’s why you see a lot of companies, regardless of how big they are, they still get attacked because compliance is all about a snapshot in time. It’s not a proof of the real status of security of an organization.
We have these three security controls. Usually, this is what we refer to as a defense in depth security architecture. On the front, we got preventive security controls. You have things like firewalls and so forth. They’re designed to prevent attackers from even getting into your system in the first place. Then we have detective controls, which are designed to say, because you cannot stop everybody from getting in, you can’t stop all the attackers, if somehow they’re inside, how do you get to know? Detective controls. We will look at some examples later. They’re designed to be able to look at patterns, to look at signals, to look at indicators of compromise. To be able to arrive, to say, it seems we have been breached, this is what the attacker is doing, and this is how we can go about kicking him out. Kicking him out is basically what you do in recovery. You’re actively in the mood of trying to figure out what the attacker has done and what you can do about that.
For example, you have things like ransomware, where the attackers actually tell you, “I have attacked you. I have this, you have to give me that, otherwise I will do that”. In this case, it’s like damage control, and you’re just running around or trying to spend money to hire or bring in some specialists that will help you to do the recovery. What is not really talked about is security testing or validation, or some people actually call it security assurance. At the center of this idea is that regardless of what you’re doing, whether it’s preventive, detective, or recovery, you have to validate. You have to check. There is this saying that hope is not a strategy. You can’t hope that you’re secure, you actually have to check and validate.
Who is a software engineer here? I always tell my cybersecurity colleagues, we got a lot to learn from software engineers, because once you start your software engineering career, you start learning, within a few months or maybe weeks, or maybe on the same day, you’re already being taught about testing. You can’t assume that the code that you wrote is going to behave the way the requirements were provided. At the center of this is the user. Even if you’re not actively having the user in mind, you might have in mind your code reviewer or your manager.
Actually, at the end of the day, it’s the customer you’re looking at, because you don’t want a bad user experience. You have all these tests, unit tests, integration, smoke, load test performance, A/B testing. I really like the way Netflix does it. They have this A/B testing where they do very expansive and detailed testing where every single feature they’re going to be releasing, they’re going to release it to different sets of people, and they want to measure how these people respond, how they interact. At the end, they want to select. The selection is not based on how they feel. It’s not based on who brought up the idea. It’s not based on how innovative it is. It’s purely based on how users interact with it. That’s a very nice way for looking at quality of whatever we’re presenting to our users.
In security, also, we have some testing. We have already talked about penetration testing, web application testing, red and purple teaming, adversary emulation, bug bounty programs, all of this stuff. There is a bunch of problems with these kinds of testing which I would like to highlight. What are the problems or the limitations? I’m actually narrowing this talk specifically to the cloud. Most of the modern infrastructures or applications are built in the cloud. On the right here, you have this diagram which is called the 4C’s of cloud native security. It’s a diagram that was proposed by the Kubernetes security team. They actually wanted to allow or to help people to understand the complexity of a cloud native environment which, as you see, you have code, you have container, you have the cluster, and you have the underlying cloud infrastructure. What’s the problem here? Most security tests are superficial.
Usually, some of us here, maybe if you’re working with maybe your CISO or head of security, and I’ve seen that before where if they’re testing, they say, “Just test here. Don’t touch here. This is special. Don’t tamper with this one. This is the scope of your testing”. That’s great. You don’t want things to spoil. You don’t want things to break. It’s superficial. It’s just at a single layer. You don’t have control when the attackers come in. You can’t define where they should attack. The other thing is context. When it comes to cloud, the cloud is moving rapidly, and most of the security tools are still struggling to align with what the cloud is. The other problem is that of vulnerability.
Every now and then, maybe if you’re listening to the news, you hear people talking about vulnerabilities, and everybody is running around the whole place because they found a vulnerability that has a base score of 10 on the CVSS base score, 10 is the highest, and everyone is concerned about it. In reality, nobody is going to exploit it. It doesn’t make sense. There has to be some context around that. The most important point, which is the center of this talk, is, attackers are not talked about. You’re preventing these attackers from getting into your system, but most of the time, you’re talking about auditors. You’re talking about your company. You’re talking about the CVEs. Attackers are not in scope of all these kinds of assessments.
Cloud Attack Emulation
Let’s talk about cloud attack emulation. What is it? It’s basically a specialized form of adversary emulation where we are looking specifically at the cloud, and we’re saying, this adversary emulation, it’s about mimicking tactics, techniques, and procedures, which are just a definition of how attackers behave. We are just looking at how this concept can be applied to the cloud because the cloud has its own specialties. Surely, the goal is to evaluate and enhance an organization’s cloud security posture. Let’s look at two aspects here: the cloud security testing, threat-informed defense. These are two aspects that I just want to focus on during this talk. Let’s start with threat detection. Who has heard of threat detection? Threat detection is all about, as I said before, you want to identify if there are any malicious behavior or indicators of compromise in your environment.
Regardless of how huge or how fancy a security tool is that does threat detection, what you see here is the very center of the design. They’re getting logs from different sources. They are putting it in a centralized bucket. Then they’re trying to do things like query. This is an example of a threat detection system, like a threat detection engine. There it’s basically running a query against AWS CloudTrail logs to identify an action where the API call was GetSecretValue. This API call is to get secret value from Secrets Manager. What can go wrong here? This is a very good example. Last year, November or so, AWS released a new version of the Secrets Manager. They released a new API called BatchGetSecretValue, which essentially allows you to get as many as, I think, about 20 secrets at once.
Before then, you have to get one, so you have to make a loop and get multiple. With this, you can just harvest as many secrets as possible. This is great from a performance standpoint, from a development standpoint. From a security standpoint, it means that before you know what’s happening, an attacker already harvested everything from your Secrets Manager. That’s very bad. This is what you will see in a CloudTrail record. Here, you can see 10 secrets were collected by this event. What we realized when we were looking around, investigating state-of-the-art threat detection tools is that they are not able to detect this purely because here, basically, it’s very simple, you just need to add that CloudTrail event name. That’s all.
Unfortunately, there’s no magic around threat detection. People have to actually sit down, think about it, and add this kind of query to make it work. The entirety about threat detection is like this, and so, in the end, there are a lot of gaps. If a cloud service provider introduces new APIs, for example, most systems are blind. Attackers will have a free day using these new APIs or these new features, and they’re not being detected.
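The gap Torkura describes can be made concrete with a toy detection rule. This hypothetical sketch (the field names follow CloudTrail's record format, but the rule set and function are illustrative, not any vendor's engine) shows why the fix is as simple, and as manual, as adding the new event name:

```python
# A rule that only matched GetSecretValue would miss the newer batch API,
# which can return up to 20 secrets in a single call.
SECRET_HARVEST_EVENTS = {
    "GetSecretValue",       # single-secret read
    "BatchGetSecretValue",  # batch read: many secrets at once
}

def detect_secret_harvesting(records):
    """Flag Secrets Manager read events in a list of CloudTrail records."""
    hits = []
    for record in records:
        if (record.get("eventSource") == "secretsmanager.amazonaws.com"
                and record.get("eventName") in SECRET_HARVEST_EVENTS):
            hits.append(record)
    return hits

trail = [
    {"eventSource": "secretsmanager.amazonaws.com",
     "eventName": "BatchGetSecretValue"},
    {"eventSource": "ec2.amazonaws.com",
     "eventName": "DescribeInstances"},
]
alerts = detect_secret_harvesting(trail)
```

Until someone adds `"BatchGetSecretValue"` to the set by hand, the harvesting event sails through undetected, which is exactly the blind spot the talk points out.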
Let’s look at incident response. This is basically the incident response workflow: preparation, detection and analysis, containment, eradication, recovery, and post-incident activity. Importantly, this is a graph from a recent CrowdStrike threat report. What I want you to understand here, you can see the red part, the cloud-agnostic cases. These are attackers who, in the past, got into a cloud environment without even being aware of it. They are attacking a system, they find themselves in a VM, and they don’t know if it’s a VM on-premises or in the cloud. Now we have cloud-conscious attackers. The moment these guys are in a VM, an EC2 instance, they know. They know that in this VM there’s going to be an IMDS or something that lets them get secrets or inherit whatever access the VM has, and from there they can jump into the control plane and just explore. They are getting wiser. They are getting smarter. There has been a 110% increase in these kinds of attacks over the last year.
Bear in mind that CrowdStrike has a huge suite of security systems, so they collect data from everywhere, and so it means that what you see here is the reality, the state of the art, the way attackers have improved. When it comes to incident response, one of the ways to overcome these kinds of attackers, the way they are getting sophisticated, is to run incident response exercises. As you see here, this is the AWS guide on this. Basically, you have to run, some people call it simulations, I call it emulation. The difference is that simulations are simulations, emulations are emulations. Imagine pilots, they go into a simulation room, they train, but before they get to become pilots, they have to get into the aircraft, and that’s the reality. Emulation, you can look at it as that part.
Simulation, there are still false positives, but we talk about emulation here, so it is the real deal. That’s one way to improve your incident response. If we come back to this diagram here, it means that basically if you have a system that is supposed to detect and respond, you don’t assume that it’s going to detect and respond. You have to do some emulation so that you can have that confidence that it actually will perform if you are attacked.
Let’s look at a very quick example. What you see on the left here is a server-side request forgery. It’s an attack that was conducted around 2019 against Capital One. Capital One is a fintech, one of the biggest. It had this high-profile attack. The attacker was actually a former AWS employee, so very much aware of all that stuff. She got into the AWS EC2 instance, knew that there is a metadata service. As I said before, the metadata service basically allows applications to be able to interact automatically without any human effort. If you ask the metadata service, give me your credentials, it will give to you, and if the credentials are root access, or admin access, it means that you automatically have admin access into the entire account. If it is an AWS organization, it means the entire organization. That’s how attackers take over.
From here, she went, got access to the AWS IAM, got access to the S3 buckets. There were CloudTrail logs here, nobody was analyzing the logs. They were just there piling up. People pile it up. When there’s an incident, that’s when people go to harvest these logs, begin to look at it, to do forensic analysis, and so forth. Because we’re talking here about playbooks and runbooks and the need to validate, this is a document about how to take care of this attack. It’s basically a document provided by AWS. At the end of the day, the countermeasure here is actually to use version 2, like IMDS version 2, which solves the problem of version 1. If you run this document, it basically will identify instances that are using version 1 and upgrade them to version 2. The point I want to mention here is that the services in AWS change very fast.
If you have such a runbook and you haven’t even played with it for some time, you’re not sure, if there is an attack, that everything will work. You have to continuously validate. Could be just a very simple line of command that you need to change. It could be something more than that. It could be that resources have moved around. Anything can happen. Validation is super important. The way you do that is by running attacks. This is just an example from our system where we are basically running this SSRF very easily just to allow you to validate your runbooks.
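Here is a minimal sketch of the kind of check such a runbook automates, assuming instances are described in the shape EC2's DescribeInstances returns (`MetadataOptions.HttpTokens` is the real field; the function names and fleet data are illustrative):

```python
def needs_imdsv2_upgrade(instance: dict) -> bool:
    """True when the instance still answers IMDSv1 requests, i.e. no session
    token is required -- the condition the Capital One SSRF exploited."""
    opts = instance.get("MetadataOptions", {})
    return opts.get("HttpTokens", "optional") != "required"

def plan_remediation(instances):
    """Return the instance IDs a runbook would switch to HttpTokens=required
    (for example via EC2's ModifyInstanceMetadataOptions API)."""
    return [i["InstanceId"] for i in instances if needs_imdsv2_upgrade(i)]

fleet = [
    {"InstanceId": "i-01", "MetadataOptions": {"HttpTokens": "optional"}},
    {"InstanceId": "i-02", "MetadataOptions": {"HttpTokens": "required"}},
]
to_fix = plan_remediation(fleet)
```

Emulating the SSRF against a test instance is what tells you whether this runbook still matches reality after the service has changed underneath it.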
Let’s look at purple/red teaming. Anybody heard about that, purple/red teaming? It’s taken from the military where they have red team, they have blue team, and basically, blue team is defending, red team is attacking. They play a cat and mouse game. In the end, they’re able to identify gaps. On the side of the blue team, they’re able to see whether their defenses are efficient. You see here that this is actually the way GitLab practices it.
Basically, you see, if we just concentrate on the attack emulation part, the red parts are the responsibilities of the red team. Validate, detection, and response, is what the blue team does. That’s their responsibilities. They do this, actually, without telling the SOC team. No one is aware, only a few people. The attack might be going on for like a month. The guys here are basically in a state of pandemonium, running around. At the end of the day, when things are getting bad, they will tell them, it’s an exercise. At the end of the day, you see that they get better. Maybe you’ve not really heard about GitLab in the news about attacks and so forth, because this is what they practice.
Now let’s look at cloud penetration testing. This diagram is basically similar to the one we saw before. It’s just drawn in another way here, I just redrew it. You can see the red lines are attack paths. You see that an attacker has the advantage. There are so many approaches that they can use to attack the cloud. They can start from the code. They can start from Kubernetes cluster. They can start from the cloud control plane. They can start from the Docker image itself. They can be in there before you deploy. How do you use traditional penetration testing to solve this complex environment? How do you do it? There are a bunch of problems with penetration testing. The traditional one first is expensive. Most of us, if you have been involved, you know how much you have to pay consultants.
Then, because it’s expensive, people do it once or twice a year. Then you have this huge window of opportunity where attackers can get in. Compliance: people just want to make the auditors happy. Superficial: you’re just checking some defined, narrowed-down scope items. And periodic: once or twice a year. In the end, people are happy. They say, yes, we did a pen test. It’s actually a false sense of security because the value of the pen test report is tied to the timestamp when it was done. Don’t assume it goes further than that. That’s the reality. Essentially, people have this false sense of security. Here is something that might be better. When you look at it, you see continuous testing. On the left, you’re looking at when the cloud infrastructure changes.
Essentially, every time you ship, you deploy Terraform or CloudFormation templates or whatever you're deploying, it's a possibility that you're shipping gaps, vulnerabilities, and so forth to the cloud. That's an opportunity for you to test because that's exactly what happens in software development. Each time there's a new feature, there are tests, different kinds of testing. I tell my security folks that that's the way to go. We have to look at software development. We have to learn the way to do testing. That's the way we can actually be secure. This diagram puts that in context. You can see that if the security team identifies that the test failed, they just stop the deployment and they ship it back. Of course, there's going to be some compromise here. You don't want to become gatekeepers. You want to allow things to move fast. There's got to be some context.
Let's talk a little bit about security for GenAI. The good thing is you can also apply this concept to GenAI. What we see here is Amazon Bedrock on AWS. In the front, you've got the chatbot. This is for a restaurant booking application where the restaurant customers can basically go there to ask questions. What kind of menu do you have? Do you have vegetarian options? Do you have a kids' menu? This is the architecture here, very simple. There are a bunch of problems that might be here with this system. We just want to look at one, which is data poisoning. The data poisoning attack here is the fact that for AWS RAG architecture, they keep the documents that you need to use for fine-tuning essentially in the S3 bucket. There are other options, but this is the most popular approach. They're in S3 buckets, and it means that everything that we knew about S3, all the problems, is applicable here. Attackers can easily just go to the bucket and they can do some stuff. It could be from any direction. Here we see that it's possible for an attacker to first start from the knowledge bases, because if you go to the Bedrock Knowledge Bases and ask it, where is your datastore? It will tell you where the datastore is.
The datastore, usually it will tell you, could be OpenSearch, where they have their databases, where they keep these documents. It could be different formats. In the end, the knowledge base will give you that information. From there, you can begin the attack. Here, the attacker just went forward and disabled the S3 logging so that no one becomes aware when they begin to make calls. Then they basically add malicious data into here. In the end, this malicious data is part of what will be used for training, so the agent will be mistrained. That's basically the concept of data poisoning. What happens? When you do this, today AWS really doesn't have a way to tell you or to identify this problem. The best I have seen is, if you go to GuardDuty, you will see something like that.
Basically, what GuardDuty is telling you is that it saw that someone was writing into a bucket and this operation seems to be malicious. If you haven't seen it before, you are completely lost, because someone is writing to a bucket, no problem. There's also a chance that you completely ignore it. Another problem with GuardDuty is it will record this event three times, and after that it will not record it because it will become part of the baseline. If an attacker is there doing it continuously, you don't get the opportunity as a defender to get a warning.
Threat-Informed Defense
Let’s go to threat-informed defense. Threat-informed defense, by definition, is a systematic application of a deep understanding of adversary tradecraft and technology to improve defenses. It’s basically an idea that was proposed or is proposed by MITRE Engenuity. They looked at all the problems we face, all the problems with just making decisions based on vulnerabilities, and said, we can do better by actually looking at attackers. Let’s look at what this means. Usually, what we do is, if you have a system to defend, you’re looking at the vulnerabilities and you have tools that will just scan it and tell you, you have 1,000 vulnerabilities.
Usually, if it's small like that, it's not a problem. When it becomes so much, you're overwhelmed. Where do you start from? How do you fix this problem? You're overloaded. This is a task. You have to fix it one after the other. Threat-informed defense is the idea that when you have such vulnerabilities or weaknesses, you should associate them directly with a threat actor, someone who actually will take that knowledge and use it against you. That's evidence for you to prioritize, focusing the resources you have on where you have established this relationship.
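As a toy sketch of that prioritization idea: vulnerabilities whose associated techniques are used by a tracked threat actor rank ahead of the rest. All of the IDs, technique tags, and actor mappings below are invented purely for illustration.

```python
# Toy illustration of threat-informed prioritization: vulnerabilities that map
# to techniques used by a tracked threat actor rank above those that do not.
# Every ID, technique tag, and actor mapping here is made up for the example.

def prioritize(vulns, actor_techniques):
    """Sort vulnerabilities so those tied to observed adversary techniques come first."""
    def score(v):
        # Count how many of the vuln's associated techniques a tracked actor uses.
        return sum(1 for t in v["techniques"] if t in actor_techniques)
    return sorted(vulns, key=score, reverse=True)

vulns = [
    {"id": "VULN-001", "techniques": ["T1190"]},           # one actor-used technique
    {"id": "VULN-002", "techniques": ["T9999"]},           # no known actor interest
    {"id": "VULN-003", "techniques": ["T1190", "T1552"]},  # two actor-used techniques
]
# Techniques attributed to a hypothetical tracked threat actor.
actor_techniques = {"T1190", "T1552"}

ranked = prioritize(vulns, actor_techniques)
print([v["id"] for v in ranked])  # VULN-003 first, then VULN-001, then VULN-002
```

The point is not the scoring function itself, which is deliberately naive, but the shape of the decision: evidence of adversary use, not raw vulnerability count, drives the ordering.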
The first pillar of threat-informed defense is defensive measures. Has anyone seen this before? Basically, this is a matrix. On the top, you see reconnaissance, resource development, initial access. It basically talks about how attackers get into a system and how they move from one point to the other. Down you have what you call the techniques, which are exactly how attacks are conducted. You also have procedures. This information is collected from different researchers, from different companies that have access to your machines, my machines. They collect all this information, and when they see evidence of an attack, they contribute it to this database. Whatever you see in this database is not a hypothesis. It’s not someone’s theory. It’s evidence of what people have seen. It’s more tangible. The second pillar is cyber threat intelligence. Intelligence is all about getting information: this time around, information about specific individuals or threat actors or attacker groups that are doing stuff.
For example, we have Scattered Spider. Scattered Spider is the group that was responsible for the MGM Resorts attack last year. They basically got into this nice hotel, and they attacked it, and key locks, smart locks were not working, computers were not working, ATMs were not working. Everybody was stranded. They said, if you want us to release all that stuff, you have to give a certain amount of money. They’re still actively involved. Cyber threat intelligence basically tells you such information. You see here the U.S. CISA, they release this information about these kinds of groups. They tell you who they are. They tell you the kinds of industries they attack, the kinds of approaches they use. It’s very tangible information about threat actors that you get. You can also put it in this form for using it.
The third pillar is basically testing and evaluation. We’re back to testing. You have information about attackers, you implement some kind of defenses, but you have to evaluate it because something might be wrong. Either your defenses are not well configured, or maybe somehow the threat information you got is wrong because you get it from different sources, or your environment is designed in a quite unique way and it’s a little bit different. You have to test so that you’re very sure. Let’s just look at an example where we are basically putting all this together.
This is the example I talked about before. Here you see an attacker getting access to AWS Secrets Manager. Because he has access to AWS Secrets Manager, he can basically get keys that allow him access to things like RDS, Redshift, Document Database, every single thing that has keys in that Secrets Manager. Here we are using Datadog SIEM. It's one of the systems that we looked at. That's the emulation here, where we basically looked at the two types of APIs for Secrets Manager. What we found out here, what we refer to as zero-day detections, is that when you use this BatchGetSecretValue API call, this is not recorded in the trail. It's not detected, so it's basically a blind spot. This is just one experiment, but one way you can identify gaps that might be in your system.
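To make the kind of blind spot described here concrete, consider a hedged sketch of a naive detection rule: if the rule matches only the GetSecretValue event name, an attacker retrieving the same secrets via BatchGetSecretValue never triggers it. The event records below are simplified stand-ins for CloudTrail-style log entries, not real log schemas.

```python
# Sketch of a detection gap: a naive rule that alerts only on the
# GetSecretValue event name misses the equivalent BatchGetSecretValue call.
# Event records are simplified stand-ins for CloudTrail-style entries.

MONITORED_EVENTS = {"GetSecretValue"}  # the rule as many SIEM queries write it

def alerts(events):
    """Return only the events the naive rule would flag."""
    return [e for e in events if e["eventName"] in MONITORED_EVENTS]

events = [
    {"eventName": "GetSecretValue", "secretId": "prod/db"},
    {"eventName": "BatchGetSecretValue", "filter": "prod/*"},  # same data, no alert
]

print([e["eventName"] for e in alerts(events)])  # only GetSecretValue fires
```

This is exactly why the talk argues for emulating attacks against your own detections: the gap only becomes visible when you exercise both API paths and check which ones your rules actually catch.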
Conclusion
I want to conclude with this quote from Mike Tyson. He says, everyone’s got a plan until they get punched in the face. It’s very realistic because you have a plan. It’s nice. You spend days, months writing these policies. You have auditors who came and gave you the green check mark. Everything looks good. Is it going to stand when you’re actually attacked? That’s the question you actually have to ask yourself. The next time security people come to tell you all that stuff, they show all the nice papers, you need to ask them, is it going to stand the test of time? I wrote a nice article, “Getting Punched in The Face”, it’s on LinkedIn. These are some resources. We have done a lot of research around this topic, and you can actually see it on our homepage.
Questions and Answers
Losio: Usually, when we play with the cloud, we tend to feel a lot like shared responsibility with the cloud provider, whatever it is, AWS, in this example, or Microsoft Azure, or Google. We think, some is our responsibility for the security, some is their responsibility, or their data center, whatever. We tend sometimes to think that the responsibility maybe is a bit too much on their side. I think, it’s their problem. Where does this fit when I think about sharing responsibility with the cloud provider?
Torkura: The shared responsibility model describes the responsibilities of the cloud service providers as against ours. When it comes to most of these things, everything logical, like configuring your gateways, your VPCs, all of that stuff is your responsibility as a customer. For example, incident response, as long as the hardware is not involved in the attack, AWS will tell you, you were supposed to configure it well. Even if you run to them, they will ask you questions that you should know. They don’t usually have root access to your account. There’s some basic information that you have to provide for them. This is a very important question because you really need to understand the shared responsibility model, things like penetration testing, all those things. AWS will not do it for you. It’s what you should do as part of your responsibilities.
Participant 1: For that and hype stuff like GenAI, AWS and Azure are basically just known platforms. They have established security teams, say, like that. We can talk about shared responsibilities in that regard. But what if we're talking about Anthropic or OpenAI? If we are not talking about AWS that can host Sonnet, or Azure that can host GPT-4, but about purely using the OpenAI-hosted LLMs, for example. They can state whatever they want. What about the trust and all that stuff? Maybe an even more general question about providers that are not that famous, whatever they state. What could be the strategies here?
Torkura: When you’re talking about Anthropic or OpenAI, these are Software as a Service. In Software as a Service, it means that the provider even has more responsibilities, they have more responsibilities. However, for example, if you’re using OpenAI, you get your API key. If an attacker steals it, you can’t go to them and say they’re responsible for that, because it’s your responsibility to have kept the key in a way that it’s secure from being stolen. There are all those things. I think usually they will try to explain it in some kind of document about what you should do and what they should do.
Participant 1: I'm just thinking in a little bit different direction, say the users, so the company, or just the end users that will use this SaaS provider, like OpenAI and Anthropic. Say they will send prompts. These prompts can contain sensitive information. Say OpenAI will have a breach. An attacker will just get the data, and potentially they may store it. It's not like something that will not happen. When they have this data, it will be just taken.
Torkura: Sometimes when companies are breached, like the Capital One attack I mentioned, or other ones, some companies fine them. Maybe just a recent example, CrowdStrike, they had this problem where airports were shut down and all of that. Some airlines fined them. They say, we lost X amount of money because of your fault. You need to give me this. It becomes a court case. If OpenAI gets breached and because of that attackers take advantage and begin to attack your company, you can take them to court, because that data was with them and they are responsible for protecting it, and they failed. Some companies take them to court. I'm not sure if it's always the wise way because in the end it might take a lot of time. Sometimes some companies win because it becomes a legal case. Most of the time all this information is written somewhere in the terms and conditions of service. They write it down. The legal people like reading these documents because they can see these details.
Participant 2: A, normally when you do penetration tests on the cloud providers, they ask you to tell them that you’ll do it. How do you go about with a constant approach? Because if you tell them, yes, I’m constantly doing pen tests. How do they figure out that they’re actually coming from you and not from a malicious source? B, how do you see it with defense in depth? Because if you have a cloud solution that is constantly testing your defense in depth, you need to give it access to the depth. You open another attack vector.
Torkura: You're correct in terms of the traditional penetration tests. Most of the time you're testing from outside. You're looking at the environment externally, and when you start hitting AWS, you get all these alarms, and they say, you need to tell us what you're going to do, we don't want to run around. When it comes to what I'm showing here, attack emulation, we actually have some prior access, which goes to your second question.
Firstly, it means that we are attacking from within, and therefore, AWS doesn’t even know because it’s part of your shared responsibility. You can do it as often as you want, the way you do your normal software testing and they will never ask you a question because it’s happening right inside of your cloud account. In the end, you basically do something similar to penetration testing because you’re knocking all the doors and pushing all this stuff.
You remember I talked about the assume breach mindset. The way we do it is, basically after the attack, we roll back. If we make a bucket public, at the end of the day, part of the process is to take it back to being private. There are also some open-source tools which do not do this, and in the end, you have to put it as part of your work. At the end of the day, you need to clean it up. This is one of the reasons why most people don’t like to do that. In the end, it should be just part of the workflow that, ok, at the end of the day, you have to clean it up and you don’t leave it vulnerable.
See more presentations with transcripts

MMS • Claudio Masolo
Article originally posted on InfoQ. Visit InfoQ

Kubernetes v1.33, codenamed “Octarine” in homage to Terry Pratchett’s Discworld, was released on April 23, 2025. This milestone introduces 64 enhancements (18 stable, 20 beta, and 24 alpha) reflecting the project’s ongoing commitment to scalability, security, and developer experience.
One of the most anticipated features in Kubernetes 1.33 is the promotion of sidecar containers to stable status. Sidecar containers provide a native way to deploy companion processes alongside application containers within the same Pod. This pattern has been widely used in service mesh implementations, logging solutions, and other scenarios where auxiliary functionality needs to be tightly coupled with the main application.
An example of how to implement sidecar containers is the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: alpine:latest
          command: ['sh', '-c', 'while true; do echo "logging" >> /opt/logs.txt; sleep 1; done']
          volumeMounts:
            - name: data
              mountPath: /opt
      initContainers:
        - name: logshipper
          image: alpine:latest
          restartPolicy: Always
          command: ['sh', '-c', 'tail -F /opt/logs.txt']
          volumeMounts:
            - name: data
              mountPath: /opt
      volumes:
        - name: data
          emptyDir: {}
With the stable implementation, sidecar containers can now be properly managed in their lifecycle, with Kubernetes ensuring they start before and terminate after the main application containers, addressing previous challenges with pod initialization and graceful shutdowns.
Kubernetes 1.33 also promotes in-place resource resizing for vertical scaling of Pods to beta status, addressing a long-standing limitation in the platform. Traditionally, changing resource allocations (CPU and memory) for running workloads required pod recreation, causing application disruption.
With this feature, administrators can now adjust resource allocations without disrupting the running application, enabling more flexible resource management in response to changing application demands. This capability is particularly valuable for stateful applications and databases where pod recreation introduces significant operational overhead.
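As a hedged sketch of what this looks like in a pod spec (field names follow the beta API; the pod name, image, and resource values below are illustrative, and the cluster must have the InPlacePodVerticalScaling feature gate enabled):

```yaml
# Illustrative pod sketch for in-place resize (beta in v1.33).
apiVersion: v1
kind: Pod
metadata:
  name: resizable-app
spec:
  containers:
    - name: app
      image: nginx:latest
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired       # CPU changes apply without a restart
        - resourceName: memory
          restartPolicy: RestartContainer  # memory changes restart the container
      resources:
        requests:
          cpu: "250m"
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "512Mi"
```

In recent kubectl versions, the new resource values can then be applied with something like kubectl patch pod resizable-app --subresource resize --patch '...', rather than deleting and recreating the pod.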
The release brings enhanced support for service account tokens, with “bound service account token volumes” now reaching a stable status. This feature ensures API authentication uses industry-standard JWT tokens with proper audience and time bindings, significantly improving the security posture of Kubernetes deployments.
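A minimal sketch of an audience- and time-bound token mounted via a projected volume (the pod, service account, and audience names here are illustrative):

```yaml
# Illustrative pod requesting a bound service account token.
apiVersion: v1
kind: Pod
metadata:
  name: token-demo
spec:
  serviceAccountName: demo-sa
  containers:
    - name: app
      image: alpine:latest
      command: ['sleep', 'infinity']
      volumeMounts:
        - name: bound-token
          mountPath: /var/run/secrets/tokens
  volumes:
    - name: bound-token
      projected:
        sources:
          - serviceAccountToken:
              path: api-token
              audience: my-api          # token is only valid for this audience
              expirationSeconds: 3600   # short-lived; the kubelet rotates it
```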
Kubernetes 1.33 now features a redesigned allocation system for Service IPs. Every type: ClusterIP Service requires a unique IP address cluster-wide, with duplicate allocation attempts being rejected. The enhanced allocator leverages two GA-status APIs, ServiceCIDR and IPAddress. This implementation enables cluster administrators to dynamically expand the IP address pool available for type: ClusterIP Services by simply creating additional ServiceCIDR objects.
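As a hedged sketch, expanding the pool could look like creating an additional ServiceCIDR object (the object name and CIDR range below are example values):

```yaml
# Illustrative ServiceCIDR adding a second range to the ClusterIP pool.
apiVersion: networking.k8s.io/v1
kind: ServiceCIDR
metadata:
  name: extra-service-cidr
spec:
  cidrs:
    - 10.100.0.0/16
```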
Storage capabilities receive attention in this release with Container Storage Interface (CSI) migration reaching stable status for more volume plugins, simplifying the transition from in-tree storage drivers to the more flexible CSI architecture.
On the networking front, IPv4/IPv6 dual-stack networking continues to mature with additional configuration options and improved performance. Network policy logging moves to beta status, providing better visibility into network traffic controls.
Some old features are also deprecated or removed:
- Endpoints API: Deprecated in favor of EndpointSlices, which offer better scalability and support for modern features.
- gitRepo Volume Type: Removed due to security concerns; users should migrate to alternatives such as initContainers performing git clone operations.
- Host Networking for Windows Pods: Support withdrawn due to technical challenges.
Kubernetes v1.33 “Octarine” emphasizes stability, security, and operational efficiency. With features like native sidecar support, in-place pod resizing, and enhanced job management, it empowers developers and operators to build and manage robust, scalable applications. As Kubernetes continues to mature, these enhancements reflect the community’s dedication to addressing challenges in cloud-native environments.
For a comprehensive list of changes, refer to the official release notes.

MMS • Steef-Jan Wiggers
Article originally posted on InfoQ. Visit InfoQ
Google has announced the general availability of Cloud WAN, a new managed wide area network (WAN) solution that leverages Google’s global network infrastructure.
The company states that Cloud WAN is designed to provide a secure, reliable, high-performance enterprise backbone, offering an alternative to traditional multiprotocol label switching (MPLS) networks and complex SD-WAN deployments.
Google’s global network comprises 202 points of presence (PoPs), over 2 million miles of fiber, and 33 subsea cables, backed by a 99.99% reliability SLA. The company says this infrastructure powers both Google Cloud and its Cross-Cloud Network solutions, and now forms the foundation for Cloud WAN.
The announcement addresses the increasing complexity of enterprise networking, driven by the adoption of SaaS and cloud applications, and the rise of AI. Google notes that traditional MPLS networks are high-cost, while SD-WAN with direct internet access (DIA) introduces application performance and security challenges. The emergence of AI, with its distributed infrastructure and demands for scalability, security, and cost-effectiveness, further complicates network requirements.
In a Google Cloud blog post, Subhasree Mandal, a lead on the Global Network Technology team, explained that the AI era brought new challenges, starting with the scale of traffic AI-powered apps and model training sent to Google’s network:
We introduced a multi-shard horizontal network architecture to swiftly grow capacity. Here, each shard is essentially a different instance of the network that exists independently. We can scale the network within each shard and increase the number of shards as demand increases. It’s like we’re offering capacity from multiple ISPs, which ensures redundancy, too.
Cloud WAN aims to simplify enterprise connectivity by offering a unified solution for connecting geographically dispersed data centers, branch offices, and campuses.
(Source: Google Cloud website)
In a blog post, the company highlights two primary use cases for Cloud WAN:
- High-performance, cross-region connectivity: For large, global organizations with extensive data center networks, Cloud WAN offers flexible connectivity options, including Cloud Interconnect and Cross-Cloud Interconnect (for multi-cloud connectivity). A new Cross-Site Interconnect feature, currently in preview, provides layer two private connectivity between data centers.
- Migrating branch and campus networks: Cloud WAN extends Google’s Premium Tier network to securely connect branch offices and campuses to cloud resources and SaaS applications. This provides a managed solution with integrated security and a lower total cost of ownership (TCO). Google’s Premium Tier network is designed to optimize application performance by routing traffic through Google’s network as efficiently as possible, with extensive peering connections.
(Source: Google blog post)
Matteo Di Maggio, Senior Platform Group Manager, Connectivity and Voice at Nestle, concluded in a YouTube video on Cloud WAN:
The transition to Cloud WAN has already shown performance improvements and promises further cost savings by replacing older technologies.
Google claims that Cloud WAN can provide up to 40% faster performance compared to the public internet, and up to 40% savings in TCO compared to customer-managed WAN solutions. The TCO reduction is attributed to reduced reliance on MPLS, consolidation of carrier-neutral facility deployments, and flexible pricing options.
Google also highlights that enterprises can work with their existing managed service providers (MSPs) to migrate and operate Cloud WAN and that global system integrators (GSIs) like Accenture, HCLTech, and Wipro offer Cloud WAN services.

MMS • InfoQ
Article originally posted on InfoQ. Visit InfoQ

This eMag explores the challenges of modernizing, migrating, and scaling applications, spotlighting architectures that are redefining what is possible in software today.
How did Uber migrate critical systems to a hybrid cloud architecture with zero downtime? How can we architect software to reduce costs and latency in the cloud? What are the major scaling challenges in legacy modernization and mainframe systems? And why is technological renovation so often overlooked?
While the scale of some of these migration and modernization efforts may seem extreme, posing challenges rarely faced by smaller organizations, there are valuable lessons to be drawn. Legacy systems are an inevitable part of success in long-standing companies, but transforming them is essential for future innovation.
Free download

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
Boothbay Fund Management LLC increased its stake in MongoDB, Inc. (NASDAQ:MDB – Free Report) by 352.6% during the 4th quarter, according to the company in its most recent Form 13F filing with the Securities and Exchange Commission. The institutional investor owned 10,066 shares of the company’s stock after acquiring an additional 7,842 shares during the period. Boothbay Fund Management LLC’s holdings in MongoDB were worth $2,343,000 at the end of the most recent reporting period.
Several other hedge funds have also modified their holdings of MDB. Morse Asset Management Inc acquired a new stake in shares of MongoDB in the third quarter valued at $81,000. Virtu Financial LLC raised its stake in shares of MongoDB by 351.2% in the third quarter. Virtu Financial LLC now owns 10,016 shares of the company’s stock valued at $2,708,000 after acquiring an additional 7,796 shares in the last quarter. Wilmington Savings Fund Society FSB acquired a new stake in MongoDB in the third quarter valued at $44,000. Tidal Investments LLC raised its stake in MongoDB by 76.8% in the third quarter. Tidal Investments LLC now owns 7,859 shares of the company’s stock valued at $2,125,000 after buying an additional 3,415 shares in the last quarter. Finally, Principal Financial Group Inc. raised its stake in MongoDB by 2.7% in the third quarter. Principal Financial Group Inc. now owns 6,095 shares of the company’s stock valued at $1,648,000 after buying an additional 160 shares in the last quarter. 89.29% of the stock is currently owned by institutional investors and hedge funds.
MongoDB Stock Performance
NASDAQ:MDB opened at $173.50 on Monday. The stock has a fifty day moving average price of $195.15 and a two-hundred day moving average price of $248.65. The stock has a market capitalization of $14.09 billion, a price-to-earnings ratio of -63.32 and a beta of 1.49. MongoDB, Inc. has a 1 year low of $140.78 and a 1 year high of $387.19.
MongoDB (NASDAQ:MDB – Get Free Report) last released its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing the consensus estimate of $0.64 by ($0.45). The business had revenue of $548.40 million during the quarter, compared to analysts’ expectations of $519.65 million. MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. During the same quarter in the previous year, the company posted $0.86 EPS. On average, analysts predict that MongoDB, Inc. will post -1.78 EPS for the current year.
Analyst Ratings Changes
A number of research firms have weighed in on MDB. Needham & Company LLC cut their price target on MongoDB from $415.00 to $270.00 and set a “buy” rating on the stock in a research note on Thursday, March 6th. Piper Sandler cut their price target on MongoDB from $280.00 to $200.00 and set an “overweight” rating on the stock in a research note on Wednesday, April 23rd. Canaccord Genuity Group cut their price target on MongoDB from $385.00 to $320.00 and set a “buy” rating on the stock in a research note on Thursday, March 6th. Redburn Atlantic upgraded MongoDB from a “sell” rating to a “neutral” rating and set a $170.00 price target on the stock in a research note on Thursday, April 17th. Finally, Wells Fargo & Company cut MongoDB from an “overweight” rating to an “equal weight” rating and cut their price target for the company from $365.00 to $225.00 in a research note on Thursday, March 6th. Eight investment analysts have rated the stock with a hold rating, twenty-four have issued a buy rating and one has issued a strong buy rating to the company’s stock. According to MarketBeat.com, the stock currently has an average rating of “Moderate Buy” and an average price target of $294.78.
Get Our Latest Analysis on MongoDB
Insider Buying and Selling at MongoDB
In related news, CAO Thomas Bull sold 301 shares of the stock in a transaction on Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total transaction of $52,148.25. Following the sale, the chief accounting officer now owns 14,598 shares of the company’s stock, valued at $2,529,103.50. This trade represents a 2.02% decrease in their ownership of the stock. The transaction was disclosed in a legal filing with the Securities & Exchange Commission, which is available at the SEC website. Also, CFO Srdjan Tanjga sold 525 shares of the stock in a transaction on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total value of $90,961.50. Following the sale, the chief financial officer now directly owns 6,406 shares in the company, valued at approximately $1,109,903.56. The trade was a 7.57% decrease in their position. The disclosure for this sale can be found here. Over the last 90 days, insiders have sold 47,680 shares of company stock valued at $10,819,027. Insiders own 3.60% of the company’s stock.
MongoDB Profile
MongoDB, Inc, together with its subsidiaries, provides general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
Featured Stories
This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.
Article originally posted on mongodb google news. Visit mongodb google news

MMS • Renato Losio
Article originally posted on InfoQ. Visit InfoQ

During the recent Developer Week 2025, Cloudflare announced the public beta of Cloudflare Secrets Store, a secure way to store API tokens, keys, and credentials. While the long-term goal is to integrate Secrets Store with various Cloudflare products, it currently supports only Cloudflare Workers.
Cloudflare Secrets Store enables developers to securely store and manage the secrets their applications need, from API tokens to request authorization headers. Mia Malden, product manager at Cloudflare, Mitali Rawat, systems engineer, and James Vaughan, systems engineer at Cloudflare, write:
Environment variables and secrets were first launched in Cloudflare Workers back in 2020. Now, there are millions of local secrets deployed on Workers scripts. However, these are not all unique (…) With thousands of secrets duplicated across scripts — each requiring manual creation and updates — scoping secrets to individual Workers has created significant friction for developers (…) Now, you can create account-level secrets and variables that can be shared across all Workers scripts, centrally managed and protected within the Secrets Store.
Cloudflare Secrets Store was initially announced in May 2023, but no news had been shared since then, raising concerns in the community that the project had been discontinued. Two months ago, user waterforthemasses wrote on Reddit:
This is a long awaited feature, especially given the limitations of Worker env variable secrets. Could Cloudflare confirm if this has been shelved or still WIP? And if possible, what is the rough timeline when to expect it?
According to the documentation, Worker secrets are tied to the account role: anyone who can modify the Worker can modify the secret. Access to account-level secrets, by contrast, is restricted with granular controls: Cloudflare Secrets Store uses role-based access control (RBAC), and any changes to the Secrets Store are recorded in the audit logs. Malden, Rawat, and Vaughan add:
Right now, to use a secret within a Worker, you have to create a binding for that specific secret. In the future, we’ll allow you to create a binding to the store itself so that the Worker can access any secret within that store. We’ll also allow customers to create multiple secret stores within their account so that they can manage secrets by group when creating access policies.
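Under the current beta model described above, a Worker binds one specific secret from the store and reads it at request time. The following is a rough sketch under stated assumptions: the binding name `SHARED_API_TOKEN` is hypothetical, and the async `get()` accessor is an assumption about how a Secrets Store binding is surfaced, not a confirmed API detail.

```javascript
// Sketch of the account-level pattern: the Worker is bound to a single
// secret in the shared Secrets Store and resolves its value when handling
// a request. (Binding name and get() accessor are assumptions.)
const sharedWorker = {
  async fetch(request, env) {
    // Resolve the shared, account-level secret through its binding.
    const token = await env.SHARED_API_TOKEN.get();
    const auth = request.headers.get("Authorization");
    if (auth !== `Bearer ${token}`) {
      return new Response("Unauthorized", { status: 401 });
    }
    return new Response("ok");
  },
};
```

The practical difference from a per-script secret is rotation: updating the one account-level value would take effect for every Worker bound to it, rather than requiring a manual update per script.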
The feature was highly requested by the community, with Bruce Lee Harrison asking a year ago on Reddit:
I’m currently building something out that makes extensive use of PKI, and currently I have to manage all of this within my worker and an R2 database. While this works, root keys still present a problem, and the new Secret Store would completely solve my issue. Has CF given any guidance beyond the original blog posting? Has anyone gotten access to the beta?
Cloudflare Secrets Store is currently in public beta, and the Workers integration is available to all customers via the UI and API.
MongoDB, Inc. (NASDAQ:MDB) Given Consensus Recommendation of “Moderate Buy” by Analysts

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
Shares of MongoDB, Inc. (NASDAQ:MDB – Get Free Report) have been given an average recommendation of “Moderate Buy” by the thirty-three research firms that are currently covering the company, Marketbeat Ratings reports. Eight equities research analysts have rated the stock with a hold recommendation, twenty-four have given a buy recommendation and one has issued a strong buy recommendation on the company. The average twelve-month target price among analysts that have updated their coverage on the stock in the last year is $294.78.
A number of research analysts have weighed in on the company. Rosenblatt Securities reiterated a “buy” rating and set a $350.00 target price on shares of MongoDB in a report on Tuesday, March 4th. Truist Financial decreased their price objective on MongoDB from $300.00 to $275.00 and set a “buy” rating for the company in a report on Monday, March 31st. Monness Crespi & Hardt upgraded shares of MongoDB from a “sell” rating to a “neutral” rating in a report on Monday, March 3rd. Scotiabank reiterated a “sector perform” rating and issued a $160.00 price target (down from $240.00) on shares of MongoDB in a research note on Friday. Finally, KeyCorp lowered shares of MongoDB from a “strong-buy” rating to a “hold” rating in a research note on Wednesday, March 5th.
Insider Buying and Selling
In other news, Director Dwight A. Merriman sold 3,000 shares of the business’s stock in a transaction on Monday, February 3rd. The shares were sold at an average price of $266.00, for a total transaction of $798,000.00. Following the completion of the sale, the director now owns 1,113,006 shares of the company’s stock, valued at $296,059,596. The trade was a 0.27% decrease in their position. The sale was disclosed in a legal filing with the SEC. Also, CAO Thomas Bull sold 301 shares of the firm’s stock in a transaction on Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total value of $52,148.25. Following the completion of the transaction, the chief accounting officer now directly owns 14,598 shares in the company, valued at approximately $2,529,103.50. This represents a 2.02% decrease in their ownership of the stock. Insiders sold 39,345 shares of company stock valued at $8,485,310 over the last 90 days. Company insiders own 3.60% of the company’s stock.
Institutional Trading of MongoDB
Hedge funds and other institutional investors have recently bought and sold shares of the stock. Cloud Capital Management LLC bought a new stake in MongoDB during the first quarter valued at about $25,000. Strategic Investment Solutions Inc. IL purchased a new stake in shares of MongoDB during the fourth quarter worth about $29,000. Hilltop National Bank raised its stake in MongoDB by 47.2% during the 4th quarter. Hilltop National Bank now owns 131 shares of the company’s stock valued at $30,000 after purchasing an additional 42 shares during the period. NCP Inc. purchased a new position in MongoDB in the 4th quarter worth approximately $35,000. Finally, Versant Capital Management Inc boosted its stake in MongoDB by 1,100.0% in the 4th quarter. Versant Capital Management Inc now owns 180 shares of the company’s stock worth $42,000 after purchasing an additional 165 shares during the period. Hedge funds and other institutional investors own 89.29% of the company’s stock.
MongoDB Price Performance
NASDAQ MDB opened at $174.69 on Wednesday. The business’s 50-day moving average is $190.43 and its 200-day moving average is $247.31. The stock has a market capitalization of $14.18 billion, a PE ratio of -63.76 and a beta of 1.49. MongoDB has a 1 year low of $140.78 and a 1 year high of $387.19.
MongoDB (NASDAQ:MDB – Get Free Report) last issued its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The business had revenue of $548.40 million during the quarter, compared to the consensus estimate of $519.65 million. During the same quarter in the prior year, the business posted $0.86 earnings per share. As a group, equities research analysts anticipate that MongoDB will post -1.78 earnings per share for the current year.
MongoDB Company Profile
MongoDB, Inc., together with its subsidiaries, provides a general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
Article originally posted on mongodb google news. Visit mongodb google news