Arc-Enabled Servers Run Command Public Preview Feature: Remote Management for Various Environments

MMS Steef-Jan Wiggers

Article originally posted on InfoQ.

Microsoft has announced a public preview feature for Azure Arc-enabled servers: the Run Command. The feature allows customers to manage Arc-enabled servers remotely and securely.

With Azure Arc-enabled servers, customers can manage Windows and Linux physical servers and virtual machines hosted outside of Azure, whether on their corporate network or with another cloud provider. The Run Command preview feature, built into the Connected Machine agent, supports running scripts and centralizing script management across creation, update, deletion, sequencing, and listing operations.

In addition, the command is currently supported through the Azure CLI and PowerShell, supports non-Azure environments, including on-premises, VMware, SCVMM, AWS, GCP, and OCI, and does not require additional configuration or the deployment of any extensions. However, the Connected Machine agent version must be 1.33 or higher.
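As an illustration, invoking Run Command from the Azure CLI (via the connectedmachine extension) looks roughly like the following. The resource group, machine name, and script are placeholders, and exact parameter names may change while the feature is in preview:

```shell
# Illustrative sketch only: run a script on an Arc-enabled machine via the
# Run Command preview. Resource names below are placeholders.
az connectedmachine run-command create \
  --resource-group "myResourceGroup" \
  --machine-name "myArcMachine" \
  --name "checkDiskSpace" \
  --script "df -h"

# List run commands previously created on the machine
az connectedmachine run-command list \
  --resource-group "myResourceGroup" \
  --machine-name "myArcMachine"
```

Because Run Command supports listing, updating, and deleting as first-class operations, the same scripts can be managed centrally rather than copied onto each machine.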

Aurnov Chattopadhyay, a product manager at Microsoft, writes:

Run Command empowers you to perform myriad server management tasks on your Arc-enabled servers, such as application management, security, and diagnostics. For example, you can use Run Command to install or update software, configure firewall rules, run health checks, or troubleshoot issues.

Run Command Operation Overview (Source: Microsoft Tech Community blog post)

The Run Command feature is also available on cloud platforms like AWS and GCP. For instance, AWS Systems Manager Run Command allows users to remotely and securely manage the configuration of their instances and servers across hybrid environments. Meanwhile, GCP offers the Cloud SDK, which provides tools for managing resources and applications hosted on GCP, including a command-line interface (CLI) that allows users to manage their Google Cloud Platform resources from their local machines.

Pratheep Sinnathurai, a senior Azure engineer at baseVISION AG, concludes in a blog post about the Run Command feature:

The Run Command Feature for Azure Arc-enabled Server is a game changer, enabling server admins to extend Azure capabilities to on-premises and multi-cloud environments. To further enhance automation, Azure Runbooks or Azure Functions can be leveraged to automatically configure Azure Arc-enabled Servers.

Yet, in the comment section on a YouTube video by John Savill on the Arc-Enabled Server Run Command, @andrewbates8858 wrote:

It is useful that MSFT has finally added this, but they could have implemented it via the browser the same as normal VMs. The API functionality will be useful, but for a quick command here and there, I’m not so sure. Maybe a V2 thing once its proving to be a useful function.

The feature does not incur any costs, although storing scripts in Azure may incur storage charges. More details on Arc-enabled servers and Run Command are available on the documentation landing page.



Presentation: Sustainable Security Requirements with the ASVS

MMS Josh Grossman

Article originally posted on InfoQ.

Transcript

Grossman: I want to talk to you a little bit about the ASVS, the Application Security Verification Standard, and how we can use it to actually get better value early on in the development process. Let's start off with a story, first of all. I was working with an organization, let's call them Lumber, Plc. This was a big software organization. They had a variety of different products, a variety of different product lines, lots of different teams, thousands of developers, and they were looking to implement a secure software development lifecycle. I came into this quite late on. I came in when they'd already built up the documentation, and they wanted me to help them implement it. They'd built this big document that was top to bottom, front to back, everything you might want from a software development lifecycle with security built in. They'd thought from the very beginning to the very end, all the way through: ok, here are the different activities we want to do, top to tail. They'd come up with the big bang approach. They decided: we, the application security team, are going to go into each product, each team, spend about three months with them implementing everything in this list, and then move on. That's already a little bit of an interesting approach, a little bit of a challenging approach.

Then they thought, what about requirements and design? What are they going to do for that? What have they got in the document for that? For that, they decided they were going to build the checklist. That's what it was called: the checklist. It was a list of security items they needed to consider when they were doing requirements, when they were starting their design phase. I started to ask people about the checklist. I said, what is this? Have you completed this? Have you just done it once, or are you doing it on an ongoing basis? Are you updating it when you make changes? Is this covering just one requirement, or is this covering the whole product? I found it quite difficult to get answers. I found it quite difficult to get a good picture of what was actually going on with this list, and whether it was actually being used. Everyone was scared to talk about it.

Eventually, I sat down and thought, I need to have a better look at what this checklist actually is. I looked at the checklist, and it was a monstrosity. It was huge. It was really long. It was complicated. I'm not surprised people were worried about using it. I'm not surprised people were scared of it. I was scared of it, and I understood most of it. How can anyone work with this thing? How can anyone use that, especially in a faster, more Agile development process? What is anyone supposed to do with that? They can't go through all these questions every single time. The bottom line is we need security to be involved earlier on. We want security to be involved all along the way. We don't want security to come along at the end and say, that's bad. We also can't just drop a ton of paperwork, a ton of checklists, a ton of information on developers early on, and expect them to be able to swim with that, to be able to cope with that, or want to cope with that. We need to think about how we're going to do this in a sustainable way now, and keep going in the future as well, in a way that maintains a level of security but also doesn't drown the developers.

What Are the Problems?

What are the problems? What are the key problems here? Information and requirement overload. Lumber had come in and said, here's this giant, massive checklist that you have to deal with every single time; this always has to be in your head. We can't be handling that much information. We need to customize our approach to how we put security early on. The next problem was that Lumber had this AppSec team that was going to dive in for three months, dive out, and go on to the next team. That was the approach they thought they were going to take. Security can't just be this outside force. Security can't just come in, make a big noise, and then go back out again. Security needs to be integrated and security needs to be contextualized. You need to be clear: what are we concerned about from a security perspective in this case? If everything is important, nothing is important. We can't just say you need to do all of these things all the time, that everything has to be done right now, right away. We need to be able to prioritize. We need to be able to say, what's most concerning to us at this point in time? What should we be most concerned about right now? Finally, let's say for argument's sake that this three-month approach, going in, working with the team, going out again, went ok, which I can guarantee you it did not. What about tomorrow? What about going forward? We don't need this just to work today, we need this to work on an ongoing basis. We need to operationalize this process. We need to make sure that this process works today, tomorrow, and for future development cycles as well.

Profile and Outline

I live and breathe AppSec, whether on the breaking side or working with developers, working with development teams, looking at how we can build software securely in a more efficient way. That's very much my day to day. I'm a consultant, which means I go from company to company and see lots of different environments, lots of different sizes, lots of different industries. Finally, we're going to talk about the ASVS. It's an OWASP project that I'm a co-leader of. Being a co-leader means, obviously, I'm quite familiar with using it. I've talked a lot in past talks about what the ASVS is. That's not the main purpose here. I want to focus on actually using it, the practicalities of how we're going to build this into our development lifecycle. This is basically the plan: what is the ASVS? How does it compare to other OWASP projects? Then, how can we use it in our requirements process? These are the different sections that we'll talk through.

What Is The ASVS?

What is the ASVS, or rather, what is the ASVS not? Which I think is a good point for context. OWASP Top 10 Risks. This is the OWASP project everyone's heard of. It's a great project, released every three to four years, with a really strong team of leaders with a lot of security expertise. They also gathered a lot of public comments and a lot of public data for the most recent versions, 2017 and 2021. It's very frequently cited. It's mentioned by all sorts of different organizations as something to consider. It's designed as an application security awareness document. Awareness is the big word here. This isn't controversial; this is front and center of what this project is about. The idea is that it is an awareness document: it's there to build awareness about application security. It's not a standard. It's not comprehensive. It's not something that you should be assessing yourself against. These are the 10 things that we think are most concerning from an application security perspective. It's certainly not all the things. It's not necessarily 10 things either; one section may actually cover a wide variety of different issues.

The other drawback to the top 10 list is that it's bringing problems. It's saying, here are 10 problems, now what are you going to do? I don't love bringing that to developers. I don't like bringing problems. I want to bring some form of solution or something proactive. Like, for example, the OWASP Top 10 Proactive Controls. This is another great document. This is a guidance document for developers and builders on how to build software securely. We're not bringing problems, we're bringing solutions. I'm not going to go into detail on each of these sections, but you get the idea. It's about providing security mechanisms. It's a great starting point. It's practical. It gives sensible prevention advice. It also has a great team of leaders with a lot of experience. Still, it suffers from some of the same problems that the top 10 risks suffer from. It's not comprehensive. Again, it's more for awareness. It's more of a starting point: ok, you want to secure software, let's start from here, and then you can graduate onto other things afterwards. It's also not organized as a standard, so it's not very easy to assess yourself against it. It's more of a narrative document. It's a good read, but it's not necessarily useful for a comparison: where are we? What is our position?

So what are we going to do if we want to build secure software in a methodological way? The answer is the ASVS. Finally, what is the ASVS? The ASVS brings a set of requirements that you'd expect to see in a secure application. It's designed to be an actual standard, something that you can assess yourself against, compare yourself against, compare to other applications, and it's generally organized in a very methodological way. We consider it to be leading practices, which means that we try to look for requirements, controls, and mechanisms that are valid not just today, but will also be valid in the future; maybe they're quite new today, and we expect to see them become more standardized in the future. While, again, I'm lucky to have a bunch of very skilled, very experienced co-leaders, it's very much developed in the open on GitHub. Anyone can open issues. You can suggest changes. You can provide your feedback, provide your ideas. All the discussions are in the open, which means that it's not just what the project leaders have seen; we're also getting feedback from the wider industry as well. It's split into three levels. You have the initial requirements, which are considered the minimum level. Then you add more requirements onto that to get to the standard level where we'd like everyone to be, level two. Then finally there's level three for the most sensitive applications, with the most sensitive data and the highest-value transactions; at level three, you'll need to be doing all the requirements.

Security In the Requirements Process

This is what the document looks like. As you can see, it's quite detailed. We also try to make sure that each requirement is standalone, focusing on a particular issue that you can then look at how you're addressing in your own application. What I do want to talk about is security in the requirements process. These are the problems and solutions that we talked about at the beginning. Let's go back to the story of Lumber, Plc., and the challenges they had with their giant checklist. The ASVS is a big checklist too, but the whole point here is to show how we can actually work with it practically. These are the four areas that I want to talk about, and I'll give some examples of each one. Let's start off with customization. Lumber tried to bring all this information at once; they tried to drop all this information in one go. How can we customize to make it more focused and more specific to the particular problem at hand? The ASVS has about 280 requirements. We don't want to be looking at all of those for every single feature. We want to be able to focus on what counts: what is important for this particular feature, for this particular stage in the application's development? The first useful thing is forking the ASVS: take the ASVS, take your own copy of it, and start working on that copy. There are going to be things that are specific to your organization, things that are specific to your situation. It's definitely worth starting off with your own version that you can then make certain modifications to. That's very much supported and recommended by the document itself.

There are a few do's and don'ts here. You do want to match it to your particular situation. If you can justify dropping certain irrelevant requirements, then do that. For example, if you're not using GraphQL anywhere in your organization, you probably don't want to be thinking about the requirements related to GraphQL. Certainly, if you've got changes that you think are relevant not just for yourself but also for other users of the ASVS, please do send those to the upstream project. On the other hand, I strongly recommend not changing the numbering. I'd suggest that you try to keep the requirements where they are, so that if in the future you want to compare against the main standard, you've still got that comparison point. Also, don't make changes without a rationale. Don't make changes or drop things without explaining why, because two years down the line, someone's going to ask why, and you want to have that answer clearly to hand. If you're going to drop things because they're not in use at all, then that's fine. Don't just drop things because, ok, today we can't think about this, we'll think about it tomorrow. It'll get lost, and suddenly the things that were temporarily dropped will end up being permanently dropped.

The first thing is to tailor it to your organization. Again, what's relevant to you? You want to identify that upfront. You want to be clear: we know that these requirements are specifically relevant, so we want to focus on those; we know these requirements are less relevant, so we want to focus less on those. We don't want a situation where your developers, at the code phase, at the actual point of development, are trying to make that decision, trying to make that determination. Or where they're pushing back, saying, "Why have you given me this requirement? This doesn't relate to me. This doesn't relate to what I'm doing right now." We want to try to address that in some way up front. We want to make it specific to a feature or a product. We could do that by saying we're going to focus on this level of requirements, or we're going to focus on this chapter. It's not super straightforward that way. The levels are not necessarily matched to how a particular feature works, or how a particular organization works. Similarly with the chapters: maybe if you're building an authentication mechanism, then, yes, you can take the authentication chapter and focus on that. Often, it won't be as clear-cut as that. It won't be as straightforward. You may need requirements from a variety of chapters, which is where the idea of custom splitting comes in.

The idea is you want to create a customized way of saying, here are the requirements we want to see for these features. The way I suggest doing that is by having categorizations for features, almost like questions: does the feature do this? Does the feature do that? Does the feature change your authorization mechanism? If so, we probably need to show the requirements for a relevant way to do authorization. If it doesn't, then we don't want to show those requirements. Does the feature accept untrusted input? If so, then we need to start thinking about how we're protecting ourselves against that input, and how our inputs are going through the system. If it doesn't, if it's just a simple view that doesn't really accept anything to process, then again, we don't need to see those requirements now. Does it perform encryption operations? There are a whole bunch of requirements in the ASVS related to cryptographic keys and key management and algorithms. If we're not doing any encryption beyond TLS, then we don't need to talk about that now. We don't need to see that now. We don't want our end users to have to go and mark things as not applicable. We want them to just not have to worry about that. We want that off their head.

Here's an example I built for an organization. These are questions we started asking about the features. You'll notice that this is specific to the organization; it's difficult to make this generic. That's why it doesn't come in the ASVS as it is. It's difficult to make this generic, something that will apply to all organizations. For example, they were using Auth0 for authentication, so a lot of authentication concerns were actually offloaded to Auth0 in the first place. Here we can see that for this particular feature they selected that it relates to business logic, and that it's modifying how OTPs are generated, which is a strange combination, but there we are. Now we can see it's given us a list of ASVS requirements: these are the ones you need to worry about, these are the ones you need to think about for this feature. Everything else can wait for a feature where it becomes relevant. Then maybe we select another couple of questions as well; maybe the scope of the feature has been expanded. Again, alongside that, extra requirements come in. We've still got requirements to go through. We can't get away from thinking about security at the requirements stage, but we can say which requirements we need, which aspects of security are important to us.
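The custom-splitting idea above can be sketched in a few lines of code. This is purely illustrative: the trait names and the mapping from traits to ASVS requirement IDs are assumptions made up for the example, not an official ASVS categorization.

```python
# Sketch of "custom splitting": map yes/no feature-trait questions to the
# subset of ASVS requirement IDs a developer needs to see for that feature.
# Trait names and requirement IDs below are illustrative placeholders.

REQUIREMENTS_BY_TRAIT = {
    "changes_authorization": ["4.1.3", "4.2.1"],
    "accepts_untrusted_input": ["5.1.3", "5.1.4"],
    "performs_encryption": ["6.2.1", "6.2.2"],
}

def requirements_for(feature_traits):
    """Return the de-duplicated, sorted ASVS requirement IDs that apply
    to a feature exhibiting the given traits."""
    ids = set()
    for trait in feature_traits:
        ids.update(REQUIREMENTS_BY_TRAIT.get(trait, []))
    return sorted(ids)

# A feature that changes authorization and does its own crypto sees only
# the requirements tied to those two traits, nothing else.
print(requirements_for(["changes_authorization", "performs_encryption"]))
```

A real implementation would live in a form or ticket template rather than code, but the principle is the same: the filtering decision is made once, up front, instead of by each developer at the point of development.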

In summary, we want to take a copy of the ASVS, a fork of the ASVS that suits us. We want to tailor the requirements to the organization. I strongly recommend using some custom splitting, some custom categorization, to make that feature-specific, so that what comes in front of the individual developer is only what they need. They shouldn't need to start marking not applicable on all sorts of other items.

Security As an Attribute of Software Quality

We talked about Lumber, Plc., and the security team parachuting in, being around for a few months, and then thinking they were going to jump back out again and go somewhere else. This is representative of a wider problem of security thinking about itself in two ways. First of all, they're not part of the team. They're the security team. They're special. Security are this special team that come in with this expertise, and they're going to say something, and it's going to be like a commandment that everyone now follows. Everyone's going to do that, and that's it. Security said it, it must be amazing. I'm going to give some advice, certainly for the security people here, and it's going to be quite dangerous for you if you don't take it the right way and apply it the right way. Security is not special. Security is not a special snowflake. This is not some esoteric, on-the-side, unusual aspect that is completely separate from the process of building software. If we think about software, we think about what we need in software in order for it to be considered acceptable, as in, we can deploy this software and people can use it. If you look at Wikipedia and the quality attributes of software, the list gets pretty long. ISO 25010 defines quality attributes for software. It talks about performance. It talks about portability. It talks about usability. And it talks about security. Security is just another attribute of software quality.

If someone's going to deploy a piece of software, they're going to ask: does it perform acceptably? Does this feature respond in an acceptable amount of time, or is a user clicking and then waiting 10 seconds for something to happen? They're going to ask: can a user understand how to use this? Is it clear what the flow is? Is it intuitive for a user to walk through the feature, or are they going to be scratching their heads, searching Google, and getting frustrated? We should be using the same logic: is this feature secure? Is this feature going to operate in the way we expect it to? Is it going to expose all of our users' data, or crash our whole application? No one's going to accept releasing a feature that doesn't perform acceptably, or that users are going to get angry about because they don't understand it. Why should we accept a feature being rolled out that's not secure? I think that's an important way of contextualizing how we think about security in our development process, and in our application process overall. Security is just another attribute of software quality.

Threat Modeling

One minor problem is that security doesn't necessarily mean the same thing to everyone. Not everyone has the same threat model. Different organizations have different security concerns; different things are going to ruin their day. Threat modeling is obviously a bit of a buzzword, so I want to quickly define it in a very simple way, just to make sure it's clear what I'm talking about here. From my perspective, threat modeling in this context means we are intentionally considering what can go wrong. We're not just doing it as a side thing; we're actually thinking about the things that could go wrong in this particular case: what are the security events that could happen and make things bad for us, in our particular situation? Intentionally considering what can go wrong in your case. I think that's a very simplified way of talking about threat modeling. I did get this definition approved by a threat modeling activist who also happens to be my boss and one of the authors of the Threat Modeling Manifesto. If you look at the Threat Modeling Manifesto, you can read a lot more about threat modeling. For our purposes, this gives us a simple enough definition of what we're talking about.

The idea is, what's going to be worst for our business? What's going to make our organization, our business, have a really bad day? We want to make sure these issues are documented. Maybe that's something that's already been done. It may be that the security team has actually prepared that somewhere. Maybe someone in the company's risk management organization has prepared it. Maybe they know what the worst impacts to the business are. We need to have that in mind, because it's going to guide the next stages we go through. It's going to give us the context to think about what we need to be concerned about from a security perspective. Consider these potential impacts. If we lose customer data, then our reputation will be irreparably damaged, and we won't have any customers anymore. These aren't the only impacts; there are obviously ethical considerations as well. I'm just trying to pull out specific examples. If our site is down for more than 30 minutes, we will lose more revenue than we can afford. Maybe we're a very busy e-commerce site, and the day before Christmas, it goes down. Suddenly, we can't take orders, we can't take in money. That's going to be a huge revenue hit. If an attacker could alter data in our system, then our customers wouldn't trust us, and they'd go elsewhere. Our business model relies on trust from our users; if our users don't trust us, we don't have a business anymore. These are examples of what might be the most concerning thing. For each organization, that's going to be different. Each organization has to figure that out.

In summary, we want to provide context. We want to understand that security should just be another quality attribute, just another thing we consider when asking, is this software acceptable or not? We need to think about what is going to be the worst impact for our business, what's going to make our business have a really bad day. We want to document that, or find where someone else has documented it, and use it as a guide for the other problems we're trying to solve here. Ultimately, Lumber really needed to have this context. They didn't have it; they were just doing a one-size-fits-all dive in, dive out. If they'd had someone who was more permanently attached to that particular team, or who was bringing better context to their process, it would have been easier to show the teams, here are the important things we need to consider.

We now need to use this context to prioritize. We need to think about what we want to consider today and what we want to consider tomorrow. We can't do everything today. If everything is important, nothing is important. We've got a bit of a challenge here, and maybe it's a little more difficult, because if security is just another requirement, just another part of software quality, then we have to balance it against everything else. That's why we need this prioritization. We need to be able to say, this is the most important thing to do. Then, when I go to a product manager and say, we need to implement this particular requirement, we need to carry out this particular mechanism, we can balance that against the product manager saying, "I need this developer to work on making this faster, or making it more understandable." We have to balance against everything else, so we have to come with the most important aspect. For that, we've got the threat model. Again, this was a problem with Lumber. Lumber, Plc. didn't have this prioritized approach. They couldn't say, we just need to do this here. Their idea was, we need to do everything, we need to cover everything. Now we've got our threat model from before that we can use to figure out how to prioritize.

Real Problems Based on Threat Model

We talked about potential impacts before. If we lose any customer data, our reputation will be irreparably damaged. Maybe we're thinking about a particular feature, say a view-my-user-profile feature, and a malicious user could use it to view other users' data as well. This key impact is one of our big impacts; it's one of the main items in our threat model. This is a scenario we're going to be worried about, so we're going to want to make sure that a requirement like 4.2.1, around preventing insecure direct object references and preventing access to records that a user shouldn't have access to, is front and center for this feature. Going back to the second potential impact, around availability: maybe we're creating a photo upload feature, and if someone sends lots of very large files, very large photos, to this feature, it will crash the application. Maybe the application will spend so much time churning, trying to process these files, that it won't have time to do anything else, and no one else will be able to use the application. In that case, maybe we have wider performance questions, but certainly for requirements around large files, such as 12.1.1, we're going to want to make sure that we're hitting those first, considering those first.
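As a minimal sketch of what a 4.2.1-style check can look like in code, consider a view-profile lookup that verifies ownership before returning a record, rather than trusting whatever ID the request supplies. The function names and the in-memory store are illustrative assumptions, not from the talk:

```python
# Sketch of preventing an insecure direct object reference (IDOR):
# look the record up by ID, then verify the requesting user owns it.
# The data store and names below are illustrative placeholders.

PROFILES = {
    "p1": {"owner": "alice", "email": "alice@example.com"},
    "p2": {"owner": "bob", "email": "bob@example.com"},
}

def get_profile(requesting_user, profile_id):
    """Return a profile only if it exists AND belongs to the requester."""
    profile = PROFILES.get(profile_id)
    if profile is None or profile["owner"] != requesting_user:
        # Same error for "not found" and "not yours", so an attacker
        # cannot use the response to enumerate which IDs are valid.
        raise PermissionError("profile not found")
    return profile
```

The key point is that the authorization decision happens server-side on every access; hiding the ID in the UI, or relying on IDs being hard to guess, does not satisfy the requirement.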

Or maybe it's an integrity question, about users trusting the data in our application. Maybe we're building some currency exchange platform, some money exchange platform, where a user can look at the exchange rate ticker: how many U.S. dollars can I get for £1? Suppose they can not just view that ticker, but change what it shows. If users start making trading decisions based on that incorrect ticker, then those users are going to lose trust in our application, and they're going to drop us and go somewhere else. In that case, we might want to make sure of the principle of least privilege: what's read-only is definitely read-only, what's writable is writable, and only the correct users can access the functions they're meant to, as set out in 4.1.3. We want to make sure that's key to our considerations for this particular feature. Again, we're already at the stage where we've customized. We should have already said, these are the requirements that are relevant to this particular feature. Now, maybe we're zeroing in even more and saying, we know these requirements are important from this split, and we know this subset of those requirements is important, because they're specifically indicated by the threat model as a potential issue.

Tradeoffs

The other thing to think about is tradeoffs for prioritization. We need to think about what we want to consider when we say, I'm going to do this requirement, I'm not going to do that requirement; I'm not going to prioritize this, I'm going to prioritize that. Difficulty versus criticality: we want to think about how important a requirement is versus how difficult it is to implement. If you've got a requirement that's going to bring us no benefit, and it's going to be really difficult, then, of course, that's not a question; we're probably not going to go for that, now if ever. Then it becomes a question of, what if we've got a very important requirement, but it's really hard to implement? Maybe we've got a slightly less important requirement, but it's super easy, a nice quick win. I would certainly recommend trying to balance between the quick wins and the longer-term wins. On one hand, you want to show progress, to show that some controls have been implemented. On the other hand, you can't lose sight of the more difficult tasks that are still important. You want to make sure that you can progress on both fronts. Which brings us to perfect versus good. We don't want to be in a situation where people are constantly complaining about a security control: "This security control isn't 100% effective, because in this edge case, or that edge case, it won't work. It's terrible." Password managers are a great example. People love to say, "In this very niche case, a password manager didn't encrypt your password in multiple ways." That doesn't invalidate the use of password managers. It doesn't mean that what you should do is use the same password everywhere and then write it on a Post-it note on your laptop. An imperfect control is better than no control, and incremental improvement is always going to be an easier way of getting there. Again, it gives us these quick wins. It gives us an initial stage that we can then progress from and say, we've done a basic version of this control, of this requirement, and we're going to enhance it going forward.

There’s also a cultural aspect here. We don’t want to have to battle about every single security requirement, every single issue. We need to consider, what’s the team’s current security appetite? How much of their time, bandwidth, and mind space has been occupied recently with security issues? Do we want to add to that, or can we delay that slightly? I’ve seen teams that are just completely drowning in security issues of one sort or another, especially tool output. Tool output is great at making developers very angry. You get 1,000 results from a static analysis tool, from a code scanning tool, and developers say, “No, this is nonsense.” They have to spend a day triaging it, and now they’re really angry with security in general. You want to avoid that situation to begin with. The wider point here is, we want to think about how much security has been on these developers’ minds. Can we avoid overloading them? Can we say, let’s talk about the most important thing today, and talk about this other thing tomorrow, or next week? We don’t want to fight all of our battles at once.

Exercising Balance

With that being said, it will always be harder to do this later. We don’t necessarily want to have to go back and add security controls after we’ve already deployed something. It’s going to be potentially technically more complex. It’s going to be hard to get buy-in to go back to that. You’re always going to have new security things as well. There is a balance to be had here. We don’t want to try and do everything at once. We also need to be mindful that we don’t want to just push things to later and expect that we’re going to have time then. Hopefully, through these customization and prioritization mechanisms, that gets a little bit easier, and you can have a better way of coming up with that decision.

Security Backlog

One thing I would recommend is having a security backlog. We know about the product backlog, where we keep a list of all the features we haven’t implemented yet but want to implement at some time in the future. We should keep a security backlog as well: the requirements and mechanisms that we know we should be doing, that aren’t implemented now, but that we want to implement in the future. For each item, we probably want to note how much effort we expect implementing it will take. How difficult will it be? How complicated is it? How much research does it require?

Maybe also a little bit about what it involves, what sort of work it is, so it’s a little clearer what the item actually entails. The product backlog tends to be a little bit free-form, and product managers decide what they want to take here and there. With the security backlog, you probably want to be a little more strict. You probably want to say, this needs to be implemented before the next release. This can maybe wait until a particular deadline, three months, six months. This can just be prioritized alongside everything else. We don’t want to leave it to someone’s discretion of, I’ll pick this up when I’ve got time. That means we need to be monitoring these service level objectives. If our objective is the next release, we need to be monitoring, did we get all those requirements implemented in the next release? If it’s a few months, did we get these requirements implemented by that point? By monitoring that, and then reporting back, we can assess how well the backlog is working for us, and how well we’re managing to keep on top of our security requirements while also maintaining velocity. If we’re seeing that too much is being left by the wayside and objectives aren’t being hit, maybe we need to take a stricter approach earlier on, and prioritize more things up front.
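To make that tracking concrete, a backlog item with an SLO deadline might be modeled along these lines. This is a minimal Python sketch, not a tool from the talk; the field names and requirement IDs are illustrative:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class BacklogItem:
    """One deferred security requirement with an SLO deadline."""
    requirement: str   # e.g. an ASVS requirement ID (illustrative)
    effort: str        # rough sizing: "S", "M", "L"
    due: date          # SLO: must be implemented by this date
    done: bool = False

def slo_breaches(items, today):
    """Return items past their SLO deadline and still not implemented."""
    return [i for i in items if not i.done and today > i.due]

backlog = [
    BacklogItem("ASVS 4.1.3", "M", date(2024, 3, 1)),
    BacklogItem("ASVS 2.4.1", "S", date(2024, 6, 1), done=True),
]
print([i.requirement for i in slo_breaches(backlog, date(2024, 4, 1))])
```

Reporting then reduces to running the breach check at each release or review point and feeding the result back into prioritization.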

In summary, for that section, we want to prioritize. We want to figure out what our worst-case scenario is, and therefore which requirements we want to consider first. We want to think about the tradeoff considerations. Do we want to focus on the most critical versus the most difficult? How can we balance between those? How can we balance between getting quick wins with an imperfect control and the longer term, when we get a more complete control? Also, think about the risk appetite of teams. And if this prioritization means we’re going to delay certain items, how do we track that? How do we keep a backlog? How do we make sure that we’ve got tracking of that going forward? Certainly, at Lumber, Plc., they hadn’t done prioritization, teams were just given everything every single time. They didn’t really have a mechanism to say, here are the most important things at this point in time, and here’s what you can delay. They also didn’t really have that mechanism of having the tradeoff discussion. The idea was, ok, you need to do all these security requirements. This is what you said you were going to do, so now you’re going to do it. There wasn’t that ongoing engagement, that ongoing discussion about, how do we maintain our velocity while also actioning the most important items? The final big question is the today and tomorrow thing. Lumber went in for three months to implement this. They did a big bang approach of training, where they said, developers are going to do this training, we’ve told them what to do, and off we go. Now that the process is running, we need to operationalize it. We need a reusable way of doing this. We need to make sure that it works going forward as well.

Security Fragmentation

One of the big challenges here is fragmentation. We know that certain security challenges, certain security requirements, repeat themselves over and over. How you authenticate users, how you validate their permissions, how you make untrusted content safe in a particular context, these are all things that repeat. If we have different solutions for these problems in different places, we end up with a bunch of problems. We don’t have a unified way of doing this. We don’t have a unified policy. We don’t have one place where we decide what the configuration looks like, or which things are considered safe and which are not. It’s also going to be a lot harder to test these items, a lot harder to say, is this operating correctly? If we decide to change policy, we now need to go and change it in lots of different places. Maybe there’s inconsistency between developers as well. One developer thinks this is the standard, another developer thinks that’s the standard, and they’re trying to communicate while working from different rulebooks. Ultimately, it leads to a much more complex and risky situation.

Unified Solution

I think it’s very important to have some form of unified solution. When we’re solving a particular security problem, we want one specific solution. Ideally, that should be across the organization, across the company. Even if we’re using different languages, and maybe a different underlying library is doing the work, the rules should be the same. The rules that define how it’s done should be the same, and they should be centrally documented and centrally managed. Ideally, we want one single source of truth. We want a one-stop shop where developers can go and say, these are the organization’s development security policies, these are the libraries or the rules I need to comply with. With one place, it’s easier to maintain, and it’s easier for developers to find.

Documented via ASVS

If we’re already using the ASVS as a way of documenting what we think developers should be doing, we can add to this information as well. We can have this specific solution whereby we’re saying, this is the ASVS requirement.

Here’s how we do it in our organization: effectively adding the how to the what. You can also add extra requirements. Maybe there are specific security requirements that aren’t mentioned in the ASVS but are relevant for your organization; you can add them there as well. For example, maybe we’ve got requirement 2.4.1 from the ASVS about how passwords are stored. Lumber, Plc., could have had specific libraries that they used to perform that storage, to hash, to validate. Suddenly, we’re not telling developers, “Use this algorithm, use this mechanism, use this configuration,” because it’s all handled by this library. The developers see that requirement alongside how they do it. There’s no question of, now I need to go and look it up in OWASP, now I need to go and Google how I do this in my language, because it’s already there. It’s already centrally stored. Developers have that one-stop shop to go to.
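As a sketch of what such an in-house wrapper library might look like, here is a minimal Python version using the standard library’s scrypt. The function names and cost parameters are illustrative, not Lumber’s actual library or a vetted production design; the point is that developers call two functions and never pick algorithms themselves:

```python
import hashlib
import hmac
import os

# Cost parameters chosen once, centrally; developers never touch them.
_N, _R, _P = 2**14, 8, 1  # scrypt work factors (illustrative values)

def hash_password(password: str) -> bytes:
    """Hash a password with a fresh random salt; returns salt + digest."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=_N, r=_R, p=_P)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=_N, r=_R, p=_P)
    return hmac.compare_digest(candidate, digest)

stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", stored))  # True
print(verify_password("wrong guess", stored))                   # False
```

The requirement in the central document then just says: store passwords only via this wrapper.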

Another example: output encoding. Very complicated, lots of different ways of doing it in different contexts. Maybe we know that all of our HTML is written using Angular or React, and we tell developers, you have to use a standard binding, because a standard binding will protect you. If you’re using a non-standard binding, you need to get that reviewed specifically. Then they know, if I’m using a standard binding, I don’t need to worry. I don’t need to get too concerned about how I’m doing this. If I suddenly do something non-standard, if I start rendering HTML in an unusual way, I know that I need to go and speak to the security team and figure out how we do that.
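The underlying idea of a standard binding, that untrusted values are encoded for the output context by default, can be illustrated outside any particular framework. A minimal Python sketch using the standard library (the input string is invented):

```python
import html

# What a framework's standard binding does for you automatically:
# untrusted values are HTML-encoded before they reach the page.
user_input = '<img src=x onerror=alert(1)>'
safe = html.escape(user_input)
print(safe)  # &lt;img src=x onerror=alert(1)&gt;
```

In Angular or React the framework applies this encoding inside the binding itself, which is exactly why bypassing the standard binding is the case that needs a security review.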

Other examples: the ASVS contains all sorts of requirements about security headers. Maybe Lumber has a unified reverse proxy that all applications go through. That proxy adds those security headers automatically, so the developers don’t need to worry about it. They just need to know, if we’re using this proxy, these headers are going to be added, and these requirements are ticked off our list as well. In this case, you may not even show the developers those requirements; maybe they don’t need to see them. Maybe we say there’s no action for them beyond using the reverse proxy. Again, we’re giving them the how, not just the what. That means they know not only how to do this now, but how to address these requirements in the future.
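To illustrate how a shared layer can tick header requirements off centrally, here is a minimal WSGI middleware sketch in Python. In practice this would live in the shared reverse proxy; the header list and names here are illustrative, not a complete policy:

```python
# Centrally managed security headers, applied to every response.
# The list is illustrative, not a full policy.
SECURITY_HEADERS = [
    ("X-Content-Type-Options", "nosniff"),
    ("X-Frame-Options", "DENY"),
    ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
]

class SecurityHeadersMiddleware:
    """Wraps any WSGI app and appends the shared headers to each response."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        def patched_start(status, headers, exc_info=None):
            return start_response(status, headers + SECURITY_HEADERS, exc_info)
        return self.app(environ, patched_start)

def app(environ, start_response):
    # A trivial application; it knows nothing about security headers.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

wrapped = SecurityHeadersMiddleware(app)
```

The application team writes `app`; the headers arrive from the shared layer, and updating the policy means editing one list in one place.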

Lumber, Plc., had this big bang approach: we go in, we talk, we go out again, everything gets done. We need to make that work on an ongoing basis. We need to give our developers the tools to be able to do that today, tomorrow, next month, next year. All the while, we’re using somewhere centralized, so we can keep it maintained and updated, add in the latest guidance, and only have to do it once. We’re not having to go to every single team and say, here’s new guidance, here’s how you should be doing it now.

Summary

We talked about four main ideas here. Contextualizing: making sure that we’re clear about how security fits in. Customizing: making sure that the ASVS is focused on the particular use case, feature, or functionality that is currently being worked on. Prioritizing: defining what’s most urgent, which requirements address our biggest issues. Finally, operationalizing: finding ways to make this reusable, ways to give the how and not just the what, so we can give developers actionable guidance alongside the requirements that we’re giving them.

Key Takeaways

Tailor the ASVS to your needs. Consider security as a characteristic. Identify what’s most concerning to you, and use that to prioritize. Find ways of making security reusable, applicable, and operationalized for the future, so that developers can take it in bite-sized chunks and carry on using it.

See more presentations with transcripts

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



MongoDB, Inc. (NASDAQ:MDB) Given Consensus Rating of “Moderate Buy” by Brokerages

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) has been assigned a consensus recommendation of “Moderate Buy” from the twenty-five research firms that are covering the company, Marketbeat.com reports. One investment analyst has rated the stock with a sell recommendation, three have assigned a hold recommendation and twenty-one have given a buy recommendation to the company. The average 12-month price objective among analysts that have updated their coverage on the stock in the last year is $430.41.

MDB has been the topic of several research reports. TheStreet raised shares of MongoDB from a “d+” rating to a “c-” rating in a research note on Friday, December 1st. Bank of America started coverage on shares of MongoDB in a research report on Thursday, October 12th. They set a “buy” rating and a $450.00 price target for the company. KeyCorp cut their price objective on MongoDB from $495.00 to $440.00 and set an “overweight” rating for the company in a research note on Monday, October 23rd. UBS Group reiterated a “neutral” rating and set a $410.00 target price (down from $475.00) on shares of MongoDB in a research report on Thursday, January 4th. Finally, Capital One Financial upgraded MongoDB from an “equal weight” rating to an “overweight” rating and set a $427.00 target price on the stock in a research note on Wednesday, November 8th.

View Our Latest Report on MDB

Insider Activity

In other MongoDB news, CAO Thomas Bull sold 359 shares of the company’s stock in a transaction that occurred on Tuesday, January 2nd. The stock was sold at an average price of $404.38, for a total transaction of $145,172.42. Following the completion of the sale, the chief accounting officer now owns 16,313 shares of the company’s stock, valued at $6,596,650.94. The sale was disclosed in a document filed with the Securities & Exchange Commission, which can be accessed through the SEC website. Separately, CFO Michael Lawrence Gordon sold 7,577 shares of the business’s stock in a transaction dated Monday, November 27th. The shares were sold at an average price of $410.03, for a total value of $3,106,797.31. Following the transaction, the chief financial officer now directly owns 89,027 shares in the company, valued at $36,503,740.81. That transaction was also disclosed in an SEC filing. In the last 90 days, insiders sold 147,029 shares of company stock worth $56,304,511. Insiders own 4.80% of the company’s stock.

Hedge Funds Weigh In On MongoDB

A number of institutional investors and hedge funds have recently added to or reduced their stakes in the company. Nordea Investment Management AB raised its stake in shares of MongoDB by 298.2% during the fourth quarter. Nordea Investment Management AB now owns 18,657 shares of the company’s stock worth $7,735,000 after buying an additional 13,972 shares during the last quarter. Northside Capital Management LLC bought a new stake in MongoDB in the fourth quarter valued at approximately $209,000. IFP Advisors Inc increased its stake in MongoDB by 197.5% in the fourth quarter. IFP Advisors Inc now owns 4,444 shares of the company’s stock valued at $1,817,000 after purchasing an additional 2,950 shares in the last quarter. Jackson Square Capital LLC increased its stake in MongoDB by 18.2% in the fourth quarter. Jackson Square Capital LLC now owns 2,067 shares of the company’s stock valued at $845,000 after purchasing an additional 319 shares in the last quarter. Finally, Beacon Capital Management LLC increased its position in shares of MongoDB by 1,111.1% during the fourth quarter. Beacon Capital Management LLC now owns 109 shares of the company’s stock worth $45,000 after acquiring an additional 100 shares in the last quarter. 88.89% of the stock is currently owned by institutional investors and hedge funds.

MongoDB Stock Up 3.8%

Shares of NASDAQ:MDB opened at $393.15 on Thursday. The business has a fifty day simple moving average of $394.98 and a 200 day simple moving average of $380.82. MongoDB has a 12-month low of $179.52 and a 12-month high of $442.84. The company has a quick ratio of 4.74, a current ratio of 4.74 and a debt-to-equity ratio of 1.18. The firm has a market cap of $28.38 billion, a P/E ratio of -148.92 and a beta of 1.23.

MongoDB (NASDAQ:MDB) last posted its quarterly earnings data on Tuesday, December 5th. The company reported $0.96 EPS for the quarter, beating the consensus estimate of $0.51 by $0.45. MongoDB had a negative return on equity of 20.64% and a negative net margin of 11.70%. The business had revenue of $432.94 million for the quarter, compared to the consensus estimate of $406.33 million. During the same quarter in the prior year, the business earned ($1.23) earnings per share. MongoDB’s revenue for the quarter was up 29.8% on a year-over-year basis. Analysts predict that MongoDB will post -1.64 earnings per share for the current year.

About MongoDB

MongoDB, Inc. provides a general-purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Articles

Analyst Recommendations for MongoDB (NASDAQ:MDB)




Article originally posted on mongodb google news. Visit mongodb google news



Types of Databases: What Are They and Why Does It Matter? – Serchen.com

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Today, we’re diving into a topic that’s pivotal in the tech world, yet often flies under the radar: databases.

What are the different types of databases out there for managing data? Types of databases include relational, NoSQL, and cloud, each serving different data management needs in today’s digital world.

Introduction

Welcome to the fascinating world of databases! If you’re reading this, chances are you’re interested in understanding the different types of databases and why they’re so crucial in our digital landscape. So, let’s embark on this journey together, unraveling the complexities of databases in a way that’s both fun and informative.

The Importance of Databases in the Digital Age

Picture this: every time you browse the internet, shop online, or scroll through social media, you’re interacting with databases. These databases are the unsung heroes of the digital age, quietly working behind the scenes to store, organize, and retrieve data. They’re the backbone of websites, applications, and even the devices we use daily.

But why are databases so important? Well, in our increasingly data-driven world, efficiently managing data is crucial. Whether it’s a small business keeping track of inventory or a large corporation analyzing customer behavior, databases make it possible to handle vast amounts of information quickly and accurately.

The Evolution of Databases

Databases have come a long way since their inception. In the early days, data was stored in simple file systems. This method was fine for small amounts of data but quickly became unwieldy as data volumes grew. Enter the relational database, a game-changer in the world of data management. Pioneered by Edgar F. Codd in the 1970s, relational databases allowed data to be stored in tables, making it easier to retrieve and manipulate.

But the evolution didn’t stop there. As the internet exploded and data generation skyrocketed, new challenges emerged. The relational model wasn’t always the best fit for the unstructured data of the web, leading to the rise of NoSQL databases. These databases were designed to handle large volumes of diverse data types and were more flexible in terms of data modeling.

Why Understanding Database Types is Essential

Now, you might be thinking, “Okay, databases are important, but why do I need to know about different types?” That’s a great question! Different types of databases are like different tools in a toolbox. Just as you wouldn’t use a hammer to screw in a bolt, you wouldn’t use a relational database for a task better suited to a NoSQL database.

Understanding the strengths and limitations of each database type helps in making informed decisions about which database to use for what purpose. For instance, if you’re working with structured data and need complex querying, a relational database might be your go-to. But if you’re dealing with massive volumes of unstructured data, a NoSQL database could be more up your alley.

Types of Databases: A Closer Look

As we delve deeper into the types of databases, we’ll explore the world of relational databases like MySQL and PostgreSQL, which are fantastic for structured data and complex queries. We’ll also dive into the realm of NoSQL databases, including MongoDB and Cassandra, known for their scalability and flexibility with unstructured data. And let’s not forget cloud databases like Amazon RDS or Google Cloud SQL, offering the benefits of cloud computing such as scalability and accessibility.

Object-oriented databases integrate object-oriented programming with database technology, providing a more seamless and intuitive way to store, retrieve, and manage data structured as objects, making them ideal for applications dealing with complex data relationships and custom data types.

Document databases store data in a document-oriented format, typically JSON or XML, offering a flexible schema for handling semi-structured data, making them perfect for applications that require agile development and the ability to handle varied data types. Graph databases are designed to store and navigate relationships efficiently, using nodes, edges, and properties to represent and store data, ideal for complex hierarchies and networks like social media, recommendation engines, and network analysis.
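To illustrate the flexible schema of a document store, here are two documents in the same logical collection, sketched as plain Python dictionaries rather than any particular database product (the data is invented):

```python
import json

# Two "documents" in the same collection; their schemas differ,
# which a document store tolerates and a fixed relational table would not.
products = [
    {"_id": 1, "name": "Plank", "price": 3.50},
    {"_id": 2, "name": "Saw", "price": 24.99,
     "specs": {"teeth": 48, "length_cm": 50}},  # extra nested sub-document
]

# "Query": find items that carry a specs sub-document.
with_specs = [p["name"] for p in products if "specs" in p]
print(with_specs)  # ['Saw']

# Documents serialize naturally to JSON, the typical storage format.
print(json.dumps(products[1], indent=2))
```

A real document database adds indexing, a query language, and persistence on top of this shape, but the schema flexibility shown here is the core idea.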

Key Points:

  • Databases are integral to the functioning of digital systems, handling everything from user data to business operations.
  • The evolution of databases from simple file systems to complex relational and NoSQL systems reflects the growing demands of the digital age.
  • Understanding different types of databases is crucial for selecting the right tool for specific data management needs.

The Impact of Database Choice on Business and Technology

In this section, we’ll explore how the choice of a database can profoundly impact both business strategies and technological advancements. Understanding this can help you make more informed decisions, whether you’re developing a new app, managing a business, or just aiming to stay updated in the tech world.

Aligning Database Selection with Business Goals

Every business has unique needs and goals, and the database technology it employs should align with these objectives. For example, a startup focused on rapid growth might prioritize scalability and flexibility, making NoSQL databases like MongoDB an attractive choice. On the other hand, a financial institution handling complex transactions and requiring strict data integrity might lean towards relational databases like Oracle or MySQL.

Database Choice and Technological Innovation

The choice of database can also drive innovation. Take, for instance, big data analytics. Technologies like Hadoop, often used alongside NoSQL databases, have revolutionized how we process and analyze vast amounts of data. This synergy between database types and emerging technologies is a driving force in creating new solutions and capabilities in various industries.

Databases and User Experience

The type of database also plays a critical role in user experience. For applications requiring real-time data processing, such as online gaming or stock trading platforms, databases with high throughput and low latency, like in-memory databases (e.g., Redis), are crucial. A poor choice in database can lead to slower response times and a frustrating user experience.

Future-Proofing with the Right Database

In our rapidly evolving tech landscape, choosing a database that not only meets current needs but also accommodates future trends is vital. With the advent of technologies like AI and IoT, databases capable of handling real-time analytics and large-scale data processing are becoming increasingly important.

Key Points:

  • The choice of a database should align with a business’s specific needs and goals, such as scalability or data integrity.
  • Database selection can influence and drive technological innovation, especially in areas like big data analytics.
  • The type of database directly affects user experience, with different databases offering varying performance levels.
  • Future-proofing technology strategy involves choosing databases that can adapt to emerging trends like AI and IoT.

Getting Started with Databases: Tips and Best Practices

Embarking on your database journey can seem daunting at first. Whether you’re a beginner or looking to expand your skills, this section aims to provide some practical guidance on how to get started and excel in the world of databases.

Starting with Database Fundamentals

Before diving into specific database types, it’s crucial to understand the basics. Learn the fundamental concepts like data models, schemas, queries, and transactions. Online courses, tutorials, and resources like W3Schools or Khan Academy can be excellent starting points.

Choosing the Right Database

Selecting the right database for your project or organization is a critical decision. Consider factors like the nature and size of your data, scalability needs, and the specific use case. It’s often beneficial to start with simpler solutions and evolve as your needs grow.

Best Practices in Database Management

Data Integrity and Consistency: Always prioritize maintaining the accuracy and consistency of your data. This means implementing constraints, using transactions, and regularly checking for data anomalies.
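As a small illustration of constraints and transactions working together, here is a sketch using Python’s built-in SQLite module. The table and column names are invented; the point is that the `CHECK` constraint and the transaction jointly prevent a partial, invalid update:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE accounts (
        id      INTEGER PRIMARY KEY,
        balance REAL NOT NULL CHECK (balance >= 0)  -- constraint: no overdrafts
    )
""")
conn.execute("INSERT INTO accounts VALUES (1, 100.0), (2, 50.0)")

def transfer(conn, src, dst, amount):
    """Move money atomically; a constraint violation rolls everything back."""
    try:
        with conn:  # transaction: commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                         (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                         (amount, dst))
        return True
    except sqlite3.IntegrityError:
        return False

print(transfer(conn, 1, 2, 30.0))   # True: both updates commit
print(transfer(conn, 1, 2, 500.0))  # False: CHECK violated, both rolled back
print(conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall())
```

Note that the failed transfer leaves both balances untouched; without the transaction, the debit could have succeeded while the credit failed.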

Backup and Recovery Plans: Regular backups are essential. Ensure you have a robust backup and recovery strategy to protect your data from loss or corruption.

Security Measures: With the rising concerns over data breaches, implementing strong security measures is non-negotiable. This includes using encryption, access controls, and keeping your database software updated.

Performance Optimization: As your database grows, performance can become an issue. Regularly monitor and optimize your database’s performance through indexing, query optimization, and appropriate scaling strategies.

Staying Updated: The database technology landscape is constantly evolving. Keep yourself updated with the latest trends, updates, and best practices.

Learning through Real-World Applications

Hands-on experience is invaluable. Engage in projects that allow you to apply your database knowledge. This could be anything from personal projects, contributing to open-source initiatives, or undertaking specific database-related tasks at work.

Key Points:

  • Start by mastering the fundamentals of database technology.
  • Carefully choose the right database based on your specific needs and scalability requirements.
  • Adhere to best practices in database management, focusing on data integrity, security, and performance optimization.
  • Gain practical experience through real-world applications and projects.

Software Tools

When dealing with databases, certain tools come in handy. Let’s check out a few:

MySQL Workbench

This is a unified visual tool for database architects, developers, and DBAs. It provides data modeling, SQL development, and comprehensive database administration tools.

MongoDB Compass

If you’re working with MongoDB, this is the GUI for you. It allows you to visualize and manipulate your MongoDB data.

Microsoft SQL Server Management Studio

This tool provides an integrated environment for managing any SQL infrastructure, from SQL Server to Azure SQL Database.

Key Points:

  • Tools like MySQL Workbench, MongoDB Compass, and Microsoft SQL Server Management Studio are essential for database management.
  • Each tool offers unique features suitable for specific types of databases.

FAQ

What is a Relational Database?

A relational database is a type of database that stores and provides access to data points that are related to one another. It uses a structure known as a table, which organizes data into rows and columns, making it easy to understand and manipulate.

How Does a NoSQL Database Differ from a Relational Database?

NoSQL databases are designed for a wide variety of data models and are known for their flexibility, scalability, and high performance. Unlike relational databases, they can handle large volumes of unstructured, semi-structured, or structured data and are ideal for big data applications and real-time web apps.

Why Use a Cloud Database?

Cloud databases offer the flexibility of managing data over the cloud, allowing for scalability, high availability, and low cost of ownership. They are excellent for businesses that need to access and store data remotely and scale resources according to demand.

What are Some Common Uses of Databases in Businesses?

Databases are used in businesses for a variety of purposes, including managing customer information, tracking inventory, processing transactions, and analyzing sales data to identify trends.

Can I Migrate Data from One Type of Database to Another?

Yes, data migration between different types of databases is possible. However, it requires careful planning to ensure data integrity and may involve data transformation or the use of middleware tools.

What is Data Modeling and Why is it Important?

Data modeling is the process of creating a data model for the data to be stored in a database. It is a crucial step in database design that helps to define and organize data structures and relationships, ensuring efficient data retrieval and integrity.

How Do I Choose the Right Database for My Project?

Choosing the right database depends on the specific needs of your project. Consider factors like the volume and type of data, scalability requirements, and the complexity of data operations. It’s often helpful to consult with a database specialist.

What is the Role of SQL in Databases?

SQL (Structured Query Language) is a standard language for programming and managing data held in relational databases. It is used to query, insert, update, and delete the information stored in a relational database management system.

Are There Any Security Concerns with Databases?

Yes, databases can be vulnerable to various security threats, including unauthorized access and data breaches. Implementing robust security measures like encryption, access controls, and regular security audits is crucial.

What is the Future of Database Technology?

The future of database technology is likely to be influenced by trends like AI, machine learning, and real-time analytics. We can expect advancements in database management systems to handle more complex data types and offer more automation and intelligent functionality.

Conclusion

The Central Role of Databases in Technology and Business

Databases, in their various forms, are central to both technological advancement and business success. From the precision and reliability of relational databases to the agility and scalability of NoSQL and cloud databases, each type serves a specific purpose and addresses unique challenges. The right database can be a catalyst for growth, driving efficiency, innovation, and competitive advantage.

Future Trends and Evolving Landscape

Looking ahead, the landscape of database technology is set to evolve even further. We are entering an era where the integration of artificial intelligence, machine learning, and real-time analytics with database systems will not only enhance data processing capabilities but also open new avenues for innovation and problem-solving. The future will likely bring more intelligent, self-managing databases that can adapt to changing needs and offer deeper insights.

The Importance of Making Informed Database Choices

The key takeaway from our journey is the importance of making informed choices when it comes to selecting a database. Whether you’re a developer, a business leader, or an enthusiast, understanding the strengths and limitations of different database types is essential. This knowledge enables you to choose the right database that aligns with your project requirements, business goals, and future scalability needs.

Embracing the Challenges and Opportunities

As we embrace the complexities and possibilities of these database technologies, it’s important to stay curious, open to learning, and adaptive to change. The world of databases is not static; it’s dynamic and ever-evolving. By keeping abreast of new developments and trends, we can harness the full potential of these powerful tools to drive success and innovation.

Key Points:

  • Databases are fundamental to technological progress and business operations.
  • The future of databases is intertwined with advancements in AI, machine learning, and analytics.
  • Making informed choices about database selection is crucial for project success and scalability.
  • Staying updated and adaptable in the face of evolving database technologies is key to leveraging their full potential.

Compare hundreds of Database Management Software in our Software Marketplace

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



Patronus AI and MongoDB Partner to Boost Enterprise Confidence in Generative AI

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Patronus AI announced it is partnering with MongoDB to bring automated LLM evaluation and testing to enterprise customers. The joint offering will combine Patronus AI’s capabilities with MongoDB’s Atlas Vector Search product.

Today, enterprises are using retrieval-augmented generation (RAG) systems to power key workflows using their internal knowledge bases. However, these systems are prone to failure. In past research, Patronus AI found that state-of-the-art retrieval systems can frequently hallucinate in real-world use cases such as financial services. Prior research has also shown that LLMs can struggle with reasoning and numerical calculations.


“Enterprises are excited about the potential of generative AI, but they are concerned about hallucinations and other unexpected LLM behavior,” said Anand Kannappan, CEO and co-founder, Patronus AI. “We are confident our partnership with MongoDB will accelerate enterprise AI adoption.”


The partnership between Patronus AI and MongoDB brings to market a retrieval system solution that enables reliable document-based LLM workflows. Customers can develop these systems with MongoDB Atlas, and evaluate, test, and monitor them with Patronus AI to increase their accuracy and reliability.


“We recommend developers test iteratively as they experiment with retrieval system design choices,” said Rebecca Qian, CTO and co-founder, Patronus AI. “Patronus AI offers a powerful solution here that is simple to get started with.”


Article originally posted on mongodb google news. Visit mongodb google news



KSP2 Aims to Improve Kotlin Meta-Programming, Adds Support for the K2 Kotlin Compiler

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

Currently available in preview, KSP 2.0, the evolution of Kotlin Symbol Processing (KSP), introduces a new architecture aimed at resolving some limitations in KSP 1.0 and adds support for the new K2 Kotlin compiler, explain Google software engineers Ting-Yuan Huang and Jiaxiang Chen.

While KSP1 is implemented as a compiler plugin, KSP2 is an independent library that can be run without setting up the compiler and with complete control of its lifecycle. This makes it easier to call KSP programmatically and to set breakpoints in KSP processors, say Huang and Chen. The following snippet shows how you can configure KSP2 and execute it to process a list of symbols:

val kspConfig = KSPJvmConfig.Builder().apply {
  // All configurations happen here.
}.build()
val exitCode = KotlinSymbolProcessing(kspConfig, listOfProcessors, kspLoggerImpl).execute()

Another notable difference in KSP2 is that it uses the Kotlin K2 compiler, still in beta, to process source code. You can, however, use KSP with K1 if you prefer, by setting languageVersion in gradle.properties.

Additionally, KSP2 aims to address a shortcoming in KSP1 that could lead to the same source files being compiled multiple times. Leveraging its integration with K2, KSP2 aligns with the way K2 compiles files so that each file is processed only once, which improves performance.

KSP2 also introduces several behavior changes to improve developer productivity, as well as debuggability and error recovery.

The new KSP preview can be enabled in KSP 1.0.14 or newer using a flag in gradle.properties:

ksp.useKSP2=true

KSP is an API that enables the creation of plugins to extend the Kotlin compiler. It understands Kotlin language features like extension functions, declaration-site variance, and local functions in a compiler-independent way.

The API models Kotlin program structures at the symbol level according to Kotlin grammar. When KSP-based plugins process source programs, constructs like classes, class members, functions, and associated parameters are accessible to the processors, while constructs like if blocks and for loops are not.

This makes KSP-based plugins less fragile than plugins built on top of kotlinc, which have more power but depend strictly on the compiler version.



The Ultimate Guide to Granting Permissions and Roles in MongoDB – Security Boulevard

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


Do you want to establish a secure database environment in MongoDB? User permissions are paramount to ensure data protection, limit data access, and secure user operations. Our ultimate guide will show you how to create users and grant permissions in MongoDB, making your database management tasks easier and more efficient.

Understanding MongoDB and User Management

MongoDB, a powerful and flexible NoSQL database, utilizes JSON-like documents for data storage, which enhances its efficiency and scalability. The key to fully unleashing its power lies in effective management, particularly in the realm of user management. This encompasses the creation of users and the assignment of roles. The roles are essentially the tasks and operations a user can perform within the database. The beauty of MongoDB is in this flexibility; you can tailor your user management strategy to fit your unique needs, ensuring your data remains safe and secure.

Creating users and assigning them appropriate roles is the first line of defense against unauthorized access and potential data leaks. It’s like a customized lock on a door, only allowing access to those with the right key. MongoDB takes this security a step further with the concept of roles. Think of roles as different keys with different levels of access. Some keys might only open the front door, while others might also unlock the office or the supply room. The same principle applies to user roles in MongoDB; some users may only read data, while others may have write or administrative access.

Ultimately, understanding and effectively implementing user management in MongoDB isn’t just about securing your data—it’s also about maximizing efficiency. By granting appropriate permissions, you ensure that each user can perform their duties without unnecessary access that could pose a security risk. So, step into the world of MongoDB user management, and discover how you can secure your database while streamlining operations.

Setting up MongoDB for User Creation

Before diving into the creation of users, it’s critical to ensure your MongoDB environment is primed for the task. The first step in this process involves installing MongoDB on your system. Follow the official guidelines provided by MongoDB for your specific operating system to ensure a smooth and successful installation.

With MongoDB installed, you’re ready to activate the Mongo shell. Consider this shell as your interactive command center for MongoDB. It’s essentially a JavaScript interface, giving you the freedom to connect with and operate your MongoDB instance directly.

Now, here’s where things get a bit more technical, but don’t worry, we’ll guide you through it. To enable access control, you’ll need to fire up the ‘mongod’ process with the ‘--auth’ option. This step is crucial for ensuring secure connections as it enforces authentication across all your MongoDB interactions. So, think of this ‘--auth’ option as the robust security guard standing by your MongoDB door, ensuring every interaction is validated and authorized.

To put it simply, setting up MongoDB for user creation is a straightforward process. However, each step is vital and contributes to the overall security and functionality of your MongoDB instance. It’s all about laying a secure and effective foundation, one that enables you to explore the world of user creation, roles, and permissions with ease and confidence. So go ahead, set up your MongoDB and get ready to dive into the exciting world of MongoDB user management!

How to Create Users in MongoDB

Ready to create your first user in MongoDB? Excellent! It’s a smooth process that you’ll master in no time. You’re about to learn the ‘db.createUser()’ method, which is your handy tool for this task. Think of it as your welcoming committee, always ready to introduce new users into your MongoDB environment.

Let’s jump in! This method requires a document. Not just any document, but one that holds all the essential details of the new user – the username, password, and roles. It’s like a passport, providing identification and defining what the user is allowed to do within your database.

Here’s something you should know. When it comes to storing passwords, MongoDB uses SCRAM (Salted Challenge Response Authentication Mechanism). You might think, “That sounds complicated.” Not to worry! Essentially, it’s an added layer of protection that shields passwords from unauthorized access. It’s like a secret handshake, only those who know it can gain access.

Remember, creating users is about more than adding new members to your MongoDB club. It’s about defining who gets to do what, and how. Every detail matters, from the username to the roles assigned. So, think carefully about these decisions, as they play a significant role in securing your data and maintaining an efficient MongoDB environment.

As you continue to explore MongoDB, you’ll realize that creating users is just the beginning of the journey. It’s the first step towards crafting a secure, efficient, and tailored database experience. So, gear up and get ready to dive deeper. Your MongoDB adventure is just getting started!
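As a concrete sketch of the ‘db.createUser()’ document described above: the username, password, and database names below are illustrative placeholders of our own, not values from this guide.

```javascript
// Hypothetical user document for db.createUser(); every name and value here
// is an illustrative assumption, not taken from the guide.
const newUser = {
  user: "reportingUser",
  pwd: "aStrongPassphrase!", // stored server-side as a SCRAM credential
  roles: [
    { role: "read", db: "sales" },       // read-only access to one database
    { role: "readWrite", db: "staging" } // read and write on another
  ]
};

// In mongosh, connected as a user administrator, you would then run:
//   use admin
//   db.createUser(newUser)
```

Note how the roles array is where the “passport” metaphor becomes concrete: each entry pairs a role with the database it applies to.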

An Overview of MongoDB Roles and Privileges

Roles in MongoDB are like superpowers that we assign to our users, determining the actions they are allowed to perform within our database realm. Each MongoDB user can be given one or several of these superpowers, providing the flexibility to customize access levels based on specific user needs. The built-in roles include some pretty handy abilities such as ‘read’, ‘readWrite’, and ‘dbAdmin’. Each of these roles carries a bundle of privileges, which are specific actions that the role can perform on a particular resource.

But what if your database needs are more unique, and the built-in roles just don’t cut it? MongoDB has you covered! You can create custom roles that cater to your exact requirements. Consider these custom roles as your personalized superpowers, designed and created to accomplish your specific database missions.

Remember, roles and privileges are not just about controlling user access, but also about streamlining your database operations. Assigning appropriate roles means each user has the exact permissions they need to do their job, nothing more, nothing less. It’s about finding that sweet spot between security and efficiency, and that’s where MongoDB roles and privileges truly shine!

So, step into your role as the MongoDB superhero, assign your users their superpowers, and create a database environment that’s not just secure, but also optimally efficient!
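To make the custom-role idea tangible, here is a minimal sketch of a role document for ‘db.createRole()’; the role name, database, collection, and privilege list are all illustrative assumptions, not from this guide.

```javascript
// Sketch of a custom role document for db.createRole(); all names below
// are hypothetical placeholders.
const auditRole = {
  role: "orderAuditor",
  privileges: [
    {
      resource: { db: "shop", collection: "orders" }, // scope: one collection
      actions: ["find"]                               // read-only actions
    }
  ],
  roles: [] // this role inherits from no other role
};

// In mongosh, from the target database: db.createRole(auditRole)
```

The privileges array is what distinguishes a custom role from the built-ins: each entry binds a resource to the exact actions permitted on it.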

Granting Permissions to MongoDB Users

It’s now time to confer your carefully crafted roles onto your MongoDB users, granting them the specific permissions they need to navigate your database effectively. This stage in the process utilizes the method ‘db.grantRolesToUser()’, essentially the master keymaker in your MongoDB instance. But remember, this method should be invoked in the admin database.

Imagine it as the grand ceremony where you bestow your users with their unique database superpowers. You decide who gets to read data, who can modify it, and who has administrative privileges. And guess what? You can easily alter these permissions later if need be. MongoDB remains flexible, allowing you to adapt your users’ roles as your database needs evolve.

But what if a user no longer requires a particular role? Perhaps they’ve shifted positions, or their job responsibilities have changed? Not a problem! The method ‘db.revokeRolesFromUser()’ is your go-to tool for such situations. It’s like the polite bouncer at the database club, ensuring those without the necessary permissions are gently guided away from restricted areas.

Remember, granting permissions is not a set-it-and-forget-it affair. It’s an ongoing process that adapts with your changing data requirements. It’s about fine-tuning the level of access each user needs to perform their job effectively while keeping your database environment secure.

So, get started with ‘db.grantRolesToUser()’ and ‘db.revokeRolesFromUser()’. These methods are your powerful allies in managing your MongoDB permissions, allowing you to craft a secure and efficient database environment that perfectly suits your needs.
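The grant and revoke calls above both take a username and an array of roles. As an illustrative sketch (the username and database names are placeholders we have invented):

```javascript
// Hypothetical role list for db.grantRolesToUser(); names are placeholders.
const rolesToGrant = [
  { role: "readWrite", db: "inventory" },
  { role: "read", db: "reports" }
];

// In mongosh, from the admin database:
//   db.grantRolesToUser("appUser", rolesToGrant)
//
// Later, to withdraw just one of those roles:
//   db.revokeRolesFromUser("appUser", [{ role: "readWrite", db: "inventory" }])
```

Because both methods accept arrays, permissions can be adjusted in batches as a user’s responsibilities change.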

Advanced User Management and Permission Controls

Ready for a deep dive into the advanced capabilities of MongoDB’s user management? Let’s go beyond the basics and explore the more sophisticated features that cater to complex requirements. Buckle up and get ready to take your MongoDB skills to the next level!

Ever wondered if you could assign a role only on a specific database? With MongoDB, you can do precisely that! You have the flexibility to grant a role that is strictly confined to a particular database, ensuring a higher level of access control.

Additionally, MongoDB’s dynamic environment allows you to create roles that inherit privileges from other roles. Think of it as passing the torch, where a new role can step in and perform the tasks of an existing role, along with its unique functions.

And there’s more! MongoDB lets you assign a role to a user that is applicable only during a specific session. This feature is especially beneficial when temporary access is required, such as during training or for a specific project. The role automatically expires once the session ends, minimizing potential security risks.

Did we mention the creation of views? MongoDB lets you define views, which are essentially read-only windows into your data. This feature allows you to expose a subset of your data to a user, providing an added layer of permission control. It’s like having a viewing deck, offering a limited yet valuable perspective of your data landscape.

From granting roles on specific databases to creating session-specific roles and defining views, MongoDB’s advanced user management features offer an array of tools to fine-tune your permission controls. Harness these capabilities to create a database environment that’s not just secure, but also highly tailored to your complex requirements. Your MongoDB adventure continues, and it just keeps getting better!
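As a sketch of the view-based permission control described above: a read-only view built from an aggregation pipeline that exposes only non-sensitive fields. The collection and field names are illustrative assumptions, not from this guide.

```javascript
// Hypothetical pipeline for db.createView(); collection and field names
// are placeholders. The $project stage limits which fields view readers see.
const publicPipeline = [
  { $project: { name: 1, department: 1, _id: 0 } } // e.g. hide salary fields
];

// In mongosh:
//   db.createView("employeesPublic", "employees", publicPipeline)
```

A user granted read access only on the view then gets the “viewing deck” the guide describes: a limited window onto the underlying collection.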

Common Mistakes and Best Practices in MongoDB User and Permission Management

Venturing into the world of MongoDB user management can sometimes feel like navigating a maze. One wrong turn and you might find yourself facing security pitfalls. But fear not! We’re here to guide you through common mistakes and arm you with best practices to keep your database environment secure and efficient.

Are you remembering to enable authentication in MongoDB? Neglecting this step is like leaving your front door wide open, inviting anyone to stroll in. Authentication is your gatekeeper, ensuring only those with the right credentials gain entry. So, never skip this crucial defense layer.

And how about those passwords? Are they weak and easily guessable? Remember, your password is like a secret code, and it needs to be strong to keep intruders at bay. Opt for robust passwords that are hard to crack to fortify your MongoDB security.

Perhaps one of the most common mistakes is granting excess privileges. It’s like giving away too many keys to your house. Adhere to the principle of least privilege, ensuring users have just the right level of access to perform their duties. Excessive privileges not only pose a security risk but also disrupt your database’s efficiency.

Finally, don’t fall into the trap of set-it-and-forget-it when it comes to user permissions. Your MongoDB environment isn’t static, and neither should your user permissions be. Make it a point to regularly audit your user permissions, fine-tuning them to suit evolving needs.

Avoiding these common mistakes and implementing best practices such as using Apono will set you on the path to a secure and efficient MongoDB experience. So gear up, and navigate your MongoDB user management journey with confidence!

Check out our article about Just-in-Time Access to Databases. 


*** This is a Security Bloggers Network syndicated blog from Apono authored by Ofir Stein. Read the original post at: https://www.apono.io/blog/the-ultimate-guide-to-granting-permissions-and-roles-in-mongodb/

Article originally posted on mongodb google news. Visit mongodb google news



Zacks Industry Outlook Highlights Snap, MongoDB and Zoom Video Communications

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

For Immediate Release

Chicago, IL – January 11, 2024 – Today, Zacks Equity Research discusses Snap (SNAP), MongoDB (MDB) and Zoom Video Communications (ZM).

Industry: Internet Software

Link: https://www.zacks.com/commentary/2208115/3-internet-software-stocks-to-buy-from-a-prospering-industry

The Zacks Internet Software industry is benefiting from accelerated demand for digital transformation and the ongoing shift to the cloud. The high demand for Software as a Service or SaaS-based solutions due to the increasing need for remote working, learning and diagnosis software has been a major driver for industry players.

The growing demand for solutions that support hybrid operating environments and the proliferation of augmented and virtual reality devices are also noteworthy. Increasingly sophisticated cyber-attacks are driving demand for cybersecurity applications. Robust IT spending on software is a positive for industry participants. Snap, MongoDB and Zoom Video Communications are benefiting from these trends. However, heightened geopolitical risks, raging inflation and high interest rates are major headwinds.

Industry Description

The Zacks Internet Software industry comprises companies offering application performance monitoring, as well as infrastructure and application software, DevOps deployment and Security software. Industry participants offer multi-cloud application security and delivery, social networking, online payment and 3D printing applications and solutions. They use the SaaS-based cloud computing model to deliver solutions to end-users, as well as enterprises. Hence, subscription is the primary revenue source.

Advertising is also a major revenue source. Industry participants target a variety of end markets, including banking and financial services, service providers, federal governments and animal health technology and services.

3 Trends Shaping the Future of the Internet Software Industry

Adoption of SaaS Growing: The industry is benefiting from the continued demand for digital transformation. Growth prospects are alluring, primarily due to the rapid adoption of SaaS, which offers a flexible and cost-effective delivery method of applications. It also cuts down on deployment time compared with legacy systems.

SaaS attempts to deliver applications to any user, anywhere, anytime and on any device. It has been effective in addressing customer expectations of seamless communications across multiple channels, including voice, chat, email, web, social media and mobile. This increases customer satisfaction and raises the retention rate, driving the top lines of industry participants.

Moreover, the SaaS delivery model has supported the industry participants to deliver software applications amid the coronavirus-led lockdowns and shelter-in-place guidance. Remote working, learning and diagnosis have also boosted the demand for SaaS-based software applications.

Pay-As-You-Go Model Gaining Traction: The increasing customer-centric approach is allowing end-users to perform all required actions with minimal intervention from software providers. The pay-as-you-go model helps Internet Software providers scale their offerings per the needs of different users.

The subscription-based business model ensures recurring revenues for the industry participants. The affordability of the SaaS delivery model, particularly for small and medium-sized businesses, is another major driver. The cloud-based applications are easy to use. Hence, the need for specialized training is reduced significantly, which lowers expenses, thereby driving profits.

Ongoing Transition to Cloud Creating Opportunities: The growing need to secure cloud platforms amid the increasing incidences of cyber-attacks and hacking drives the demand for web-based cyber security software. As enterprises continue to move their on-premise workload to cloud environments, application and infrastructure monitoring is gaining importance. This is increasing the demand for web-based performance management monitoring tools.

Zacks Industry Rank Indicates Bright Prospects

The Zacks Internet Software industry, placed within the broader Zacks Computer And Technology sector, carries a Zacks Industry Rank #29, which places it in the top 12% of more than 250 Zacks industries.

The group’s Zacks Industry Rank, which is the average of the Zacks Rank of all the member stocks, indicates bright near-term prospects. Our research shows that the top 50% of the Zacks-ranked industries outperform the bottom 50% by a factor of more than two to one.

The industry’s position in the top 50% of the Zacks-ranked industries is a result of a positive earnings outlook for the constituent companies in aggregate. Looking at the aggregate earnings estimate revisions, it appears that analysts are optimistic about this group’s earnings growth potential. The industry’s earnings estimates for 2024 have moved up 60.8% since Jan 31, 2023.

Before we present the top industry picks, it is worth looking at the industry’s shareholder returns and current valuation first.

Industry Beats Sector, S&P 500

The Zacks Internet Software industry has outperformed the broader Zacks Computer and Technology sector as well as the S&P 500 Index in the past year.

The industry has risen 70.4% over this period compared with the S&P 500 Index’s gain of 22.9% and the broader sector’s growth of 47.5%.

Industry’s Current Valuation

On the basis of trailing 12-month price-to-sales (P/S), which is a commonly used multiple for valuing Internet Software stocks, we see that the industry is currently trading at 3.06X compared with the S&P 500’s 3.99X and the sector’s trailing 12-month P/S of 4.64X.

Over the last three years, the industry has traded as high as 7.34X and as low as 1.65X, with a median of 4.58X.

3 Stocks to Buy Right Now

Snap– This Zacks Rank #1 (Strong Buy) company is riding on user growth and improving user engagement, driven by the strong adoption of Augmented Reality Lenses, Discover content, Shows and Snap Map, which are used by 350 million users on a monthly basis. You can see the complete list of today’s Zacks #1 Rank stocks here.

Snap’s expanding partner base, with the likes of ITV in the U.K. and ESPN in the Netherlands, is noteworthy.

This Venice, CA-based company’s shares have gained 81% in the trailing 12-month period. The Zacks Consensus Estimate for Snap’s 2024 earnings has been unchanged at 13 cents per share over the past 30 days.

MongoDB– This company is benefiting from strong demand for Atlas. MDB’s clientele increased by roughly 1,400 customers sequentially to 46,400 customers at the end of third-quarter fiscal 2024.

Shares of this Zacks Rank #1 company have gained 108% in the past year. The Zacks Consensus Estimate for MongoDB’s fiscal 2024 earnings is pegged at $2.90 per share, unchanged over the past 30 days.

Zoom Video– Another Zacks Rank #1 company, it is benefiting from steady growth in subscriber base and enterprise customer base backed by strong demand for offerings like Zoom Phone. The launch of AI-driven solutions like Zoom Doc and Zoom AI Companion holds promise.

Zoom Video shares have declined 3.6% in the trailing 12-month period. The Zacks Consensus Estimate for the company’s fiscal 2024 earnings is pegged at $4.94 per share, unchanged in the past 30 days.

Why Haven’t You Looked at Zacks’ Top Stocks?

Since 2000, our top stock-picking strategies have blown away the S&P’s +6.2% average gain per year. Amazingly, they soared with average gains of +46.4%, +49.5% and +55.2% per year. Today you can access their live picks without cost or obligation.

See Stocks Free >>

Join us on Facebook: https://www.facebook.com/ZacksInvestmentResearch/

Zacks Investment Research is under common control with affiliated entities (including a broker-dealer and an investment adviser), which may engage in transactions involving the foregoing securities for the clients of such affiliates.

Media Contact

Zacks Investment Research

800-767-3771 ext. 9339

support@zacks.com

https://www.zacks.com

Past performance is no guarantee of future results. Inherent in any investment is the potential for loss. This material is being provided for informational purposes only and nothing herein constitutes investment, legal, accounting or tax advice, or a recommendation to buy, sell or hold a security. No recommendation or advice is being given as to whether any investment is suitable for a particular investor. It should not be assumed that any investments in securities, companies, sectors or markets identified and described were or will be profitable. All information is current as of the date of herein and is subject to change without notice. Any views or opinions expressed may not reflect those of the firm as a whole. Zacks Investment Research does not engage in investment banking, market making or asset management activities of any securities. These returns are from hypothetical portfolios consisting of stocks with Zacks Rank = 1 that were rebalanced monthly with zero transaction costs. These are not the returns of actual portfolios of stocks. The S&P 500 is an unmanaged index. Visit https://www.zacks.com/performance  for information about the performance numbers displayed in this press release.

Want the latest recommendations from Zacks Investment Research? Today, you can download 7 Best Stocks for the Next 30 Days. Click to get this free report

Snap Inc. (SNAP) : Free Stock Analysis Report

MongoDB, Inc. (MDB) : Free Stock Analysis Report

Zoom Video Communications, Inc. (ZM) : Free Stock Analysis Report

To read this article on Zacks.com click here.

Zacks Investment Research

Article originally posted on mongodb google news. Visit mongodb google news



Traders Buy High Volume of MongoDB Put Options (NASDAQ:MDB) – Defense World

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) was the recipient of some unusual options trading activity on Wednesday. Stock investors acquired 23,831 put options on the stock, an increase of approximately 2,157% compared to the typical daily volume of 1,056 put options.

Insider Buying and Selling

In other news, Director Dwight A. Merriman sold 1,000 shares of the company’s stock in a transaction dated Wednesday, November 1st. The stock was sold at an average price of $345.21, for a total transaction of $345,210.00. Following the completion of the transaction, the director now directly owns 533,896 shares of the company’s stock, valued at approximately $184,306,238.16. The transaction was disclosed in a legal filing with the SEC. Also, CAO Thomas Bull sold 359 shares of the stock in a transaction dated Tuesday, January 2nd. The shares were sold at an average price of $404.38, for a total transaction of $145,172.42. Following the completion of the sale, the chief accounting officer now owns 16,313 shares in the company, valued at $6,596,650.94. That sale was likewise disclosed in a filing with the Securities & Exchange Commission. Insiders sold 147,029 shares of company stock worth $56,304,511 over the last 90 days. 4.80% of the stock is owned by insiders.

Institutional Inflows and Outflows

A number of institutional investors have recently made changes to their positions in MDB. Simplicity Solutions LLC increased its position in MongoDB by 2.2% in the 2nd quarter. Simplicity Solutions LLC now owns 1,169 shares of the company’s stock valued at $480,000 after acquiring an additional 25 shares in the last quarter. AJ Wealth Strategies LLC raised its position in MongoDB by 1.2% in the 2nd quarter. AJ Wealth Strategies LLC now owns 2,390 shares of the company’s stock worth $982,000 after purchasing an additional 28 shares during the last quarter. Insigneo Advisory Services LLC raised its position in MongoDB by 2.9% in the 3rd quarter. Insigneo Advisory Services LLC now owns 1,070 shares of the company’s stock worth $370,000 after purchasing an additional 30 shares during the last quarter. Assenagon Asset Management S.A. lifted its holdings in MongoDB by 1.4% during the 2nd quarter. Assenagon Asset Management S.A. now owns 2,239 shares of the company’s stock worth $920,000 after buying an additional 32 shares in the last quarter. Finally, Veritable L.P. boosted its position in MongoDB by 1.4% in the 2nd quarter. Veritable L.P. now owns 2,321 shares of the company’s stock valued at $954,000 after buying an additional 33 shares during the last quarter. 88.89% of the stock is owned by institutional investors.

Wall Street Analysts Forecast Growth

A number of analysts recently commented on MDB shares. Royal Bank of Canada boosted their price objective on shares of MongoDB from $445.00 to $475.00 and gave the stock an “outperform” rating in a report on Wednesday, December 6th. Barclays lifted their price objective on MongoDB from $470.00 to $478.00 and gave the stock an “overweight” rating in a report on Wednesday, December 6th. UBS Group reissued a “neutral” rating and issued a $410.00 target price (down previously from $475.00) on shares of MongoDB in a report on Thursday, January 4th. TheStreet upgraded shares of MongoDB from a “d+” rating to a “c-” rating in a research report on Friday, December 1st. Finally, Truist Financial restated a “buy” rating and set a $430.00 target price on shares of MongoDB in a report on Monday, November 13th. One equities research analyst has rated the stock with a sell rating, three have assigned a hold rating and twenty-one have issued a buy rating to the company. According to data from MarketBeat.com, the stock has a consensus rating of “Moderate Buy” and a consensus target price of $430.41.


MongoDB Price Performance

MongoDB stock opened at $393.15 on Thursday. The business has a 50 day simple moving average of $394.98 and a 200 day simple moving average of $380.82. The company has a debt-to-equity ratio of 1.18, a current ratio of 4.74 and a quick ratio of 4.74. The company has a market capitalization of $28.38 billion, a PE ratio of -148.92 and a beta of 1.23. MongoDB has a one year low of $179.52 and a one year high of $442.84.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings data on Tuesday, December 5th. The company reported $0.96 earnings per share for the quarter, topping the consensus estimate of $0.51 by $0.45. The firm had revenue of $432.94 million for the quarter, compared to analysts’ expectations of $406.33 million. MongoDB had a negative net margin of 11.70% and a negative return on equity of 20.64%. The company’s quarterly revenue was up 29.8% compared to the same quarter last year. During the same period in the prior year, the firm earned ($1.23) earnings per share. On average, sell-side analysts forecast that MongoDB will post -1.64 EPS for the current fiscal year.

MongoDB Company Profile


MongoDB, Inc. provides a general-purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database that includes the functionality developers need to get started with MongoDB.


Article originally posted on mongodb google news. Visit mongodb google news

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



MongoDB and Patronus AI Partner to Boost Enterprise Confidence in Generative AI

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

NEW YORK, Jan. 11, 2024 — Patronus AI has announced it is partnering with MongoDB to bring automated LLM evaluation and testing to enterprise customers. The joint offering will combine Patronus AI’s capabilities with MongoDB’s Atlas Vector Search product.

Today, enterprises are using retrieval-augmented generation (RAG) systems to power key workflows over their internal knowledge bases. However, these systems are prone to failure. In past research, Patronus AI found that state-of-the-art retrieval systems can frequently hallucinate in real-world use cases such as financial services. Prior research has also shown that LLMs can struggle with reasoning and numerical calculations.

“Enterprises are excited about the potential of generative AI, but they are concerned about hallucinations and other unexpected LLM behavior,” said Anand Kannappan, CEO and co-founder, Patronus AI. “We are confident our partnership with MongoDB will accelerate enterprise AI adoption.”

The partnership between Patronus AI and MongoDB brings to market a retrieval system solution that enables reliable document-based LLM workflows. Customers can develop these systems with MongoDB Atlas, and evaluate, test, and monitor them with Patronus AI to increase their accuracy and reliability.
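To make the retrieval side of such a workflow concrete, the sketch below builds a `$vectorSearch` aggregation stage, the query stage Atlas Vector Search uses for semantic retrieval. The index name, embedding field, and parameter values here are illustrative assumptions, not details from the announcement; a real pipeline would pass this stage to `collection.aggregate()` against an Atlas cluster before handing the retrieved documents to an evaluation layer such as Patronus AI.

```python
def build_vector_search_stage(query_vector, index="vector_index",
                              path="embedding", k=5):
    """Return a $vectorSearch aggregation stage for Atlas Vector Search.

    Assumes an Atlas vector search index named `index` exists over the
    document field named by `path`; both names are hypothetical here.
    """
    return {
        "$vectorSearch": {
            "index": index,            # name of the Atlas vector search index
            "path": path,              # document field holding the embedding
            "queryVector": query_vector,
            "numCandidates": k * 10,   # oversample candidates before ranking
            "limit": k,                # number of documents to return
        }
    }

# With pymongo and a live Atlas cluster, this stage would run as:
#   results = collection.aggregate([build_vector_search_stage(vec, k=3)])
stage = build_vector_search_stage([0.1, 0.2, 0.3], k=3)
```

Constructing the stage as a plain dictionary keeps retrieval parameters testable without a database connection, which matches the iterative-testing approach the partners recommend.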

“We recommend developers test iteratively as they experiment with retrieval system design choices,” said Rebecca Qian, CTO and co-founder, Patronus AI. “Patronus AI offers a powerful solution here that is simple to get started with.”

Patronus AI is part of the MongoDB Partner ecosystem.

About Patronus AI

Patronus AI is the first automated evaluation and security platform that helps companies use large language models (LLMs) confidently. The company was founded by machine learning experts Anand Kannappan and Rebecca Qian, formerly of Meta AI and Meta Reality Labs. For more information, please visit https://www.patronus.ai/.

About MongoDB

Headquartered in New York, MongoDB’s mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. Built by developers, for developers, MongoDB’s developer data platform is a database with an integrated set of related services that allow development teams to address the growing requirements for today’s wide variety of modern applications, all in a unified and consistent user experience. MongoDB has tens of thousands of customers in over 100 countries. The MongoDB database platform has been downloaded hundreds of millions of times since 2007, and there have been millions of builders trained through MongoDB University courses. To learn more, visit mongodb.com.


Source: Patronus AI

Article originally posted on mongodb google news. Visit mongodb google news
