Is Sanity SQL or NoSQL? – Express Healthcare Management

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Is Sanity SQL or NoSQL?

In the world of databases, the debate between SQL (Structured Query Language) and NoSQL (Not Only SQL) has been ongoing for years. Both approaches have their own strengths and weaknesses, and neither is the best fit for every scenario. One database system that often finds itself at the center of this discussion is Sanity. So, is Sanity SQL or NoSQL? Let’s delve into the details.

Understanding SQL and NoSQL
SQL databases are based on a relational model, where data is organized into tables with predefined schemas. They use SQL as the primary language for querying and managing data. On the other hand, NoSQL databases are designed to handle unstructured or semi-structured data. They offer flexible schemas and use various data models, such as key-value, document, columnar, or graph.
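To make the contrast concrete, here is a small illustrative sketch (the entities and field names are invented for illustration): the record that would occupy a row in a fixed-schema SQL table becomes a self-contained, nestable document in a document store.

using System.Text.Json;

// In a relational store this record would be a row in a fixed-schema table:
//   CREATE TABLE patients (id INT PRIMARY KEY, name TEXT);
// In a document store it is a self-contained document that can nest arrays and objects.
var patient = new
{
    id = 42,
    name = "Ada Lovelace",
    contacts = new[] { new { type = "phone", value = "555-0101" } }
};

Console.WriteLine(JsonSerializer.Serialize(patient));
// {"id":42,"name":"Ada Lovelace","contacts":[{"type":"phone","value":"555-0101"}]}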

The Case of Sanity
Sanity is a highly flexible and customizable content platform that allows developers to structure and manage content for websites, applications, and other digital experiences. It provides a real-time collaborative environment for content creation and editing. While Sanity does use SQL for some internal operations, it is primarily considered a NoSQL database due to its flexible schema and document-oriented approach.
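Sanity’s content is queried over HTTP with its GROQ query language rather than SQL, which illustrates its document-oriented nature. The sketch below is illustrative only: the project ID, dataset, and API version are placeholders, and the exact endpoint format should be checked against Sanity’s documentation.

using System.Net.Http;

// GROQ, not SQL: fetch the ten newest documents of type "post".
var groq = "*[_type == \"post\"] | order(_createdAt desc)[0...10]{_id, title}";

// "your-project-id", "production", and the API version below are placeholders.
var url = "https://your-project-id.api.sanity.io/v2023-08-01/data/query/production"
          + "?query=" + Uri.EscapeDataString(groq);

using var http = new HttpClient();
Console.WriteLine(await http.GetStringAsync(url)); // {"result":[...], ...}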

FAQ

Q: Can Sanity handle structured data?
A: Yes, Sanity can handle structured data by defining schemas and using its powerful querying capabilities.

Q: Does Sanity support real-time collaboration?
A: Yes, Sanity provides real-time collaboration features, allowing multiple users to work on content simultaneously.

Q: Is Sanity suitable for large-scale applications?
A: Yes, Sanity is designed to scale horizontally and can handle large amounts of data and high traffic loads.

Q: Can Sanity be used with other programming languages?
A: Yes, Sanity provides APIs and SDKs for various programming languages, making it compatible with a wide range of technologies.

In conclusion, while Sanity does utilize SQL for certain internal operations, it is primarily considered a NoSQL database due to its flexible schema and document-oriented approach. Its ability to handle structured data, support real-time collaboration, and scale to large applications makes it a popular choice among developers. Ultimately, the decision between SQL and NoSQL depends on the specific requirements of your project and the trade-offs you are willing to make.



Why analysts are calling MongoDB one of the best AI software picks | Shares Magazine

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

  • Flexible platform suited to large data volumes
  • Firm has a record of smashing forecasts
  • Shares well below 2021 level, analysts target $500

Analysts at Wells Fargo have called MongoDB (MDB:NASDAQ) one of the ‘best ways to play AI’ in a note to clients, flagging its scope to be a major player in new artificial intelligence developments.

‘We see MongoDB as the best AI play in software infrastructure given its ever-expanding list of new workloads,’ explained the analysts.

‘We believe AI applications are another new workload that can drive the growth of MongoDB. We believe AI workloads are more impactful to growth than other workloads, and MongoDB is the best-positioned vendor, in our view, to capture these workloads.’

WHAT MONGODB DOES

MongoDB operates a scalable and flexible document database that can handle both structured (spreadsheet, for example) and unstructured data sets (text documents, say). It is hugely popular with software developers because it is simple to learn and use, while still providing all the powerful capabilities needed to meet the most complex requirements at any scale. It has also become a hit with corporate buying departments, allowing companies to seamlessly tailor the software to their own multi-cloud computing environments.

‘We provide drivers for 10+ languages (software coding languages), and the community has built dozens more’, says the company.
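As a minimal, hedged sketch of what working with that document model looks like from code, here is an example using the official C# driver (the connection string, database, collection, and document contents are placeholders):

using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient("mongodb://localhost:27017");
var products = client.GetDatabase("shop").GetCollection<BsonDocument>("products");

// Structured fields and free-form text can sit side by side in one document,
// and documents in the same collection need not share a schema.
products.InsertOne(new BsonDocument
{
    { "name", "sensor-kit" },
    { "price", 49.90 },
    { "notes", "free text can live alongside typed fields" }
});

Console.WriteLine(products.Find(Builders<BsonDocument>.Filter.Eq("name", "sensor-kit")).First());

That schema flexibility is what lets teams evolve applications without the up-front table design a relational system demands.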

[Image: company strapline, ‘Build the next big thing’. MongoDB taps into the scope of AI to drive corporate growth / Image source: Adobe]

It is a point not missed by Wells Fargo. Its analysts believe the key to long-term growth is MongoDB’s ability to win new workloads, as ‘every incremental workload has an exponential impact on ARR (annual recurring revenue)’.

Running AI programmes requires huge amounts of data for the AI to analyse and learn from. Take self-driving cars, for example. Not only do they require enormous computing power to assess a multitude of geolocation data points, but they also have to constantly assess large numbers of mobile factors – other vehicles, people, animals, say, and their speed and direction and how that could affect the car.

TARGETING LARGER CUSTOMERS

Wells Fargo believes large customers have an outsized impact on revenue growth, adding that ‘MongoDB is still massively underpenetrated in the G2000’, or in other words, the world’s 2,000 largest organisations. Wells Fargo also believes MongoDB has ‘significant room for operating margin expansion.’

Since listing on Nasdaq in 2017, MongoDB has consistently smashed growth expectations, firing the stock to 1,168% gains. In its last quarter (to 31 Jul), the $27.8 billion company reported revenues of $423.8 million versus $393.68 million forecast.

This year, the stock has more than doubled to $389.07, although that remains far below record $570 levels of June 2021. Wells Fargo analysts have slapped a $500 target on the stock.


Issue Date: 17 Nov 2023





Insider Sell Alert: MongoDB Inc’s Chief Revenue Officer Cedric Pech Unloads Shares – GuruFocus

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

In the dynamic landscape of the stock market, insider transactions often provide valuable insights into the internal perspectives of a company’s health and future prospects. Recently, MongoDB Inc (NASDAQ:MDB) has witnessed a significant insider sell by its Chief Revenue Officer, Cedric Pech. On November 14, 2023, Cedric Pech sold 1,563 shares of MongoDB Inc, a move that has caught the attention of investors and market analysts alike.
Who is Cedric Pech of MongoDB Inc?
Cedric Pech is a seasoned executive with a track record of driving revenue growth and scaling operations in the technology sector. As the Chief Revenue Officer of MongoDB Inc, Pech is responsible for the company’s global sales efforts, including direct and channel sales, professional services, and customer success. His role is pivotal in shaping the company’s revenue strategies and ensuring that MongoDB continues to expand its market presence.
MongoDB Inc’s Business Description
MongoDB Inc is a leading provider of general-purpose database platforms. The company’s flagship product, MongoDB, is a document database designed to cater to the needs of modern applications with a powerful, flexible, and scalable approach. MongoDB’s platform is widely used by developers around the globe to build and run applications across a multitude of industries, including technology, financial services, healthcare, retail, and more. The company’s innovative technology has positioned it as a key player in the database market, offering solutions that address the complexities of working with data in the digital age.
Analysis of Insider Buy/Sell and the Relationship with the Stock Price
The recent transaction by Cedric Pech is part of a broader pattern of insider selling at MongoDB Inc. Over the past year, Pech has sold a total of 55,786 shares and has not made any purchases. This trend is consistent with the overall insider activity at the company, which has seen 60 insider sells and no insider buys over the same timeframe.
The relationship between insider selling and stock price can be complex. While insider sells do not always indicate a lack of confidence in the company, they can sometimes lead investors to question the insiders’ outlook on the stock’s future performance. In the case of MongoDB Inc, the consistent selling by insiders, including the Chief Revenue Officer, may suggest that they believe the stock’s current price reflects its fair value or that they are taking profits after a period of appreciation.
Valuation and Market Cap
On the day of Cedric Pech’s recent sell, shares of MongoDB Inc were trading at $400, giving the company a market cap of $27.758 billion. This valuation places MongoDB Inc among the more substantial players in the tech sector, reflecting its growth and the market’s confidence in its business model.
With a price of $400 and a GuruFocus Value of $597.47, MongoDB Inc has a price-to-GF-Value ratio of 0.67, indicating that the stock is significantly undervalued based on its GF Value.
The GF Value is a proprietary metric developed by GuruFocus, which takes into account historical trading multiples, a GuruFocus adjustment factor based on past returns and growth, and future business performance estimates from analysts. The fact that MongoDB Inc’s stock is trading below its GF Value suggests that the market may not be fully recognizing the company’s intrinsic value, potentially presenting an opportunity for investors.
Conclusion
The insider selling activity at MongoDB Inc, particularly by Chief Revenue Officer Cedric Pech, is a development that warrants attention. While the reasons behind Pech’s decision to sell shares are not publicly known, the transaction adds to a pattern of insider selling at the company. Despite this, MongoDB Inc’s valuation metrics suggest that the stock may be undervalued, offering a potentially attractive entry point for investors who believe in the company’s long-term growth prospects.
Investors and analysts will continue to monitor insider activity and market trends to gauge the sentiment around MongoDB Inc and adjust their investment strategies accordingly. As always, it is essential for investors to conduct their due diligence and consider the broader market context when interpreting insider transactions.

This article, generated by GuruFocus, is designed to provide general insights and is not tailored financial advice. Our commentary is rooted in historical data and analyst projections, utilizing an impartial methodology, and is not intended to serve as specific investment guidance. It does not formulate a recommendation to purchase or divest any stock and does not consider individual investment objectives or financial circumstances. Our objective is to deliver long-term, fundamental data-driven analysis. Be aware that our analysis might not incorporate the most recent, price-sensitive company announcements or qualitative information. GuruFocus holds no position in the stocks mentioned herein.




Presentation: Security Checks Simplified: How to Implement Best Practices with Ease

MMS Founder
MMS Varun Sharma

Article originally posted on InfoQ. Visit InfoQ

Transcript

Sharma: Financial transactions have become easier over time. Long back, people used to use the barter system to exchange something that they had for something else that they wanted. Obviously, this was very challenging. With the introduction of paper money, things became a lot easier. This led to more transactions being done. There was still this problem that you had to carry cash around, and then go to the bank to withdraw more.

With the introduction of debit cards and credit cards, this problem was also solved, and financial transactions became even easier. This led to more of them getting done. Now it has become so easy that you can just tap a phone or a smartwatch on a reader and a financial transaction gets done. Unfortunately, the same is not true for security remediation. Historically, security tools tend to find issues and then point towards documentation for the fixes.

The developers are expected to read through the documentation, figure out what the right fix should be, and then implement it. This means that there is a learning curve to implement the fix. One has to experiment a bit to figure out whether the fix is really going to work or not. This causes frustration. As a result, not a lot of security remediation gets done. Here we can learn from the financial industry that if we make remediation easier, then more remediation will get done.

Background

My name is Varun. I’m the CEO and co-founder of StepSecurity, which is an open-core startup that helps developers defend against software supply chain attacks by automating security best practices. The title of my talk is, security checks simplified: how to implement best practices with ease. What we’ll do is look at the OpenSSF Scorecard tool, which is a tool that finds issues across the software development lifecycle.

We’ll then try to fix these issues using manual remediation, and look at what are some of the challenges. Then we are going to look at some strategies and techniques to automate these fixes and see how that makes it easier to get a higher score. Throughout this presentation, we’ll use a demo repository, where we’ll start off with a low score and then use automation to fix the checks, to implement the checks, and get a higher score by the end of the talk.

OpenSSF Scorecard Tool

The Scorecard tool is from the OpenSSF, which stands for the Open Source Security Foundation. The tool itself assesses projects for security risks through a series of automated checks. Even though the initial intent was to run these on open source projects, a lot of the checks also apply to private repositories. Since a lot of open source projects are on GitHub, a lot of checks work well on GitHub, but there is work being done to port the checks to GitLab and other source code management platforms.

Let me go through the different categories in which it finds these issues. The first one is code vulnerabilities, where it checks whether the project itself has known vulnerabilities, or if one of the dependencies has known vulnerabilities. The second one is maintenance, where it checks if there is a process in place to update dependencies to the latest version. The third category is continuous testing, where the tool checks if basic security tools are being run as part of the CI/CD pipeline.

The fourth category is source risk assessment, where it checks the settings on the repository itself, things like branch protection, and code review policies. Then the last category is, build risk assessment, where it checks whether security best practices are being followed for the CI/CD pipeline. As you can see, the checks are across the different components, the source repository itself, the code and the pipeline within it, and also the dependencies.

Demo 1: Running OpenSSF Scorecard on a Demo Repository

Let’s look at a few demos. First, we’ll start with a look at the OpenSSF Scorecard project. Then we’ll apply and run the tool on a demo repository. This is the GitHub page for the OpenSSF Scorecard tool. You can check it out to see how these different checks have been implemented. When you use Scorecard, you get a score, which is from 0 to 10. You can also add a badge to your project. In this case, you can see that the Scorecard tool for the project has a badge for itself and has a really high score of 9.6.

Then you can scroll down and go to the README page. I think I’ll just stop at where it mentions the goals, which are to automate analysis and trust decisions on the security posture of open source projects. Also, to use this data to proactively improve the posture of critical projects that the world depends on. In this case, we are going to be looking at the second goal where we have a project and we are going to run Scorecard on it, get a score, and then improve the posture of that project.

Next, let’s see how you can run the tool on a repository. In this case, I’m using a Docker image published by the Scorecard project, and running it on a demo repository, which is this spring-petclinic repository that I’ve created where the code comes from a popular sample. You will also notice that this is using a GitHub Auth token because some of the checks require looking at the repository setting itself using the GitHub API. You need to provide a token for these checks to run.

Here, it’s running these checks, and it’s going to soon print out the output. Let’s look at the output. Here, you can see that the aggregate score is 3.6 out of 10. That’s our starting score. Then it goes into details about why that is the case. Our branch protection is not set up so we have a zero on that. There are no code review policies. There is no dependency update tool that’s been set up in this repository.

Then, if we look at pinned dependencies, it looks like some of the dependencies have been pinned, and so we have a score of 9 on 10 there. There is no static analysis tool which is being run, so there’s a 0 there. Then, finally, token permissions have not been set up on the GitHub Actions workflows, which is why that was also 0. That was a quick demo of running the tool on a repository.

Now we’ll look at another way to run the tool on a repository, which is by adding it to the CI/CD pipeline. In this case, Scorecard is being run in a GitHub Actions workflow. There is this Scorecard action which is published by the project. With this, it runs as part of the pipeline. Anytime there is a pull request, or anytime there’s a change to the branch protection, the same Scorecard tool runs and uploads the findings in the code scanning UI. In here, you can now see the same findings that earlier we saw in a terminal, you can see them in this view.

That was a quick demo of running the Scorecard tool on a repository. As an output, as we saw, our current score is 3.6. We have a lot of work that we need to do in order to improve the score. We need to set up branch protection, so that we can then set up policies for code reviewers. We need to set up a dependency update tool so that dependencies get updated as there are new versions that become available.

We need to pin dependencies because if you don’t pin them, then it basically means that whenever the dependency has a new version, it might just use the latest version and you don’t get a chance to review if that latest version is going to work well or not. We then need to declare the least privilege CI/CD tokens, which is something specific to GitHub Actions workflows. We need to set up a static analysis tool.

Demo 2: Manual Remediation of Issues Identified by The Scorecard Tool

In this next demo, we will try to do these things manually. Then look at, what are some of the challenges? In here, we are back to that demo repository, spring-petclinic, and now the goal is to try and implement these fixes manually. If you look into branch protection, this then points us to documentation. Then, in order to set it up, you have to go through this documentation, learn about it. Then there are a lot of these different options that are available, that you need to know which ones to set to get a higher score.

Then, this is where you can actually go and set up branch protection through the project settings. Here you see those actual settings. This involves a lot of clicks. It also is something which can only be done by an administrator or someone who has the right access level to set up branch protection, which means that if you have to do this at scale across a number of repositories, you might have to reach out to someone else who is an admin, and ask them to set it up. This again, causes a challenge and makes it harder to do. Here we’re just setting some options. We will later do this in a more automated manner.

Now that we’ve looked at branch protection, let’s look at a few others. Now we’re going to look at setting up a dependency update tool. Here, again, you see that the way to fix it is that there is certain documentation, you have to go through that. In this case, let’s say if you want to use Dependabot to do these updates, then there is another configuration file that we need to learn about.

There are these several settings that apply to it, which are different depending on what package ecosystem you want to use. You may be using multiple ecosystems in your project. This again becomes the same problem, that you have to go through documentation, read through it, figure out what the fix should be before you can apply it. There are a lot of manual steps involved here.

Next, let’s look at setting up token permissions. That then points to another set of documentation, where you can see that there are a whole set of permissions which can be set up, and there’s a particular way of doing it. Now you’re not sure what permissions your workflow really needs. If you get it wrong, you might cause the workflow to break. If you just give all the permissions, then it doesn’t really help either, and your score will not really increase. That’s another example where you need to look at the documentation, figure out what the fix should be.

Then there’s pinning of dependencies where there’s, again, a set of documentation. You look at an example of how to do it. You can see that you need to get the commit SHA for a tag and set it in a certain way. Then, finally, there’s about setting up an SAST tool where there are a few options. For each of them, again, there’s additional documentation. As you can see that, going through this process, and doing this manually, involves a lot of steps. It might actually involve reaching out to others in the organization who have the right level of access to do some of these things. This is where it gets challenging.

Automating Security Remediation

Now let’s look at how we can automate these steps. What is our overall strategy going to be for this? That is basically to list down all of the steps that are in the documentation, and then try to see if we can eliminate steps in some way. If we cannot eliminate them, then we try to automate these steps. This is just a general work simplification strategy, which has been applied in a lot of different domains where you list down the steps, try to eliminate the steps. If you can’t eliminate them, then try to automate them.

We are going to apply this strategy to these different checks that had to be fixed manually. Then let’s see how we can go about doing this. The first thing that we’re going to try to solve is branch protection. To do that, a common strategy which is being used is to use GitOps, to manage repositories and to set up branch protection. GitOps basically has these three components where the first one is infrastructure as code. You declare your infrastructure as code, and you have a single source of truth for that in a Git repository.

Then, if you want to make changes to it, you do that using a pull request. Let’s say if I want to add a repository or change a repository, instead of going and doing it through the UI, I would then create a pull request in this repository, where I’ve declared the repository settings as code. That gives an opportunity to review the changes, and run CI tools on it to make sure that the changes are appropriate as per the requirements of the organization. Then once that is merged, the third part comes in, which is the CI/CD, which will actually implement those changes or publish those changes to the repository.

You see this concept of declared repositories where they’re declared using code. Then, whenever changes have to be done, that code is modified, and those changes get done. This has a lot of advantages, in the sense that it makes the whole workflow easier and faster. You can enforce standards using CI tests. There is a clear audit trail of who made the change and who approved it. You can delegate these changes.

You don’t need everybody to be an admin or have high privileges in order to set branch protection or create a repository. You can actually delegate it, because there can be different people who create a request or a pull request for it, and a different person who approves it. This has actually been followed by a lot of projects. One particular reference that I want to share is Peribolos, which is from the Kubernetes team. They actually use GitOps to manage their repository, especially the areas around adding new users and managing teams in different repositories.

Demo 3: GitOps to Automate Branch Protection

Now let’s look at a demo, where we are going to set up branch protection for our demo repository using GitOps. In here, what you see is a separate repository with the name GitOps, where I’ve set up infrastructure as code for our demo repository. If you look at here, in this case, I’ve used Terraform, in order to specify the current state of the repository. Here you can see that the repository name, spring-petclinic, is specified, and the description is specified. This is how I actually created the demo repository. Now what we want to do is to make a change to it in order to set up branch protection.

What I’m going to do here is to add more infrastructure as code, which is specifying what are those specific things, the settings that need to be set up for branch protection. What are the required status checks, and so on. This I’ve selected in a way that gets us to a higher score from Scorecard. What we’re going to do is that we’re going to create a pull request. When that happens, we can then run different CI tests on this. Your organizational policy might be that you need a minimum of two reviewers.

That is something that can be found using a CI test. In this case, we have a workflow, where we are doing basic linting of any changes that are being made. Then, we are using Terraform to publish a plan so that we know what are the changes that are going to happen. We can also see these changes as a comment in the pull request. This is basically how the workflows have been set up. I’m going to go ahead and merge this pull request. In a real scenario, it would be someone else who reviews it and merges it.

When this happens, now a CI/CD would run to actually apply these changes to our repository. This is where it’s going to actually take the admin credentials and apply these changes. This is where branch protection is being set up. Once this finishes, we will actually go and see branch protection being set up for our demo repository. You can see here that under settings, under branches, this branch protection rule has been set up.

This got done using GitOps. Here, you can see the same settings that were specified there are visible in the UI. With this technique, it makes it much easier to scale these settings across repositories and enforce policies in an org, rather than having a few people have admin access and then doing a lot of these clicks across repository, which makes it much harder.

That was a quick demo of using GitOps to manage repositories and specifically managing a branch protection. We’re going to run Scorecard again on this demo repository. Now that we’ve fixed the branch protection using some level of automation, we should see an increase in the score. If we go up, we can see that our score has increased to 4.2. The branch protection score has increased to 8 out of 10. It’s still not 10 out of 10 because there are certain more requirements, like having a code owners file, which I didn’t go into. That’s still a great improvement in terms of score.

Introducing Secure-repo

Now that we went through that demo, we can see that our score has gone up from 3.6 to 4.2. We’ve set up branch protection. We’ve set up a code review policy. We’ve done a couple of things. We saw how this was done using automation, using GitOps. The next thing that we need to do is to fix the rest of the issues to set up a dependency update tool, pin dependencies, declare least privilege tokens, and set up a static analysis tool. To do this, we’re going to use code from this repository or this project called Secure-repo.

Secure-repo is something that StepSecurity maintains. It is a set of automations that help you transform a source file from one state to another. For example, if the minimum token permissions have to be set up, there is code in there which takes a CI/CD pipeline file, analyzes it, and transforms it into an output file, which has those permissions set up. Or when dependencies have to be pinned, again, it has logic to take in a file as an input, and then has transformation logic to convert that into an output file, which has those dependencies pinned.

Demo 4: Using Secure-repo to Automate Fixes for Scorecard Checks

Next, let’s look at a quick demo of the Secure-repo project and see how we can use it to improve the score via automation. This is the GitHub project for Secure-repo. It’s an open source project. To start with, let me show you the design by looking at the tests. The tests are organized using these input and output folders, where the input has the expected files. Here, for example, you can see that this is what a workflow looks like, and it has a v1 tag. This is something that it’s going to transform. The corresponding expected file is in the output folder.

This is how it’s expected to be. Similarly, for setting up the right permissions, again, there is this input, output folder. The input has the expected files. In the input folder, there is a workflow which is missing the permissions keys, and then in the output it has the actual permission set up. What I’m trying to show you is how the test cases are organized, because with that you get an understanding of what it is trying to do. It’s essentially just trying to transform a file from one state into another, so from a state where it doesn’t have the fix applied to a state where the fix has been applied.

Let’s look at some of the code that does this transformation. I just want to go over it at a high level, because you can always go back and look at this code. In here, let’s look at how it does pinning of actions. In this case it takes the YAML file as an input. Then for each of the jobs, it tries to pin actions. Then the way it pins that action is by getting a commit SHA for that action using a GitHub API call, and then replaces it in place of the tag which was being used, as in the sketch below.
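Secure-repo itself is written in Go, so the following C# fragment is only an illustrative sketch of that idea, not the project’s actual code. It resolves a mutable tag to an immutable commit SHA through the GitHub commits API (the action and tag are examples):

using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

// Resolve a tag such as actions/checkout@v4 to the commit SHA it points at.
static async Task<string> ResolveTagToSha(string owner, string repo, string tag)
{
    using var http = new HttpClient();
    http.DefaultRequestHeaders.UserAgent.ParseAdd("pin-sketch"); // GitHub requires a User-Agent
    var json = await http.GetStringAsync(
        $"https://api.github.com/repos/{owner}/{repo}/commits/{tag}");
    return JsonDocument.Parse(json).RootElement.GetProperty("sha").GetString()!;
}

var sha = await ResolveTagToSha("actions", "checkout", "v4");
// The workflow line "uses: actions/checkout@v4" is rewritten to:
Console.WriteLine($"uses: actions/checkout@{sha} # v4");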

Next, we can look at how it sets up permissions, which is, again, similar in the sense that it basically tries to do what a human would have done, and just automating those steps. It looks at each of the jobs in a particular workflow, and then tries to calculate a permission for the job. It does that by looking at each of the steps in that job. Then by trying to calculate the permission for that step, and then it adds all the permissions up. The permissions for each step, or for each action are actually stored in a knowledge base. There’s this knowledge base in the repository, which has information about a lot of common actions that are being used, so in this case this is for the create-release action, which requires contents write permission.
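Again purely as an illustrative sketch rather than the project’s actual Go implementation, the aggregation idea looks roughly like this (the knowledge-base entries here are hypothetical):

using System.Collections.Generic;

// A curated knowledge base maps each action to the permission it needs.
var knowledgeBase = new Dictionary<string, (string Scope, string Level)>
{
    ["actions/checkout"] = ("contents", "read"),
    ["actions/create-release"] = ("contents", "write"),
};

// A job's permission block is the union over its steps, with "write" winning over "read".
var job = new Dictionary<string, string>();
foreach (var action in new[] { "actions/checkout", "actions/create-release" })
{
    if (!knowledgeBase.TryGetValue(action, out var p)) continue; // unknown action: leave for review
    if (!job.TryGetValue(p.Scope, out var level) || level == "read")
        job[p.Scope] = p.Level;
}

foreach (var (scope, level) in job)
    Console.WriteLine($"{scope}: {level}"); // prints: contents: write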

All this information has been curated, and then the algorithm uses to add these permissions up to come up with an actual answer. If a human had to do this it would take a lot of steps, a lot of learning involved in terms of what are the different permissions needed for different actions. In here, we are looking at the code that adds a certain workflow, like a CodeQL workflow or a different tool into the repository which is getting fixed. For that, there are templates which are stored in this repository, so it takes a template, looks at what are the languages that are being used by that project where the template needs to be applied, and then sets those languages in there, and then outputs a workflow, which is specific to that repository.

That was a quick demo of the Secure-repo project, and how is it organized. Next, let’s look at how we can use Secure-repo on our demo project. In here, what we’re going to do is we’re going to try to fix the issues in our demo project using Secure-repo. The way this is done is that Secure-repo has a hosted version, and Scorecard actually points to it for remediation. That is to make it easier for projects to get a higher score. If you saw there, there was a link going to the hosted version of Secure-repo.

In this case it basically pulls the source file (because these are open source projects, it can just fetch it) and does the transformation. Here, it has pinned the actions to a particular commit SHA. Now the maintainer of the project can just copy this file and paste it back in their code editor, or just create a pull request using it. With that, this pinning gets done. It basically cuts down on a lot of steps involved by automating them, by using Secure-repo.

Then, what it can do is it can even go further so that you don’t have to do this for each file. You can see here the concepts being applied where you’re reducing the steps at each stage. Instead of having to do this manually every time, another better option which is available is to create a pull request. In here, you can go click on pull request. In here, I’m actually putting in the demo repository, and clicking on analyze repository.

Then, the Secure-repo project looks at all of the things that it can fix. This is something you can just try right now on your open source projects, if you have any. You see all of the ways in which you can fix them. You can unselect things if you don’t want a certain fix, and then click on create a pull request. You don’t have to set up any app for this, it just uses a fork to create the pull request. Now you’ll soon see all of the changes that are being made using this automation.

This is a pull request that got created on our demo repository with a list of all the changes. In here, you can see that a Dependabot configuration file has been added. It figured out that these are the different ecosystems being used by this project, and so it’s suggesting those. Then, it has added a CodeQL file with the right language setup, so it detected it as using Java, and so it set that up. Then, for an existing workflow, it set up the right set of permissions that it needed. It has also pinned this dependency. It essentially cuts down on a huge amount of steps that one had to follow in order to make these fixes.

Now, once this pull request is merged, we can actually go ahead and see the change in the score. In here, what I’m going to do is that after merging that pull request, I’m running Scorecard again, on our demo repository, and this time, we have a higher score. Now the score has increased to 6.3 out of 10. If you compare it with where we started, so now we have a higher score in branch protection, we have a higher score in the dependency update tool, because the Dependabot configuration got added, so we have a 10 on 10 there.

In terms of pinning dependencies, because those got done via automation, we have a score of 10 on 10 there. Our static analysis tool got added, so we have a score of 8 on 10 there. Then the token permissions were set up, so we have a score of 10 on 10 there. Overall, with this automation, and by using Secure-repo and doing the changes using automation, it was much easier to apply these fixes, and now our score is 6.3.

If you see in terms of the automation status, all of these things have been done by a high degree of automation. That’s why a lot of projects are actually able to increase their score from a low score to a higher score, just by clicking on a few buttons to create a pull request, and by managing their repositories using GitOps.

Summary

I just want to reiterate that, historically, security tools tend to find issues and then point to documentation for fixes. If each of the steps in that documentation can be automated, the remediation can be made much easier. When that happens, a lot more remediation will get done. From a more practical standpoint, you can actually use the Scorecard tool to understand the state of your projects. Then, in terms of automation, you can consider using GitOps to manage repositories and the Secure-repo project to help with this automated remediation.




What’s New in C# 12: Primary Constructors, Collection Expressions, and More

MMS Founder
MMS Almir Vuk

Article originally posted on InfoQ. Visit InfoQ

As part of the .NET 8 launch, on November 14th Microsoft unveiled the new features of C# 12, the latest version of the popular .NET programming language. As announced, the most notable improvements include collection expressions, primary constructors for all classes and structs, syntax to alias any type, and default parameters for lambda expressions.

C# 12 extends primary constructors to all classes and structs, no longer limiting them to records. This enhancement allows the definition of constructor parameters directly within the class declaration.

The primary constructor parameters find versatile applications: serving as arguments for base() constructor invocations, initialising member fields or properties, being referenced within instance members, and facilitating dependency injection by eliminating boilerplate code.

public class BankAccount(string accountID, string owner)
{
    public string AccountID { get; } = accountID;
    public string Owner { get; } = owner;

    public override string ToString() => $"Account ID: {AccountID}, Owner: {Owner}";
}

(Source: Microsoft .NET DevBlog, Announcing C# 12)

The primary constructor parameter operates as a class-wide scoped parameter, applicable to various types, including class, struct, record class, and record struct. Notably, when applied to record types, the compiler automatically generates public properties for each primary constructor parameter, simplifying member management for record structures.
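A quick illustration of that difference:

// The positional record gets compiler-generated public AccountID and Owner properties;
// the class only captures its parameters and generates no public members from them.
public record AccountRecord(string AccountID, string Owner);

public class AccountClass(string accountID, string owner)
{
    public override string ToString() => $"{accountID} / {owner}";
}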

Collection expressions are also introduced, which simplify the syntax for creating various collections, providing a unified approach. As stated, this eliminates the need for distinct syntax when initializing arrays, lists, or spans. The compiler generates efficient code, optimizing collection capacity and avoiding unnecessary data copying.

int[] x1 = [1, 2, 3, 4];
int[] x2 = [];
WriteByteArray([1, 2, 3]);
List<int> x4 = [1, 2, 3, 4];
Span<DateTime> dates = [GetDate(0), GetDate(1)];
WriteByteSpan([1, 2, 3]);

Additionally, a new spread operator simplifies the inclusion of elements from multiple collections. The team stated that this feature is open to feedback for potential future expansions, including dictionaries and support for var (natural types) in upcoming C# versions.

int[] numbers1 = [1, 2, 3];
int[] numbers2 = [4, 5, 6];
int[] moreNumbers = [.. numbers1, .. numbers2, 7, 8, 9];
// moreNumbers contains [1, 2, 3, 4, 5, 6, 7, 8, 9];

Regarding performance in C# 12, two new features, ref readonly parameters and inline arrays, are introduced to enhance raw memory handling and boost application performance. The ref readonly parameters offer an adaptable approach to passing parameters by reference or value, particularly beneficial when a method needs the memory location of an argument without modifying it.
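For example, a large struct can be handed to a method by reference, avoiding a copy, while the compiler rejects any attempt to write through the parameter:

public struct Quad { public long A, B, C, D; }

public static class RefReadonlyDemo
{
    // The 32-byte struct travels by reference; no copy is made.
    public static long Sum(ref readonly Quad q)
    {
        // q.A = 0; // compile-time error: q is readonly here
        return q.A + q.B + q.C + q.D;
    }

    public static void Main()
    {
        var q = new Quad { A = 1, B = 2, C = 3, D = 4 };
        Console.WriteLine(Sum(in q)); // call sites may pass with 'in' or 'ref'
    }
}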

Inline arrays, a struct-based, fixed-length array type, provide a secure way to work with memory buffers, improving performance in scenarios involving arrays of structures without the need for unsafe code. These additions empower developers to optimize their code for increased efficiency.

[System.Runtime.CompilerServices.InlineArray(10)]
public struct Buffer
{
    private int _element0;
}

// Usage
var buffer = new Buffer();

for (int i = 0; i < 10; i++)
    buffer[i] = i;

foreach (var i in buffer)
    Console.WriteLine(i);

Furthermore, two experimental features are introduced: the Experimental attribute and interceptors. The Experimental attribute clarifies the status of experimental features, requiring the calling code to be marked as experimental to avoid errors. Interceptors allow the redirection of method calls, offering optimization possibilities.

While interceptors are not recommended for production due to potential changes, both features underscore the importance of using the Experimental attribute for clarity and consistency in experimental code.

Other available C# 12 features are the possibility of adding optional parameters to lambda expressions and the using alias directive to alias any sort of type (not just named types). This allows developers to create semantic aliases for tuple types, array types, pointer types, and other unsafe types.

Func<int, int, bool> testForEquality = (x, y) => x == y;
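To round out those two features, here is a short illustrative sketch showing a lambda with a default parameter value and using aliases for a tuple type and an array type:

using Point = (int X, int Y); // alias for a tuple type
using Grid = double[,];       // alias for an array type

var incrementBy = (int source, int increment = 1) => source + increment;
Console.WriteLine(incrementBy(5));    // 6
Console.WriteLine(incrementBy(5, 2)); // 7

Point p = (3, 4);
Grid g = new double[2, 2];
Console.WriteLine($"({p.X}, {p.Y}) on a {g.GetLength(0)}x{g.GetLength(1)} grid");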

The comment section of the original announcement blog post received a few interesting ideas and suggestions for improving the C# language, and based on different threads and forums, overall community feedback is positive.

The official dotnet YouTube channel also published a 35-minute recording of the .NET 8 launch session titled What’s New in C# 12, which is highly recommended viewing for developers.

Readers can find more about the available C# 12 features on the official language documentation page. Developers can get it by downloading .NET 8, the latest Visual Studio, or Visual Studio Code’s C# Dev Kit.




Databricks’ lakehouse becomes foundation under fresh layer of AI dreams – The Register

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Databricks has decided to launch a complete overhaul of its platform during the climax of Ignite, the global tech shindig run by Microsoft, the software giant with which the data analytics and ML vendor shares a significant partnership.

The company founded by some of the original creators of Apache Spark has announced it is building something else atop its “lakehouse” concept, which it launched in early 2020 as a means of combining structured BI and analytics workloads of data warehousing with the messy world of data lakes.

While retaining the lakehouse’s unified governance layer across data and AI and a single unified query engine to span ETL, SQL, machine learning and BI, the company said it wants to move on to exploit the technology gained in its $1.3 billion buy of MosaicML, a generative AI startup.

In an announcement big on claims and scant on detail, Databricks says it is introducing a data intelligence layer it calls DatabricksIQ, which “fuels all parts of our platform.”

The idea is to employ “AI models to deeply understand the semantics of enterprise data.”

New genAI-enabled features Databricks claims it will introduce include end-to-end retrieval augmented generation (RAG) designed to help create “high quality conversational agents on your custom data.” The company also plans to enable training of custom models either from scratch on an organization’s data, or by continued pre-training of existing models. The company is yet to announce products or release data that reflects these aspirations.

Gartner senior director analyst Aaron Rosenbaum said Databricks is one of the vendors competing in the market for data fabric, “a design framework” which the analyst firm promotes.

“Enterprises will have a rich set of choices in 2024, with vendors offering both revolutionary and evolutionary approaches to the data fabric,” he said.

He said Databricks’ announcement would help it provide active metadata management, profiling, genAI for data management, and data cataloguing. The idea is to make it simpler for organizations to gain insights from data with quicker time to value and less staff time and expertise.

“However, organizational and cultural challenges to this new approach to data management will be a barrier to adoption for many enterprises,” he said.

Rosenbaum declined to comment on the timing of Databricks’ release, which coincided with the general availability of Microsoft’s Fabric product portfolio. Like Databricks, Fabric uses the Delta table format to underpin most of its new data engineering and analytics products.

Hyoun Park, CEO and chief analyst with Amalgam Insights, pointed out that the companies collaborate on the Azure Databricks product, hosted in Microsoft’s cloud platform. “It may be the most successful product on Microsoft Azure,” he said.

Databricks closed a $500 million Series I VC funding round in September, giving it a nominal $43 billion valuation and making it one of the highest-valued pre-IPO startups. Its customers include Shell, Toyota, Air Canada, Rolls-Royce, and global bank ABN AMRO.

We asked Databricks for more details on the announcement. ®



OpenAI Launches GPTs to Enable Creating No-Code, Custom Versions of ChatGPT

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

At the recent OpenAI developer conference, OpenAI announced it is rolling out GPTs, custom versions of ChatGPT created for specific tasks. Developers will also be able to share their GPTs on the forthcoming ChatGPT Store and monetize them, the company says.

GPTs provide a mechanism to combine ChatGPT with custom instructions, external knowledge, and any combination of skills. They attempt to answer the need to customize ChatGPT for specific uses, such as learning the rules of board games, helping teach math, or designing stickers.

Many power users maintain a list of carefully crafted prompts and instruction sets, manually copying them into ChatGPT. GPTs now do all of that for you.

Before GPTs, prompt engineering was the most common approach to specializing ChatGPT’s behavior. According to OpenAI, building GPTs does not require coding skills and is a great option for educators, coaches, or anyone who loves to build helpful tools.

Creating one is as easy as starting a conversation, giving it instructions and extra knowledge, and picking what it can do, like searching the web, making images or analyzing data.

Using OpenAI’s interface, the first step to creating a GPT is chatting with ChatGPT and describing what you would like to achieve. Once that is done, you can define instructions, conversation starters, knowledge, capabilities, and actions.

Instructions constitute possibly the most significant section. You define here which resources to use, the style, the tone, and the desired behavior. For example, you could specify that when the user provides some data, your GPT should use those files to carry through some Internet searches, then run some script to process the results, and so on.

Conversation starters are sentences you provide as helpers for users to know what they can ask the GPT. Knowledge is a collection of resources you upload and make available to the GPT as an extension to its model. Capabilities are tools the GPT can use. Actions are calls to external services.

GPTs are available to users paying for the ChatGPT Plus subscription and also require enabling the “Beta options” setting.

As mentioned, OpenAI has also announced its GPT Store, which makes it possible to share GPTs publicly. According to the company, the GPT Store should be available starting late November 2023, with support for monetary transactions coming in the next few months.

Previously, ChatGPT offered ChatGPT plugins as a mechanism to modify ChatGPT behavior through integration with third-party applications. While GPTs seem to make plugins obsolete in that regard, OpenAI says it is promoting Actions, which build upon the foundations provided by plugins and leverage many of their core ideas.




A Closer Look at MongoDB’s Options Market Dynamics – Benzinga

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


Investors with a lot of money to spend have taken a bearish stance on MongoDB (NASDAQ:MDB).

And retail traders should know.

We noticed this today when the trades showed up on publicly available options history that we track here at Benzinga.

Whether these are institutions or just wealthy individuals, we don’t know. But when something this big happens with MDB, it often means somebody knows something is about to happen.

So how do we know what these investors just did?

Today, Benzinga‘s options scanner spotted 10 uncommon options trades for MongoDB.

This isn’t normal.

The overall sentiment of these big-money traders is split between 30% bullish and 70% bearish.

Out of all of the special options we uncovered, 4 are puts, for a total amount of $175,198, and 6 are calls, for a total amount of $298,100.

Predicted Price Range

Based on the trading activity, it appears that the significant investors are aiming for a price territory stretching from $70.0 to $440.0 for MongoDB over the recent three months.

Analyzing Volume & Open Interest

Looking at the volume and open interest is an insightful way to conduct due diligence on a stock.

This data can help you track the liquidity and interest for MongoDB’s options for a given strike price.

Below, we can observe the evolution of the volume and open interest of calls and puts, respectively, for all of MongoDB’s whale activity within a strike price range from $70.0 to $440.0 in the last 30 days.

MongoDB Option Activity Analysis: Last 30 Days


Largest Options Trades Observed:

Symbol  PUT/CALL  Trade Type  Sentiment  Exp. Date  Strike Price  Total Trade Price  Open Interest  Volume
MDB     CALL      TRADE       NEUTRAL    06/21/24   $410.00       $63.9K             270            20
MDB     CALL      TRADE       NEUTRAL    06/21/24   $410.00       $63.7K             270            10
MDB     CALL      TRADE       BEARISH    06/21/24   $410.00       $63.2K             270            20
MDB     PUT       SWEEP       BULLISH    11/24/23   $385.00       $61.6K             84             148
MDB     PUT       TRADE       BEARISH    11/24/23   $395.00       $45.2K             32             53
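To show how such a table can be analyzed, here is a hedged sketch that encodes the rows above as records and summarizes premium and volume by side; the field names are our own, not Benzinga’s schema.

    # Summarizing the trades listed above by side (calls vs. puts).
    trades = [
        {"side": "CALL", "strike": 410.0, "premium": 63_900, "oi": 270, "volume": 20},
        {"side": "CALL", "strike": 410.0, "premium": 63_700, "oi": 270, "volume": 10},
        {"side": "CALL", "strike": 410.0, "premium": 63_200, "oi": 270, "volume": 20},
        {"side": "PUT",  "strike": 385.0, "premium": 61_600, "oi": 84,  "volume": 148},
        {"side": "PUT",  "strike": 395.0, "premium": 45_200, "oi": 32,  "volume": 53},
    ]
    for side in ("CALL", "PUT"):
        rows = [t for t in trades if t["side"] == side]
        print(side,
              "premium:", sum(t["premium"] for t in rows),
              "volume:", sum(t["volume"] for t in rows))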

About MongoDB

Founded in 2007, MongoDB is a document-oriented database with nearly 33,000 paying customers and well past 1.5 million free users. MongoDB provides both licenses and subscriptions as a service for its NoSQL database. MongoDB’s database is compatible with all major programming languages and can be deployed for a variety of use cases.
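A minimal sketch of what “document-oriented” means in practice, assuming the pymongo driver and a local server; the database and collection names are hypothetical:

    # Documents are schema-flexible, JSON-like structures: fields can nest
    # and vary between documents without a predefined table layout.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    orders = client["shop"]["orders"]  # hypothetical names

    orders.insert_one({
        "customer": "Ada",
        "items": [{"sku": "A1", "qty": 2}],  # nested array, no join table
        "total": 19.98,
    })
    print(orders.find_one({"customer": "Ada"}))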

After a thorough review of the options trading surrounding MongoDB, we move to examine the company in more detail. This includes an assessment of its current market status and performance.

MongoDB’s Current Market Status

  • With a trading volume of 584,533, the price of MDB is down 0.9% at $389.05.
  • Current RSI values indicate that the stock may be approaching overbought territory (see the RSI sketch after this list).
  • The next earnings report is scheduled 19 days from now.
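For reference, a sketch of the classic 14-period RSI (Wilder’s smoothing) that the bullet above refers to; closes is a hypothetical series of daily closing prices, and readings above 70 are conventionally read as overbought.

    # Relative Strength Index over a list of closing prices.
    # Assumes len(closes) >= period + 1.
    def rsi(closes, period=14):
        gains = losses = 0.0
        for prev, cur in zip(closes, closes[1:period + 1]):
            change = cur - prev
            gains += max(change, 0.0)
            losses += max(-change, 0.0)
        avg_gain, avg_loss = gains / period, losses / period
        # Wilder's smoothing for the remaining observations.
        for prev, cur in zip(closes[period:], closes[period + 1:]):
            change = cur - prev
            avg_gain = (avg_gain * (period - 1) + max(change, 0.0)) / period
            avg_loss = (avg_loss * (period - 1) + max(-change, 0.0)) / period
        if avg_loss == 0:
            return 100.0
        return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)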

What Analysts Are Saying About MongoDB

A total of 4 professional analysts have given their take on this stock in the last 30 days, setting an average price target of $449.25.

  • An analyst from Keybanc has maintained their Overweight rating on MongoDB with a price target of $440.
  • In a cautious move, an analyst from Wells Fargo downgraded its rating to Overweight, setting a price target of $500.
  • Reflecting concerns, an analyst from Truist Securities lowered its rating to Buy with a new price target of $430.
  • An analyst from Capital One upgraded its rating to Overweight with a price target of $427.
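The average quoted above follows directly from these four targets:

    # Sanity check of the average analyst price target.
    targets = [440, 500, 430, 427]
    print(sum(targets) / len(targets))  # 449.25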

Options trading presents higher risks and potential rewards. Astute traders manage these risks by continually educating themselves, adapting their strategies, monitoring multiple indicators, and keeping a close eye on market movements. Stay informed about the latest MongoDB options trades with real-time alerts from Benzinga Pro.





Building modern applications faster: MongoDB’s Sahir Azam on innovation in the AI era


Presented by MongoDB


We live in exciting times for app developers: the advent of democratizing innovations like generative AI (gen AI) and AI-powered coding assistance will lead to an explosion of new applications. Indeed, IDC predicts that over 750 million cloud-native applications will be created by 2025. But for many organizations, maintaining a regular cadence of competitive new products and services remains a challenge.

“On the one hand, organizations are under constant pressure to innovate and differentiate — and that pressure has increased because of generative AI and how disruptive or advantageous it could be for their business,” said Sahir Azam, chief product officer at MongoDB. “Yet, the cost of capital has gone up significantly. Teams are being asked to do this with fewer resources, more efficiency, and for less cost. So, there’s a real tension between the market disruption with gen AI on one hand, and cost-saving pressure and economic headwinds on the other. Balancing those two is top of mind.”

What’s more, developers are in short supply. As a result, it’s crucial that these valuable resources focus on solving their organization’s core challenges rather than wrestling with the complexity of traditional relational database systems. Prioritizing the developer mission is the best way for organizations, and for the vendors they partner with, to stay competitive.

“And that’s why we’re making sure we build technology solutions that are delightful for developers and serve their needs,” Azam added. “But we’re also supporting the most mission-critical, scalable and secure applications in the world.”

VentureBeat spoke with Azam about what organizations are prioritizing as they work to modernize their stacks, from ways AI is transforming the development process from the ground up, to revolutionizing the end user experience and tackling sprawl.

Moving faster with generative AI

Advances in AI, and generative AI in particular, are the biggest news in tech today. Developers and organizations are especially excited about the new AI-powered tools designed to increase productivity. These include everything from a chatbot that answers coding questions, to code generation assistants like Amazon CodeWhisperer and GitHub Copilot.

Azam shared some of the ways MongoDB is investing in AI. For one, he explained, the company has embedded AI into its developer tooling to make it easier for developers to write MongoDB code and queries according to the company’s best practices. MongoDB has also partnered with some of the major hyperscalers — the large-scale data centers that offer massive, on-demand computing resources. These partnerships are focused on optimizing large language model (LLM) training with internal knowledge of MongoDB’s own resources, including documentation, best practices and knowledge bases.

The AI boom also means tools are emerging to support an array of AI use cases. For instance, developers using public APIs like OpenAI and Azure AI need a way to use their proprietary data to better customize model results, which is how RAG, or retrieval-augmented generation, was born. And for companies that build and train their own models, the vector database has emerged. Vector databases make it easier for machine learning models to remember previous inputs, powering more effective search, recommendation and text-generation use cases.

“For most organizations, the challenge in bringing on these tools also means a whole new technology partner and brand-new technology to validate,” Azam explained. “Making sure it’s secure, stable, performant and so on puts major pressure on IT leaders — and adds yet more tech sprawl. To counter that challenge, we’ve focused on enabling vector database capability out of the box.”

For example, with Atlas Vector Search, developers can build AI-powered experiences while accessing all the data they need through a unified and consistent developer experience. Because Atlas Vector Search is built on the MongoDB Atlas developer platform, customers can leverage it without the burden of finding, buying, installing and managing yet another component.
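As a concrete illustration, here is a hedged sketch of an Atlas Vector Search query using pymongo’s $vectorSearch aggregation stage; the connection string, index name, field path, and embed helper are hypothetical, and the Atlas cluster must already host a vector search index.

    # Querying a vector index in Atlas; embed() stands in for a real
    # embedding model call and returns a fixed-size vector.
    from pymongo import MongoClient

    def embed(text):
        # Placeholder: in practice, call an embedding model here.
        return [0.0] * 1536

    client = MongoClient("mongodb+srv://<cluster-uri>")  # placeholder
    articles = client["content"]["articles"]  # hypothetical names

    results = articles.aggregate([
        {"$vectorSearch": {
            "index": "article_embeddings",  # hypothetical index name
            "path": "embedding",
            "queryVector": embed("consolidating database sprawl"),
            "numCandidates": 100,
            "limit": 5,
        }},
        {"$project": {"title": 1, "score": {"$meta": "vectorSearchScore"}}},
    ])
    for doc in results:
        print(doc["title"], doc["score"])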

Other AI advances under MongoDB’s belt include new LLM capabilities in MongoDB Compass, which aid developers in writing MongoDB queries, speeding up the development process and improving the accuracy of the code. Azam shared that the company has also integrated gen AI into Atlas Charts, which builds charts and graphs for an application’s dashboard, so developers can now use natural language to generate queries automatically.

“Typically, you would have to know MongoDB’s query language to generate those beautiful charts and graphs that you want to build in your app or put on your dashboard for your business to look at,” said Azam. “Now you can use natural language to automatically generate the query.”

Finally, MongoDB has begun to implement AI capabilities in its Relational Migrator tool, which significantly reduces the high cost of modernizing legacy applications. It analyzes the legacy database and then automatically generates a new data schema and code to migrate to MongoDB Atlas, with no downtime required. From there, it generates optimized code for working with data in the new, modernized application.
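As a toy illustration of the kind of transformation involved (not Relational Migrator’s actual output), related rows from two relational tables can collapse into one embedded document:

    # One-to-many rows from a relational model...
    customer_row = {"id": 7, "name": "Ada"}
    order_rows = [
        {"id": 1, "customer_id": 7, "total": 19.98},
        {"id": 2, "customer_id": 7, "total": 5.00},
    ]

    # ...become a single document with the orders embedded, so the
    # application reads one record instead of joining at query time.
    customer_doc = {
        "_id": customer_row["id"],
        "name": customer_row["name"],
        "orders": [
            {"order_id": o["id"], "total": o["total"]} for o in order_rows
        ],
    }
    print(customer_doc)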

Consolidating costs and tackling technology sprawl

After the wave of digital transformation that marked the past few years, organizations are now taking stock of their vendor relationships. Leaders see how overlapping vendor agreements are leading their teams to spend more time on maintenance than delivering business value.

“We’ve been through the era of the pandemic and a looser monetary policy where it was easy for organizations to spend a lot on technology, leverage whatever sprawl of tools they might have, even if they’re overlapping, even take on the cost and tax of integrating all those things together,” said Azam. “We’re now seeing organizations looking to consolidate costs with less vendors who can provide more capabilities so that they can save time and effort operationally.”

This is exactly why MongoDB has put a major focus on enabling these business needs.

“The developer data platform strategy has been an expansion of what MongoDB has been up to from day one, which is getting the data out of the way of developers building modern applications,” he explained. “With one interface, one language to learn, one environment, developers have what they need to build today’s applications faster, with significantly less sprawl.”

As a result, organizations spend less money and developers are more productive. They’re able to build any kind of application, and gain the flexibility to leverage multiple clouds, whether for differentiation or pricing benefits, or even run apps in their own data center.

The transformation of end-user experiences

“Every organization wants to be defined by the customer experiences they provide, and increasingly those customer experiences are driven by software,” Azam said. “MongoDB makes it easy for organizations to move fast, and to take an idea from inception to a globally scalable application that can serve those millions of users more easily than any other platform.”

On top of that, MongoDB does it in a truly multi-cloud way, which means a developer can build an application in a customer’s data center or run across all major public cloud platforms simultaneously when necessary (such as for regulatory reasons). Organizations can work with multiple infrastructure providers, and as necessary, take advantage of each provider’s differentiated services more easily, all with the flexibility of controlling and managing their data no matter where it needs to run.

Notably, Azam explained, MongoDB is the only company that’s combined all that complexity into a single developer data platform, not just for a single component of the application or database stack.

“If an organization is betting on a technology that’s in the data space, it’s likely a decision that they’ll live with for years, if not decades,” Azam said. “It’s incumbent on them to find technology that their developers love, that can help recruit talent but can also scale with the organization.”

Ready to invest in your developers by giving them the tools, technology and support they need? Start here.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.

