NoSQL Market Growing at 28.1% CAGR | Reach USD 86.3 Billion by 2032 Globally

MMS Founder
MMS RSS
NoSQL Market Share

WILMINGTON, DE, UNITED STATES, July 3, 2025 /EINPresswire.com/ — Allied Market Research published a new report, titled, “NoSQL Market Growing at 28.1% CAGR | Reach USD 86.3 Billion by 2032 Globally.” The report offers an extensive analysis of key growth strategies, drivers, opportunities, key segments, Porter’s Five Forces analysis, and competitive landscape. This study is a helpful source of information for market players, investors, VPs, stakeholders, and new entrants to gain a thorough understanding of the industry and determine steps to be taken to gain competitive advantage.

The global NoSQL market size was valued at USD 7.3 billion in 2022, and is projected to reach USD 86.3 billion by 2032, growing at a CAGR of 28.1% from 2023 to 2032.

Request Sample Report (Get Full Insights in PDF – 350 Pages) at: https://www.alliedmarketresearch.com/request-sample/640

Driving Factors of the NoSQL Market

The rise in demand for big data analytics, the enterprise-wide need for scalable and flexible database solutions, and the growing adoption of cloud computing technology are expected to drive global NoSQL market growth. However, the high complexity of administering NoSQL databases and the potential threat of data inconsistencies are expected to hinder market growth. Furthermore, the rising adoption of advanced technologies such as AI & ML offers lucrative opportunities for market players.

NoSQL Market Segmentation

The NoSQL market is segmented on the basis of type, application, industry vertical, and region. On the basis of type, it is categorized into key-value store, document database, column-based store, and graph database. On the basis of application, it is divided into data storage, mobile apps, data analytics, web apps, and others. The data storage segment is further sub-segmented into distributed data depository, cache memory, and metadata store. On the basis of industry vertical, it is categorized into retail, gaming, IT, and others. On the basis of region, the market is analyzed across North America, Europe, Asia-Pacific, and LAMEA.

Key Market Players in NoSQL Market

The report analyzes the profiles of key players operating in the NoSQL market, such as Aerospike Inc., Couchbase Inc., IBM Corporation, Neo4j, Inc., Objectivity, Inc., Oracle Corporation, Progress Software Corporation, Riak, ScyllaDB, Inc., and the Apache Software Foundation. These players have adopted various strategies such as collaboration, acquisition, and product launch to increase their market penetration and strengthen their position in the NoSQL market.

If you have any questions, please feel free to contact our analyst at: https://www.alliedmarketresearch.com/connect-to-analyst/640

North America region to maintain its dominance by 2032

On the basis of region, the North America segment held the highest market share in terms of revenue in 2022, accounting for less than two-fifths of the NoSQL market revenue. The increased use of NoSQL solutions in businesses to improve operations and the customer experience is anticipated to propel the growth of the market in this region. However, the Asia-Pacific segment is projected to manifest the highest CAGR of 26.8% from 2023 to 2032. Countries such as China, India, and South Korea are at the forefront, embracing digital technologies to enhance their effectiveness and competitiveness, which is further expected to contribute to the growth of the market in this region.

The key-value store segment to maintain its leadership status throughout the forecast period

On the basis of type, the key-value store segment held the highest market share in 2022, accounting for less than two-fifths of the NoSQL market revenue, and is estimated to maintain its leadership status throughout the forecast period. This is attributed to its high scalability and its ability to support multiple data models on a single database with faster access, which are expected to continue driving its adoption. However, the document database segment is projected to manifest the highest CAGR of 29.0% from 2023 to 2032, as these database services help reduce the time and costs associated with optimizing systems in the initial phase of deployment.

The web apps segment to maintain its lead position during the forecast period

On the basis of application, the web apps segment accounted for the largest share in 2022, contributing to more than one-fourth of the NoSQL market revenue, owing to the growing use of website-based solutions in several industries, and is projected to maintain its lead position during the forecast period. However, the mobile apps segment is expected to register the highest CAGR of 31% from 2023 to 2032, as mobile apps provide several advantages such as reducing costs, supporting the business, and effectively controlling the business environment in the organization.

The IT segment to maintain its lead position during the forecast period

On the basis of industry vertical, the IT segment accounted for the largest share in 2022, contributing to less than two-fifths of the NoSQL market revenue, owing to the development of digital technologies in the IT sector. However, the gaming segment is projected to manifest the highest CAGR of 35.4% from 2023 to 2032. The surge in the implementation of automation trends and the increased utilization of digital technology in this sector are expected to provide lucrative opportunities for the market.

Buy Now & Get Exclusive Discount on this Report (350 Pages PDF with Insights, Charts, Tables, and Figures) at: https://www.alliedmarketresearch.com/NoSQL-market/purchase-options

COVID-19 Scenario

● The NoSQL market witnessed stable growth during the COVID-19 pandemic, owing to a dramatically increased dependence on digital devices. The surge in people's online presence during COVID-19-induced lockdowns and social distancing policies fueled the need for NoSQL solutions.

● In addition, with the majority of the population confined to their homes during the early stages of the pandemic, businesses needed to optimize their operations and offerings to maximize revenue opportunities and support the rapidly evolving business environment that followed the outbreak.

Recent Partnerships in the NoSQL Market:

● In May 2023, MongoDB expanded its partnership with Alibaba Cloud, including new joint marketing efforts, joint revenue commitments, and tighter technology integrations. Customers can rapidly innovate and scale their business while reducing costs and increasing efficiency on ApsaraDB for MongoDB by using the MongoDB database with Alibaba Cloud’s distinctive features.

Recent Product Launches in the NoSQL Market:

● In July 2021, Couchbase launched its updated NoSQL database providing users with a series of new features that aim to narrow the gap between NoSQL and relational databases.

Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions such as North America, Europe, or Asia.

If you have any special requirements, please let us know and we will offer you the report as per your requirements.

Lastly, this report provides comprehensive market intelligence. The report structure has been kept such that it offers maximum business value. It provides critical insights into market dynamics and will enable strategic decision-making for existing market players as well as those willing to enter the market.

Other Trending Reports:

IP Telephony Market
Router Market

About Us:

Allied Market Research (AMR) is a market research and business-consulting firm of Allied Analytics LLP, based in Portland, Oregon. AMR offers market research reports, business solutions, consulting services, and insights on markets across 11 industry verticals. Adopting extensive research methodologies, AMR is instrumental in helping its clients make strategic business decisions and achieve sustainable growth in their market domains. We are equipped with skilled analysts and experts and have wide experience of working with many Fortune 500 companies and small & medium enterprises.

Pawan Kumar, the CEO of Allied Market Research, is leading the organization toward providing high-quality data and insights. We are in professional corporate relations with various companies. This helps us dig out market data that helps us generate accurate research data tables and confirm utmost accuracy in our market forecasting. Every data point presented in the reports published by us is extracted through primary interviews with top officials from leading companies in the domain concerned. Our secondary data procurement methodology includes deep online and offline research and discussion with knowledgeable professionals and analysts in the industry.

Contact:
David Correa
1209 Orange Street,
Corporation Trust Center,
Wilmington, New Castle,
Delaware 19801 USA.
Int’l: +1-503-894-6022
Toll Free: +1-800-792-5285
UK: +44-845-528-1300
India (Pune): +91-20-66346060
Fax: +1-800-792-5285
help@alliedmarketresearch.com

David Correa
Allied Market Research
+ 1800-792-5285
email us here
Visit us on social media:
LinkedIn
Facebook
YouTube
X

Legal Disclaimer:

EIN Presswire provides this news content “as is” without warranty of any kind. We do not accept any responsibility or liability
for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this
article. If you have any complaints or copyright issues related to this article, kindly contact the author above.


What Makes Atlas the Core Driver of MongoDB’s Revenue Growth? – July 3, 2025 – Zacks

MMS Founder
MMS RSS



Two Tools To Elevate Your MongoDB Experience – I Programmer

MMS Founder
MMS RSS

The tools stand in contrast to each other: the first one allows you to write SQL instead of using Mongo’s special syntax, while the other allows you to manipulate the database without writing SQL or MongoDB syntax at all, just by employing natural language.

The first one, Queryleaf, translates SQL queries into MongoDB commands. It parses SQL using node-sql-parser, transforms it into an abstract command set, and then executes those commands against the MongoDB Node.js driver.

It supports simple SQL statements like INSERT, UPDATE, SELECT, and DELETE, but can also handle more advanced querying, such as nested field access, arrays, aggregates, and more.

So, for instance, the following MongoDB query:

db.collection('users').find(
  { age: { $gt: 21 } },
  { name: 1, email: 1 }
)

can be expressed in SQL as:

SELECT name, email FROM users WHERE age > 21

So far so good, but the real magic takes place when accessing nested fields with SQL alone:

MongoDB:

db.collection('users').find(
  { 'address.city': 'New York' },
  { name: 1, 'address.city': 1, 'address.zip': 1 }
)

SQL:

SELECT name, address.city, address.zip FROM users WHERE address.city = 'New York'
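
To make the translation idea concrete, here is a minimal, hypothetical sketch in Python of the same mapping for single-predicate SELECT statements. This is not Queryleaf's actual API (Queryleaf is a Node.js library built on node-sql-parser); it is only a toy illustration of how SQL clauses map onto find() filters and projections:

# Toy illustration only: a hand-rolled translator for single-predicate
# SELECT statements. Queryleaf itself parses full SQL with node-sql-parser
# and executes against the MongoDB Node.js driver.
import re

OPS = {">=": "$gte", "<=": "$lte", ">": "$gt", "<": "$lt", "=": None}

def translate(sql: str):
    """Return (collection, filter, projection) for a MongoDB find()."""
    m = re.match(
        r"SELECT\s+(.+?)\s+FROM\s+(\w+)"
        r"(?:\s+WHERE\s+([\w.]+)\s*(>=|<=|>|<|=)\s*(.+))?\s*$",
        sql.strip(), re.IGNORECASE)
    if not m:
        raise ValueError("unsupported statement")
    cols, coll, field, op, value = m.groups()
    projection = None if cols.strip() == "*" else {c.strip(): 1 for c in cols.split(",")}
    filt = {}
    if field:
        value = value.strip().strip("'")
        value = int(value) if value.isdigit() else value
        filt = {field: value} if OPS[op] is None else {field: {OPS[op]: value}}
    return coll, filt, projection

print(translate("SELECT name, email FROM users WHERE age > 21"))
# -> ('users', {'age': {'$gt': 21}}, {'name': 1, 'email': 1})

Running it on the nested example above yields {'address.city': 'New York'} as the filter, matching the dotted-field MongoDB query.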

Note that Queryleaf is multifaceted. It's a library, so it can be called from your code; it's also available as a CLI for command-line SQL queries, as a web server for REST API access, and as a PostgreSQL wire protocol server for connecting with standard PostgreSQL clients.

The lesson here: you can never rule SQL out. But while SQL has been pushed as a universal protocol unifying access to all kinds of services from all kinds of tools, as we examined in “Tailpipe – The Log Interrogation Game Changer” where SQL was used to manipulate access logs, the new counterpart of the agentic era is challenging that position. Which brings us to the other tool, ScoutDB, an agentic MongoDB GUI.

With ScoutDB, instead of writing queries in MongoDB syntax or SQL, you simply describe what you're looking for in plain English and let it be translated into MongoDB queries. That way you eliminate the need to remember the exact syntax of your collections and structures.

Of course, as with other text-to-SQL tools, it first needs to understand your database schema in order to infer the correct queries. For that reason, ScoutDB automatically maps the relationships between your collections, understanding how your data interconnects even when those relationships aren't explicitly defined in your schema.

So which option will you go for? If you are a developer who knows your way around your database schema, uses SQL, can write optimal queries, and doesn't want to learn the MongoDB native query syntax, then Queryleaf is for you. Plus, it's free and open source.

If, on the other hand, you are an enterprise user who wants to write a report the easiest way possible, then ScoutDB is for you. I'm emphasizing “enterprise” user because the tool is not free, with the pricing scheme yet to be announced. That, however, doesn't rule out a potential free plan.

More Information

Queryleaf

ScoutDB

Related Articles

Tailpipe – The Log Interrogation Game Changer

 

To be informed about new articles on I Programmer, sign up for our weekly newsletter, subscribe to the RSS feed and follow us on Twitter, Facebook or Linkedin.



Databricks Contributes Spark Declarative Pipelines to Apache Spark

MMS Founder
MMS Patrick Farry

At the Databricks Data+AI Summit, held in San Francisco, USA, from June 10 to 12, Databricks announced that it is contributing the technology behind Delta Live Tables (DLT) to the Apache Spark project, where it will be called Spark Declarative Pipelines. This move will make it easier for Spark users to develop and maintain streaming pipelines, and furthers Databricks' commitment to open source.

The new feature will allow developers to define data streaming pipelines without needing to create the usual imperative commands in Spark. While the changes simplify the task of writing and maintaining pipeline code, users will still need to understand the runtime behavior of Spark and be able to troubleshoot issues such as performance and correctness.

In a blog post that describes the new feature, Databricks wrote that pipelines can be defined using SQL syntax or via a simple Python SDK that declares the stream data sources, the tables, and their relationships, rather than by writing imperative Spark commands. The company claims this will reduce the need for orchestrators such as Apache Airflow to manage pipelines.

Behind the scenes, the framework interprets the query, then creates a dependency graph and an optimized execution plan.

Declarative Pipelines supports streaming tables from stream data sources such as Apache Kafka topics, and materialized views for storing aggregates and results. The materialized views are updated automatically as new data arrives from the streaming tables.

Databricks provides an overview of the SQL syntax in its documentation. An excerpt is shown here. The example is based on the New York City TLC Trip Record data set.

-- Bronze layer: Raw data ingestion
CREATE OR REFRESH STREAMING TABLE taxi_raw_records 
(CONSTRAINT valid_distance EXPECT (trip_distance > 0.0) ON VIOLATION DROP ROW)
AS SELECT *
FROM STREAM(samples.nyctaxi.trips);

-- Silver layer 1: Flagged rides
CREATE OR REFRESH STREAMING TABLE flagged_rides 
AS SELECT
  date_trunc("week", tpep_pickup_datetime) as week,
  pickup_zip as zip, 
  fare_amount, trip_distance
FROM
  STREAM(LIVE.taxi_raw_records)
WHERE ((pickup_zip = dropoff_zip AND fare_amount > 50) OR
       (trip_distance < 5 AND fare_amount > 50));

The example shows how a pipeline can be built by defining streams with the CREATE STREAMING TABLE command and then consuming them with a FROM clause in subsequent queries. Of note in the example is the ability to include data quality checks in the pipeline with the syntax CONSTRAINT … EXPECT … ON VIOLATION.
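
For comparison, here is a rough sketch of what the bronze and silver steps might look like in the Python API, based on the existing Delta Live Tables SDK from which the contribution derives; the module and decorator names may differ in the upstream Spark release:

# Sketch based on the Delta Live Tables Python SDK; the upstream Spark
# Declarative Pipelines release may rename these APIs. The `spark`
# session is provided by the pipeline runtime.
import dlt
from pyspark.sql.functions import col, date_trunc

@dlt.table(name="taxi_raw_records")
@dlt.expect_or_drop("valid_distance", "trip_distance > 0.0")  # data quality check
def taxi_raw_records():
    # Bronze layer: raw streaming ingestion
    return spark.readStream.table("samples.nyctaxi.trips")

@dlt.table(name="flagged_rides")
def flagged_rides():
    # Silver layer: consume the bronze stream and filter suspicious rides
    df = dlt.read_stream("taxi_raw_records")
    return (
        df.withColumn("week", date_trunc("week", col("tpep_pickup_datetime")))
          .where(((col("pickup_zip") == col("dropoff_zip")) & (col("fare_amount") > 50))
                 | ((col("trip_distance") < 5) & (col("fare_amount") > 50)))
          .select("week", col("pickup_zip").alias("zip"), "fare_amount", "trip_distance")
    )

As in the SQL version, the framework infers the dependency between the two tables from the read, rather than from an external orchestrator.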

While the Apache Spark changes are not yet released, many articles already describe the experience of engineers using Databricks DLT. In an article in Medium titled “Why I Liked Delta Live Tables in Databricks,” Mariusz Kujawski describes the features of DLT and how they can best be used: “With DLT, you can build an ingestion pipeline in just a few hours, compared to the days required to develop a custom framework. Additionally, built-in data quality enforcement provides an extra layer of reliability.”

In addition to a declarative syntax for defining a pipeline, Spark Declarative Pipelines also supports change data capture (CDC), batch and stream logic, built-in retry logic, and observability hooks.

Declarative Pipelines is in the process of being merged into the Spark project. The feature is planned for the next Spark release, 4.1.0, which is expected in January 2026. Progress can be followed in the Apache Spark Jira project under ticket SPARK-51727.


3 Volatile Stocks in the Doghouse – StockStory

MMS Founder
MMS RSS



Adam Hejl

2025/07/03 12:32 am EDT


Volatility cuts both ways – while it creates opportunities, it also increases risk, making sharp declines just as likely as big gains.
This unpredictability can shake out even the most experienced investors.

Navigating these stocks isn’t easy, which is why StockStory helps you find Comfort In Chaos. Keeping that in mind, here are three volatile stocks best left to the gamblers and some better opportunities instead.

MongoDB (MDB)

Rolling One-Year Beta: 1.74

Started in 2007 by the team behind Google’s ad platform, DoubleClick, MongoDB offers database-as-a-service that helps companies store large volumes of semi-structured data.

Why Do We Think Twice About MDB?

  1. Historical operating margin losses show it had an inefficient cost structure while scaling
  2. Lacking free cash flow generation means it has few chances to reinvest for growth, repurchase shares, or distribute capital

MongoDB’s stock price of $205.25 implies a valuation ratio of 7x forward price-to-sales. Read our free research report to see why you should think twice about including MDB in your portfolio.

Carrier Global (CARR)

Rolling One-Year Beta: 1.20

Founded by the inventor of air conditioning, Carrier Global (NYSE:CARR) manufactures heating, ventilation, air conditioning, and refrigeration products.

Why Are We Hesitant About CARR?

  1. Organic revenue growth fell short of our benchmarks over the past two years and implies it may need to improve its products, pricing, or go-to-market strategy
  2. Free cash flow margin shrank by 5.9 percentage points over the last five years, suggesting the company is consuming more capital to stay competitive
  3. Diminishing returns on capital suggest its earlier profit pools are drying up

Carrier Global is trading at $75 per share, or 24.5x forward P/E. To fully understand why you should be careful with CARR, check out our full research report (it’s free).

Crane (CR)

Rolling One-Year Beta: 1.23

Based in Connecticut, Crane (NYSE:CR) is a diversified manufacturer of engineered industrial products, including fluid handling, and aerospace technologies.

Why Do We Steer Clear of CR?

  1. Absence of organic revenue growth over the past two years suggests it may have to lean into acquisitions to drive its expansion
  2. Demand will likely be soft over the next 12 months as Wall Street’s estimates imply tepid growth of 6.3%
  3. Earnings per share were flat over the last five years and fell short of the peer group average

At $192.42 per share, Crane trades at 34x forward P/E. Dive into our free research report to see why there are better opportunities than CR.

Stocks We Like More

The market surged in 2024 and reached record highs after Donald Trump’s presidential victory in November, but questions about new economic policies are adding much uncertainty for 2025.

While the crowd speculates what might happen next, we’re homing in on the companies that can succeed regardless of the political or macroeconomic environment.
Put yourself in the driver’s seat and build a durable portfolio by checking out our Top 5 Growth Stocks for this month. This is a curated list of our High Quality stocks that have generated a market-beating return of 183% over the last five years (as of March 31st 2025).

Stocks that made our list in 2020 include now familiar names such as Nvidia (+1,545% between March 2020 and March 2025) as well as under-the-radar businesses like the once-micro-cap company Kadant (+351% five-year return). Find your next big winner with StockStory today for free.


Experiences from Using AI as a Software Architect

MMS Founder
MMS Ben Linders

Artificial intelligence excels at refining language and processing large text volumes, but lacks human-like contextual reasoning and emotional intelligence, Avraham Poupko said. Many human traits come into play when doing software architecture. As an architect, he suggests using AI for exploring tradeoffs and refining language with clarity and precision.

Avraham Poupko shared his experiences from using artificial intelligence in his daily work as a software architect at OOP conference.

Software architects and system architects will not be replaced anytime soon by generative artificial intelligence (AI) or large language models (LLMs), Poupko explained in How a Software Architect Uses Artificial Intelligence in His Daily Work. They will be replaced by software architects who know how to leverage generative AI and LLMs, and just as importantly, know how NOT to use generative AI.

AI is helpful when exploring tradeoffs, or when trying to find the right way to say something with precise language as in using the correct word and sentence structure, Poupko mentioned. As a language model, AI is very helpful and provides a great deal of clarity:

As an experiment, I provided Gemini with the above paragraph and asked it to suggest an alternate wording. The result is:

Gemini>>”While AI cannot yet replace human architects due to the deep system, domain, and organizational knowledge required for effective design, it’s a valuable tool for exploring tradeoffs and refining language, providing clarity and precision in documentation.”

While the paragraph provided by Gemini might be more concise, it still needs some work. For example, notice the word “yet” that Gemini added to the sentence. “I do not think it belongs there at all,” Poupko said. If one can be offended by an LLM, he is offended by that, he added.

As humans, we are very good at “contextual reasoning”: we are good at taking a great deal of context, as well as previous experiences, into account when deciding on optimal behaviour, Poupko explained:

We know how to read a situation and body language, and gauge appropriate behaviour. That is an area where AI is currently very poor.

AI is most useful in tasks that involve written language. When it comes to reading a large amount of text and using the knowledge encoded in those texts, we might find that AIs are better than humans, Poupko said.

As an architect, he has learned that so many of his human traits come into play when he does software architecture, Poupko explained:

I learned about the importance of my ability to communicate. I learned about the importance of my Theory of Mind, meaning my ability to create a mental representation of other people’s mental model.

AI has made it clear to me that true “understanding” goes beyond understanding texts and technology. It is about understanding context and people, Poupko concluded.

InfoQ interviewed Avraham Poupko about using AI as a software architect.

InfoQ: How can we leverage our human strengths in a world enhanced by AI?

Avraham Poupko: Where AI is good at encoding knowledge from a large amount of text, humans are better at contextual reasoning.

For example, I provided the following prompt to Gemini:

Avraham>> I am building a web based AI agent. That knows how to take a problem or challenge from the user and use AI to ask clarifying questions, and then use AI to respond. Can you provide some system requirements for me to get started with?

The answer I got from Gemini was useful, and it can be seen that it collected a great deal of knowledge from a great deal of sources:

Gemini>>

Choose a simple framework: Start with a lightweight frontend framework like React or Vue.js, and a backend framework like Flask.

Select pre-trained models: Utilize pre-trained models for NLP tasks from Hugging Face Transformers or other libraries.

Build a basic API: Create a simple API endpoint to handle user input and return basic responses.

Develop a basic UI: Create a basic web page with input and output fields.

Iterate and improve: Gradually add more features, refine the user interface, and improve the AI capabilities over time.

These are all good and helpful answers that are certainly appropriate.

When I asked the same question to a good friend who knows me well, she answered quite differently:

Friend>>Avraham, you know, that is an interesting idea that touches on the relationship between humans and LLMs. I think a good place for you to start would be to clarify to yourself exactly what will your "agent" do and what the LLM will do. I think you should start on your local PC. Don’t do any development. Spend a lot of time investigating how you can get the LLMs to ask provoking questions that will clarify your own thinking. If you can do some of that before OOP in Munich that would be cool. You can present findings in the talk, and maybe get some good feedback or partnerships. By the way - is the word "agent" the right word here? Maybe use something cool like "pre-thinker" or something.

Between my friend’s answer and the AI’s answer, I was able to have a really interesting conversation about the practical and theoretical aspects of using LLMs and what the role of an agent might be in the interaction.

InfoQ: What have you learned about generative AI’s abilities and limitations?

Poupko: I learned that AI is constantly able to surprise me. That AI is sometimes astonishingly good at gathering and articulating known information and applying it to a new situation.

However, conversing with AI is not the same as conversing with a human. AI is not curious about the world, and will never be my “friend” in the sense that we can share a common sense of value or of purpose.


Presentation: Security or Convenience – Why Not Both?

MMS Founder
MMS Dorota Parad

Transcript

Parad: I would really like you to imagine something. Imagine that you’re an engineer who is on call, it’s Saturday evening and you get paged. There is a critical issue in one of your team’s services, multiple customers affected. You know the service, you wrote the code, so you hope this one will be quick. You start the company-issued laptop. You really dislike this machine. It’s slow, clunky. You don’t like the operating system. You can’t even install an ad blocker for your browser. You log in, type in the password, your password is older than 90 days, please change it. You set up a new password, incrementing the last digit as one does, you’re in. You go to your incident management portal. This website is not available. You’re not in the office, so you need to start the VPN. You do that, it fails to connect, wrong password. Ok, you try again, you’re in. You look at the incident report, the issue is actually really well described. There’s even an error with full stack trace.

Then you read that the issue is intermittent, so you lose some of that enthusiasm. You don’t have access to production, you can’t reproduce the issue locally, which means you’ll have to go to the prod test environment, which is supposed to be exactly like production, but for developers to test in. It’s never a good time. You log in into the environment, obviously it’s a mess. Your service doesn’t even work correctly, it’s misconfigured. You know this one, it’s an easy fix. It’s just a few tweaks in your cloud console, configuration change. You go and do that. You log in to your cloud console, you have no permissions to access that resource again. You spend the next half hour rummaging through the console, and finally you actually manage to grant yourself those missing permissions. You carry on. You hack the prod test environment, you start investigating. You track the issue down to one of the libraries that you’ve been using. This is strange, the code should really work. You try googling.

The first result from Stack Exchange seems promising. You click, this website is dangerous, it has been blocked. You’re still on VPN. You disconnect that, carry on. It turns out that the library has been automatically upgraded to the latest version, which should be a good thing. It’s an easy fix. You make the necessary changes. You create a pull request, and the build fails. Of course, it fails. You didn’t expect it to work the first time. It’s your vulnerability scanner throwing some of those silly errors that everyone just keeps ignoring, so you do that. You jiggle the build, it passes. Time to deploy, and it fails. Of course, it fails.

The connection to your credentials vault in test environment times out, as it often does, so you retry and retry and retry, and it works. Your fix is live. Since you hacked the prod test and you still have no access to production, you actually can’t be really sure if you’ve fixed the issue for good. You reluctantly close the laptop and hope for no more alerts till Monday. If we look at this whole situation, we can see that the actual valuable productive work was really easy and straightforward. We have encountered a lot of tiny, yet annoying obstacles along the way that slowed us down at every corner, broke our flow. Those are only some of the examples of productivity sacrifices that companies make in the name of security.

Background

I’m Dorota. I work at Authress, which is an authentication and authorization API. If you’re making software, you want your users to log in or have different permissions and roles. Instead of coding those bits yourself, you can use our tool. Based on what Authress does, dealing with identity and access, you can imagine that our customers take it for granted that we’re secure. We’re also a Swiss-based company, which means we’re actually under legal obligation to follow the principles of privacy by default, and privacy by design, which actually means security by default. We’re also a private for-profit company. We’re not a startup, but we happen to be a very small company. This means that every one of our engineers has to be happy and productive. We really can’t afford to implement all those security best practices, the ones that make your engineers’ lives a little bit harder. We really had to find a different way to be secure by default.

Traditionally, and it is still a very common mindset among security professionals, we tend to think about security as building this impenetrable citadel, this bastion, so well defended that no one can get in. No one. Not even our own employees. Building such citadels is difficult and they’re awkward in everyday use. What we often end up with is something that’s more like this, security theater, or as I like to call it, security circus.

Making Security BLISS

What if, instead of building that citadel or settling for some bouncy castle, what if we could make security BLISS? BLISS is a framework that will help you identify how to replace some of those annoying, cumbersome security practices with ones that don’t impede productivity. Let’s see what that means. B stands for Bulkheads. It is a feature on all submarines, maybe not Titan, that when a section of a ship gets some catastrophic failure, it gets sealed off so that the problem doesn’t spread. We can apply the same principle in software security. When part of a system gets compromised, it should be possible to seal that part off or remove it so that the problem is contained. Think of it as reducing the blast radius. How well we can apply this principle is going to be severely constrained by our architecture choices.

If we’re using microservices with each service deployed independently and services communicating solely through REST APIs, then the service boundaries are our natural bulkheads. It’s really easy. If we have a monolith, we have to get a little more creative with how we apply bulkheads. Things like deploying separate instances for different customers or using containers are good examples. Regardless how we do it, we want to have those bulkheads spread throughout our systems and processes. L stands for Levels. What are we protecting? Not all data is equally sensitive and not operations equally critical. There are different levels of risk. It makes no sense to put a single thick wall in front of everything. We should have some gradation, different levels in our protection strategy so that we can add a little bit of protection for less important things and a little bit more on top and a little bit more, so on, proportionate to the level of risk.

I stands for Impact. When we think about cybersecurity, the first thing that tends to come to mind is preventing the attacks, making it less likely that we get breached. This was a very good way of thinking in the early days of the internet. Nowadays, when you have something out there, it is getting compromised. It’s only a matter of time. Instead of putting all our efforts towards making it less likely that an attack will succeed, we should try to limit the impact first. We get compromised, so what? Do we have bulkheads in place to limit the blast radius? What can the attackers do with their access? Do we have more levels of protection further down the line? What’s the worst that can happen? How can we lessen that impact? The first S stands for Simplicity. If we put practices in place that are difficult to follow, people will find workarounds.

If we keep adding tools to our tool chain, that’s more vulnerabilities to patch. If you need five years of experience in security in order to even understand our security strategy, who’s going to execute on it? As a general rule, the simpler something is, the easier it is to secure. Notice that there is some tension between this and the other principles I’ve listed. Adding bulkheads means adding complexity, because now we’re no longer dealing with one thing, we have multiple things. On the other hand, what is simpler? One big system that does everything or multiple systems that do one thing each? Depends. Same goes for levels. Different levels of protection make things more complicated.

On the other hand, having those levels lets us keep things simple for hopefully majority of cases. We want simple, not simplistic. The best way to think of it is whenever there’s a tradeoff, we should opt for the simpler option. The last S is my favorite. It’s creating a Pit of Success. It should be so easy to do the right, the secure thing, that it happens almost by accident that we fall into that pit of success. To put it differently, doing the wrong thing should be annoying and cumbersome. It should require us going out of our way. The default option, the one we take without thinking, should be the right one. We should be really thoughtful about where we place our obstacles. Ideally, the obstacles should be reserved for the attackers and for things that are extra risky, that we only do rarely. To make security BLISS, we have bulkheads to reduce the blast radius. We have different levels of protection. We focus on minimizing the impact of the incidents. We keep things simple. All of that lets us create a pit of success.

Applying the BLISS Principles to Resolve Productivity Issues

Let’s have a look at the situation that I asked you to imagine in the beginning and see how we can apply these principles to resolve all those little productivity issues without sacrificing actual security. First thing to get in our way was our machine. It needs a password to start. The password has to be changed. It’s slow, clunky. We don’t like the operating system. We couldn’t even configure the device in the way we wanted. That tends to affect our productivity. This happens all the time. Companies not only restrict our hardware and operating system, they also install mobile device management, MDM software. If you’re really lucky, you may not notice it’s there.

Most of the time, your device will be slightly slower. The battery will drain faster. You may be prevented from installing little quality of life tools that help you, or an update may be pushed on you in the most awkward moment. That is a tiny but constant drain on our productivity. If you’re really unlucky, you may not even be able to do any work at all. Anyone remembers CrowdStrike? What are we protecting? Is our laptop really that special? The reason why companies do that lies in this old-fashioned approach to security, the idea that we can build that citadel, secure the perimeter to keep our assets secure. We have to stop the attackers at all cost. It is costly, not only having to manage those devices, it costs us engineers’ productivity.

The worst of all is that it actually doesn't do that much for security. It reminds me of this old XKCD comic. The reality is that most successful attacks happen because of phishing. Restricting our operating system or installing MDM software doesn't protect us from phishing. We know that because companies that do it get phished all the time.

Maybe instead of trying so hard to protect that device and paying the productivity tax, maybe we should try and minimize the impact instead. Our machine gets compromised, so what? What’s on that laptop that could be valuable to the attacker? Let’s see. We have emails, chat history, maybe some internal documents. If we make it easy to store and access these in the cloud, we should have no local copies. Unless you’re an executive or you work in HR, your emails aren’t really all that interesting. What else do we have? Credentials. What? If we force our engineers to do annoying things like connect to VPN every time they’re not in the office, some credentials will inevitably end up part of some script in plain text on that machine. Don’t ask how I know. If we create a pit of success everywhere where credentials are involved, this shouldn’t happen. Let’s keep that in mind. What else is on the machine that’s precious? Source code.

If I want to do development on the machine without pulling all my hair out, I have to save that Git SSH key, which means whoever has my device has access to see all the source code that I can see. What’s more? They can make commits. No. Obviously, we could then double down on protecting our device or do even something more convoluted like always only use virtual machines. That’s a lousy tradeoff.

Let's stop and think for a moment, what are we protecting? The attackers are almost never after the source code itself. It doesn't have inherent value. Your code is actually not that special. What they're after is credentials, data, and your production environment. Maybe instead of trying so hard to prevent our source code from leaking, we should look at the impact instead. I have some good news, everyone. If you are doing all of the usual, common-sense continuous integration and continuous delivery practices, you are already sufficiently minimizing the security risks of those commits coming from a filthy, unprotected machine. If you are doing CI/CD, I bet you are protecting your code on multiple levels already. Here's the thing: this is a very standard, basic, cookie-cutter CI/CD pipeline. We write and commit some code, we create a pull request, some tests run, and it gets merged and deployed to production.

First step, that’s the least risky one. We have a little bit of protection here. Does every engineer need access to all our repositories? If we have a monorepo, we already made our tradeoffs. Otherwise, let’s restrict access to only those repositories that are relevant for this particular engineer at a given time. In order for those potentially malicious commits to end up in our production, we actually have to go ahead and create a pull request, which typically involves going to some UI, GitHub, GitLab, whatever you’re using, clicking some buttons. I can only do that if I provide my credentials. We not only need to have the device, we also need credentials.

That’s not all, assuming all the tests pass, typically, under normal setup, in order for that pull request to get merged, we have another person do a code review and approve, which means another engineer will have to go to that UI and click some buttons. They need to provide their credentials. That is a very powerful way to minimize security risks. Have another human scrutinizing what’s going on. We do it all the time.
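A minimal sketch of wiring that level in, assuming GitHub's branch protection REST API; the organization, repository, and token values are placeholders:

# Sketch: enforcing the "second human" level with GitHub's branch
# protection REST API. Org, repo, and token values are placeholders.
import requests

resp = requests.put(
    "https://api.github.com/repos/example-org/example-repo/branches/main/protection",
    headers={
        "Authorization": "Bearer YOUR_TOKEN",
        "Accept": "application/vnd.github+json",
    },
    json={
        # At least one other engineer must approve every pull request.
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        # The build must pass before merging.
        "required_status_checks": {"strict": True, "contexts": ["build"]},
        # No exceptions, not even for admins.
        "enforce_admins": True,
        "restrictions": None,
    },
)
resp.raise_for_status()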

For all of that to work, every single commit in existence has to go through our pipeline. No exceptions. No manual tinkering, no logging on to server to copy files. Which means our builds have to be reliable. No random errors that force our engineers to go in and manually push the build through. Here’s a kicker, investing in hardening our CI/CD pipelines is actually an investment in security. If you need any more excuses, just do it for security. We really want to keep our pipelines simple so that there are fewer things to break. Anything that can run as an async process, that’s out of the pipeline, so code coverage analysis, vulnerability scans.

Now that we have looked at the impact of the engineering machine getting compromised and applying multiple levels of protection to our code, we should be able to loosen up our grip on managing our device. That means, no need for a complex password to start. If a device supports biometric login, we’re going to use that. Otherwise, simple password, fully sufficient. We are not going to install MDM software or manage the device in any way. We just don’t trust any device. We’re also not going to restrict the operating system in any way. If someone wants to run Arch and feel smug about it, let them. If you’re really paranoid, you can always force full drive encryption. Most of the time, that’s an overkill. This is, in fact, fully sufficient from a security standpoint for a vast majority of businesses out there.

We’ve liberated our device. What happened next was we tried to access our incident management portal and we couldn’t. We had to be on our internal corporate network, which in this case required a VPN. You can’t just secure a corner of an internet and think it’s good. That’s the citadel mindset all over again. We’re not going to do that. We’re not going to do that because it encourages bad behaviors, like trusting someone without credentials just because their IP address matches. Or engineers storing their credentials in plain text to connect automatically. It really sets our engineers up for a failure. Let’s turn that into a pit of success.

Instead of relying on an IP address or every tool’s internal password database, we’re going to do something very obvious. We’re going to rely on corporate identity provider to authenticate through single sign-on. I know this is obvious. If you’re writing internal tools, no matter how small or insignificant, just add that SSO capability. It’s super easy to do nowadays. Doesn’t require much work. If you’re managing third-party tools, same story, enable SSO. It is so worth it. Single sign-on is a good developer experience, because I only need to enter my password once when I start my workday. The only other time to do that is when I do something risky that requires another level of protection. If our engineers are used to typing in their passwords all the time, they’re more susceptible to phishing.
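As one possible illustration of how little work this is, here is a rough sketch of adding OIDC single sign-on to a small internal tool, assuming Flask with the Authlib library; the issuer URL and client credentials are placeholders for whatever your corporate identity provider hands you:

# Rough sketch: OIDC single sign-on for a small internal Flask tool,
# using Authlib. Issuer URL, client id, and secret are placeholders.
from authlib.integrations.flask_client import OAuth
from flask import Flask, redirect, session, url_for

app = Flask(__name__)
app.secret_key = "replace-me"

oauth = OAuth(app)
oauth.register(
    name="corp",
    server_metadata_url="https://sso.example.com/.well-known/openid-configuration",
    client_id="internal-tool",
    client_secret="...",
    client_kwargs={"scope": "openid email profile"},
)

@app.route("/login")
def login():
    return oauth.corp.authorize_redirect(url_for("auth", _external=True))

@app.route("/auth")
def auth():
    token = oauth.corp.authorize_access_token()
    session["user"] = token["userinfo"]  # identity comes from the IdP, not a local password DB
    return redirect("/")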

Speaking of passwords, they are annoying. Not only we have to memorize them, it takes time to type them in. If we add more rules on top of that, like uppercase, lowercase, numbers, special characters, and we also force people to change their passwords on the regular, what’s going to happen is that password ends up on a sticky note right next to my keyboard, or it gets reused. Probably both. That doesn’t look like a pit of success to me. If your company requires any arbitrary password complexity rules for anything or requires people to change their passwords on a timer, I hope no one does it, but if you do that, find whoever is responsible for that process and tell them to stop. Cancel. Don’t do that. Don’t mess with people’s password selection strategy and don’t force anyone to ever change their password unless you suspect they’ve been compromised. Please, whatever you do, don’t be that company.

My theory is that this got rejected because it is actually a valid Perl program. I could be wrong. I hope what I'm saying is really nothing new to you, because it's been in the NIST guidelines since at least 2017. The guidelines are currently under revision. The new version goes even further: the “SHOULD NOT” gets changed to “SHALL NOT”. Here's a link. Forward it to whoever needs to see it. When setting up passwords, let's create a pit of success for ourselves. Does everyone remember the Correct Horse Battery Staple password strategy? Here's a reminder. The idea is to make the password easy to memorize but difficult to guess.

The single most relevant factor determining password strength is its length. We're going to make it super long: 30 characters, 60 characters, maybe more. We are going to be kind to ourselves and pick words or phrases that make sense to us. Here are some examples of very good, very strong passwords. As you can see, all of them are over 30 characters long. They're words and phrases that make sense, at least to me. Good luck guessing them. One could argue that you should not use just a string of plain English words like the first one, but it is still so much better than writing the password down or reusing it. Better still, misspell words. Invent words. Mix languages. Just have fun.
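To make the strategy concrete, here is a minimal sketch of generating such a passphrase in Python; the tiny wordlist is illustrative, and any large wordlist works:

# Minimal sketch: a long, memorable "correct horse battery staple"
# style passphrase. The ten-word list is illustrative; use a large
# wordlist (for example the EFF diceware list) in practice.
import secrets

WORDS = ["correct", "horse", "battery", "staple", "gondola",
         "pretzel", "maple", "thunder", "velvet", "cactus"]

def passphrase(n_words: int = 5) -> str:
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. "velvet cactus thunder maple pretzel"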

Should Engineers Have Access to Production?

We’ve created a pit of success around accessing our internal corporate resources which incidentally also freed us to browse the web however we want. Next, we tried to reproduce our problem, investigate our issue, and we stumbled because we had no access. Now I am curious, what do you think? Should engineers have access to production? Who thinks they should have no access to production? This is a common misconception among security people because no one should have access to production. No one.

In fact, there’s really nothing wrong with testing in production. I, in fact, think everyone should do it. As long as we have exactly the same kind of access as our customers. Think about it. Our customers are isolated from each other. One customer cannot mess another customer’s account. We have bulkheads in place, we’re good. What engineers really should have no access to is customer data, which includes any copies, and production infrastructure. At the same time, we need to be able to test our stuff, which means messing with data or infrastructure directly sometimes. This implies different levels of access depending on the environment and the situation. Access control is hard. From a security standpoint, we want to give everyone the least amount of access possible, least privilege. From a productivity standpoint, we don’t ever want to block anyone from doing what they need to. Fortunately, all cloud providers give us a very granular way to define our permissions.

Unfortunately, that almost always ends up in a tangled mess of rules and policies, and no one really knows what’s going on. This is a real-life example. You always end up with either too much access or not enough. If you ever played with IAM in your cloud account for a big environment, you know what I’m talking about. This is a problem, unless we apply bulkheads. Every cloud provider lets us attach multiple accounts to a single building account. Let’s use that. Let’s have multiple cloud accounts. How many? That depends on your architecture and team structure. It’s probably more than you think.

At the very least, we want to have two accounts per team. One exclusively for production with no manual access whatsoever, and one for testing. Really, if your software is modular, whatever can get deployed separately, that’s a single production cloud account. It’s not enough to have one big account for everything. Split it. This is also what all cloud providers recommend. That way, because our accounts are smaller, we can keep our access controls simple. Not only we see what’s going on, we can give everyone just the right level of access, not more. If one account gets compromised because our engineer got phished, we can just terminate that account, spin up a new one, and hopefully we have backups. Speaking of permissions and access, when defining your permissions and roles, carefully consider who can grant access. It should be different than who has access.

All of this security thing I’m talking about means nothing if everyone can just go in and grant themselves full admin rights. By applying bulkheads, we can now test in production safely and mess with our test infrastructure if we need to.

We also previously talked about how our CI/CD pipeline protects our code on multiple levels, and for that, it has to be robust, and we should basically keep it simple. There’s one more trick I would like to share, because not many people know about it, and it is a great illustration of how simple means more secure. We said all the code has to go through the pipeline, no manual access to production infrastructure, but our production software sometimes needs to authenticate with third parties like cloud providers or payment providers and whatnot. How do we get the credentials into our production, automatically, without exposing those credentials along the way? One still very common way is to use environment variables that get stored in our Git server secrets and injected into our pipeline and passed along our application control plane.

That is a lot of places where we’re potentially exposing our credentials to parts of our infrastructure that don’t even need to know about it. A much better way is to use some secrets vault, usually a third party that stores the secrets securely, and that gets accessed by our production. That is much better, already very good. There are few moving pieces, few places to break, but we’re still exposing our credentials to that third party, and they get stored in their internal database, which can get compromised. This may be good enough, very little risk there, but we can simplify it even further and make it even more secure. What we’re going to do is instead of a secrets vault, we’re going to use a key management service, which is something every cloud provider offers. It’s a very simple piece of technology, no database involved whatsoever. We’re going to use our KMS to encrypt our credentials, and we are going to store our credentials in Git, encrypted.

At runtime, our application will basically call the KMS to decrypt the credentials and get access. That way, if our source code leaks, because we’re working on an unprotected machine, the credentials are encrypted, no big deal. If our pipeline gets compromised, there are no credentials anywhere. This is really simple and also secure.
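Here is a rough sketch of that pattern, assuming AWS KMS via boto3; the key alias and the credential value are placeholders:

# Rough sketch of the pattern: credentials live in Git encrypted, and
# only production can decrypt them. Key alias and credential are
# placeholders; assumes AWS KMS via boto3.
import boto3

kms = boto3.client("kms")

# Done once, by an operator: encrypt the credential and commit the
# ciphertext to Git. It is useless to anyone without Decrypt
# permission on the key.
ciphertext = kms.encrypt(
    KeyId="alias/prod-app-secrets",
    Plaintext=b"third-party-api-key",
)["CiphertextBlob"]

# Done at runtime, in production only: decrypt just before use. The
# ciphertext embeds the key id, so no database or vault is involved.
plaintext_key = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]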

Accounting for Emergencies

Throughout this process, we have removed some obstacles, but we actually put other ones in place. We said all the code has to go through our pipeline, no manual tinkering with infrastructure. We also added requirements for merging. What if you are that poor engineer who is on call Saturday night, and the world is on fire? Sometimes, should be rare, but sometimes we have to do something out of band. We have to deploy an emergency patch. We have to manually change the database configuration. I say we have to because something horrible is happening. Maybe we are hard down. Maybe we’re bleeding tons of money every minute. We need to have some way of bypassing all our security mechanisms in an emergency. We have to be careful here because we don’t want this process to be ripe for abuse.

If it’s easier to do development this way, everyone will do it. Here’s where we have to actually think about our obstacles. We want to have that emergency path to be easy to follow because it’s an emergency. The engineer is stressed already, but it should be really cumbersome to trigger. Maybe it’s an email to a group. Maybe we write a short essay on what’s happening and why. Maybe there’s some extra annoying buttons to click in some UI to get a temporary elevation to our permissions. Maybe we always have at least two people on call. Regardless how we set it up, we want to have a full audit trail, what happened, when? We want to look at that, review it later. Without this formal pre-thought emergency process, engineers would get creative under stress and we will end up with backdoors all over the place.

Security Mandates

I talked a lot about replacing or changing your security practices. Some of those changes may be fully within your sphere of control. Other practices may actually be mandated to you by someone way above your pay grade with security in their job title. That doesn’t mean we’re stuck with those mandates. When companies hire a chief information security officer and invest in a security organization, it is actually almost never about security. These people are there to ensure compliance, which is a very important role, but it has nothing to do with security.

In fact, these two sometimes are at odds with each other. This is one of my favorite recent examples. This happens all the time. The person asks to remain anonymous, but it is a company that I bet you all heard about. The job of a CISO is to convince third-party stakeholders that your company does a good enough job minimizing security risks. Those stakeholders can come from different places. There are three main compliance requirement sources: we have certifications, we have regulations, and we have insurance policies. Stakeholders representing these, so auditors, government officials, insurance salespeople, they have no idea about your business. They don’t know how software is made. They may not even have any technical knowledge. Your CISO needs to convince them that you’re secure. That’s a really tough job. It may be tempting to optimize for the ease of audit. It may be tempting to just copy what someone else did that got them to pass the audit, even if it makes no sense in your situation. That’s not the only way.

Contrary to the popular opinion, none of those compliance requirement sources are prescriptive in their nature. It really is up to us to define the scope, the means, and the implementation. Another misconception is that we need to provide some proof that we’re doing the right thing. It is never about a proof. You can’t prove anything when humans and computers are involved. What it really is about is crafting a convincing narrative. This is what your CISO is after, solid, well-structured narrative that will convince the third parties.

If you want to help your CISO, document how you think about security within your area. Do it for yourselves and for your new hires; only you know your system best. Describe your bulkheads. Talk about how you classify different parts of your system and what levels of protection you have. Explain what you're doing to minimize the impact of incidents. Keep this narrative simple, in plain English, so that any non-technical person can understand it. With that, you really should be able to push back on a lot of those pesky, annoying security practices. With that, your security can also be BLISS.


KBC Group NV Raises Stock Holdings in MongoDB, Inc. (NASDAQ:MDB) – MarketBeat

MMS Founder
MMS RSS

KBC Group NV grew its stake in shares of MongoDB, Inc. (NASDAQ:MDB) by 20.4% in the 1st quarter, according to the company's most recent Form 13F filing with the Securities & Exchange Commission. The fund owned 21,766 shares of the company's stock after purchasing an additional 3,687 shares during the period. KBC Group NV's holdings in MongoDB were worth $3,818,000 as of its most recent SEC filing.

A number of other hedge funds and other institutional investors have also modified their holdings of MDB. Strategic Investment Solutions Inc. IL purchased a new position in shares of MongoDB during the 4th quarter valued at $29,000. Coppell Advisory Solutions LLC increased its position in MongoDB by 364.0% in the 4th quarter. Coppell Advisory Solutions LLC now owns 232 shares of the company’s stock worth $54,000 after purchasing an additional 182 shares during the last quarter. Smartleaf Asset Management LLC raised its stake in shares of MongoDB by 56.8% during the 4th quarter. Smartleaf Asset Management LLC now owns 370 shares of the company’s stock worth $87,000 after purchasing an additional 134 shares in the last quarter. J.Safra Asset Management Corp lifted its holdings in shares of MongoDB by 72.0% during the 4th quarter. J.Safra Asset Management Corp now owns 387 shares of the company’s stock valued at $91,000 after buying an additional 162 shares during the last quarter. Finally, Aster Capital Management DIFC Ltd bought a new stake in MongoDB in the fourth quarter worth $97,000. Institutional investors and hedge funds own 89.29% of the company’s stock.

Analyst Ratings Changes

Several equities analysts recently weighed in on the company. JMP Securities reissued a “market outperform” rating and set a $345.00 price objective on shares of MongoDB in a research report on Thursday, June 5th. Stifel Nicolaus reduced their price target on shares of MongoDB from $340.00 to $275.00 and set a “buy” rating on the stock in a research report on Friday, April 11th. The Goldman Sachs Group lowered their target price on MongoDB from $390.00 to $335.00 and set a “buy” rating for the company in a research note on Thursday, March 6th. Truist Financial cut their price target on MongoDB from $300.00 to $275.00 and set a “buy” rating on the stock in a research note on Monday, March 31st. Finally, Loop Capital cut shares of MongoDB from a “buy” rating to a “hold” rating and decreased their price objective for the company from $350.00 to $190.00 in a report on Tuesday, May 20th. Eight analysts have rated the stock with a hold rating, twenty-five have issued a buy rating and one has given a strong buy rating to the company. Based on data from MarketBeat, the stock has a consensus rating of “Moderate Buy” and an average target price of $282.47.


Insider Activity

In other news, Director Hope F. Cochran sold 1,174 shares of the stock in a transaction dated Tuesday, June 17th. The shares were sold at an average price of $201.08, for a total value of $236,067.92. Following the completion of the transaction, the director owned 21,096 shares of the company’s stock, valued at approximately $4,241,983.68. This represents a 5.27% decrease in their ownership of the stock. The transaction was disclosed in a legal filing with the Securities & Exchange Commission, which can be accessed through the SEC website. Also, Director Dwight A. Merriman sold 2,000 shares of the business’s stock in a transaction on Thursday, June 5th. The stock was sold at an average price of $234.00, for a total value of $468,000.00. Following the sale, the director owned 1,107,006 shares in the company, valued at $259,039,404. This represents a 0.18% decrease in their position. The disclosure for this sale can be found here. In the last ninety days, insiders sold 28,999 shares of company stock worth $6,728,127. 3.10% of the stock is currently owned by corporate insiders.

MongoDB Trading Up 3.2%

MDB stock traded up $6.62 during mid-day trading on Thursday, hitting $211.05. 1,522,935 shares of the company were exchanged, compared to its average volume of 1,957,715. The business has a 50-day simple moving average of $194.66 and a 200-day simple moving average of $215.72. The stock has a market capitalization of $17.25 billion, a PE ratio of -185.13 and a beta of 1.41. MongoDB, Inc. has a 1-year low of $140.78 and a 1-year high of $370.00.

MongoDB (NASDAQ:MDB) last released its quarterly earnings results on Wednesday, June 4th. The company reported $1.00 EPS for the quarter, beating the consensus estimate of $0.65 by $0.35. The business had revenue of $549.01 million for the quarter, compared to the consensus estimate of $527.49 million. MongoDB had a negative net margin of 4.09% and a negative return on equity of 3.16%. MongoDB's revenue was up 21.8% compared to the same quarter last year. During the same period in the prior year, the company posted $0.51 earnings per share. Equities research analysts expect that MongoDB, Inc. will post -1.78 earnings per share for the current year.

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


[Chart: Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)]



MongoDB, Inc. (NASDAQ:MDB) Holdings Increased by Sumitomo Mitsui Trust Group Inc.

MMS Founder
MMS RSS

Sumitomo Mitsui Trust Group Inc. increased its stake in MongoDB, Inc. (NASDAQ:MDB) by 8.2% in the 1st quarter, according to its most recent Form 13F filing with the Securities and Exchange Commission (SEC). The institutional investor owned 211,534 shares of the company's stock after acquiring an additional 16,091 shares during the period. Sumitomo Mitsui Trust Group Inc. owned approximately 0.26% of MongoDB, worth $37,103,000 at the end of the most recent reporting period.

Other hedge funds have also made changes to their positions in the company. Diversified Trust Co boosted its stake in MongoDB by 117.8% during the first quarter. Diversified Trust Co now owns 15,071 shares of the company’s stock valued at $2,643,000 after buying an additional 8,152 shares in the last quarter. Robeco Institutional Asset Management B.V. boosted its position in shares of MongoDB by 13.2% during the 1st quarter. Robeco Institutional Asset Management B.V. now owns 19,449 shares of the company’s stock valued at $3,411,000 after purchasing an additional 2,270 shares in the last quarter. WoodTrust Financial Corp grew its stake in shares of MongoDB by 21.2% in the 1st quarter. WoodTrust Financial Corp now owns 5,150 shares of the company’s stock worth $903,000 after purchasing an additional 900 shares during the last quarter. Oppenheimer Asset Management Inc. raised its holdings in shares of MongoDB by 41.9% in the 1st quarter. Oppenheimer Asset Management Inc. now owns 7,058 shares of the company’s stock worth $1,238,000 after purchasing an additional 2,085 shares in the last quarter. Finally, TrueMark Investments LLC lifted its stake in MongoDB by 14.0% during the first quarter. TrueMark Investments LLC now owns 6,419 shares of the company’s stock valued at $1,126,000 after purchasing an additional 790 shares during the last quarter. Institutional investors and hedge funds own 89.29% of the company’s stock.

Insiders Place Their Bets

In related news, CEO Dev Ittycheria sold 25,005 shares of the stock in a transaction that occurred on Thursday, June 5th. The stock was sold at an average price of $234.00, for a total transaction of $5,851,170.00. Following the completion of the sale, the chief executive officer owned 256,974 shares in the company, valued at $60,131,916. This represents an 8.87% decrease in their position. The sale was disclosed in a document filed with the Securities & Exchange Commission, which is available through this link. Also, Director Hope F. Cochran sold 1,174 shares of MongoDB stock in a transaction that occurred on Tuesday, June 17th. The stock was sold at an average price of $201.08, for a total value of $236,067.92. Following the completion of the transaction, the director directly owned 21,096 shares in the company, valued at approximately $4,241,983.68. This represents a 5.27% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders sold 28,999 shares of company stock valued at $6,728,127 over the last 90 days. 3.10% of the stock is currently owned by company insiders.

MongoDB Stock Down 0.8%

MDB stock opened at $204.43 on Thursday. The firm has a 50-day moving average of $193.91 and a 200-day moving average of $216.09. MongoDB, Inc. has a 12-month low of $140.78 and a 12-month high of $370.00. The company has a market capitalization of $16.70 billion, a price-to-earnings ratio of -179.32 and a beta of 1.41.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings data on Wednesday, June 4th. The company reported $1.00 earnings per share (EPS) for the quarter, topping analysts' consensus estimates of $0.65 by $0.35. The firm had revenue of $549.01 million during the quarter, compared to analysts' expectations of $527.49 million. MongoDB had a negative net margin of 4.09% and a negative return on equity of 3.16%. MongoDB's revenue for the quarter was up 21.8% on a year-over-year basis. During the same period in the prior year, the company posted $0.51 earnings per share. On average, research analysts expect that MongoDB, Inc. will post -1.78 EPS for the current fiscal year.

Analysts Set New Price Targets

MDB has been the topic of several recent research reports. UBS Group lifted their price target on shares of MongoDB from $213.00 to $240.00 and gave the stock a “neutral” rating in a research report on Thursday, June 5th. Wells Fargo & Company lowered MongoDB from an “overweight” rating to an “equal weight” rating and reduced their price objective for the stock from $365.00 to $225.00 in a research note on Thursday, March 6th. Wedbush reaffirmed an “outperform” rating and set a $300.00 target price on shares of MongoDB in a research report on Thursday, June 5th. Piper Sandler upped their target price on MongoDB from $200.00 to $275.00 and gave the company an “overweight” rating in a report on Thursday, June 5th. Finally, Mizuho decreased their price target on MongoDB from $250.00 to $190.00 and set a “neutral” rating for the company in a research report on Tuesday, April 15th. Eight equities research analysts have rated the stock with a hold rating, twenty-five have issued a buy rating and one has assigned a strong buy rating to the company. According to data from MarketBeat.com, the stock currently has an average rating of “Moderate Buy” and a consensus target price of $282.47.


MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Want to see which other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB).

[Chart: Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)]

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.



Alibaba Cloud reveals DB cluster manager it says can beat rival hyperscalers – The Register

MMS Founder
MMS RSS

Alibaba Cloud has revealed a cluster manager it says allows it to run databases more efficiently than its hyperscale rivals.

The Chinese cloud champ revealed its tech in a paper [PDF] titled “Eigen+: Memory Over-Subscription for Alibaba Cloud Databases” that it presented at the recent SIGMOD/PODS conference, an Association for Computing Machinery (ACM) event dedicated to database research.

The paper opens with an observation that hyperscalers often assign more memory to VMs than is physically available – a technique called “memory oversubscription” that’s used because virtual machines don’t always use all the RAM allocated to them. Memory oversubscription therefore makes it possible to run more guest machines on each host.
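As a back-of-the-envelope illustration of why the trick works and where it bites, consider the toy calculation below; the numbers are invented for illustration and do not come from the paper:

```python
# Toy memory-oversubscription arithmetic (all numbers invented).
host_physical_gb = 512
vm_allocations_gb = [64] * 12  # 12 VMs, each promised 64 GB: 768 GB total

ratio = sum(vm_allocations_gb) / host_physical_gb
print(f"oversubscription ratio = {ratio:.2f}")  # 1.50

# This is safe only while actual usage stays under physical capacity...
typical_usage_gb = [30] * 12
print(sum(typical_usage_gb) <= host_physical_gb)  # True: no OOM

# ...but if enough guests spike toward their full allocation at once,
# demand exceeds the 512 GB that actually exists, and the host OOMs.
spike_usage_gb = [60] * 12
print(sum(spike_usage_gb) <= host_physical_gb)  # False: 720 GB demanded
```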

If that sounds a bit fraught, you're not alone: Alibaba Cloud's researchers worry that memory oversubscription “increases the risk of Out of Memory (OOM) errors, potentially compromising service availability and violating Service Level Objectives.”

Users of memory oversubscription try to avoid such incidents in two ways. One is using historical data to predict future memory usage. The other is bin packing algorithms – an optimization technique used to figure out how to pack differently sized objects into bins of fixed size. Think of them as Tetris, but for fitting workloads into a pool of compute resources.
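The paper does not say which heuristic any given operator uses, but first-fit decreasing is a classic textbook example of the kind of bin packing being described. The sketch below is a generic illustration, not Alibaba Cloud's algorithm:

```python
# First-fit decreasing: a classic bin-packing heuristic. Generic textbook
# example of the technique, not Alibaba Cloud's actual implementation.
def first_fit_decreasing(items: list[float], bin_capacity: float) -> list[list[float]]:
    bins: list[list[float]] = []
    for item in sorted(items, reverse=True):   # place largest workloads first
        for b in bins:
            if sum(b) + item <= bin_capacity:  # fits on an existing host?
                b.append(item)
                break
        else:
            bins.append([item])                # otherwise open a new host
    return bins

# Workload memory footprints (GB) packed onto 128 GB hosts.
print(first_fit_decreasing([90, 70, 50, 40, 30, 20], 128.0))
# [[90, 30], [70, 50], [40, 20]] -> three hosts instead of six
```

Sorting largest-first tends to minimize the number of “bins” (hosts) needed, which is exactly the economic point of packing database VMs tightly.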

Alibaba Cloud thinks those techniques “often fall short in providing precise predictions, particularly in high-utilization environments where slight forecast errors can result in critical failures.”

The company offered that conclusion based on analysis of its own operations.

Which is a bit awkward because this paper is a sequel of sorts to Alibaba Cloud’s 2023 paper describing the first version of the Eigen cluster manager.

This time around Alibaba Cloud thinks it’s found an even better way to cram more database VMs into its servers, by starting with the Pareto Principle – aka the 80/20 rule – which holds that most problems come from a small number of causes. In the case of cloudy databases running on Alibaba Cloud, that means “database instances with memory utilization changes exceeding five percent within a week constitute no more than five percent of all instances, yet these instances lead to more than 90 percent of OOM errors.”

Eigen+, Alibaba Cloud’s new cluster manager, therefore profiles all database instances to detect those with transient memory use and excludes them from memory oversubscription. Eigen+ also models the impact of oversubscription and can initiate live migration of database workloads to reduce the likelihood of OOM errors across its server fleet.
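In spirit, the classification step might look something like the sketch below. The five percent threshold comes from the paper's own summary; the code itself is our simplified illustration, not Eigen+'s implementation:

```python
# Simplified sketch of Eigen+-style instance classification, as described
# in the article: instances whose memory utilization swings by more than
# five percent within a week are flagged and kept off oversubscribed hosts.
# Illustrative only; not Alibaba Cloud's actual code.
def classify(weekly_utilization: list[float], threshold: float = 0.05) -> str:
    swing = max(weekly_utilization) - min(weekly_utilization)
    return "transient" if swing > threshold else "stable"

fleet = {
    "db-001": [0.41, 0.42, 0.41, 0.43],  # steady: safe to oversubscribe
    "db-002": [0.30, 0.55, 0.28, 0.61],  # volatile: the OOM-prone minority
}
# Only stable instances remain candidates for memory oversubscription.
oversubscribable = [name for name, util in fleet.items() if classify(util) == "stable"]
print(oversubscribable)  # ['db-001']
```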

Alibaba Cloud’s paper claims that applying Eigen+ to VMs running MySQL allowed it to eliminate OOM errors and improved memory allocation by 36 percent, meaning the Chinese cloud can use less memory to host more database VMs.

The paper asserts that Eigen+’s classification of dangerous DBs is something its cloudy rivals AWS, Google, and Microsoft don’t do, and that its cluster management capabilities represent advances on tools such as Google’s Borg, Kubernetes, and Mesos.

Of course they would say that – but the paper said it well enough that the ACM thought it worthy of a slot at SIGMOD/PODS. ®
