3 Big Data Stocks to Prosper in the ‘Intelligence Economy’ | InvestorPlace

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

In the burgeoning intelligence economy, data stands as the linchpin of innovation. By 2025, six billion consumers are expected to interact with data daily, with the average connected person tapping into data once every 18 seconds, bolstered by IoT devices producing a staggering 79 zettabytes (ZB).

Looking back at 2020, the International Data Corporation (IDC) highlighted a massive 59ZB of data created and captured, and with the rise of 5G and IoT, projections indicate this could surge to 200ZB by 2025.

Furthermore, pioneers in today’s industry demonstrate data’s transformative prowess. For instance, Netflix (NASDAQ:NFLX) capitalizes on its vast data pool to innovate in entertainment. And Uber (NYSE:UBER) strategically utilizes insights, driving profound shifts in transportation dynamics.

Consequently, this paints a golden tableau for savvy investors. It’s not just the sheer volume but the inherent value of data that matters. Firms expertly navigating this data terrain are not only reshaping industries but also redefining investment horizons.

Splunk (SPLK)

Splunk (SPLK) logo on the company office in Santana Row.

Source: Michael Vi / Shutterstock.com

Diving deep into the intelligence economy, big data is the pulsating heart, and Splunk (NASDAQ:SPLK) stands out as its maestro.

This data analytics powerhouse, harnessing machine learning, offers tools such as Splunk Enterprise and Splunk Cloud, which empower users to seamlessly collect, dissect, and harness data.

Moreover, Splunk’s recent history paints a portrait of success. A snapshot from fiscal Q2 2024 reveals robust revenue of $911 million, marking impressive 14% year-over-year (YOY) growth. Simultaneously, its annual recurring revenue (ARR) flourished, reaching $3.86 billion. These figures aren’t merely numbers. They’re a testament to Splunk’s rising prominence across different industries and geographies.

Furthermore, with a tantalizing $100 billion addressable market in security and observability, Splunk’s growth potential is vast. Recent free cash flow of $805 million further propels the company, fueling innovation.

In the big data saga, Splunk appears not just as a character but as a compelling protagonist.

MongoDB (MDB)

A close-up view of the MongoDB (MDB) office in Silicon Valley.

Source: Michael Vi / Shutterstock.com

Steering away from the well-trodden paths of traditional SQL and Oracle databases, MongoDB (NASDAQ:MDB) champions the NoSQL architecture. While it has legacy databases in its arsenal, the cloud-based Atlas database takes the spotlight, and Gartner has recognized the company as a leader in cloud database management systems.

Moreover, synchronized with the AI surge, MongoDB taps into the growing demand. AI’s expansion calls for enhanced storage, memory, and databases. MongoDB, with its agile document-based structure, is tailor-made for this AI era, offering unparalleled scaling. Additionally, its recent earnings spotlight a 40% YOY spike in subscription revenue, hitting $409.3 million, with Atlas registering massive growth numbers.

As we gaze ahead, MongoDB’s trajectory seems poised for distinction. A slew of innovative features, including Atlas Stream Processing, promise to further its edge. Coupled with strong customer growth, adding 1,900 in the last quarter, MongoDB is not just a participant in the big data era but a formidable contender.

Datadog (DDOG)

The Datadog (DDOG) logo displayed on a laptop screen.

Source: Karol Ciesluk / Shutterstock.com

In the expansive realm of big data stocks, Datadog (NASDAQ:DDOG) emerges as a formidable cloud monitoring maestro.

It boasts an impressive clientele of over 26,000, with luminaries like Tesla (NASDAQ:TSLA) and Microsoft (NASDAQ:MSFT) in the fold. Datadog’s stock trajectory did hit a bump, tumbling by a stark 58% in 2022. Yet, Datadog’s stock is rebounding, currently up 21% year-to-date. Additionally, it showcased a robust 25.4% revenue boost in Q2 2023, clocking in at $509.4 million and outpacing estimates with earnings per share at 36 cents.

Further, with an eye on soaring cloud adoption, Datadog is targeting an addressable market of $62 billion by 2026. And cloud monitoring expenditures are predicted to climb 24% annually through 2030.

Finally, fresh launches in AI, security, and automation are enhancing Datadog’s repertoire. With over 30% of clients already onboard with products introduced post-2021 and a promising profitability curve, DDOG is undeniably a stock to watch in the intelligence economy.

On the date of publication, Muslim Farooque did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Muslim Farooque is a keen investor and an optimist at heart. A lifelong gamer and tech enthusiast, he has a particular affinity for analyzing technology stocks. Muslim holds a Bachelor of Science degree in applied accounting from Oxford Brookes University.



Polystores: The Data Management Game Changer – The New Stack

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts



A polystore is a game-changing approach to data management that enables seamless integration of diverse data sources and technologies.



Image by PIRO from Pixabay.

The amount of digital information being generated is growing exponentially. In 2021, there were 79 zettabytes of data created, copied, and consumed globally. By 2026, that figure is expected to double, and by 2030, in my opinion, we will breach the yottabyte era.

To put it into perspective:

  • One petabyte is approximately 11,000 4K movies
  • One zettabyte (ZB) is 1 million petabytes, or approximately 11 billion 4K movies

To put it another way, all of the books in the Library of Congress, if digitized, would be around 40 terabytes or 4 percent of a petabyte (calculating one book as one MB x 40 million books, rounded down). The world has produced approximately 500,000 movies, approximately 46 petabytes worth, which is less than 1% of a zettabyte.
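A quick back-of-the-envelope check of these comparisons takes only a few lines of Python, using the article's own approximations (one book is roughly 1 MB, about 11,000 4K movies fit in a petabyte, decimal units throughout):

```python
# Back-of-the-envelope check of the comparisons above, using the
# article's approximations: one book ~1 MB, ~11,000 4K movies per
# petabyte, and decimal units (1 TB = 1e6 MB, 1 ZB = 1e6 PB).

MB_PER_TB = 1_000_000
PB_PER_ZB = 1_000_000

books = 40_000_000                  # Library of Congress, ~1 MB each
library_tb = books / MB_PER_TB      # total size in terabytes
print(f"Library of Congress: ~{library_tb:.0f} TB "
      f"({library_tb / 1000:.0%} of a petabyte)")

movies = 500_000                    # movies produced worldwide
movies_pb = movies / 11_000         # ~11,000 4K movies per petabyte
print(f"All movies: ~{movies_pb:.0f} PB "
      f"({movies_pb / PB_PER_ZB:.4%} of a zettabyte)")
```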

Of course, not all organizations will face big data challenges. However, data is the foundation of most, if not all, businesses. Like it or not, our data footprint will continue to expand, and data will evolve not only in size but in form as well. Structured or unstructured, data tells a story, and each story is unique to the success of our businesses. Whether an organization is accumulating large amounts of data or has smaller siloed sets of data, the amount and types of data it needs to ingest will evolve and change over time. It’s just the natural progression of evolving business needs.

And just as in nature, we must learn to adapt to stay ahead of the curve. The current paradigm of traditional approaches to data management is facing unprecedented challenges. This is where polystores come in.

According to big data experts and researchers, a polystore system is a “database management system (DBMS) that is built on top of multiple, heterogeneous, integrated storage engines. Each of these terms is important to distinguish a polystore from conventional federated DBMS.”

A polystore is a game-changing approach to data management that enables seamless integration of diverse data sources and technologies. By combining different database technologies tailored for specific use cases, organizations can optimize performance, scalability, and analytical capabilities through a polystore.

As businesses, individuals, and connected devices generate an ever-increasing amount of information, the need to effectively manage and extract value from this data becomes paramount.

The Rise of Unstructured Data

When we go see a doctor, what languages do we speak? We don’t speak in terms of data; however, what we do say and share — regardless of language — gets translated into a “usable” form by medical professionals and the tools they use. Within the medical industry alone, medical knowledge is said to double every 73 days. This means the sheer amount of data doctors must consume to stay current is not only growing exponentially but also challenging to keep up with. On the other hand, it isn’t only new knowledge medical professionals are struggling with; it’s also having to “throw out” outdated medical information.

Unstructured data and its consumption have evolved, but the technology behind storing and using it is still in its infancy. Analyst firm IDC predicts that by 2025, approximately 80% of global data will be unstructured. This includes diverse data types like text, images, audio, video, social media posts, and more. Traditional data management approaches often fall short in handling the complexity and variety of data sources, leading to silos, inefficiencies, and missed opportunities for valuable insights.

To say that organizations are grappling with the challenges of managing vast amounts of diverse data is probably a huge understatement.

Unlocking the Power of Polystores

Over the years, we’ve witnessed the growth of data units, moving from megabytes to gigabytes, terabytes, and petabytes. With the rise of zettabytes, we enter an era where data volumes are measured in millions of petabytes. This exponential growth demands innovative solutions to handle and derive insights from such vast amounts of information.

Polystores can help address the challenges of this data explosion and unstructured data. They excel at seamlessly integrating diverse data sources, so organizations can consolidate and harmonize data from various systems, databases, and applications. Whether it’s structured data from relational databases, unstructured data from social media feeds, or semi-structured data from IoT devices, polystores provide a unified view of the entire data landscape. With polystores, you can break down data silos, facilitate cross-functional analysis, and derive comprehensive insights. You can pull data through a single access point without having to track down which database it is stored in.
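To make that single-access-point idea concrete, here is a deliberately toy sketch in Python. Real polystores sit on top of actual engines (a relational database, a document store, a graph database, and so on); the in-memory stand-ins and dataset names below are purely hypothetical.

```python
# A toy sketch of the polystore "single access point" idea: one query
# facade routes requests to heterogeneous backends. The dict-based
# "engines" stand in for real, specialized storage systems.

class PolystoreFacade:
    def __init__(self):
        self.engines = {
            "relational": {"orders": [{"id": 1, "total": 42.0}]},
            "document": {"tweets": [{"user": "a", "text": "hi"}]},
            "graph": {"follows": [("a", "b")]},
        }
        # Catalog maps each dataset to the engine that owns it, so
        # callers never need to know where the data physically lives.
        self.catalog = {
            name: engine
            for engine, datasets in self.engines.items()
            for name in datasets
        }

    def query(self, dataset):
        """Fetch a dataset by name; routing is the facade's job."""
        return self.engines[self.catalog[dataset]][dataset]


store = PolystoreFacade()
print(store.query("orders"))   # served by the relational stand-in
print(store.query("follows"))  # served by the graph stand-in
```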

As new storage technologies continually emerge, there’s bound to be frequent shifts in the data technology ecosystem. Polystores offer the flexibility to adapt and evolve along with these changes. As organizations transition from one database technology to another, polystores provide a seamless transition path, ensuring minimal disruption and maximum utilization of existing data assets. This adaptability future-proofs data management strategies, enabling businesses to leverage emerging technologies without starting from scratch.

There are over 300 different types of databases from various vendors. Each has its own unique use and functionality, whether it’s performance, scale, or other unique features. Polystores embrace a hybrid approach, leveraging the strengths of different database technologies tailored to specific use cases. By combining the power of various databases, such as relational, NoSQL, columnar, and graph databases, organizations can optimize performance, scalability, and analytical capabilities. This allows for efficient data processing, faster query performance, and the ability to handle diverse data types. Polystores empower businesses to unlock the true potential of their data by utilizing the most suitable technologies for different data requirements.

In the ever-expanding world of data, organizations face the daunting task of managing multiple datasets efficiently. Every time a business needs change, we add to the layer of data complexity. Polystores offer a game-changing solution, allowing seamless integration of diverse data sources while adapting to evolving data technologies. Businesses that embrace polystores can overcome data silos, reduce the risk of migrating databases, and unlock valuable insights to make informed decisions. It’s worth keeping an eye out (if not making the leap to embrace them ahead of your competition) — polystores are the key to future-proof data management strategies, enabling you and your organization to thrive in this era of big data.




MongoDB Insider Sold Shares Worth $43,845,181, According to a Recent SEC Filing

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. is a developer data platform company. Its developer data platform is an integrated set of databases and related services that allow development teams to address the growing variety of modern application requirements. Its core offerings are MongoDB Atlas and MongoDB Enterprise Advanced. MongoDB Atlas is its managed multi-cloud database-as-a-service offering that includes an integrated set of database and related services. MongoDB Atlas provides customers with a managed offering that includes automated provisioning and healing, comprehensive system monitoring, managed backup and restore, default security and other features. MongoDB Enterprise Advanced is its self-managed commercial offering for enterprise customers that can run in the cloud, on-premises or in a hybrid environment. It provides professional services to its customers, including consulting and training. It has over 40,800 customers spanning a range of industries in more than 100 countries around the world.


More about the company



NoSQL Developer (With Cassandra or MongoDB) – Stellent IT LLC – Dice

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

NoSQL Developer (With Cassandra or MongoDB)

100% Remote

Contract

Job Description

Skills

NoSQL databases needed: Cassandra or MongoDB

Azure: Azure Cosmos DB and the Azure Data Platform

Description:

We are seeking a highly skilled and experienced NoSQL Developer with a deep understanding of data modeling, design, and a specialization in transactions and analytical data. The ideal candidate will possess a profound expertise in representing relational models using graph databases, especially for large clusters of nodes. Additionally, experience with Microsoft Azure Data Platform elements is essential for this role.
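As a rough, hypothetical illustration of the document-modeling work this role describes, the sketch below folds a relational one-to-many relationship into a single MongoDB document using PyMongo. The connection URI, database, and field names are invented for the example, not taken from the posting.

```python
# Hypothetical sketch: folding a relational one-to-many (customer ->
# orders) into a single MongoDB document with PyMongo.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["demo"]

# Instead of a customers table joined to an orders table, embed the
# orders inside the customer document.
db.customers.insert_one({
    "name": "Acme Corp",
    "orders": [
        {"sku": "A-100", "qty": 3},
        {"sku": "B-200", "qty": 1},
    ],
})

# One lookup returns the customer and all of its orders, no join.
print(db.customers.find_one({"name": "Acme Corp"}))
```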

Key Responsibilities:

  • General understanding of Entity Resolution techniques
  • Data Modeling and Design: Develop and maintain efficient data models for transactional and analytical data systems.
  • Graph Database Expertise: Design and optimize graph database structures for large clusters of nodes, ensuring optimal performance and scalability.
  • NoSQL Development: Develop and implement NoSQL database solutions tailored to specific project requirements.
  • Optimizing graph database queries for performance and scalability.
  • Microsoft Azure Integration: Utilize Azure Data Platform elements to build and optimize data solutions on the Microsoft cloud ecosystem.
  • Performance Optimization: Identify and address performance bottlenecks, ensuring data systems are optimized for speed and efficiency.
  • Data Security: Implement robust security measures to safeguard sensitive data within the NoSQL environment.
  • Collaboration: Collaborate with cross-functional teams including data engineers, developers, data scientists, and business analysts to ensure data solutions meet business needs.
  • Documentation: Create and maintain comprehensive documentation for data models, design decisions, and system configurations.

Prashant Tyagi

Technical Recruiter

Phone: 201-627-7240, Cell/Text: 712-796-0595

Email: Prashant@stellentit.com

Gtalk: Prashant@stellentit.com

https://www.linkedin.com/in/prashant-tyagi-790119210/

Note : We respect your Online Privacy. Under Bill S.1618 Title III passed by the 105th U.S. Congress, this mail cannot be considered Spam as long as we include contact information and a method to be removed from our mailing list. If you are not interested in receiving our emails, then please send an email to remove@stellentit.com with ‘Remove’ in the subject line, we will remove your email ID from our list and send you a confirmation. We apologize for any inconvenience.



Digital Ocean Launches its Managed Kafka Service

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

Digital Ocean enters the arena of fully managed Kafka services with its new offering aimed at simplifying management and maintenance of the popular event streaming platform. Digital Ocean Kafka targets startups and SMBs by offering them an all-inclusive, flat-rate pricing model.

According to Digital Ocean product marketing manager Neel Chopra, the time spent configuring and maintaining Kafka themselves can often outweigh the cost savings of a self-managed version. Indeed, to configure a Kafka service you usually need to provision a multi-node cluster with many fine-tuned parameters. Management is made complex by the need for manual updating, securing, scaling, and logging. Additionally,

When an issue does occur, it can be challenging to troubleshoot due to the system complexity of a multi-node architecture. Any mistake can result in data loss, operational downtime, and security issues, any of which can be devastating to a growing business.

Chopra lists a number of advantages provided by Digital Ocean’s managed Kafka service over a self-managed deployment, including ease of scalability, fast node provisioning, easy monitoring and logging, automated updates, and more. To ensure stable performance, it is also possible to use a dedicated vCPU with guaranteed hyper-thread access. Additionally, Digital Ocean provides flexible and predictable pricing, which starts at $147/month for a three-node cluster using shared vCPUs with 6GB of memory.

Digital Ocean is not the first cloud provider to launch a fully managed Kafka solution. Alternative offerings include Amazon MSK and Amazon MSK Serverless from AWS, Upstash Kafka, and Ubuntu Managed Kafka, while Confluent provides Kafka services for both Azure and Google Cloud. All of these managed Kafka services claim to provide benefits similar to Digital Ocean Kafka’s.

Originally developed at LinkedIn and now under the Apache Software Foundation, Kafka is an open-source distributed event store and stream processing platform aimed at processing real-time data at scale. Kafka’s architecture rests on three key concepts: producers, consumers, and topics, the named streaming channels that producers publish to and consumers subscribe to. To get started with Digital Ocean Managed Kafka, Digital Ocean provides a series of how-tos explaining a number of basic tasks, such as creating and connecting clusters, creating topics, securing clusters, and so on.
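To make those three concepts concrete, here is a minimal produce/consume round trip using the open-source kafka-python client. The broker address and topic name are placeholders, and a managed service such as Digital Ocean's would additionally require TLS/SASL connection settings; this is a sketch, not the provider's own sample code.

```python
# Minimal producer/consumer round trip with the kafka-python client,
# illustrating producers, topics, and consumers.
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"  # placeholder bootstrap server
TOPIC = "events"           # placeholder topic name

# Producers publish byte payloads to a topic.
producer = KafkaProducer(bootstrap_servers=BROKER)
producer.send(TOPIC, b"order-created:1234")
producer.flush()  # block until the message is delivered

# Consumers subscribe to the topic and iterate over messages.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",  # read from the start of the topic
    consumer_timeout_ms=5000,      # stop iterating after 5s of silence
)
for message in consumer:
    print(message.value.decode())
```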



MongoDB Launches Advanced Data Management Capabilities to Run Applications Anywhere

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB Atlas for the Edge enables organizations to build, deploy, and manage highly reliable, data-driven applications anywhere—across devices, on-premises data centers, and the cloud

 

AWS and Cloneable among partners and customers working with MongoDB Atlas for the Edge

 

MongoDB, Inc. (NASDAQ: MDB) today at MongoDB.local London announced MongoDB Atlas for the Edge, a set of capabilities that make it easier for organizations to deploy applications closer to where real-time data is generated, processed, and stored—across devices, on-premises data centers, and major cloud providers. With MongoDB Atlas for the Edge, data is securely stored and synchronized in real time across data sources and destinations to provide highly available, resilient, and reliable applications. Organizations can now use MongoDB Atlas for the Edge to build, deploy, and manage applications that are accessible virtually anywhere for use cases like connected vehicles, smart factories, and supply chain optimization—without the complexity typically associated with operating distributed applications at the edge. To get started with MongoDB Atlas for the Edge, visit mongodb.com/use-cases/edge-computing.

“Flexibility and abstracting away complexity is one of the key attributes of a development experience that our customers have come to expect from us,” said Sahir Azam, Chief Product Officer at MongoDB. “Atlas for the Edge delivers a consistent development experience across the data layer for applications running anywhere—from mobile devices, kiosks in retail locations, remote manufacturing facilities, and on-premises data centers all the way to the cloud. Now, customers can more easily build and manage distributed applications securely using data at the edge with high availability, resilience, and reliability—and without the complexity and heavy lifting of managing complex edge deployments.”

Advancements in edge computing offer significant opportunities for organizations to deploy distributed applications to reach end users anywhere with real-time experiences. However, many organizations today that want to take advantage of edge computing lack the technical expertise to manage the complexity of networking and high volumes of distributed data required to deliver reliable applications that run anywhere. Many edge deployments involve stitching together hardware and software solutions from multiple vendors, resulting in complex and fragile systems that are often built using legacy technology that is limited by one-way data movement and requires specialized skills to manage and operate. Further, edge devices may require constant optimization due to their constraints—like limited data storage and intermittent network access—which makes keeping operational data in sync between edge locations and the cloud difficult. Edge deployments can also be prone to security vulnerabilities, and data stored and shared across edge locations must be encrypted in transit and at rest with centralized access management controls to ensure data privacy and compliance. As a result of this complexity, many organizations struggle to deploy and run distributed applications that can reach end users with real-time experiences wherever they are.

MongoDB Atlas for the Edge eliminates this complexity, providing capabilities to build, manage, and deploy distributed applications that can securely use real-time data in the cloud and at the edge with high availability, resilience, and reliability. Tens of thousands of customers and millions of developers today rely on MongoDB Atlas to run business-critical applications for real-time inventory management, predictive maintenance, and high-volume financial transactions. With MongoDB Atlas for the Edge, organizations can now use a single, unified interface to deliver a consistent and frictionless development experience from the edge to the cloud—and everything in between—with the ability to build distributed applications that can process, analyze, and synchronize virtually any type of data across locations. Together, the capabilities included with MongoDB Atlas for the Edge allow organizations to significantly reduce the complexity of building, deploying, and managing the distributed data systems that are required to run modern applications anywhere:

  • Deploy MongoDB on a variety of edge infrastructure for high reliability with ultra-low latency: With MongoDB Atlas for the Edge, organizations can run applications on MongoDB using a wide variety of infrastructure, including self-managed on-premises servers, such as those in remote warehouses or hospitals, in addition to edge infrastructure managed by major cloud providers including Amazon Web Services (AWS), Google Cloud, and Microsoft Azure. For example, data stored in MongoDB Enterprise Advanced on self-managed servers can be automatically synced with MongoDB Atlas Edge Server on AWS Local Zones and MongoDB Atlas in the cloud to deliver real-time application experiences to edge devices with high reliability and single-digit millisecond latency. MongoDB Atlas for the Edge allows organizations to deploy applications anywhere, even in remote, traditionally disconnected locations—and keep data synchronized between edge devices, edge infrastructure, and the cloud—to enable data-rich, fault-tolerant, real-time application experiences.
  • Run applications in locations with intermittent network connectivity: With MongoDB Atlas Edge Server and Atlas Device Sync, organizations can use a pre-built, local-first data synchronization layer for applications running on kiosks or on mobile and IoT devices to prevent data loss and improve offline application experiences. MongoDB Atlas Edge Servers can be deployed in remote locations to allow devices to sync directly with each other—without the need for connectivity to the cloud—using built-in network management capabilities. Once network connectivity is available, data is automatically synchronized between devices and the cloud to ensure applications are up to date for use cases like inventory and package tracking across supply chains, optimizing delivery routes in remote locations, and accessing electronic health records with intermittent network connectivity.
  • Build and deploy AI-powered applications at the edge: MongoDB Atlas for the Edge provides integrations with generative AI and machine learning technologies to provide low-latency, intelligent functionality at the edge directly on devices—even when network connectivity is unavailable. For example, MongoDB Atlas Search and Atlas Vector Search make it faster and easier to build intelligent applications with search and generative AI capabilities that take advantage of vector embeddings (numeric representations of data such as text, images, and audio) and large language models. Once embeddings are generated and stored in MongoDB Atlas, edge applications running on the Atlas Device SDK (formerly Realm)—a fast, scalable platform with mobile-to-cloud data synchronization that makes building real-time, reactive mobile applications easy—can use embeddings stored locally for use cases like real-time image similarity search and classification to identify potential product defects on factory lines. Developers can also use the Atlas Device SDK to build, train, deploy, and manage machine learning models on edge devices using popular frameworks like CoreML, TensorFlow, and PyTorch for customized applications that take advantage of real-time data.
  • Store and process real-time and batch data from IoT devices to make it actionable: With MongoDB Atlas Stream Processing, organizations can ingest and process high-velocity, high-volume data from millions of IoT devices (e.g., equipment sensors, factory machinery, medical devices) in real-time streams or in batches when network connectivity is available. Data can then be easily aggregated, stored, and analyzed using MongoDB Time Series collections for use cases like predictive maintenance and anomaly detection with real-time reporting and alerting capabilities. MongoDB Atlas for the Edge provides all of the tools necessary to process and synchronize virtually any type of data across edge locations and the cloud to ensure consistency and availability (see the sketch after this list).
  • Easily secure edge applications for data privacy and compliance: MongoDB Atlas for the Edge helps organizations ensure their edge deployments are secure with built-in security capabilities. The Atlas Device SDK provides out-of-the-box data encryption at rest, on devices, and in transit over networks to ensure data is protected and secure. Additionally, Atlas Device Sync provides fine-grained role-based access, with built-in identity and access management (IAM) capabilities that can also be combined with third-party IAM services to easily integrate edge deployments with existing security and compliance solutions.
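As a rough illustration of the time series pattern mentioned in the IoT bullet above, the sketch below lands sensor readings in a MongoDB time series collection (a MongoDB 5.0+ feature) via PyMongo. The connection URI, database, collection, and field names are placeholders, not part of the announcement.

```python
# Hypothetical sketch: storing IoT sensor readings in a MongoDB
# time series collection for later aggregation and analysis.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["factory"]

# Time series collections require a time field; the optional meta
# field groups readings by source for efficient storage and queries.
db.create_collection(
    "sensor_readings",
    timeseries={
        "timeField": "ts",
        "metaField": "sensor",
        "granularity": "seconds",
    },
)

db.sensor_readings.insert_one({
    "ts": datetime.now(timezone.utc),
    "sensor": {"id": "press-17", "line": "A"},
    "vibration_mm_s": 4.2,
})
```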

“High reliability and ultra-low latency are key requirements that impact customers’ ability to access and process their data. This is where AWS’s edge services help meet customers’ data-intensive workload needs,” said Amir Rao, Director of Product Management for Telco at AWS. “With MongoDB Atlas for the Edge, customers can take advantage of managed edge infrastructure like AWS Local Zones, AWS Wavelength, and AWS Outposts to process data closer to end users and power applications across generative AI and machine learning, IoT, and robotics—making it easier for them to build, manage, and deploy their applications anywhere.”

Cloneable provides low/no-code tools to enable instant deployment of AI applications to a spectrum of devices—mobile, IoT devices, robots, and beyond. “We collaborated with MongoDB because Atlas for the Edge provided capabilities that allowed us to move faster while providing enterprise-grade experiences,” said Tyler Collins, CTO at Cloneable. “For example, the local data persistence and built-in cloud synchronization provided by Atlas Device Sync enables real-time updates and high reliability, which is key for Cloneable clients bringing complex, deep tech capabilities to the edge. Machine learning models distributed down to devices can provide low-latency inference, computer vision, and augmented reality. Atlas Vector Search enables vector embeddings from images and data collected from various devices to allow for improved search and analyses. MongoDB supports our ability to streamline and simplify heavy data processes for the enterprise.”

 

About MongoDB Atlas
MongoDB Atlas is the leading multi-cloud developer data platform that accelerates and simplifies building applications with data. MongoDB Atlas provides an integrated set of data and application services in a unified environment that enables development teams to quickly build with the performance and scale modern applications require. Tens of thousands of customers and millions of developers worldwide rely on MongoDB Atlas every day to power their business-critical applications. To get started with MongoDB Atlas, visit mongodb.com/atlas.

About MongoDB
Headquartered in New York, MongoDB’s mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. Built by developers, for developers, our developer data platform is a database with an integrated set of related services that allow development teams to address the growing requirements for today’s wide variety of modern applications, all in a unified and consistent user experience. MongoDB has tens of thousands of customers in over 100 countries. The MongoDB database platform has been downloaded hundreds of millions of times since 2007, and there have been millions of builders trained through MongoDB University courses. To learn more, visit mongodb.com.



CHICAGO TRUST Co NA Takes Position in MongoDB, Inc. (NASDAQ:MDB) – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

CHICAGO TRUST Co NA purchased a new stake in MongoDB, Inc. (NASDAQ:MDB) in the second quarter, according to its most recent disclosure with the Securities & Exchange Commission. The institutional investor purchased 745 shares of the company’s stock, valued at approximately $306,000.

Several other hedge funds also recently added to or reduced their stakes in MDB. Moody National Bank Trust Division boosted its stake in MongoDB by 2.9% during the 2nd quarter. Moody National Bank Trust Division now owns 1,346 shares of the company’s stock worth $553,000 after purchasing an additional 38 shares during the period. CWM LLC raised its stake in MongoDB by 2.4% during the 1st quarter. CWM LLC now owns 2,235 shares of the company’s stock valued at $521,000 after acquiring an additional 52 shares during the last quarter. First Horizon Advisors Inc. lifted its position in MongoDB by 29.5% during the 1st quarter. First Horizon Advisors Inc. now owns 228 shares of the company’s stock worth $53,000 after acquiring an additional 52 shares during the period. Bleakley Financial Group LLC grew its stake in shares of MongoDB by 5.3% in the 1st quarter. Bleakley Financial Group LLC now owns 1,144 shares of the company’s stock valued at $267,000 after purchasing an additional 58 shares during the last quarter. Finally, Cetera Advisor Networks LLC increased its holdings in shares of MongoDB by 7.4% in the second quarter. Cetera Advisor Networks LLC now owns 860 shares of the company’s stock valued at $223,000 after purchasing an additional 59 shares during the period. 88.89% of the stock is currently owned by hedge funds and other institutional investors.

Analysts Set New Price Targets

A number of research analysts recently weighed in on MDB shares. Barclays upped their price objective on MongoDB from $421.00 to $450.00 and gave the stock an “overweight” rating in a report on Friday, September 1st. VNET Group reiterated a “maintain” rating on shares of MongoDB in a report on Monday, June 26th. Capital One Financial initiated coverage on shares of MongoDB in a report on Monday, June 26th. They issued an “equal weight” rating and a $396.00 price target on the stock. JMP Securities lifted their price objective on MongoDB from $425.00 to $440.00 and gave the stock a “market outperform” rating in a research report on Friday, September 1st. Finally, Truist Financial boosted their price target on MongoDB from $420.00 to $430.00 and gave the stock a “buy” rating in a report on Friday, September 1st. One investment analyst has rated the stock with a sell rating, three have given a hold rating and twenty-one have given a buy rating to the company. According to data from MarketBeat, the company currently has an average rating of “Moderate Buy” and a consensus target price of $418.08.

Get Our Latest Stock Analysis on MDB

MongoDB Price Performance

Shares of MDB opened at $328.16 on Thursday. MongoDB, Inc. has a 52 week low of $135.15 and a 52 week high of $439.00. The company has a current ratio of 4.48, a quick ratio of 4.48 and a debt-to-equity ratio of 1.29. The firm has a market cap of $23.41 billion, a price-to-earnings ratio of -94.84 and a beta of 1.11. The stock has a 50 day moving average of $375.18 and a 200-day moving average of $323.24.

MongoDB (NASDAQ:MDB) last posted its quarterly earnings results on Thursday, August 31st. The company reported ($0.63) EPS for the quarter, beating the consensus estimate of ($0.70) by $0.07. MongoDB had a negative net margin of 16.21% and a negative return on equity of 29.69%. The business had revenue of $423.79 million during the quarter, compared to analyst estimates of $389.93 million. As a group, analysts forecast that MongoDB, Inc. will post -2.17 EPS for the current year.

Insider Transactions at MongoDB

In related news, CAO Thomas Bull sold 516 shares of the firm’s stock in a transaction that occurred on Monday, July 3rd. The shares were sold at an average price of $406.78, for a total transaction of $209,898.48. Following the completion of the sale, the chief accounting officer now directly owns 17,190 shares in the company, valued at $6,992,548.20. The sale was disclosed in a legal filing with the Securities & Exchange Commission, which is available at this link. Also, CRO Cedric Pech sold 360 shares of the stock in a transaction on Monday, July 3rd. The shares were sold at an average price of $406.79, for a total value of $146,444.40. Following the completion of the sale, the executive now owns 37,156 shares of the company’s stock, valued at approximately $15,114,689.24. The disclosure for this sale can be found here. Insiders have sold 104,694 shares of company stock valued at $41,820,161 in the last quarter. Insiders own 4.80% of the company’s stock.

MongoDB Profile


MongoDB, Inc provides general purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premise, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.






2023 NoSQL Market Outlook: Innovations, Expansion Plans and Industry Revenue Forecast …

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

PRESS RELEASE

Published September 28, 2023


The “NoSQL Market” Report for 2023 provides an in-depth and comprehensive analysis of the present business environment, highlighting opportunities for business growth. It covers technological advancements and conducts SWOT and PESTLE analyses, offering detailed insights. The report delves into growth drivers and worldwide technological trends, and it profiles key players. Industry revenue, demand status, and the competitive landscape are thoroughly examined, helping companies formulate effective strategies for long-term success. This report is an indispensable resource for companies seeking to navigate the evolving market and chart a successful course for the future.

Get a sample PDF of the report at – https://www.industryresearch.co/enquiry/request-sample/22376195

As companies navigate their path forward, this report serves as a crucial resource, enabling them to develop future strategies with confidence. With its wealth of information and comprehensive analysis, businesses can make informed decisions, capitalize on emerging opportunities, and strategically plan for sustainable growth in the ever-evolving NoSQL Market.

The global NoSQL market size was valued at USD 7,520.13 million in 2022 and is expected to expand at a CAGR of 31.08% during the forecast period, reaching USD 38,144.35 million by 2028.

Key Players covered in the global NoSQL Market are:

  • Microsoft Corporation
  • Neo Technology, Inc.
  • MarkLogic Corporation
  • Aerospike, Inc.
  • DataStax, Inc.
  • Google LLC
  • Amazon Web Services, Inc.
  • PostgreSQL
  • Couchbase, Inc.
  • Objectivity, Inc.
  • MongoDB, Inc.

The report focuses on the NoSQL market size, segment size (mainly covering product type, application, and geography), competitive landscape, recent status, and development trends. Furthermore, the report provides detailed cost and supply chain analysis. Technological innovation and advancement will further optimize the performance of the product, making it more widely used in downstream applications. Moreover, consumer behavior analysis and market dynamics (drivers, restraints, opportunities) provide crucial information for understanding the NoSQL market.

Get Sample Copy of NoSQL Market Report

Most important types of NoSQL products covered in this report are:

  • Key-Value Store
  • Document Databases
  • Column Based Stores
  • Graph Database

Most widely used downstream fields of NoSQL market covered in this report are:

  • Retail
  • Gaming
  • IT
  • Others

Key Takeaways from the Global NoSQL Market Report:

  • Market Size Estimates: NoSQL market size estimation in terms of value and sales volume from 2018-2030
  • Market Trends and Dynamics: NoSQL market drivers, opportunities, challenges, and risks
  • Macro-economy and Regional Conflict: Influence of global inflation and Russia & Ukraine War on the NoSQL market
  • Segment Market Analysis: NoSQL market value and sales volume by type and by application from 2018-2030
  • Regional Market Analysis: NoSQL market situations and prospects in “North America, Asia Pacific, Europe, Latin America, Middle East, Africa”
  • Country-level Studies on the NoSQL Market: Revenue and sales volume of major countries in each region
  • Trade Flow: Import and export volume of the NoSQL market in major regions.
  • NoSQL Industry Value Chain: NoSQL market raw materials & suppliers, manufacturing process, distributors, downstream customers
  • NoSQL Industry News, Policies & Regulations

Inquire or Share Your Questions If Any Before the Purchasing This Report – https://www.industryresearch.co/enquiry/pre-order-enquiry/22376195

Report Includes Following Chapters –

Chapter 1 starts the report with an overview of the NoSQL market, as well as the definitions of the target market and the subdivisions. Through the presented global market size, regional market sizes, and segment market shares, you will be able to draw an overall and comprehensive picture of the market situation. Meanwhile, the research method and data source will be shared in this chapter.

Chapters 2 and 3 break down the market by type and application, with historic data presented in metrics of sales volume, revenue, market share, and growth rate.

Chapter 4 elaborates on market dynamics and future trends in the industry, which contains an in-depth analysis of market drivers, opportunities, challenges, and risks. Other essential factors that will have a major impact on the market, i.e., industry news and policies in recent years, global inflation, and regional conflict, are also taken into consideration.

Chapter 5 compares the sales volume and revenue of the major regions across the globe, which enables the readers to understand the regional competitive pattern.

Chapter 6 is the analysis of the trade flow. Import volume and export volume are revealed on a regional level.

Chapters 7-11 focus on country-level studies. Data from the major countries in each region are provided, showing the current development of the industry in different countries. Besides, you will also find qualitative trend analyses under global inflation for each of the six regions.

Chapter 12 first presents the competitive landscape by displaying and comparing the revenues, sales volumes, and market shares of the top players in the market, followed by a company-by-company analysis of all the major market participants, with introductions of their products, product applications, company profiles, and business overviews. In addition, their competitiveness is manifested through figures for sales volume, revenue, price, gross profit, and gross margin.

Chapter 13 looks into the whole market industrial chain, ranging from the upstream key raw materials and their suppliers to midstream distributors and downstream customers, with influences of global inflation taken into consideration.

Chapter 14 is perfect for those who wish to develop new projects in the industry. This chapter sheds light on industry entry barriers and gives suggestions on new project investments.

Chapter 15 forecasts the future trend of the market from the perspective of different types, applications, and major regions.

Chapter 16 concludes the report, summing up its main findings and insights.

To Understand How the Covid-19 Impact Is Covered in This Report – https://industryresearch.co/enquiry/request-covid19/22376195

Some of the key questions answered in this report:

  • What are the anticipated growth rates for the NoSQL market in the upcoming years, and what factors are driving this growth?
  • How do consumers perceive and adopt different types of NoSQL databases in the market?
  • How do regulatory policies and government initiatives impact the growth of the NoSQL market?
  • What is the current market share of the top 5 players in the NoSQL market, and how is it expected to evolve in the future?
  • What are the emerging technologies and innovations shaping the landscape of the NoSQL market?
  • How do macroeconomic factors such as inflation, GDP, and exchange rates impact the NoSQL market?
  • What are the supply chain and logistics challenges faced by NoSQL market players, and how are they addressing them?
  • How does changing consumer behavior and preferences influence the dynamics of the NoSQL market?
  • What are the potential risks and uncertainties associated with investing in the NoSQL market, and how can they be mitigated?

The report delivers a comprehensive study of all the segments and shares information regarding the leading regions in the market. It also covers import/export consumption, supply and demand figures, cost, industry share, policy, price, revenue, and gross margins.

Detailed TOC of NoSQL Market Forecast Report 2023-2030:

1 NoSQL Market Overview

1.1 Product Overview

1.2 Market Segmentation

1.2.1 Market by Types

1.2.2 Market by Applications

1.2.3 Market by Regions

1.3 Global NoSQL Market Size (2018-2028)

1.3.1 Global NoSQL Revenue and Growth Rate (2018-2028)

1.3.2 Global NoSQL Sales Volume and Growth Rate (2018-2028)

1.4 Research Method and Logic

2 Global NoSQL Market Historic Revenue and Sales Volume Segment by Type

2.1 Global NoSQL Historic Revenue by Type (2018-2023)

2.2 Global NoSQL Historic Sales Volume by Type (2018-2023)

3 Global NoSQL Historic Revenue and Sales Volume by Application (2018-2023)

3.1 Global NoSQL Historic Revenue by Application (2018-2023)

3.2 Global NoSQL Historic Sales Volume by Application (2018-2023)

4 Market Dynamic and Trends

4.1 Industry Development Trends under Global Inflation

4.2 Impact of the Russia-Ukraine War

4.3 Driving Factors for NoSQL Market

4.4 Factors Challenging the Market

4.5 Opportunities

4.6 Risk Analysis

4.7 Industry News and Policies by Regions

4.7.1 NoSQL Industry News

4.7.2 NoSQL Industry Policies

5 Global NoSQL Market Revenue and Sales Volume by Major Regions

5.1 Global NoSQL Sales Volume by Region (2018-2023)

5.2 Global NoSQL Market Revenue by Region (2018-2023)

6 Global NoSQL Import Volume and Export Volume by Major Regions

6.1 Global NoSQL Import Volume by Region (2018-2023)

6.2 Global NoSQL Export Volume by Region (2018-2023)

7 North America NoSQL Market Current Status (2018-2023)

7.1 Overall Market Size Analysis (2018-2023)

7.1.1 North America NoSQL Revenue and Growth Rate (2018-2023)

7.1.2 North America NoSQL Sales Volume and Growth Rate (2018-2023)

7.2 North America NoSQL Market Trends Analysis Under Global Inflation

7.3 North America NoSQL Sales Volume and Revenue by Country (2018-2023)

8 Asia Pacific NoSQL Market Current Status (2018-2023)

8.1 Overall Market Size Analysis (2018-2023)

8.1.1 Asia Pacific NoSQL Revenue and Growth Rate (2018-2023)

8.1.2 Asia Pacific NoSQL Sales Volume and Growth Rate (2018-2023)

8.2 Asia Pacific NoSQL Market Trends Analysis Under Global Inflation

8.3 Asia Pacific NoSQL Sales Volume and Revenue by Country (2018-2023)

9 Europe NoSQL Market Current Status (2018-2023)

9.1 Overall Market Size Analysis (2018-2023)

9.1.1 Europe NoSQL Revenue and Growth Rate (2018-2023)

9.1.2 Europe NoSQL Sales Volume and Growth Rate (2018-2023)

9.2 Europe NoSQL Market Trends Analysis Under Global Inflation

9.3 Europe NoSQL Sales Volume and Revenue by Country (2018-2023)

10 Latin America NoSQL Market Current Status (2018-2023)

10.1 Overall Market Size Analysis (2018-2023)

10.1.1 Latin America NoSQL Revenue and Growth Rate (2018-2023)

10.1.2 Latin America NoSQL Sales Volume and Growth Rate (2018-2023)

10.2 Latin America NoSQL Market Trends Analysis Under Global Inflation

10.3 Latin America NoSQL Sales Volume and Revenue by Country (2018-2023)

11 Middle East and Africa NoSQL Market Current Status (2018-2023)

11.1 Overall Market Size Analysis (2018-2023)

11.2 Middle East and Africa NoSQL Market Trends Analysis Under Global Inflation

11.3 Middle East and Africa NoSQL Sales Volume and Revenue by Country (2018-2023)

11.4 GCC Countries

11.5 Africa

12 Market Competition Analysis and Key Companies Profiles

13 Value Chain of the NoSQL Market

13.1 Value Chain Status

13.2 Key Raw Materials and Suppliers

13.3 Manufacturing Cost Structure Analysis

13.4 Major Distributors by Region

13.5 Customer Analysis

14 New Project Feasibility Analysis

14.1 Industry Barriers and New Entrants SWOT Analysis

14.2 Analysis and Suggestions on New Project Investment

15 Global NoSQL Market Revenue and Sales Volume Forecast Segment by Type, Application and Region

15.1 Global NoSQL Revenue and Sales Volume Forecast by Type (2023-2028)

15.2 Global NoSQL Revenue and Sales Volume Forecast by Application (2023-2028)

15.3 Global NoSQL Sales Volume Forecast by Region (2023-2028)

15.4 Global NoSQL Revenue Forecast by Region (2023-2028)

16 Research Findings and Conclusion

Purchase this Report (Price 3250 USD for a Single-User License) – https://industryresearch.co/purchase/22376195

Contact Us:

Industry Research

Phone: US +14242530807

UK +44 20 3239 8187

Email: [email protected]

Web: https://www.industryresearch.co

PRWireCenter



Presentation: The Eternal Sunshine of the Toil-Less Prod

MMS Founder
MMS Sasha Rosenbaum

Article originally posted on InfoQ. Visit InfoQ

Transcript

Rosenbaum: I am Sasha Rosenbaum. I'm a Director of Cloud Services Black Belt at Red Hat. Since you are on the effective SRE track, I presume you've heard of SRE before, and that you've heard some definition of toil. I'm going to start this presentation with one more definition, which is pretty common in the industry: toil is work that tends to be manual, repetitive, automatable, tactical, and devoid of enduring value, and that scales linearly as the service grows. We also know that most SRE teams aim to keep toil under 50% of their workload, because toil is considered a bad thing that we want to minimize. Then I want to ask you a question that comes from one of my awesome coworkers, Byron Miller: if an employee is told that 50% of their work has no enduring value, how does this affect their productivity and job satisfaction? Just let that sink in. If we're saying that an SRE team usually has to do about 50% toil, but we're also saying that toil has no enduring value and is essentially not work that can get you promoted, raises, or rewards, then how are we contributing to the burnout that we all know SRE teams are experiencing?

Now that I’ve posed this question for you, I’m going to jump in into setting the background for the rest of the presentation. I’ve called this presentation, the Eternal Sunshine of the Toil-less Prod. I’ve done a lot of things in this industry, I think I’m coming up on 18 years in the industry, something like that. I’ve done a degree in computer science. I started off as a developer. I gradually was exposed to more ops type of work. Of course, I got involved in DevOps and DevOpsDays the moment it came out. I’ve done a bunch of other things such as consulting for cloud migrations, DevRel, and technical sales. You can see that I tend to get excited about new things and jump into them.

About Red Hat

I want to also introduce Red Hat. Everybody knows Red Hat as the company that provides Red Hat Linux, but we've also been in distributed computing for a really long time. This slide actually doesn't start early enough: we've been providing OpenShift since 2011, which gives us over 10 years of experience in managing highly available, production-grade distributed systems. We're currently providing OpenShift as a managed service in partnership with a number of major public clouds. We're running at a pretty high scale; still not cloud provider scale, but pretty high.

We have been on a journey that some other companies in the industry have definitely been on: we have moved from providing products to providing services. We used to essentially ship software, and once we shipped it, it was the client's responsibility to run it. We've been shifting towards running the software for our clients. Of course, like some other companies in the industry, we're providing both products and services at the same time at this moment, which makes it a very interesting environment for us in terms of how we ship and how we think about developing software.

The SRE Discipline

I would like to share some of our SRE experiences and some of the lessons learned along the way, and I also want to set the stage for what I think is the most important thing about SRE as opposed to other things we've tried before. What is the most important and innovative thing about the SRE discipline? Because we've been doing ops, and then DevOps, and then whatever we call it today, to provide service to our customers for a really long time. We're now talking about SRE being a game changer in some ways. What changed? I think, personally, that SRE is about providing explicit agreements that align incentives between different teams, and between the vendor and the customer. I'm going to dive into what makes SRE different from the things we've done before.

Service Level Agreement (SLA), Service Level Indicator (SLI), and Service Level Objective (SLO)

Probably everyone is familiar with some version of the SLA, SLI, and SLO definitions. I'm going to define not so much the what but the why of these indicators. An SLA is financially backed availability. It's a service level agreement, and we've been providing these for decades: every vendor that provides a service to a customer always has some SLA, so this is very familiar. This is an example from one of the Amazon services, and it is relatively standard in today's industry: it's a 99.95% SLA, and you can see that if service availability drops below 95%, the client gets a 100% refund. Basically, if the downtime is more than roughly one and a half days a month, the client gets a 100% refund. What's important to think about in the SLA concept is that the SLA is about aligning incentives between vendor and customer. As a customer, if I'm buying something from you, I want some type of guarantee that you're providing a certain level of service, and SLAs are about financial agreements. At Red Hat, for instance, when we first started offering managed OpenShift, we had a 99% SLA, and that wasn't enough; over about a year we moved towards a four nines SLA, which is a higher standard than some of the services in the industry. Keep in mind that SLAs usually include a single metric, which is usually uptime. For financial and reputational reasons, we want to under-promise and overdeliver: when we're promising four nines, we actually want our availability to be higher, because we never want to be in a situation where we have to refund our customers.
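As a rough illustration of how such tiered refunds work, here is a minimal Python sketch of an SLA service-credit schedule. The thresholds are illustrative assumptions in the spirit of the example above, not any specific provider's actual contract terms.

    # Hypothetical SLA service-credit schedule; thresholds are illustrative,
    # not any specific vendor's contract terms.
    def service_credit_percent(monthly_availability: float) -> int:
        """Return the percentage of the monthly bill refunded as credit."""
        if monthly_availability >= 99.95:
            return 0    # SLA met, no credit owed
        if monthly_availability >= 95.0:
            return 10   # partial credit for a moderate breach (assumed tier)
        return 100      # below 95%: full refund, per the example above

    print(service_credit_percent(99.97))  # 0
    print(service_credit_percent(94.2))   # 100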

The SLO is probably the most important indicator. An SLO is targeted reliability. The interesting part about SLOs is that if you're running a good company providing a good service, you're usually measuring SLOs around a lot more things than you measure the SLA around; the SLA was a single number. At Red Hat, we measure all these SLOs and more; we look at a lot of different metrics to assess our own reliability. The SLI, the service level indicator, measures actual reliability, and again, it usually covers many more metrics than just uptime. The important part about SLIs, which people often forget, is that they require monitoring. If you're not monitoring your services, you have no idea what your actual availability or reliability is, and you can't say whether you're breaking your SLAs or not. In addition, you have to have good monitoring. If your monitoring is very basic, say you just have a Pingdom pointed at your service, all you know is whether your service is returning a 200 OK, and that's not enough, because customers don't come to your website or application just to load it and get a 200 OK response; they come to get a service from your company. Without good monitoring, you don't actually know if the service does what the users expect it to do. It's important to design your SLAs and SLOs around things that actually tell your company whether the service you're providing is meeting your customers' expectations.
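To make the SLI point concrete, here is a minimal sketch (mine, not from the talk) of a request-based SLI computed as the share of good events out of all events, where "good" is defined around what users actually came for rather than a bare 200 OK:

    # Minimal request-based SLI sketch: the fraction of "good" events out of
    # all events in a window. A "good" event should capture user intent,
    # e.g., a correct response served within a latency threshold.
    def availability_sli(good_events: int, total_events: int) -> float:
        """Return the SLI as a percentage; 100.0 means every event was good."""
        if total_events == 0:
            return 100.0  # no traffic; conventionally counts as meeting the SLO
        return 100.0 * good_events / total_events

    sli = availability_sli(good_events=999_423, total_events=999_612)
    print(f"SLI: {sli:.3f}%")  # compare against the SLO target, e.g., 99.95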

The other thing that's very important about SLIs is the signal-to-noise ratio. If you are looking at a lot of noise, the signal drowns in it, and you won't be able to distinguish between things that are actually a problem and things that are not. At Red Hat, for instance, we had a major monitoring problem early on. We provide availability for customers' clusters and monitor them, but a customer can take a cluster offline intentionally. Early on, we would get a lot of alerts and wake people up at night to deal with them. We had to learn to identify when the shutdown of a cluster was intentional versus unintentional, so that we could reduce the noise on these alerts. Without good monitoring, you're potentially overloading your SREs with unwarranted emergencies, and then you don't recognize real incidents. If you're dogfooding at your company, incidents may even periodically be caught by internal users: your monitoring system might not identify an incident, but someone in your company calls you and says, my cluster is down. This is fine, as long as you keep improving your monitoring system. Whenever you catch a problem, the ideal outcome is that you modify your monitoring to track that problem in the future.
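As a sketch of what suppressing that kind of noise can look like (hypothetical names and logic, not Red Hat's actual implementation), a paging rule can consult the desired state before waking anyone up:

    # Hypothetical sketch: page only when a cluster is down unexpectedly.
    # Desired state (did the customer intentionally shut it down?) is checked
    # before alerting; all names here are illustrative.
    from dataclasses import dataclass

    @dataclass
    class ClusterState:
        name: str
        reachable: bool
        desired_running: bool  # what the customer asked for

    def should_page(cluster: ClusterState) -> bool:
        """Alert only if the cluster is down and was supposed to be up."""
        return cluster.desired_running and not cluster.reachable

    clusters = [
        ClusterState("prod-eu-1", reachable=False, desired_running=True),    # real incident
        ClusterState("dev-sandbox", reachable=False, desired_running=False), # intentional
    ]
    for c in clusters:
        if should_page(c):
            print(f"PAGE: {c.name} is down unexpectedly")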

I’m going to come back to SLO because SLO is probably the key metric that was introduced by the SRE discipline. What’s important about it is it’s business approved reliability. We used to say that we’re striving for 100% reliability, all the time. Now, as an industry, we came to a realization that it’s unattainable, unnecessary, and of course, extremely expensive. Even the five nines. Five nines gives you 5.26 minutes of downtime a year. This used to be a holy grail that a lot of people strived for. Actually, will your users even notice that you’re providing a five nines level of service? The resounding answer is actually, they probably will not, because if you’re providing a web application, your internet service provider background error rate can be up to 1%. You can be striving to provide five nines, but in reality, your users actually never get that level of service that can get closer to four nines anyway, no matter how hard you’re working on it. SLOs are actually about explicitly aligning incentives between business and engineering. We go into this with our eyes open, and we tell the business, the level of availability or reliability we want to provide is four nines, or 99%, 95%. Then we argue if that’s enough, and if that’s a good level of service, and if that’s what’s acceptable in the industry right now, and if this is something we can reasonably expect to provide to our customers. Then, if we have some level of downtime, we have this agreement that provides us the ability to talk about it, not as in all ops are bad because there is some downtime in the system, but we have stayed within our desired SLO.

Error Budgets

This brings in the next metric, which is error budgets. An error budget is an acceptable level of unreliability, defined as one minus the SLO. To give you an example, on a quarterly basis, the error budget of a four nines SLO gives you 0.01% of downtime, which is about 13 minutes a quarter. In those 13 minutes, you can either have an incident that is very quickly addressed, or you can take on some downtime while you're delivering updates to your system. Everybody recognizes that those 13 minutes are not a problem.
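Those numbers fall straight out of the arithmetic; here is a short sketch that reproduces them (standard calendar approximations, nothing vendor-specific):

    # Error budget = 1 - SLO, expressed as allowed downtime over a period.
    MINUTES_PER_YEAR = 365.25 * 24 * 60        # ~525,960
    MINUTES_PER_QUARTER = MINUTES_PER_YEAR / 4 # ~131,490

    def downtime_budget_minutes(slo: float, period_minutes: float) -> float:
        """Allowed downtime, in minutes, for a given SLO over a period."""
        return (1.0 - slo) * period_minutes

    print(downtime_budget_minutes(0.9999, MINUTES_PER_QUARTER))  # ~13.1 min/quarter
    print(downtime_budget_minutes(0.99999, MINUTES_PER_YEAR))    # ~5.26 min/year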

This is the budget that you have for unreliability. Error budgets are about aligning incentives between developers and operations. If developers are measured on the same SLO that SRE people are measured on, then when the error budget is drained, ideally your developers start pushing updates more slowly and testing them more thoroughly, because otherwise they know they will blow the error budget and not get their bonuses. As long as SRE is the only team incentivized to keep the SLO or SLA, you always have this problem of developers and product managers pushing to move as fast as possible, while SRE is trying to slow things down so changes can get tested and verified and not produce downtime. If you are actually measuring your product managers and developers on the same SLO, you eliminate that problem. In DevOps, we used to talk about a culture of collaboration and working together, while people were not actually working towards the same incentives. Measuring people on the same SLO provides a way to write that incentive down, instead of just talking about how you culturally want to align; writing things down helps. I love this quote by William Gibson: "The future is already here, it's just not evenly distributed." We have companies who are doing an excellent job at SRE. We have companies that are struggling. Most companies are actually somewhere in between, with some pockets of excellence, and some teams that are struggling, or some services that are really difficult to provide good reliability for.
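One way to picture "writing the incentive down" is an error-budget gate in the release pipeline: when most of the budget is spent, releases slow down or stop. This is a hypothetical sketch; the thresholds, and where the spent-budget number comes from (normally a monitoring system), are assumptions.

    # Hypothetical error-budget gate for a release pipeline.
    # Thresholds are illustrative; spent minutes would come from monitoring.
    def deploy_policy(budget_minutes: float, spent_minutes: float) -> str:
        """Map error-budget consumption to a release policy."""
        remaining = max(budget_minutes - spent_minutes, 0.0)
        fraction_left = remaining / budget_minutes
        if fraction_left > 0.5:
            return "normal releases"
        if fraction_left > 0.1:
            return "releases require extra review and canarying"
        return "release freeze: reliability work only"

    # Four nines over a quarter is ~13 minutes of budget (see above).
    print(deploy_policy(budget_minutes=13.1, spent_minutes=12.5))
    # -> release freeze: reliability work only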

What We All Got Wrong

I want to talk about what I think we all got wrong. One of the things I think we all got wrong is the definition of what site reliability is. Unfortunately, the first book defines SRE as what happens when you ask a software engineer to design an operations team. That's a very elitist and unfortunate take. As a former developer, I wholeheartedly disagree with it, because we've been talking at DevOpsDays since at least 2009, and probably a lot longer if you ask the ops people, about automating yourself out of a job, or really automating yourself into a better job. The logical question is: why couldn't we do it before Google came in and said, let's assign developers to operations? Effective automation requires consistent APIs, and that is something we did not have. OS-level APIs were not broadly available on the overall market. Only 27% of the server market used to be Linux based, and Linux is relatively automatable. Windows was, by design, not automatable; it was an executable-based OS, and its makers believed that people needed to be clicking buttons, and that this provided the best system administration experience. This brings in one of my favorite transformation stories, with Jeffrey Snover, who pushed for shipping PowerShell as part of Windows: a CLI scripting language that allows automation of parts of the Windows operating system. He had a very interesting journey, arguing with Microsoft executives for many years about why admins actually want automation and CLIs. He did succeed in the end, and so did many other people pushing for automation. Every wave of automation enables the next wave of automation.

Then we started seeing infrastructure-level APIs. We used to have to click buttons and manually wire servers in data centers; it was all really manual work that couldn't be automated. When Borg was first designed (this is also from the Google SRE book), central to its conception and success was the notion of turning cluster management into an entity for which API calls could be issued. In all these companies, Amazon, Azure, Google, all the cloud providers, the infrastructure was designed with automation in mind. Everything in cluster operations and VM operations could be automated and driven through an API call, so you didn't have to physically interact with those servers, whether you needed a failover, a restart, bringing a new server online, or anything like that. Because of this push, we also started seeing other companies bringing the same ideas to on-prem data centers: we started seeing infrastructure-as-code automation, which enabled this for more traditional data centers. The overall push was to enable consistent APIs across the industry. We did not suddenly get the idea that infrastructure and platform automation were worthwhile; we gradually built the tools, across the industry, required to make that automation happen.

Why does this matter? In my opinion, if we get the origin story wrong, we end up working to solve the wrong problem, and that's a huge problem. Corollary 1 of this: hiring developers to do operations work does not equal effective SRE. This is a mistake I see many companies make. Even at Red Hat, we started out saying that all we need to do to run SRE is hire people with developer experience. That did not go really well. We eventually arrived at the fact that we want to hire well-rounded folks: developers who have done some ops work before, or operations people with a mind for automation and coding, or QE people who have been exposed to the developer experience. Overall well-rounded expertise and a desire to solve problems is a better profile for hiring SREs.

Corollary 2 is that the desire to automate infrastructure and platform operations is insufficient. You need consistent APIs and reliable monitoring to unblock the automation. If you hire even the best SREs but give them a platform that cannot be automated, they're not going to be able to do it. Basically, you need to provide them with the tools to make this automation possible. As one example, early on at Red Hat we had to move the cloud services build system from on-prem to the cloud, because it wasn't automatable or reliable enough to meet the targets of the new cloud services. It worked fine for shipping on-prem at a much slower rate, but it stopped working once we wanted to move at the speed of the cloud. You need to base your systems on infrastructure and platforms that provide the ability to automate them.

The second thing, and I think this is what we started this presentation with, is that we got this idea that toil is unequivocally bad. We keep saying it's devoid of enduring value, that it doesn't actually provide us any benefits, and that we want to limit it. I've even heard people say that we want to completely eliminate toil, or limit it to 20% instead of 50%; that has been voiced by certain teams in certain companies. My question is: are we striving for a human-less system? Should we be striving for a human-less system? I want to bring up the only thing I remember from physics, and that is the second law of thermodynamics. It's both very educational and highly depressing. It says that with time, the net entropy, which is the degree of disorder, of any isolated system will increase. We know that every system left to its own devices will, over time, strive towards more disorder. We know this on a basic level: if we leave something alone, it will gradually drift into disorder under the forces of natural chaos. We know that entropy always wins.

I want to bring in a definition from Richard Cook, recently deceased, who did a lot of work on resilience and on explaining the effects of resilience in IT. There's this concept of being above or below the line of representation. Above the line of representation are, basically, the people working to operate the system. People working above the line of representation continuously build and refresh their models of what lies below the line. That activity is critical to the resilience of internet-facing systems and the principal source of adaptive capacity. If we look at the things that humans are doing above the line, we see observing, inferring, anticipating, planning, troubleshooting, diagnosing, correcting, modifying, and reacting. What does this sound like? It sounds suspiciously like what we call toil. If we talk about resilience, and this is a quote from an absolutely excellent talk from 10 years ago by Richard Cook, these are the metrics we look at for understanding systems' adaptive capacity and resilience: learning, monitoring, responding, and adapting. What we call toil is a major part of the resilience and adaptive capacity of our systems. Perhaps we need a better way to look at toil, and perhaps we need to stop saying that it is so detrimental, that we need to minimize it, and that it's completely evil. We know that SRE folks worry that if they spend significant parts of their day focusing on toil, it will negatively affect their bonuses and chances of promotion. Everybody wants to write code, because that's what gets you promoted. Again, I'm coming back to that quote from Byron: if an employee is told that 50% of their work has no enduring value, how does this affect their productivity and job satisfaction? I think that, in general, toil gets us a lot of learning experiences and is critical to building our adaptive capacity and resilience. We need to restructure SRE teams so that people are encouraged to do some of the toil work and rewarded for doing it.

SRE Work Allocation (Red Hat)

This brings me to one more story from Red Hat, about SRE work allocation. We've tried different modes of work allocation over the years. We actually started by naming one team SRE-P and one SRE-O. One of the teams was essentially doing most of the development, and the other was essentially doing most of the ops. What does this sound like? Of course, it sounds like traditional IT, with Dev and Ops, the wall of confusion, and people throwing tickets at each other. Of course, it didn't work very well. We proceeded from that, and we said: we are dealing with too much ops work, and we want to reduce it because it's so detrimental; we want to put people on call at most once a month, and the rest of the time they will be doing more developer-oriented work. In a surprise twist, the SRE teams went to management and asked to be on call more, because once a month wasn't enough: they were forgetting how to be on call and losing their operational experience with the system. One of the other allocations we tried was rotating engineers through toil-reduction tasks. We said: we're rotating engineers through ops work, so we're also going to rotate engineers through automation-related implementation work. You could probably predict this, but sometimes smart people make less than good decisions: the lack of continuity severely impacted the SRE teams' ability to deliver on those toil-reduction and automation tasks, because they kept being reassigned from one engineer to another without sufficient context. That significantly slowed us down. In terms of work allocation, Red Hat is still looking for the perfect system. We have slightly different practices on different teams, and we are constantly learning to see what works best. I think this is no different from most companies in the industry, who are continuously trying new things and trying to improve the SRE discipline.

Where Do We Go from Here?

The next question is: where do we go from here? I want to emphasize a couple of insights we've arrived at. Effective automation requires consistent APIs. I'm a big proponent of cloud: I think cloud provides the industry standard for consistent infrastructure-level APIs. If you are not in the data center management business, the choice today seems really clear to me: you can go to the cloud and let your cloud provider manage infrastructure for you. I also think Kubernetes is a wave the industry is riding; in last year's Red Hat Open Source Report, 85% of global IT leaders saw Kubernetes as a major part of their application strategies. Kubernetes could provide the industry standard for a consistent platform-level API, though I don't think it quite provides it yet. If building a PaaS isn't your company's core business, let your provider toil for you. I'm biased and I'm paid to present this slide, but we provide managed services, and so do other people in the industry. You can outsource your infrastructure services and your platform services to your provider. Then your operational work, your toil, is in the software services that are a core component of your business value. We said toil isn't necessarily evil, but if company A automates most of the basic infrastructure and platform tasks, so that its toil is reduced to operating only business-critical applications, it is going to do a lot better than a company that still toils just to deploy new software. I would advise you to get your skills above the API and toil in the space that provides your company with business value.

If building a PaaS is your company's core business, then your situation is a little different. In that case, remember that SRE is about explicit agreements that align incentives between different teams. You want to explicitly write down your SLOs and understand exactly what you're measuring. You want to explicitly write down your error budgets and make sure that you actually stop development when you blow your error budget, and so on. You want to leverage the tools the SRE discipline gave us to make your SRE team's life better. Focus your toil where your business value is. I believe that ideas are open source, so Red Hat is starting a new initiative called Operate First. This is a concept of incorporating operational experience into software development, and we have a place where we hope the community can come share their experiences and talk about best practices for SRE teams.

