Mobile Monitoring Solutions


MongoDB Q1 2024 Earnings Preview – Seeking Alpha

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


Article originally posted on mongodb google news. Visit mongodb google news



R Systems Appoints Nitesh Bansal as CEO – DataCenterNews Asia Pacific

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

R Systems (NSE: RSYSTEMS; BSE: 532735) has announced the appointment of Nitesh Bansal as the new Managing Director and Chief Executive Officer, effective May 30, 2023. Mr. Bansal, an industry veteran, boasts a distinguished 25-year career, including 23 years at Infosys, where he served in multiple leadership roles.

During his tenure at Infosys, Bansal served as a Senior Vice President and Global Head of Engineering Services, gaining experience across India, Europe, and the Americas. His credentials also include Chartered Accountancy and executive leadership courses at renowned institutions, INSEAD and Stanford Graduate School of Business.

Upon his appointment, Bansal remarked, “It is with great enthusiasm that I assume the leadership of R Systems, which has established itself as a key player in digital and product engineering services.” He expressed his excitement about working with the team at R Systems and his confidence in the company’s growth potential.

Blackstone Private Equity’s Senior Managing Director, Mukesh Mehta, welcomed Bansal, emphasizing his rich industry experience and credibility. “Nitesh’s expertise in building businesses at scale will make him an invaluable asset to the company,” Mehta said, also acknowledging the legacy of the former CEO, Dr. Satinder Singh Rekhi, in building an excellent company.

Dr. Rekhi praised Bansal as “an industry veteran with a strong business acumen and deep understanding of technology,” stating his confidence in Bansal’s leadership for the next phase of R Systems’ growth. “I am excited with Blackstone’s participation going forward given their global experience in enabling companies to scale,” he added, extending his full support to Bansal’s tenure as the new CEO.

Article originally posted on mongodb google news. Visit mongodb google news



Minecraft Welcomes Its First LLM-Powered Agent

MMS Founder
MMS Daniel Dominguez

Article originally posted on InfoQ. Visit InfoQ

Researchers from Caltech, Stanford, the University of Texas, and NVIDIA have collaboratively developed and released Voyager, an LLM-powered agent that uses GPT-4 to play Minecraft. Voyager demonstrates remarkable capabilities by learning, retaining knowledge, and showcasing exceptional expertise in the game.

Voyager operates autonomously, continuously exploring the virtual world, acquiring diverse skills, and making groundbreaking discoveries without any human intervention. Voyager’s innovation lies in its automatic curriculum that optimizes exploration, an ever-expanding skill library for storing and retrieving complex behaviors, and an iterative prompting mechanism that incorporates environment feedback, execution errors, and self-verification for program enhancement.

Voyager consists of three key components: an automatic curriculum for open-ended exploration, a skill library for increasingly complex behaviors, and an iterative prompting mechanism that uses code as action space.

Utilizing blackbox queries to interact with GPT-4, Voyager circumvents the need for model parameter fine-tuning. The skills developed by Voyager are both temporally extended and interpretable, resulting in rapid compound growth of the agent’s capabilities and mitigating catastrophic forgetting.
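To make the loop concrete, here is a rough sketch of how such an iterative prompting cycle can be structured. This is not code from the Voyager project; the interfaces and method names are invented purely to illustrate the curriculum, skill library, and feedback-driven refinement described above.

using System.Threading.Tasks;

// Invented abstractions for illustration only; the actual Voyager implementation differs.
public interface ICurriculum { Task<string> NextTaskAsync(); }
public interface ILanguageModel { Task<string> GenerateSkillCodeAsync(string task, string feedback); }
public interface ISkillLibrary { void Add(string task, string code); }
public interface IGameEnvironment { Task<(bool Success, string Feedback)> ExecuteAsync(string code); }

public class LifelongLearningLoop
{
    private readonly ICurriculum _curriculum;
    private readonly ILanguageModel _llm;
    private readonly ISkillLibrary _skills;
    private readonly IGameEnvironment _world;

    public LifelongLearningLoop(ICurriculum curriculum, ILanguageModel llm,
                                ISkillLibrary skills, IGameEnvironment world)
    {
        _curriculum = curriculum;
        _llm = llm;
        _skills = skills;
        _world = world;
    }

    public async Task RunOneTaskAsync(int maxAttempts = 4)
    {
        // The automatic curriculum proposes the next open-ended exploration task.
        string task = await _curriculum.NextTaskAsync();
        string feedback = string.Empty;

        for (int attempt = 0; attempt < maxAttempts; attempt++)
        {
            // Iterative prompting: the model writes code, informed by prior feedback and errors.
            string code = await _llm.GenerateSkillCodeAsync(task, feedback);

            var result = await _world.ExecuteAsync(code);
            feedback = result.Feedback;

            if (result.Success)
            {
                // Verified skills are stored for later retrieval, mitigating catastrophic forgetting.
                _skills.Add(task, code);
                return;
            }
        }
    }
}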

According to Jim Fan, one of the researchers on the project, the GPT-4 experiment in Minecraft is a good starting point for creating effective AI agents. Autonomous agents with broad capabilities are the next step in artificial intelligence: motivated by curiosity and survival, they explore, plan, and learn new abilities in open-ended environments.

Compared with baselines, Voyager unlocks the wooden level 15.3x faster in terms of the prompting iterations, the stone level 8.5x faster, the iron level 6.4x faster, and Voyager is the only one to unlock the diamond level of the tech tree.

An unparalleled attribute of Voyager is its ability to utilize the learned skill library in a fresh Minecraft world to solve novel tasks from scratch, a feat that other approaches struggle to achieve when generalizing.

Lifelong learning agents are AI models designed to acquire knowledge and skills continuously throughout their operational lifespan. They possess the ability to adapt, learn, and improve as they encounter new information and experiences. Lifelong learning agents excel in retaining and transferring knowledge, allowing them to handle diverse tasks and domains effectively. Their capacity for continuous learning makes them valuable in various fields, including gaming, robotics, healthcare, and education.

With Voyager, Minecraft enters a new era of innovation, laying the foundation for future advancements in embodied lifelong learning agents.



GEN YTD Stock Return Of -20% Underperforms PCTY by – Trefis

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


Article originally posted on mongodb google news. Visit mongodb google news



Upcoming Quarterly Earnings Report for MongoDB: What Investors Need to Know

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

On May 31, 2023, MongoDB (NASDAQ: MDB) is set to unveil its latest quarterly earnings report, which will be eagerly anticipated by investors. According to analysts, the company is expected to announce an earnings per share (EPS) of $0.19, a figure that will be closely scrutinized by market watchers. However, it’s not just the EPS that investors will be looking for, as positive guidance or forecasted growth for the next quarter will also be important factors.

It’s worth noting that the impact on stock prices is not always determined by whether a company beats or misses earnings estimates, but also by the guidance provided. In the previous quarter, MongoDB beat EPS estimates by $0.50, but this was followed by an 8.36% drop in share price the following day. This highlights the importance of guidance in influencing investor sentiment.

Despite the recent volatility, MongoDB’s shares have performed well over the last 52 weeks, with an increase of 2.31%. To stay up-to-date with all of MongoDB’s earnings releases, investors can visit the earnings calendar on the NASDAQ website.

MDB (MongoDB Inc.) Stock Update: May 31, 2023 and Earnings Growth Forecast

On May 31, 2023, MDB (MongoDB Inc.) opened at $290.85, a decrease of 1.31% from the previous close of $292.57. The day’s range was between $289.50 and $298.57, with a volume of 28,978 shares traded. The market cap for MDB was $20.1B.

MDB’s earnings growth for the previous year was -5.89%, but it had a significant increase of 27.87% in earnings growth for the current year. The company is expected to have a steady earnings growth rate of 8.00% for the next five years. The revenue growth for the past year was positive at 46.95%.

MDB’s price-to-earnings (P/E) ratio was not available (NM), but its price-to-sales ratio was 11.45, and its price-to-book ratio was 26.89.

MDB’s next reporting date is June 1, 2023, with an EPS forecast of $0.18 for the current quarter. The company operates in the packaged software industry within the technology services sector.

Investors should keep an eye on the company’s upcoming quarterly report to see if it meets the EPS forecast and any updates on its financials.

MDB Stock Update: Analysts Offer 12-Month Price Forecasts Despite Decrease in Median Target Price

On May 31, 2023, MongoDB Inc (MDB) stock closed at a price of 293.24. According to data from CNN Money, the 20 analysts offering 12-month price forecasts for MDB have a median target of 250.00, with a high estimate of 363.00 and a low estimate of 180.00.

Despite the decrease in the median target price, the current consensus among 25 polled investment analysts is to buy stock in MDB. This rating has held steady since May, when it was unchanged from a buy rating.

Looking at the current quarter, MDB reported earnings per share of $0.18 and sales of $347.8M. The reporting date for this quarter is June 01, 2023.

It is important to note that stock performances can be affected by a variety of factors, including market trends, company news, and global events. As such, the performance of MDB stock on May 31, 2023, should not be taken as a definitive indicator of future performance.

Investors should always conduct their own research and analysis before making any investment decisions. This includes considering factors such as the company’s financial health, industry trends, and overall market conditions. By doing so, investors can make informed decisions that align with their investment goals and risk tolerance.

Article originally posted on mongodb google news. Visit mongodb google news



Analyst Ratings and Price Targets for MongoDB – Best Stocks

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

As of May 31, 2023, MongoDB (NASDAQ:MDB) has received a range of analyst ratings over the last quarter. Out of a total of 17 analysts, 5 have given bullish ratings, 10 have given somewhat bullish ratings, 1 has given an indifferent rating, and only 1 has given a bearish rating. These ratings indicate how positive or negative analysts are feeling about the stock.

Over the last 3 months, these analysts have offered 12-month price targets for MongoDB, with an average price target of $264.76. The highest price target is $365.00, while the lowest is $180.00. This range of price targets gives investors an idea of what to expect in terms of future stock performance.

Analyst ratings are crucial for investors as they provide insights into how a company is performing and what its future prospects may be. These ratings are determined by specialists within banking and financial systems who gather information from various sources such as company conference calls, financial statements, and conversations with important insiders.

In addition to ratings, analysts also offer predictions for helpful metrics such as earnings, revenue, and growth estimates to provide further guidance for investors. As of the past month, the average price target for MongoDB has increased by 1.97%, indicating positive sentiment towards the stock.

MDB Stock Analysis: Market Cap, Earnings Growth, Revenue Growth, and More

On May 31, 2023, MDB’s stock price fluctuated between 289.83 and 298.57 with a volume of 13,227 shares traded, considerably lower than the three-month average volume of 1,682,893 shares. MDB’s market cap was $20.1B, and the company’s earnings growth was -5.89% last year, but it has been positive this year, with a growth of 27.87%. MDB’s revenue growth was 46.95% last year, indicating its ability to generate revenue. MDB’s price-to-earnings (P/E) ratio was not available (NM), but the price-to-sales ratio was 11.45, and the price-to-book ratio was 26.89. MDB’s next reporting date is June 1, 2023, and the company is expected to report earnings per share (EPS) of $0.18 this quarter. MDB operates in the packaged software industry, which is highly competitive.

MongoDB Inc (MDB) Stock Analysis: Price Targets, Consensus Ratings, and Earnings Report

On May 31, 2023, MongoDB Inc (MDB) stock closed at a price of 296.14. According to data from CNN Money, the 20 analysts offering 12-month price forecasts for MDB have a median target of 250.00, with a high estimate of 363.00 and a low estimate of 180.00.

Despite the lower price target, the current consensus among 25 polled investment analysts is to buy stock in MongoDB Inc. This rating has held steady since May, when it was unchanged from a buy rating.

Looking at the current quarter’s earnings, MongoDB Inc reported earnings per share of $0.18 and sales of $347.8 million. The company is set to report its earnings on June 01, 2023.

It is important to note that stock performance is subject to a variety of factors, including economic conditions, industry trends, and company-specific news. While the analysts’ price targets and consensus rating provide insight into the market’s expectations for MDB, investors should conduct their own research and consider their own risk tolerance before making investment decisions.

In conclusion, while the median price target for MDB suggests a potential decrease in stock price, the current consensus rating to buy indicates that investors still have confidence in the company’s future growth potential. The upcoming earnings report on June 01, 2023, will provide further insight into the company’s performance and potential impact on stock prices.

Article originally posted on mongodb google news. Visit mongodb google news



MongoDB vs. PostgreSQL vs. ScyllaDB: Tractian’s Experience – The New Stack

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

How the industrial monitoring systems vendor selected the best database for its real-time machine learning environment.

May 31st, 2023

Tractian is a machine intelligence company that provides industrial monitoring systems. Last year, we faced the challenge of upgrading our real-time machine learning (ML) environment and analytical dashboards to support an aggressive increase in our data throughput, as we had expanded our customer base and data volume tenfold.

We recognized that to stay ahead in the fast-paced world of real-time machine learning, we needed a data infrastructure that was flexible, scalable and highly performant. We believed that ScyllaDB would provide us with the capabilities we lacked, enabling us to push our product and algorithms to the next level.

But you probably are wondering why ScyllaDB was the best fit. We’d like to show you how we transformed our engineering process to focus on improving our product’s performance. We’ll cover why we decided to use ScyllaDB, the positive outcomes we’ve seen as a result and the obstacles we encountered during the transition.

How We Compared NoSQL Databases

When talking about databases, many options come to mind. However, we started by deciding to focus on those with the largest communities and applications. This left three direct options: two market giants and a newcomer that has been surprising competitors. We looked at four characteristics of those databases — data model, query language, sharding and replication — and used these characteristics as decision criteria for our next steps.

First off, let’s give you a deeper understanding of the three databases using the defined criteria:

MongoDB NoSQL

  • Data model: MongoDB uses a document-oriented data model where data is stored in BSON (Binary JSON) format. Documents in a collection can have different fields and structures, providing a high degree of flexibility. The document-oriented model enables basically any data modeling or relationship modeling.
  • Query language: MongoDB uses a custom query language called MongoDB Query Language (MQL), which is inspired by SQL but with some differences to match the document-oriented data model. MQL supports a variety of query operations, including filtering, grouping and aggregation.
  • Sharding: MongoDB supports sharding, which is the process of dividing a large database into smaller parts and distributing the parts across multiple servers. Sharding is performed at the collection level, allowing for fine-grained control over data placement. MongoDB uses a config server to store metadata about the cluster, including information about the shard key and shard distribution.
  • Replication: MongoDB provides automatic replication, allowing for data to be automatically synchronized between multiple servers for high availability and disaster recovery. Replication is performed using a replica set, where one server is designated as the primary member and the others as secondary members. Secondary members can take over as the primary member in case of a failure, providing automatic fail recovery.
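To make the document model and MQL concepts above concrete, here is a minimal sketch using the official MongoDB .NET driver (the MongoDB.Driver NuGet package). The database, collection and field names are invented for illustration and are not Tractian’s actual schema.

using System;
using MongoDB.Bson;
using MongoDB.Driver;

// Connect to a (hypothetical) cluster and pick a database and collection.
var client = new MongoClient("mongodb://localhost:27017");
var database = client.GetDatabase("monitoring");
var readings = database.GetCollection<BsonDocument>("sensor_readings");

// Documents in the same collection can have different shapes.
readings.InsertOne(new BsonDocument
{
    { "sensorId", "pump-42" },
    { "timestamp", DateTime.UtcNow },
    { "vibration", 0.42 }
});

// MQL-style filter: recent readings for a single sensor.
var filter = Builders<BsonDocument>.Filter.Eq("sensorId", "pump-42")
           & Builders<BsonDocument>.Filter.Gt("timestamp", DateTime.UtcNow.AddHours(-1));
var recent = readings.Find(filter).ToList();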

ScyllaDB NoSQL

  • Data model: ScyllaDB uses a wide column-family data model, which is similar to Apache Cassandra. Data is organized into columns and rows, with each column having its own value. This model is designed to handle large amounts of data with high write and read performance.
  • Query language: ScyllaDB uses the Cassandra Query Language (CQL), which is similar to SQL but with some differences to match the wide column-family data model. CQL supports a variety of query operations, including filtering, grouping and aggregation.
  • Sharding: ScyllaDB uses sharding, which is the process of dividing a large database into smaller parts and distributing the parts across multiple nodes (and down to individual cores). The sharding is performed automatically, allowing for seamless scaling as the data grows. ScyllaDB uses a consistent hashing algorithm to distribute data across the nodes (and cores), ensuring an even distribution of data and load balancing.
  • Replication: ScyllaDB provides automatic replication, allowing for data to be automatically synchronized between multiple nodes for high availability and disaster recovery. Replication is performed using a replicated database cluster, where each node has a copy of the data. The replication factor can be configured, allowing for control over the number of copies of the data stored in the cluster.
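For comparison, the equivalent access pattern in CQL can be sketched with the DataStax C# driver (the CassandraCSharpDriver package), which also works against ScyllaDB since it speaks the Cassandra protocol. Again, the keyspace, table and column names are invented for illustration.

using System;
using Cassandra;

// Connect to a (hypothetical) ScyllaDB cluster; the driver routes each request
// to the node (and shard) owning the partition key.
var cluster = Cluster.Builder()
    .AddContactPoint("127.0.0.1")
    .Build();
var session = cluster.Connect("monitoring");

// Prepared CQL insert: sensor_id is the partition key, ts a clustering column.
var insert = session.Prepare(
    "INSERT INTO sensor_readings (sensor_id, ts, vibration) VALUES (?, ?, ?)");
session.Execute(insert.Bind("pump-42", DateTimeOffset.UtcNow, 0.42));

// Reads that restrict on the partition key stay within a single partition.
var rows = session.Execute(new SimpleStatement(
    "SELECT ts, vibration FROM sensor_readings WHERE sensor_id = ?", "pump-42"));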

PostgreSQL

  • Data model: PostgreSQL uses a relational data model, which organizes data into tables with rows and columns. The relational model provides strong support for data consistency and integrity through constraints and transactions.
  • Query language: PostgreSQL uses structured query language (SQL), which is the standard language for interacting with relational databases. SQL supports a wide range of query operations, including filtering, grouping and aggregation.
  • Sharding: PostgreSQL does not natively support sharding, but it can be achieved through extensions and third-party tools. Sharding in PostgreSQL can be performed at the database, table or even row level, allowing for fine-grained control over data placement.
  • Replication: PostgreSQL provides synchronous and asynchronous replication, allowing data to be synchronized between multiple servers for high availability and disaster recovery. Replication can be performed using a variety of methods, including streaming replication, logical replication and file-based replication.
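And for the relational option, here is a minimal sketch using the Npgsql driver; the table and columns are assumed to exist and are purely illustrative.

using System;
using Npgsql;

// Connect to a (hypothetical) PostgreSQL instance.
await using var conn = new NpgsqlConnection(
    "Host=localhost;Username=app;Password=secret;Database=monitoring");
await conn.OpenAsync();

// Standard SQL with aggregation; constraints and transactions are enforced by the database.
await using var cmd = new NpgsqlCommand(
    "SELECT sensor_id, avg(vibration) FROM sensor_readings " +
    "WHERE ts > now() - interval '1 hour' GROUP BY sensor_id", conn);

await using var reader = await cmd.ExecuteReaderAsync();
while (await reader.ReadAsync())
    Console.WriteLine($"{reader.GetString(0)}: {reader.GetDouble(1)}");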

What Were Our Conclusions of the Benchmark?

In terms of performance, ScyllaDB is optimized for high performance and low latency, using a shared-nothing architecture and multithreading to provide high throughput and low latencies.

MongoDB is optimized for ease of use and flexibility, offering a more accessible and developer-friendly experience and has a huge community to help with future issues.

PostgreSQL, on the other hand, is optimized for data integrity and consistency, with a strong emphasis on transactional consistency and ACID (atomicity, consistency, isolation, durability) compliance. It is a popular choice for applications that require strong data reliability and security. It also supports various data types and advanced features such as stored procedures, triggers and views.

When choosing between PostgreSQL, MongoDB and ScyllaDB, it is essential to consider your specific use case and requirements. If you need a powerful and reliable relational database with advanced data management features, then PostgreSQL may be the better choice. However, if you need a flexible and easy-to-use NoSQL database with a large ecosystem, then MongoDB may be the better choice.

But we were looking for something really specific: a highly scalable and high-performance NoSQL database. The answer was simple: ScyllaDB is a better fit for our use case.

MongoDB vs. ScyllaDB vs. PostgreSQL: Comparing Performance

After the research process, our team was skeptical about using just written information to make a decision that would shape the future of our product. We started digging to be sure about our decision in practical terms.

First, we built an environment to replicate our data acquisition pipeline, but we did it aggressively. We created a script to simulate a data flow bigger than the current one. At the time, our throughput was around 16,000 operations per second, and we tested the database with 160,000 operations per second (so basically 10x).

To be sure, we also tested the write and read response times for different formats and data structures; some were similar to the ones we were already using at the time.

You can see the results below, comparing the new optimal ScyllaDB configuration against our old MongoDB setup under the tests described above:

MongoDB vs. ScyllaDB P90 Latency (Lower Is Better)

MongoDB vs. ScyllaDB Request Rate/Throughput (Higher Is Better)

The results were overwhelming. With similar infrastructure costs, we achieved much better latency and capacity; the decision was clear and validated. We had a massive database migration ahead of us.

Migrating from MongoDB to ScyllaDB NoSQL

As soon as we decided to start the implementation, we faced real-world difficulties. Some things are important to mention.

In this migration, we added new information and formats, which affected all production services that consume this data directly or indirectly. They would have to be refactored by adding adapters in the pipeline or recreating part of the processing and manipulation logic.

During the migration journey, both services and databases had to be duplicated, since it is not possible to use an outage event to swap between old and new versions to validate our pipeline. It’s part of the issues that you have to deal with in critical real-time systems: An outage is never permitted, even if you are fixing or updating the system.

The reconstruction process also needed to cover the data science models, so that they could take advantage of the new format, increasing accuracy and computational performance.

Given these guidelines, we created two groups. One was responsible for administering and maintaining the old database and architecture. The other group performed a massive reprocessing of our data lake and refactored the models and services to handle the new architecture.

The complete process, from designing the structure to the final deployment and swap of the production environment, took six months. During this period, adjustments and significant corrections were necessary. You never know what lessons you’ll learn along the way.

NoSQL Migration Challenges

ScyllaDB can achieve this kind of performance because it is designed to take advantage of high-end hardware and very specific data modeling. The final results were astonishing, but it took some time to achieve them. Hardware has a significant impact on performance. ScyllaDB is optimized for modern multicore processors and uses all available CPU cores to process data. It uses hardware acceleration technologies such as AVX2 (Advanced Vector Extensions 2) and AES-NI (Advanced Encryption Standard New Instructions); it also depends on the type and speed of storage devices, including solid-state disks and NVMe (nonvolatile memory express) drives.

In our early testing, we messed up some hardware configurations, leading to performance degradation. When those problems were fixed, we stumbled upon another problem: data modeling.

ScyllaDB uses the Cassandra data model, which heavily dictates the performance of your queries. If you make incorrect assumptions about the data structures, queries or the data volume, as we did at the beginning, the performance will suffer.

In practice, the first proposed data format ended up exceeding the maximum size recommended for a ScyllaDB partition in some cases, which made the database perform poorly.

Our main difficulty was understanding how to translate our old data modeling to one that would perform on ScyllaDB. We had to restructure the data into multiple tables and partitions, sometimes duplicating data to achieve better performance.
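To make that concrete, below is a hedged sketch of the kind of restructuring this implies: bucketing a time series by day so that no single partition grows without bound. It reuses the CQL session from the earlier sketch, and the table and column names are illustrative rather than Tractian’s actual schema.

// Composite partition key (sensor_id, day) keeps each partition bounded in size.
session.Execute(new SimpleStatement(@"
    CREATE TABLE IF NOT EXISTS sensor_readings_by_day (
        sensor_id text,
        day       date,
        ts        timestamp,
        vibration double,
        PRIMARY KEY ((sensor_id, day), ts)
    ) WITH CLUSTERING ORDER BY (ts DESC)"));

// Queries always supply the full partition key, so each read touches exactly
// one bounded partition instead of scanning an oversized one.
var dayRows = session.Execute(new SimpleStatement(
    "SELECT ts, vibration FROM sensor_readings_by_day WHERE sensor_id = ? AND day = ?",
    "pump-42", new LocalDate(2023, 5, 31)));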

Lessons Learned: Comparing and Migrating NoSQL Databases

In short, we learned three lessons during this process: Some came from our successes and others from our mistakes.

When researching and benchmarking the databases, it became clear that many of the specifications and functionalities present in the different databases have specific applications. Your specific use case will dictate the best database for your application. And that truth is only discovered by carrying out practical tests and simulations of the production environment in stressful situations. We invested a lot of time, and our choice to use the most appropriate database paid off.

When starting a large project, it is crucial to be prepared for a change of route in the middle of the journey. If you developed a project that did not change after its conception, you probably didn’t learn anything during the construction process, or you didn’t care about the unexpected twists. Planning cannot completely predict all real-world problems, so be ready to adjust your decisions and beliefs along the way.

You shouldn’t be afraid of big changes. Many people were against the changes we were proposing because of the risk they brought and the inconvenience they caused developers (replacing a tool the team already knew well with one that was completely unknown to them).

Ultimately, the decision was driven by its impact on our product improvements, not by its impact on our engineering team, even though it was one of the most significant engineering changes we have made to date.

It doesn’t matter what architecture or system you are using. The real concern is whether it will be able to take your product into a bright future.

This is, in a nutshell, our journey in building one of the bridges for the future of Tractian’s product. If you have any questions or comments, feel free to contact us.

TNS owner Insight Partners is an investor in: Pragma.

Article originally posted on mongodb google news. Visit mongodb google news



Baird Analyst Raises MongoDB Price Target to 290 with Outperform Rating – Best Stocks

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

On May 31, 2023, Baird analyst William Power announced his continued endorsement of MongoDB (NASDAQ:MDB) with an Outperform rating. In addition, he raised the company’s price target from $230 to $290, indicating his confidence in the company’s future growth. MongoDB is a leading provider of a versatile database platform that caters to a wide range of industries. As of March 9, 2023, the company’s shares closed at $194.08, with a market capitalization of $15.31 billion and 69.29 million outstanding shares. According to TipRanks, 100 analysts have provided price forecasts for MongoDB, with a median target of $369.10, a high estimate of $700.00, and a low estimate of $170.00. The average price target for the company is $256.26, with a high forecast of $369.10 and a low forecast of $170.00.

MDB Stock Analysis: Strong Revenue Growth and Expected Earnings Growth Make it an Attractive Investment Option

On May 31, 2023, MDB stock opened at $291.46, up from the previous day’s close of $283.36. The day’s range was between $285.87 and $296.32, with a volume of 1,531,394 shares traded. The market cap for the company was $20.1 billion.

MDB is a technology services company that operates in the packaged software industry. The company has reported a revenue growth of 46.95% in the last year, and its earnings growth for this year is expected to be 27.87%. Over the next five years, the company is expected to have an earnings growth of 8.00%.

MDB’s price-to-earnings (P/E) ratio is not available, but its price-to-sales ratio is 11.45, and its price-to-book ratio is 26.89. These ratios indicate that the company’s stock is trading at a premium compared to its peers in the industry.

MDB’s next reporting date is June 1, 2023, and the company is expected to report earnings per share (EPS) of $0.18 for this quarter. The company’s strong revenue growth and expected earnings growth make it an attractive investment option for those looking for long-term growth potential in the technology sector.

MongoDB Inc (MDB) Stock Analysis: Strong Growth Outlook and Buy Recommendation from Investment Analysts

On May 31, 2023, MongoDB Inc (MDB) was trading at a price of $292.57. According to the data source, CNN Money, the 20 analysts offering 12-month price forecasts for MDB have a median target of $250.00, with a high estimate of $363.00 and a low estimate of $180.00. The median estimate represents a -14.55% decrease from the last price of $292.57. The current consensus among 25 polled investment analysts is to buy stock in MongoDB Inc. On June 1, 2023, MDB is scheduled to report its current quarter earnings. The company has a strong growth outlook, driven by its innovative product offerings and expanding customer base. Investors should keep an eye on the stock and consider buying it for long-term gains.

Article originally posted on mongodb google news. Visit mongodb google news



Microsoft Authentication Library 4.54.0 Supports Managed Identities

MMS Founder
MMS Edin Kapic

Article originally posted on InfoQ. Visit InfoQ

Version 4.54.0 of Microsoft Authentication Library (MSAL) for .NET brings official support for using managed identities when authenticating services that run in Azure. Furthermore, it features better error information for UWP applications and several bug fixes.

The most important feature added in this version is the general availability of support for managed identities in Azure. Managed identities are Azure Active Directory identities that are automatically provisioned by Azure and can be used from workloads running in Azure without explicit authentication with application secrets and keys.

Using managed identities instead of standalone application identities can significantly lower the developer time needed to authenticate services in Azure when accessing other resources. As Jimmy Bogard, creator of MediatR and AutoMapper libraries, mentions in his tweet from July last year, using managed identities in the same MSAL library fixes the problem of using two libraries for the same task.

In the following code sample, developers can specify that the application uses a system-assigned managed identity that Azure creates automatically for each resource and then retrieve the authentication token by calling the AcquireTokenForManagedIdentity method.

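// Build an application object bound to the system-assigned managed identity of the Azure resource the code runs on.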
IManagedIdentityApplication mi = ManagedIdentityApplicationBuilder.Create(ManagedIdentityId.SystemAssigned)
    .Build();

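// Request a token for the target resource from the managed identity endpoint; no client secret or certificate is needed.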
AuthenticationResult result = await mi.AcquireTokenForManagedIdentity(resource)
    .ExecuteAsync()
    .ConfigureAwait(false);

Users can also create user-assigned managed identities as resources in Azure and assign them to the services that should use them, allowing multiple resources to have the same managed identity.
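Building on the sample above, a minimal sketch for the user-assigned case might look like the following; the GUID is a placeholder for the client ID of the identity created in Azure, and resource is the same variable used in the sample above.

// Placeholder client ID of a user-assigned managed identity created in Azure
// and assigned to this resource.
IManagedIdentityApplication miUserAssigned = ManagedIdentityApplicationBuilder
    .Create(ManagedIdentityId.WithUserAssignedClientId("00000000-0000-0000-0000-000000000000"))
    .Build();

AuthenticationResult userAssignedResult = await miUserAssigned
    .AcquireTokenForManagedIdentity(resource)
    .ExecuteAsync()
    .ConfigureAwait(false);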

The support for managed identities in MSAL was added in December 2022 for version 4.49.0. It was an experimental feature without support for using it in production environments. Half a year later, version 4.54.0 is generally available and can be used in production workloads.

Another added feature in this version is the automatic refresh of the authentication tokens for confidential clients that use an app token provider extension called WithAppTokenProvider. From the documentation comments, it seems that this enhancement was required for wrapping the Azure SDK usage of the library to allow for managed identity authentication.

The MsalException class has a new property called AdditionalExceptionData, which holds any extra error information coming from the underlying providers. Currently, the property is only filled for exceptions from the Windows 10/11 Web account manager (WAM) broker mechanism. The WAM broker is only used in Universal Windows Platform (UWP) applications. Windows applications that use MAUI don’t use the WAM broker, as the integration happens at the .NET 6 runtime level.

For telemetry purposes, there is a new enum value for long-running requests that use OBO (on-behalf-of) authentication flows. It helps with the correct assessment of authentication failures.

Among the bug fixes in this release, two are related to iOS-specific errors. When using ahead-of-time compilation (AOT), JSON deserialisation with overflow properties would break; this regression was introduced in version 4.52.0 and is now fixed. The other iOS fix addresses incorrect referencing of the Microsoft.iOS library, caused by the use of several internal package repositories for MAUI applications.

Finally, a small bug in the interactive token retrieval (using the AcquireTokenInteractive method) was fixed. It failed when the code used the WithAccount method to preselect a user and the user then chose a different account from the Microsoft login dialogue’s account chooser. Now the code checks the returned user account and succeeds even if another account is selected in the UI.

Two weeks after the release of version 4.54.0, the team released an updated build with a few bug fixes and a minor feature that exposes the cache details in the telemetry logged automatically by MSAL. This build has version number 4.54.1.

The MSAL GitHub project has 170 open issues and 2048 closed issues at the moment. The updated MSAL library is distributed as a NuGet package called Microsoft.Identity.Client.



Article: Effective Test Automation Approaches for Modern CI/CD Pipelines

MMS Founder
MMS Craig Risi

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • Shifting left is popular in domains such as security, but it is also essential for achieving better test automation for CI/CD pipelines
  • By shifting left, you can design for testability upfront and get testing experts involved earlier in your unit tests leading to a better result
  • Not all your tests should be automated for your CI/CD pipelines; instead focus on the tests that return the best value while minimizing your CI/CD runtimes
  • Other tests can then be run on a scheduled basis to avoid cluttering and slowing the main pipeline
  • Become familiar with the principles of good test design for writing more efficient and effective tests

The rise of CI/CD has had a massive impact on the software testing world. With developers requiring pipelines to provide quick feedback on whether their software update has been successful or not, it has forced many testing teams to revisit their existing test automation approaches and find ways of being able to speed up their delivery without compromising on quality. These two factors often contradict each other in the testing world as time is often the biggest enemy in a tester’s quest to be as thorough as possible in achieving their desired testing coverage.

So, how do teams deal with this significant change to ensure they are able to deliver high-quality automated tests while delivering on the expectation that the CI pipeline returns feedback quickly? Well, there are many different ways of looking at this, but what is important to understand is that the solutions are less technical and more cultural – it is the approach to testing that needs to shift, rather than big technical enhancements being made to the testing frameworks.

Shifting Left

Perhaps the most obvious thing to do is to shift left. The idea of “shifting left” (where testing is moved earlier in the development cycle – primarily at the design and unit testing level) is already common in the industry, pushed by many organizations, and is becoming increasingly standard practice. Having a strong focus on unit tests is a good way of testing code quickly and providing fast feedback. After all, unit tests execute in a fraction of the time (as they can run as part of the build and don’t require any further integration with the rest of the system) and can provide good testing coverage when done right.

I’ve seen many testers shy away from the notion of unit testing because it means writing tests for a very small component of the code, and there is a danger that key things will be missed. This is often just a fear due to a lack of visibility in the process or a lack of understanding of unit tests, rather than a failure of unit tests themselves. Having a strong base of unit tests works because they execute quickly as the code builds in the CI pipeline. It makes sense to have as many as possible and to cover as many types of scenario as possible.

The biggest problem is that many teams don’t always know how to get it right. Firstly, unit testing shouldn’t be treated as some check box activity but rather approached with the proper analysis and commitment to test design that testers would ordinarily apply. And this means that rather than just leaving unit testing in the hands of the developers, you should get testers involved in the process. Even if a tester is not strong in coding, they can still assist in identifying what parameters to look for in testing and the right places to assert to deliver the right results for the integrated functionality to be tested later.
 
Excluding your testing experts from being involved in the unit testing approach means it’s possible unit tests could miss some key validation areas. This is often why you might hear many testers give unit tests a bad rap. It’s not because unit testing is ineffectual, but simply that the tests often didn’t cover the right scenarios.

A second benefit of involving testers early is adding visibility to the unit testing effort. Teams probably waste a lot of time (and therefore money) duplicating effort when testers end up manually re-testing something that automated tests already cover. That’s not to say independent validation shouldn’t occur, but it shouldn’t be excessive if scenarios have already been covered. Instead, testers can focus on providing better exploratory testing and on directing their own automation efforts at integration testing the edge cases that might otherwise never have been covered.

It’s all about design and preparation

To do this effectively though requires a fair amount of deliberate effort and design. It’s not just about making an effort to focus more on the unit tests and perhaps getting a person with strong test analysis skills in to ensure test scenarios are suitably developed. It also requires user stories and requirements to be more specific to allow for appropriate testing. Often user stories can end up high-level and only focus on the detail from a user level and not a technical level. How individual functions should behave and interact with their corresponding dependencies needs to be clear to allow for good unit testing to take place.

Much of the criticism that befalls unit testing from the testing community is the poor integration it offers. Just because a feature works in isolation doesn’t mean it will work in conjunction with its dependencies. This is often why testers find so many defects early in their testing effort. This doesn’t need to be the case, as more detailed specifications can lead to more accurate mocking allowing for the unit tests to behave realistically and provide better results. There will always be “mocked” functionality that is not accurately known or designed, but with enough early thought, this amount of rework is greatly reduced.

Design is not just about unit tests though. One of the biggest barriers to test automation executing directly in the pipeline is that the team dealing with the larger integrated system only starts much of its testing and automation effort once the code has been deployed into a bigger environment. This wastes critical time in the development process, as certain issues will only be discovered later; with enough detail available up front, testers could at least start writing the majority of their automated tests while the developers are still coding on their side.

This doesn’t mean that manual verification, exploratory testing, and actually using the software shouldn’t take place. Those are critical parts of any testing process and are important steps to ensuring software behaves as desired. These approaches are also effective at finding faults with the proposed design. However, automating the integration tests allows the process to be streamlined. These tests can then be included in the initial pipelines thereby improving the overall quality of the delivered product by providing quicker feedback to the development team of failures without the testing team even needing to get involved.

So what actually needs to be tested then?

I’ve spoken a lot about specific approaches to design and shifting left to achieve the best testing results. But you still can’t go ahead and automate everything you test, because it’s simply not feasible and adds too much to the execution time of the CI/CD pipelines. So knowing which scenarios should be unit tested and which should be integration tested for automation purposes is crucial, while trying to avoid unnecessary duplication of the testing effort.

Before I dive into these different tests, it’s worth noting that while the aim is to remove duplication, there is likely to always be a certain level of duplication that will be required across tests to achieve the right level of coverage. You want to try and reduce it as much as possible, but erring on the side of duplication is safer if you can’t figure out a better way to achieve the test coverage you need.  

Areas to be unit tested

When it comes to building your pipeline, your unit tests and scans should typically fall into the CI portion of your pipeline, as they can all be evaluated as the code is being built.

Entry and exit points: All code receives input and then provides an output. Essentially, what you are looking to unit test is everything that a piece of code can receive, and then you must ensure it sends out the correct output. By catching everything that flows through each piece of code in the system, you greatly reduce the number of failures that are likely to occur when they are integrated as a whole.

Isolated functionality: While most code will operate on an integrated level, there are many functions that will handle all computation internally. These can be unit-tested exclusively and teams should aim to hit 100% unit test coverage on these pieces of code. I have mostly come across isolated functions when working in microservice architectures where authentication or calculator functions have no dependencies. This means that they can be unit tested with no need for additional integration.

Boundary value validations: Code behaves the same when it receives valid or invalid arguments, regardless of whether it is entered from a UI, some integrated API, or directly through the code. There is no need for testers to go through exhaustive scenarios when much of this can be covered in unit tests.
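As a brief illustration, parameterized unit tests pin down boundary values once at the unit level, so they don’t have to be re-checked exhaustively through the UI or API. The sketch below uses xUnit; the validator is an invented example.

using Xunit;

// Invented example: an input validator with clear boundaries.
public static class ThresholdValidator
{
    // Accepts percentages in the inclusive range 0–100.
    public static bool IsValid(int percent) => percent >= 0 && percent <= 100;
}

public class ThresholdValidatorTests
{
    [Theory]
    [InlineData(0, true)]     // lower boundary
    [InlineData(100, true)]   // upper boundary
    [InlineData(-1, false)]   // just below
    [InlineData(101, false)]  // just above
    public void IsValid_BoundaryValues_ReturnsExpectedResult(int input, bool expected)
    {
        Assert.Equal(expected, ThresholdValidator.IsValid(input));
    }
}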

Clear data permutations: When the data inputs and outputs are clear, it makes that code or component an ideal candidate for a unit test. If you’re dealing with complex data permutations, then it is best to tackle these at an integration level. The reason for this is that complex data is often difficult to mock, slow to process, and will slow down your coding pipeline.

Security and performance: While the majority of load, performance, and security testing happens at an integration level, these can also be tested at a unit level. Each piece of code should be able to handle invalid authentication, redirection, or SQL/code injection attempts and still perform efficiently, and unit tests can be created to validate against these. After all, a system’s security and performance are only as effective as its weakest part, so ensuring there are no weak parts is a good place to start.

Areas for integration automation

These are tests that will typically run post-deployment of your code into a bigger environment – though it doesn’t have to be a permanent environment, and something utilizing containers works equally well. I’ve seen many teams still try to test everything in this phase, though, and this can lead to a very long portion of your pipeline execution – not great if you’re looking to deploy to production regularly each day.

So, the important thing is to automate only those areas that your unit tests cannot cover satisfactorily, while also focusing on functionality and performance in your overall test design. Some design principles that I give later in this article will help with this.

Positive integration scenarios: We still need to automate integration points to ensure they work correctly. However, the trick is to not focus too much on exhaustive error validation, as these are often triggered by specific outputs that can be unit tested. Rather focus on ensuring successful integration takes place.

Test backend over frontend: Where possible, focus your automation effort on backend components rather than frontend components. While the user might interact with the front end more often, it is typically not where most of the functional complexity lies, and backend testing is a lot faster and therefore better for your test automation execution.
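A minimal sketch of such a backend-focused check is shown below, hitting a hypothetical status endpoint over HTTP instead of driving the UI; the URL and route are placeholders.

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Xunit;

public class StatusEndpointTests
{
    // Hypothetical base address of the deployed backend under test.
    private static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new Uri("https://staging.example.com")
    };

    [Fact]
    public async Task GetStatus_ServiceDeployed_ReturnsOk()
    {
        // Positive integration scenario: the deployed service is up and responding.
        HttpResponseMessage response = await Client.GetAsync("/api/status");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}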

Security: One of the common mistakes is that teams rely on security scans for the majority of their security testing and then don’t automate some other critical penetration tests that are performed on the software. And while some penetration tests can’t be executed in a pipeline effectively, many can and these should be automated and run regularly given their importance, especially when dealing with any functionality that covers access, payment, or data privacy. These are areas that can’t be compromised and should be covered.

Are there automated tests that shouldn’t be included in the CI/CD pipelines?

When it comes to automation, it’s not just about understanding what to automate, but also what not to automate, or even if there are tests that are automated, they shouldn’t always land in your CI/CD pipelines. And while the goal is to always shift left as much as possible and avoid these areas, for some architectures it’s not always possible and there may be some additional level of validation required to satisfy the needed test coverage.

This doesn’t mean that tests shouldn’t be automated or placed in pipelines, rather just that they should be separated from your CI/CD processes and rather executed on a daily basis as part of a scheduled execution and not part of your code delivery.

End-to-end tests with high data requirements: Anything that requires complex data scenarios to test should be reserved for execution in a proper test environment outside of a pipeline. While these tests can be automated, they are often too complex or specific for regular execution in a pipeline, plus will take a long time to execute and validate, making them not ideal for pipelines.

Visual regression: Outside of functional testing, it is important to regularly perform visual regression testing against any site UI to ensure it looks consistent across a variety of devices, browsers, and resolutions. This is an aspect of testing that often gets overlooked. However, as it doesn’t deal with actual functional behavior, it is usually best executed outside of your core CI/CD pipelines, though it remains a requirement before major releases or UI updates.

Mutation testing: Mutation testing is a fantastic way to check the coverage of your unit testing efforts: it adjusts different decisions in your code and shows you what your tests miss. However, the process is quite lengthy and is best done as part of a review process rather than forming part of your pipelines.

Load and stress testing: While it is important to test the performance of different parts of code, you don’t want to put a system under any form of load or stress in a pipeline. To best do this testing, you need a dedicated environment and specific conditions that will stress the limits of your application under test. Not the sort of thing you want to do as part of your pipelines.   

Designing effective tests

So, it’s clear that we need a shift-left approach that relies heavily on unit tests with high coverage, but then also a good range of tests covering all areas to get the quality that is likely needed. It still seems like a lot though and there is always the risk that the pipelines can still take a considerable time to execute, especially at a CD level where the more time-intensive integration tests will be executed post-code deployment.

The way you design your tests will also help make this effective. Automating unnecessary tests is a big waste of time, but so are inefficiently written tests. The biggest problem here is that testers often don’t fully understand the efficiency of their test automation, focusing on execution rather than looking for the most processor- and memory-efficient way of doing it.

The secret to making all tests work is simplicity. Automated tests shouldn’t be complicated: perform an action, and get a response. It is important to stick to that when designing your tests. The following attributes are important to follow when designing your tests to keep them both simple and performant.

1. Naming your tests

You might not think naming tests is important, but it matters when it comes to the maintainability of the tests. While test names might not have anything to do with the test functionality or speed of execution, they do help others know what each test does. So when failures occur in a test or something needs to be fixed, good names make the maintenance process a lot quicker, and that is important when wading through the many thousands of tests your pipeline is likely to have.

Tests are useful for more than just making sure that your code works, they also provide documentation. Just by looking at the suite of unit tests, you should be able to infer the behavior of your code. Additionally, when tests fail, you can see exactly which scenarios did not meet your expectations.

The name of your test should consist of three parts:

  • The name of the method being tested
  • The scenario under which it’s being tested
  • The behavior expected when the scenario is invoked

By using these naming conventions, you ensure that it’s easy to identify what any test or code is supposed to do while also speeding up your ability to debug your code.
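For example, applied to an invented CalculateDiscount method, test names following this three-part convention might look like the sketch below (the scenarios are purely illustrative).

using Xunit;

public class DiscountCalculatorTests
{
    // Convention: <MethodUnderTest>_<Scenario>_<ExpectedBehaviour>
    [Fact]
    public void CalculateDiscount_OrderBelowMinimumAmount_ReturnsZero() { /* body omitted */ }

    [Fact]
    public void CalculateDiscount_NullOrder_ThrowsArgumentNullException() { /* body omitted */ }
}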

2. Arranging your tests

Readability is one of the most important aspects of writing a test. While it may be possible to combine some steps and reduce the size of your test, the primary goal is to make the test as readable as possible. A common pattern to writing simple, functional tests is “Arrange, Act, Assert”. As the name implies, it consists of three main actions:

  • Arrange your objects, by creating and setting them up in a way that readies your code for the intended test
  • Act on an object
  • Assert that something is as expected

By clearly separating each of these actions within the test, you highlight:

  • The dependencies required to call your code/test
  • How your code is being called, and
  • What you are trying to assert.

This makes tests easy to write, understand and maintain while also improving their overall performance as they perform simple operations each time.
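A small sketch of the pattern in practice (the StringCalculator type is invented so the example is self-contained):

using Xunit;

// Invented class so the example compiles on its own.
public class StringCalculator
{
    public int Add(string numbers)
        => string.IsNullOrEmpty(numbers) ? 0 : int.Parse(numbers);
}

public class StringCalculatorTests
{
    [Fact]
    public void Add_EmptyString_ReturnsZero()
    {
        // Arrange: create and set up the object under test.
        var calculator = new StringCalculator();

        // Act: call the code being tested.
        int result = calculator.Add(string.Empty);

        // Assert: verify the expected outcome.
        Assert.Equal(0, result);
    }
}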

3. Write minimally passing tests

Too often the people writing automated tests are trying to utilize complex coding techniques that can cater to multiple different behaviors, but in the testing world, all it does is introduce complexity. Tests that include more information than is required to pass the test have a higher chance of introducing errors and can make the intent of the test less clear. For example, setting extra properties on models or using non-zero values when they are not required, only detracts from what you are trying to prove. When writing tests, you want to focus on the behavior. To do this, the input that you use should be as simple as possible.

4. Avoid logic in tests

When you introduce logic into your test suite, the chance of introducing a bug through human error or false results increases dramatically. The last place that you want to find a bug is within your test suite because you should have a high level of confidence that your tests work. Otherwise, you will not trust them and they do not provide any value.

When writing your tests, avoid manual string concatenation and logical conditions such as if, while, for, or switch, because this will help you avoid unnecessary logic. Similarly, any form of calculation should be avoided – your test should rely on an easily identifiable input and clear output – otherwise, it can easily become flaky based on certain criteria – plus it adds to maintenance as when the code logic changes, the test logic will also need to change.

Another important thing here is to remember that pipeline tests should execute quickly and logic tends to cost more processing time. Yes, it might seem insignificant at first, but with several hundreds of tests, this can add up.

5. Use mocks and stubs wherever possible

A lot of testers might frown on this, as relying heavily on mocks and stubs can be seen as avoiding the true integrated behavior of an application. That is a fair concern for end-to-end testing, which you still want to automate, but true integration is not ideal for pipeline execution: not only does it slow the pipeline down, it also creates flakiness in your test results when external services are unavailable or out of sync with your changes.

The best way to make your test results more reliable, while taking greater control of your testing effort and improving coverage, is to build mocking into your test framework and rely on stubs to supply complex data patterns rather than depending on an external system to do it for you.
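As a brief sketch, here is how a stubbed dependency keeps a test fast and deterministic, using the Moq library; the IPricingService interface and CheckoutService class are invented for the example.

using Moq;
using Xunit;

// Invented dependency that would normally call an external service.
public interface IPricingService
{
    decimal GetPrice(string productId);
}

public class CheckoutService
{
    private readonly IPricingService _pricing;
    public CheckoutService(IPricingService pricing) => _pricing = pricing;
    public decimal Total(string productId, int quantity) => _pricing.GetPrice(productId) * quantity;
}

public class CheckoutServiceTests
{
    [Fact]
    public void Total_ThreeItems_MultipliesStubbedPrice()
    {
        // The stub returns a canned price, so no external call is made.
        var pricing = new Mock<IPricingService>();
        pricing.Setup(p => p.GetPrice("sku-1")).Returns(10m);

        var checkout = new CheckoutService(pricing.Object);

        Assert.Equal(30m, checkout.Total("sku-1", 3));
    }
}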

6. Prefer helper methods to Setup and Teardown

In unit testing frameworks, a Setup function is called before each and every unit test within your test suite. Each test will generally have different requirements in order to get the test up and running. Unfortunately, Setup forces you to use the exact same requirements for each test. While some may see this as a useful tool, it generally ends up leading to bloated and hard-to-read tests. If you require a similar object or state for your tests, rather use an existing helper method than leveraging Setup and Teardown attributes.

This will help by introducing:

  • Less confusion when reading the tests, since all of the code is visible from within each test.
  • Less chance of setting up too much or too little for the given test.
  • Less chance of sharing state between tests which would otherwise create unwanted dependencies between them.
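A short sketch of the idea: instead of a shared Setup method, each test calls a helper that builds exactly the state it needs (the Order type is invented so the example is self-contained).

using Xunit;

public class OrderTests
{
    // Helper method instead of a shared Setup: each test states exactly what it needs.
    private static Order CreateOrder(decimal total = 100m, bool paid = false)
        => new Order { Total = total, Paid = paid };

    [Fact]
    public void MarkAsPaid_UnpaidOrder_SetsPaidFlag()
    {
        var order = CreateOrder(paid: false);

        order.MarkAsPaid();

        Assert.True(order.Paid);
    }
}

// Invented type, included only so the example compiles.
public class Order
{
    public decimal Total { get; set; }
    public bool Paid { get; set; }
    public void MarkAsPaid() => Paid = true;
}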

7. Avoid multiple asserts

When you introduce multiple assertions into a test case, there is no guarantee that all of them will be executed. The test will stop at the first assertion that fails, leaving the rest unexecuted; those remaining checks are effectively reported as failing even if they would have passed. The result is that the location of the failure is unclear, which also wastes debugging time.

When writing your tests, try to only include one assert per test. This helps to ensure that it is easy to pinpoint exactly what failed and why. Teams can easily make the mistake of trying to write as few tests as possible that achieve high coverage, but in the end, all it does is make future maintenance a nightmare.

This ties into removing test duplication as well. You don’t want to repeat tests through the pipeline execution and making what they test more visible helps the team to ensure this objective can be achieved.

8. Treat your tests like production code

While test code may not be executed in a production setting, it should be treated just the same as any other piece of code. And that means it should be updated and maintained on a regular basis. Don’t write tests and assume that everything is done. You will need to put in the work to keep your tests functional and healthy, while also keeping all libraries and dependencies up to date. You don’t want technical debt in your code – don’t have it in your tests either.

9. Make test automation a habit

Okay, so this last one is less of an actual design principle and more of a tip on good test writing. As with all things coding-related, knowing the theory is not enough; it takes practice to get good and build a habit, so these testing practices will take time to get right and feel natural. The skill of writing a proper test is incredibly undervalued, though, and adds a lot of value to the quality of the code, so the extra effort required is certainly worth it.

Conclusion – it’s all about good test design

As you can see, test automation across your full stack can still work within your pipeline and provide you with a high level of regression coverage while not breaking or slowing down your pipeline unnecessarily. What it does require though is a good test design to work effectively and so the unit and automated tests will need to be well-written to be of most value.

A good DevOps testing strategy requires a solid base of unit tests to provide most of the coverage, with mocking helping to drive the rest of the automation effort, leaving only a few end-to-end automated tests to ensure everything works together and giving your team confidence that the pipeline tests will successfully deliver on their quality needs.
