Mobile Monitoring Solutions


Learnings from Measuring Psychological Safety

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

Asking people how they feel about taking certain types of risks can give insight into the level of psychological safety and help uncover issues. Discussing the answers can strengthen the level of safety of more mature teams and help less mature teams to understand how they could improve.

Jitesh Gosai shared his experience with psychological safety in his talk at Lean Agile Scotland 2022.

Measuring psychological safety levels in teams is challenging as it’s hard to judge if someone feels like they are taking an interpersonal risk. Most people don’t think about risk-taking this way, and usually take a chance based on their situation and feelings, Gosai said.

One way to do it is to use a proxy measure of how people think about taking certain types of risks and if they feel supported to do so in their team:

While this still doesn’t give you a definitive scale of psychological safety in a team, or even if people will take interpersonal risks, it can at least indicate if you have any issues that would prevent people from doing so.

Using this scale is a very simple mechanism for understanding where a team is with psychological safety, Gosai mentioned. However, how people interpret questions can vary, and how they respond can depend on how they feel that day.

Gosai mentioned two risks that could come from trying to measure psychological safety in teams. One is that if the responses are not anonymous, an individual could be singled out for pulling down the team average. This could make them feel they are being scrutinised for speaking up, potentially making them less likely to do so in the future. Another consequence of this could be that if others see this happening, they too may hold back, Gosai said.

The other risk Gosai mentioned is if these ratings are used to compare teams and their leads, rather than used to offer help to teams. Then they become just another metric to try and game, giving an even more skewed view of the team.

Gosai mentioned that most teams, instead of trying to measure psychological safety, would benefit from just starting to build a shared understanding of what it is first:

It could seem wasteful for teams with high levels, but they still stand to benefit by strengthening what they have. In contrast, those teams with low levels have everything to gain.

InfoQ interviewed Jitesh Gosai about measuring psychological safety.

InfoQ: How do you measure psychological safety?

Jitesh Gosai: Amy Edmondson provided a set of sample questions in her book The Fearless Organization that team members can answer using a seven-point Likert scale, which can be used as a starting point to gauge where teams are with psychological safety.

Based on how they respond, leaders can see if they have low, medium or high levels of psychological safety. For instance, if responses cluster at the low end of the scale, that points to low levels of psychological safety, and leaders should take immediate action.

InfoQ: What did you learn from measuring the level of psychological safety?

Gosai: An ideal situation would be that questions related to psychological safety are incorporated into regular team health check questionnaires so that teams can see if there is an overall trend.

If they are trending down, then the team can look to see what has happened recently to cause this, but also, if they are trending up, they can see what they improved to do so. This way, the team can see the benefit of answering the questions as accurately as possible, rather than just another tick-box exercise.
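The kind of trend-watching described above can be sketched in a few lines. This is only an illustrative aggregation of anonymous Likert responses; the classification thresholds and data here are hypothetical, not from the talk:

```python
# Illustrative sketch: aggregate anonymous 7-point Likert responses per
# health-check round and report the overall trend. Thresholds are hypothetical.

def round_average(responses):
    """Mean score for one survey round (list of 1-7 Likert answers)."""
    return sum(responses) / len(responses)

def classify(avg, low=3.0, high=5.5):
    # Hypothetical cut-offs for low/medium/high psychological safety.
    if avg < low:
        return "low"
    if avg < high:
        return "medium"
    return "high"

def trend(averages):
    """Direction of change between the first and last survey rounds."""
    if len(averages) < 2 or averages[-1] == averages[0]:
        return "flat"
    return "up" if averages[-1] > averages[0] else "down"

rounds = [[4, 5, 3, 4], [5, 5, 4, 6], [6, 5, 6, 6]]  # three health checks
avgs = [round_average(r) for r in rounds]
print([classify(a) for a in avgs], trend(avgs))
```

Keeping the raw responses anonymous and reporting only the round averages is one way to avoid the singling-out risk Gosai warns about.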

When working with leaders, I find they know how their behaviours affect teams, but they often haven’t made that link to psychological safety. So helping them make that connection could immediately impact psychological safety in teams.

In Learnings from Applying Psychological Safety Across Teams, Gosai explained how they applied ideas from psychological safety to make it safe for people to take interpersonal risks. Creating Environments High in Psychological Safety with a Combined Top-Down and Bottom-Up Approach describes how complementing leadership with team workshops in communication skills can enable people to speak up and feel safe to fail.

About the Author

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.


Is MySQL HeatWave Oracle’s “Killer App”? – Forbes

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Since Oracle launched its MySQL HeatWave service in December 2020, it has continuously driven differentiation in the database-as-a-service market. While competing against some of the biggest names in the cloud space, the company has shown what appears to be “too good to be true” price-for-performance against competitors such as Snowflake.

With the latest update to MySQL HeatWave, Oracle shows no signs of slowing down. And I think this service may be the “killer app” for Oracle Cloud Infrastructure (OCI)—or even Oracle as a whole. I’ll explain my reasoning in this article.

MySQL HeatWave changed the analytics game

Most IT folks who have spent time around databases understand the pervasiveness of MySQL in the enterprise. Since it was released under the GNU General Public License (GPL) in 2000, the database platform has exploded in popularity among companies of all sizes, from smaller organizations that want a SQL database without the considerable cost, to enterprise organizations deploying large departmental databases. My first exposure to the platform came in 2003, when I worked as an IT executive in a large state government organization. We used MySQL everywhere possible to lower costs without sacrificing performance and functionality.

Over the years, MySQL became the world’s most popular open-source database and the second-most popular database overall (after Oracle Database). Yes, while a lot of the buzz is for the likes of MongoDB and other “cool” NoSQL platforms, the top two database distributions—by far—are owned by Oracle. And they are both SQL-based.

The challenge I saw when running a state IT organization is the same challenge that IT orgs have been facing ever since: data silos and the fact that MySQL is not optimized for analytics. When a business has hundreds or even thousands of MySQL instances, integrating all of that data and gleaning insights from it via analytics is painful. All too often, business users must rely on IT to perform time-consuming, costly, and error-prone extract, transform and load (ETL) processes to bring all the data to one central location for analysis. Or brave business users might attempt this themselves, then reach out to IT a few weeks later when they’ve given up trying. By that time, they’re analyzing old data.

The MySQL development team at Oracle recognized this challenge impacting customers, and MySQL HeatWave was born. The idea was simple: deploy a cloud service whereby customers of all sizes could run real-time analytics on all of their data, both transactional and historical, and enable it with “point-and-click” simplicity, without needing the dreaded ETL.

When I look at MySQL HeatWave, I consider two things—the richness of features and the ability of an average user to take advantage of these capabilities. From both perspectives, I’m impressed. As touched on above, the performance numbers are almost too good to be true. More than that, using this service is simple: no refactoring or rearchitecting of applications, no new analytics or visualization tools to learn. Just point the tool in the right direction, click a couple of times, and you have a database environment that supports online transactional processing (OLTP) and analytics.

MySQL Autopilot drives real automated operations

Some product manager once said, “You never know how your product is going to perform until it’s in the hands of paying customers.” Maybe that product manager was me in a previous part of my career. In any case, this truism is obvious to anyone who has ever launched a product.

When Oracle launched its first update to the MySQL HeatWave service in mid-2021, it focused on automating the data management lifecycle using machine learning (ML). In fact, MySQL HeatWave Autopilot was the first service I saw that automated many DBA functions that would previously consume hours a week.

Database tuning is an art form requiring both technical depth and something like clairvoyance. Deploying and provisioning databases is hard enough but tuning them is a never-ending process—one that can consume database professionals. To alleviate this, MySQL Autopilot combined deep analytics and finely tuned ML models to drive a continually optimized and always resilient MySQL database environment.

This update of MySQL HeatWave is also where I started to pay attention to Oracle’s competitive performance comparisons. Once again, the numbers initially seemed too good to be true. Suffice it to say that HeatWave outperformed every major cloud provider in analytics.

Of special note was its performance relative to the very popular Snowflake. When running TPC-H (a decision support benchmark), MySQL HeatWave showed an incredible 35x price/performance advantage over Snowflake. In terms of raw performance, HeatWave had a 6.8x advantage, and in terms of price, a 5.2x advantage. Pretty compelling, right? Benchmarks can be manipulated—or so I would think. Except that Oracle publishes its test harness on GitHub so customers can see for themselves. That shows real confidence in their capabilities.
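As a sanity check on the headline number, the 35x price/performance figure is simply the product of the two component advantages reported in the benchmark (this is arithmetic on the published figures, not an independent verification):

```python
# Price/performance combines the raw-performance advantage with the
# price advantage: 6.8x faster at 5.2x lower cost.
perf_advantage = 6.8   # raw TPC-H performance vs. Snowflake
price_advantage = 5.2  # price vs. Snowflake
price_performance = perf_advantage * price_advantage
print(round(price_performance, 1))  # ~35x, matching the reported figure
```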

HeatWave AutoML and multi-cloud—a natural next step

After introducing MySQL Autopilot, Oracle’s next big act with HeatWave was the integration of ML into MySQL HeatWave for model training purposes—aptly named HeatWave AutoML. This gave HeatWave the ability to automate the building, training, tuning, and explaining of ML models in real-time based on the data residing in MySQL. Again, HeatWave is democratizing machine learning by enabling this functionality for all, so that companies that don’t have teams of data scientists or TensorFlow developers can gain the same kinds of insights and automation usually reserved for larger organizations.

Additionally, Oracle embraced the notion of multi-cloud by releasing MySQL HeatWave on AWS. That means the entire MySQL HeatWave environment—core capabilities, Autopilot and ML—is available for customers to use on AWS. Oracle did this to bring the value of MySQL HeatWave to customers already using AWS who might otherwise be priced out of the service because of data egress fees. (To use HeatWave in-memory analytics, you would have to move all your data from AWS into OCI.) So, rather than force this difficult decision, Oracle stood up the HeatWave service natively—including the control plane, data plane, and console—to run on AWS. Is performance as good as it would be on OCI? No. But it’s still really good. And there is no sense that Oracle is delivering an underpowered product to ultimately convince customers to move to OCI.

If your data resides on Microsoft Azure instead, life is equally easy for you. Because Oracle and Microsoft have deployed a low-latency layer-2 network connection, you can simply use HeatWave on OCI while your applications still reside in Azure. No latency hit, no cost-prohibitive ingress and egress fees.

As an analyst who was also an Oracle customer for some time, I feel like I’m witnessing a new company with a new attitude. In the past, Oracle was not known for its emphasis on making life easy for customers. In comparison to that, its trajectory with MySQL HeatWave is a breath of fresh air.

MySQL HeatWave’s latest release: more ML goodness

In the latest update to HeatWave, the team at Oracle has doubled down on ML by driving usability and automation. The democratization of ML only really happens when ML functions are practically available to all—meaning that they don’t require a team of developers and data scientists. And Oracle has delivered even further on that promise with its latest release.

The result of Oracle’s work is automated machine learning that is highly performant and fully automated. And here’s what’s interesting: because of the highly parallelized architecture of MySQL HeatWave AutoML, these models are running on commodity CPUs. This drives down costs considerably—savings that are passed on to customers. So that price performance advantage I discussed earlier? Now it’s even better.

In addition to focusing on usability, Oracle has delivered three new ML capabilities in this latest update that are worth quickly highlighting.

  1. Unsupervised anomaly detection. This feature identifies events deviating from the norm using a single algorithm. Banks looking to detect fraud, operational technology organizations looking for IoT sensor outliers, and cybersecurity teams focused on intrusion detection are all use cases that would benefit from this ability. HeatWave is the only solution to perform this detection at the local, global, and cluster levels with a single algorithm and a fully automated process. This makes it faster and more accurate than manually selecting, testing, and tuning multiple individual algorithms.
  2. Recommender system availability. This capability automates the use of recommender systems—which recommend movies to watch and products to buy—in HeatWave, driving significantly faster training time coupled with a lower error rate (as compared with a public-domain solution). By comparison, other services only recommend algorithms, putting the burden on users to select the most appropriate one, and then manually tune it.
  3. Multivariate time series forecasting. This feature automates the process for companies to accurately predict time series observations based on a number of variables. This would apply, for instance, if a power company is trying to determine seasonal electricity demands when considering other energy sources, weather, and so on. What once would require a team of data scientists is now just a few mouse clicks away.
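HeatWave's single-algorithm detector is proprietary, but the underlying idea of unsupervised anomaly detection — flagging observations that deviate strongly from the rest of the data without labeled training examples — can be illustrated with a simple z-score sketch. This is not Oracle's algorithm, and the threshold and data are illustrative only:

```python
# Illustrative unsupervised anomaly detection via z-scores.
# NOT HeatWave's algorithm -- just the general idea of flagging
# observations that deviate strongly from the norm, with no labels needed.
import statistics

def anomalies(values, threshold=2.0):
    """Return values whose z-score exceeds the (illustrative) threshold."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # all readings identical: nothing deviates
    return [v for v in values if abs(v - mean) / stdev > threshold]

readings = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 48.7]  # one IoT sensor outlier
print(anomalies(readings))
```

A production system would use more robust statistics and handle the local/global/cluster distinction the article mentions; the point here is only the shape of the problem.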

Additionally, Oracle has made enhancements to MySQL Autopilot that further improve ML automation in a workload-aware way. Whether an organization is using HeatWave for OLTP, OLAP, or data lakehouse purposes, Autopilot removes mundane and tedious chores from the hands of administrators.

Oracle also publishes an updated overview of the available Autopilot capabilities. The MySQL team at Oracle has added considerable functionality as MySQL Autopilot has evolved into a data management platform.

MySQL moves to a Lakehouse

Last October at CloudWorld 2022 Oracle announced MySQL HeatWave Lakehouse (currently in beta), continuing to extend HeatWave’s database capabilities. It enables customers to process and query hundreds of terabytes of data in object store in a variety of file formats, such as CSV and Parquet, as well as Aurora and Redshift export files.

Keeping with tradition, MySQL HeatWave Lakehouse delivers significantly better performance than competitive cloud database services for running queries (17X faster than Snowflake) and loading data (2.7X faster than Snowflake) on a 400TB TPC-H benchmark.

In addition, in a single query, customers can query transactional data in the MySQL database and combine it with data in the object store using standard MySQL syntax. New MySQL Autopilot capabilities that improve performance and make MySQL HeatWave Lakehouse easy to use also became available, increasing administrators’ productivity.

Is MySQL HeatWave really Oracle’s killer app?

Maybe I was being a little hyperbolic with my blog title. But MySQL HeatWave was revolutionary from the moment Oracle released it in late 2020. And it continues to separate itself from competing, but single-focus, database cloud solutions as Oracle adds functionality. MySQL HeatWave is evolving from an OLTP + in-memory analytics tool into something much bigger—if not a data management platform, then a fully featured OLTP, data analytics, machine learning, and lakehouse platform within one integrated database.

My only question is, what’s next? Considering how quickly Oracle has been innovating with HeatWave, I don’t think it will take long to find out.



Investor Takes Pessimistic Stance on MongoDB – Best Stocks

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

According to recent reports by TradingView News, a big-time investor has adopted a pessimistic stance on MongoDB. The article highlights that almost 68% of investors have opened trades with a bearish outlook, while only 31% have taken a bullish view. The report also details some of the specific trades investors have made, including both put and call transactions with a bearish outlook. Some put trades include a $350.00 strike price with an expiration date of January 19, 2024, valued at $404.7K, and another put trade with the same strike price and expiration date valued at $202.4K. A put sweep trade with a strike price of $145.00 expiring on January 20, 2023, valued at $129.0K, was also made. Lastly, a call trade with a $135.00 strike price expiring on December 16, 2022, valued at $93.0K was placed.

MDB Stock Performance on May 14, 2021: Positive Trend with Strong Financial Performance

The stock performance of MDB (MongoDB Inc.) on May 14, 2021, shows a positive trend. The previous close of MDB was $213.83, and it opened today at $216.02. The day’s range was between $215.46 and $219.49, with a volume of 26,725. The average volume of the last three months was 1,935,606, and the market cap was $15.2B.

MDB’s earnings growth for last year was -5.89%, but it has shown a positive trend this year with a growth rate of +26.99%. The expected earnings growth for the next five years is +8.00%. The revenue growth rate for last year was +46.95%, indicating a solid financial performance.

MDB’s P/E ratio is NM, indicating that it has no earnings or negative earnings per share. The price/sales ratio is 11.45, and the price/book ratio is 20.45, meaning the stock trades at a premium compared to its peers.

MDB’s stock performance is favorable compared to other technology services companies. The stock price of SPLK (Splunk Inc.) increased by +1.98%, PTC (PTC Inc.) increased by +0.65%, and ZS (Zscaler Inc.) increased by +0.62%.

MDB’s next reporting date is June 1, 2023, and the EPS forecast for this quarter is $0.20. The annual revenue for last year was $1.3B, but the company reported a loss of -$345.4M. The net profit margin was -26.90%, indicating that the company’s expenses exceeded its revenue.

MDB is a packaged software company in the technology services sector. The corporate headquarters is located in New York, New York.

In conclusion, MDB’s stock performance on May 14, 2021, shows a positive trend with increased stock price and volume. The company’s financial performance has been positive this year but reported a loss last year. Investors should monitor the company’s financial performance and future earnings reports to make informed investment decisions.

MongoDB Inc: A Promising Investment Opportunity in the Tech Sector


MongoDB Inc (MDB) is a popular database software provider making waves in the tech industry. The company’s stock has been performing well, with a steady increase in price over the past year. The 22 analysts offering 12-month price forecasts for MDB have a median target of 247.50, with a high estimate of 290.00 and a low estimate of 180.00. This indicates that the stock is expected to continue its upward trend in the coming months.

The consensus among 27 polled investment analysts is to buy stock in MongoDB Inc. This rating had held steady since March, when it was unchanged from a buy rating. This suggests that investors are optimistic about the company’s prospects and are confident in its ability to deliver strong returns.

Looking at the company’s recent financial performance, MongoDB Inc reported earnings per share of $0.20 and sales of $348.0M for the current quarter, with a reporting date of June 01, 2021. These impressive figures indicate that the company is on track to achieve its financial targets.

Overall, the outlook for MDB stock is positive, with analysts predicting a continued increase in price over the next 12 months. The company’s strong financial performance and positive investor sentiment suggest it is a solid investment opportunity for those looking to invest in the tech sector. However, as with any investment, conducting thorough research and analysis is essential before making decisions.

Article originally posted on mongodb google news. Visit mongodb google news



Diversified Trust Co Invests $243,000 in MongoDB, Inc. (NASDAQ:MDB) – Defense World

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Diversified Trust Co bought a new stake in MongoDB, Inc. (NASDAQ:MDB) during the fourth quarter, according to its most recent filing with the Securities & Exchange Commission. The firm bought 1,234 shares of the company’s stock, valued at approximately $243,000.

Several other hedge funds have also recently bought and sold shares of MDB. Ieq Capital LLC increased its stake in MongoDB by 23.2% during the 3rd quarter. Ieq Capital LLC now owns 1,962 shares of the company’s stock valued at $390,000 after buying an additional 370 shares during the period. Cubist Systematic Strategies LLC lifted its holdings in shares of MongoDB by 198.1% during the second quarter. Cubist Systematic Strategies LLC now owns 69,217 shares of the company’s stock valued at $17,962,000 after purchasing an additional 45,994 shares during the last quarter. Asset Management One Co. Ltd. lifted its stake in MongoDB by 3.1% during the third quarter. Asset Management One Co. Ltd. now owns 61,243 shares of the company’s stock valued at $12,106,000 after buying an additional 1,829 shares in the last quarter. Whittier Trust Co. of Nevada Inc. boosted its holdings in shares of MongoDB by 8.0% in the third quarter. Whittier Trust Co. of Nevada Inc. now owns 15,645 shares of the company’s stock worth $3,106,000 after buying an additional 1,156 shares during the period. Finally, Oppenheimer & Co. Inc. increased its position in shares of MongoDB by 14.8% in the third quarter. Oppenheimer & Co. Inc. now owns 4,672 shares of the company’s stock worth $927,000 after acquiring an additional 602 shares in the last quarter. 84.86% of the stock is currently owned by institutional investors.

Wall Street Analysts Forecast Growth

A number of equities analysts recently commented on MDB shares. Tigress Financial decreased their price objective on shares of MongoDB from $575.00 to $365.00 and set a “buy” rating for the company in a research note on Thursday, December 15th. Credit Suisse Group decreased their price objective on MongoDB from $305.00 to $250.00 and set an “outperform” rating for the company in a research report on Friday, March 10th. UBS Group upped their target price on MongoDB from $200.00 to $215.00 and gave the company a “buy” rating in a report on Wednesday, December 7th. Sanford C. Bernstein started coverage on shares of MongoDB in a report on Friday, February 17th. They set an “outperform” rating and a $282.00 price target for the company. Finally, The Goldman Sachs Group decreased their target price on shares of MongoDB from $325.00 to $280.00 and set a “buy” rating for the company in a research note on Thursday, March 9th. Four research analysts have rated the stock with a hold rating and twenty have given a buy rating to the stock. According to MarketBeat, the stock has a consensus rating of “Moderate Buy” and an average price target of $253.87.

MongoDB Stock Performance

Shares of NASDAQ:MDB opened at $213.93 on Wednesday. The firm’s 50 day simple moving average is $213.90 and its 200 day simple moving average is $195.51. The company has a debt-to-equity ratio of 1.54, a current ratio of 3.80 and a quick ratio of 3.80. The company has a market capitalization of $14.98 billion, a price-to-earnings ratio of -42.45 and a beta of 1.00. MongoDB, Inc. has a fifty-two week low of $135.15 and a fifty-two week high of $471.96.

MongoDB (NASDAQ:MDB) last announced its earnings results on Wednesday, March 8th. The company reported ($0.98) earnings per share (EPS) for the quarter, beating analysts’ consensus estimates of ($1.18) by $0.20. MongoDB had a negative return on equity of 48.38% and a negative net margin of 26.90%. The business had revenue of $361.31 million for the quarter, compared to analyst estimates of $335.84 million. As a group, sell-side analysts forecast that MongoDB, Inc. will post ($4.04) earnings per share for the current year.

Insider Activity at MongoDB

In related news, CEO Dev Ittycheria sold 40,000 shares of the stock in a transaction that occurred on Wednesday, March 1st. The stock was sold at an average price of $207.86, for a total transaction of $8,314,400.00. Following the completion of the transaction, the chief executive officer now directly owns 190,264 shares of the company’s stock, valued at $39,548,275.04. The sale was disclosed in a legal filing with the Securities & Exchange Commission, which is accessible through the SEC website. In other news, insider Thomas Bull sold 399 shares of MongoDB stock in a transaction that occurred on Tuesday, January 3rd. The stock was sold at an average price of $199.31, for a total transaction of $79,524.69. Following the completion of the transaction, the insider now directly owns 16,203 shares in the company, valued at approximately $3,229,419.93. The sale was disclosed in a filing with the Securities & Exchange Commission. Over the last 90 days, insiders sold 110,994 shares of company stock valued at $22,590,843. Corporate insiders own 5.70% of the company’s stock.

MongoDB Company Profile


MongoDB, Inc. engages in the development and provision of a general-purpose database platform. The firm’s products include MongoDB Enterprise Advanced, MongoDB Atlas and Community Server. It also offers professional services including consulting and training. The company was founded by Eliot Horowitz, Dwight A. Merriman, and Kevin P. Ryan.

Further Reading

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)




Article originally posted on mongodb google news. Visit mongodb google news



MongoDB: New Vulnerability! Vulnerability allows manipulation of files

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


There is an IT security warning for MongoDB. Here you can find out which vulnerabilities are involved, which products are affected and what you can do.

The German Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik, BSI) published an update on March 28th, 2023 to a vulnerability for MongoDB that became known on June 14th, 2021. The operating systems UNIX, Linux and Windows, as well as the products Red Hat Enterprise Linux and Open Source MongoDB, are affected by the vulnerability.

The latest manufacturer recommendations regarding updates, workarounds and security patches for this vulnerability can be found here: Red Hat Security Advisory RHSA-2023:1409 (Status: 03/27/2023). Other useful sources are listed later in this article.

Security advisory for MongoDB – risk: medium

Risk level: 3 (medium)
CVSS Base Score: 6.8
CVSS Temporal Score: 5.9
Remote attack: Yes

The Common Vulnerability Scoring System (CVSS) is used to assess the severity of security vulnerabilities in computer systems. The CVSS standard makes it possible to compare potential or actual security vulnerabilities based on various criteria in order to better prioritize countermeasures. The attributes “none”, “low”, “medium”, “high” and “critical” are used for the severity of a vulnerability. The base score assesses the prerequisites for an attack (including authentication, complexity, privileges, user interaction) and its consequences. The Temporal Score also takes into account changes over time with regard to the risk situation. According to the CVSS, the risk of the vulnerability discussed here is rated as “medium” with a base score of 6.8.
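The mapping from numeric score to severity label follows fixed bands defined in the CVSS v3 specification, which a small sketch makes concrete:

```python
# CVSS v3 qualitative severity bands, as defined in the FIRST specification.
def cvss_severity(score):
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "none"
    if score <= 3.9:
        return "low"
    if score <= 6.9:
        return "medium"
    if score <= 8.9:
        return "high"
    return "critical"

print(cvss_severity(6.8))  # the base score reported for this advisory
```

A base score of 6.8 falls just inside the medium band; a tenth of a point higher and the advisory would have been rated high.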

MongoDB Bug: Vulnerability allows manipulation of files

MongoDB is an open source document database.

A remote, authenticated attacker can exploit a vulnerability in MongoDB to manipulate files.

The vulnerability is tracked under the unique CVE identifier (Common Vulnerabilities and Exposures) CVE-2021-20329.

Systems affected by the MongoDB vulnerability at a glance

Operating systems
UNIX, Linux, Windows

Products
Red Hat Enterprise Linux (cpe:/o:redhat:enterprise_linux)
Open Source MongoDB GO Driver 1.5.1 (cpe:/a:mongodb:mongodb)

General recommendations for dealing with IT vulnerabilities

  1. Users of the affected systems should keep them up to date. When security vulnerabilities become known, manufacturers are required to remedy them as quickly as possible by developing a patch or a workaround. If security patches are available, install them promptly.

  2. For information, consult the sources listed in the next section. These often contain further information on the latest version of the software in question and the availability of security patches or tips on workarounds.

  3. If you have any further questions or are uncertain, please contact your responsible administrator. IT security officers should regularly check the sources mentioned to see whether a new security update is available.
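For the Go driver named in this advisory, keeping up to date amounts to bumping the module dependency. A minimal sketch, assuming a Go module that depends on the driver (verify the exact fixed version against the advisory sources):

```shell
# Show the driver version the module currently uses
go list -m go.mongodb.org/mongo-driver

# Upgrade to the fixed release and tidy the module files
go get go.mongodb.org/mongo-driver@v1.5.1
go mod tidy
```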

Manufacturer information on updates, patches and workarounds

Here you will find further links with information about bug reports, security fixes and workarounds.

Red Hat Security Advisory RHSA-2023:1409 of 2023-03-27 (published 03/28/2023)
For more information, see: https://access.redhat.com/errata/RHSA-2023:1409

MongoDB GitHub release of 2021-06-13 (published 06/14/2021)
For more information, see: https://github.com/mongodb/mongo-go-driver/releases/tag/v1.5.1

Version history of this security alert

This is the 3rd version of this IT security notice for MongoDB. If further updates are announced, this text will be updated. You can read about changes or additions in this version history.

06/14/2021 – Initial version
10/05/2021 – Reference(s) added: GHSA-F6MQ-5M25-4R72
03/28/2023 – Added new updates from Red Hat

+++ Editorial note: This text was created with AI support based on current BSI data. We accept feedback and comments at [email protected]news.de. +++



Article originally posted on mongodb google news. Visit mongodb google news

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.


Q&A with Ken Muse: Designing Azure Modern Data Warehouse Solutions

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Q&A

Q&A with Ken Muse: Designing Azure Modern Data Warehouse Solutions

The Modern Data Warehouse (MDW) pattern makes it easier than ever to deal with the increasing volume of enterprise data by enabling massive, global-scale write operations while making the information instantly available for reporting and insights.

As such, it’s a natural fit for cloud computing platforms and the field of DataOps, where practitioners apply DevOps principles to data pipelines built according to the architectural pattern on platforms like Microsoft Azure.

In fact, Microsoft has published DataOps for the modern data warehouse guidance and a GitHub repo featuring DataOps for the Modern Data Warehouse as part of its Azure Samples offerings.


Architecture of an Enterprise Data Warehouse (source: Microsoft).

And, speaking of GitHub, one of the leading experts in the MDW field is Ken Muse, a senior DevOps architect at the Microsoft-owned company who has published an entire series of articles on the pattern titled “Intro to the Modern Data Warehouse.” There, he goes into detail on storage, ingestion and so on.

It just so happens that Muse will be sharing his knowledge at a big, five-day VSLive! Developer Conference in Nashville in May. The title of his presentation is Designing Azure Modern Data Warehouse Solutions, a 75-minute session scheduled for May 16.

Attendees will learn:

  • How to define and implement the MDW architecture pattern
  • How to determine appropriate Azure SQL and NoSQL solutions for a workload
  • How to ingest and report against high-volume data

We caught up with Muse, a four-time Microsoft Azure MVP and a Microsoft Certified Trainer, to learn more about the MDW pattern in a short Q&A.

VisualStudioMagazine: What defines a Modern Data Warehouse in the Azure cloud?

Muse:
A Modern Data Warehouse combines complementary data platform services to provide a secure, scalable and highly available solution for ingesting, processing, analyzing and reporting on large volumes of data. This architectural pattern supports high-volume data ingestion as well as flexible data processing and reporting. In the Azure cloud, it often takes advantage of services such as Azure Data Lake Storage, Azure Synapse Analytics, Azure Data Factory, Azure Databricks, Azure Cosmos DB, Azure Analysis Services, and Azure SQL.

How does the cloud improve an MDW approach as opposed to an on-premises implementation?
The cloud provides an elastic infrastructure that can dynamically scale to meet ingestion and analysis needs. Teams only pay for what they need, and they have access to virtually limitless storage and compute capacity that can be provisioned in minutes. This makes it faster and easier to turn data into actionable insights.

With an on-premises environment, the infrastructure must be sized to meet the peak needs of the application. This often results in over-provisioning and wasted resources. Hardware failures and long supply chain lead times can restrict teams from scaling quickly or exploring new approaches.


“Maintaining and optimizing each service can be time-consuming and complex. The cloud eliminates these issues by providing optimized environments on demand.”

Ken Muse, Senior DevOps Architect, GitHub

In addition, maintaining and optimizing each service can be time-consuming and complex. The cloud eliminates these issues by providing optimized environments on demand.

As developers often struggle to figure out the right tools — like SQL vs. NoSQL — for implementation, can you briefly describe what goes into making that choice, like the benefits and/or drawbacks of each?
The choice between SQL and NoSQL is often driven by the type of data you need to store and the types of queries you need to run. SQL databases are optimized for highly structured data, complex queries, strong consistency, and ACID transactions. They are natively supported in nearly every development language, making it easy to get started quickly. They can be an optimal choice for applications that commit multiple related rows in a single transaction, perform frequent point-updates, or need to dynamically query structured datasets. The strong consistency model is often easier for developers to understand. At the same time, horizontal scaling can be challenging and expensive, and performance can degrade as the database grows.
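The multi-row transactional pattern mentioned above can be illustrated with a small, self-contained sketch (using SQLite purely as a stand-in for any SQL database):

```python
import sqlite3

# Commit two related rows atomically -- the ACID pattern the answer above
# says relational stores are optimized for.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("CREATE TABLE order_lines (order_id INTEGER, sku TEXT, qty INTEGER)")

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("INSERT INTO orders (id, total) VALUES (1, 19.99)")
        conn.execute("INSERT INTO order_lines VALUES (1, 'SKU-42', 2)")
except sqlite3.Error:
    pass  # neither row is visible if either insert fails

rows = conn.execute("SELECT COUNT(*) FROM order_lines").fetchone()[0]
print(rows)  # 1
```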

NoSQL (“not only SQL”) solutions are optimized for unstructured and semi-structured data, rapidly changing schemas, eventual consistency, high read/write volumes, and scalability. They are often a good choice for applications that need to store large amounts of data, perform frequent reads and writes, or need to dynamically query semi-structured data. They can ingest data at extremely high rates, easily scale horizontally, and work well with large datasets. They are often the best choice for graph models and understanding complex, hidden relationships.

At the same time, eventual consistency can be challenging for developers to understand. NoSQL systems frequently lack support for ACID transactions, which can make it more difficult to implement business logic. Because they are not designed as relational stores, they are often not an ideal choice for self-service reporting solutions such as Power BI.

This is why the MDW pattern is important. It relies on the strengths of each tool, selecting the right one for each job, and enables using both NoSQL and SQL together to support complex storage, data processing, and reporting needs.

What are a couple of common mistakes developers make in implementing the MDW pattern?
There are three common mistakes developers make in implementing the MDW pattern:

  • Using the wrong storage type for ingested data: Teams frequently fail to understand the differences between Azure storage solutions such as Azure Blob Storage, Azure Files, and Azure Data Lake Storage. Picking the wrong one for the job can create unexpected performance problems.
  • Forgetting that NoSQL solutions rely on data duplication: NoSQL design patterns are not the same as relational design patterns. Using NoSQL effectively often relies on having multiple copies of the data for optimal querying and security. Minimizing the number of copies can restrict performance, limit security, and increase costs.
  • Using Azure Synapse Analytics for dynamic reporting: Azure Synapse Analytics is a powerful tool for data processing and analysis, but it is not designed for high-concurrency user queries. Direct querying from self-service reporting solutions such as Power BI is generally not recommended. It can provide a powerful solution for building the data models that power self-service reporting when used correctly or combined with other services.

With the massive amounts of data being housed in the cloud, what techniques are useful to ingest and report against high-volume data?
For ingesting high-volume data as it arrives, queue-based and streaming approaches are often the most effective way to capture and land data. For example, Azure Event Hubs can be used to receive data, store it in Azure Data Lake Storage, and optionally deliver it as a stream to other services, including Azure Stream Analytics. For larger datasets, it can be advisable to store the data directly into Azure Blob Storage or Azure Data Lake Storage. The data can then be processed using Azure Data Factory, Azure Synapse Analytics, or Azure Databricks. The key is to land the data as quickly as possible to minimize the risk of data loss and enable rapid downstream analysis.
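The land-first, process-later flow described here can be mimicked locally with a toy producer/consumer sketch (the in-process queue and list below are only stand-ins for a service like Azure Event Hubs and a data lake container):

```python
import json
import queue
import threading

events = queue.Queue()
landed = []  # stand-in for raw storage, e.g. a data lake container

def consumer():
    # Land each event as raw JSON immediately; all processing is deferred
    while True:
        event = events.get()
        if event is None:  # sentinel: producer is done
            break
        landed.append(json.dumps(event))
        events.task_done()

t = threading.Thread(target=consumer)
t.start()
for i in range(3):
    events.put({"device": i, "reading": i * 1.5})
events.put(None)
t.join()
print(len(landed))  # 3
```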

For reporting, it’s important to optimize the data models for the queries that will be run. The optimal structures for reporting are rarely the same as those used for ingestion or CRUD operations. For example, it’s often more efficient to denormalize data for reporting than it is to store it in a normalized form. In addition, column stores generally perform substantially better than row-based storage for reporting. As a result, separating the data capture and data reporting aspects can help optimize the performance of each.
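The point about column stores can be illustrated with a toy comparison of the two layouts (illustrative only; real column stores add compression and vectorized execution):

```python
# Row-oriented layout: each record is stored whole
rows = [{"region": "eu", "amount": 10.0},
        {"region": "us", "amount": 20.0},
        {"region": "eu", "amount": 5.0}]

# Column-oriented layout: one contiguous list per field
columns = {"region": ["eu", "us", "eu"],
           "amount": [10.0, 20.0, 5.0]}

# An aggregate over one field must scan every whole record in row form,
# but touches only a single column in columnar form.
total_row = sum(r["amount"] for r in rows)
total_col = sum(columns["amount"])
print(total_row, total_col)  # 35.0 35.0
```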

How can developers support high volumes of read/write operations without compromising the performance of an application?
An important consideration for developers is the appropriate separation of the read and write operations. When read and write operations overlap, it creates contention and bottlenecks. By separating the data capture and data reporting aspects, you can optimize the performance of each. You can also select services which are optimized for that scenario, minimizing the development effort required.

For applications that need to support CRUD (create, read, update, delete) operations, this can require changing the approach. For example, it may be necessary to use a NoSQL solution that supports eventual consistency. It may also be necessary to persist the data in multiple locations or use change feeds to propagate updates to other services.
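One sketch of this read/write separation, with a change feed propagating writes into a denormalized read model (all names and structures here are invented for illustration, not any Azure API):

```python
write_store = []   # write-optimized store: append-only records
change_feed = []   # pending changes to propagate
read_model = {}    # read-optimized projection: totals per customer

def handle_write(customer: str, amount: int) -> None:
    """Writes touch only the write side; a change is queued for the feed."""
    record = {"customer": customer, "amount": amount}
    write_store.append(record)
    change_feed.append(record)

def propagate() -> None:
    """Drain the change feed into the read model (eventual consistency)."""
    while change_feed:
        change = change_feed.pop(0)
        read_model[change["customer"]] = (
            read_model.get(change["customer"], 0) + change["amount"])

handle_write("acme", 100)
handle_write("acme", 50)
propagate()  # reads never contend with writes; they see the projection
print(read_model["acme"])  # 150
```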

In other cases, tools such as Azure Data Factory may be more appropriate. It can periodically copy the data to a different data store during off-peak hours. This can help minimize the impact of the changes to the application. This can be important when the needs of the application change suddenly or when the application does not have to provide up-to-the-moment reporting data.

What are some key Azure services that help with the MDW pattern?
The key services used in the MDW pattern are typically Azure Data Lake Storage Gen2, Azure Synapse Analytics, Azure Databricks, Azure SQL, and Azure Event Hubs.

That said, there are many other services that can be used to support specific application and business requirements within this model. For example, Azure Machine Learning can be used to quickly build insights and models from the data. Azure Cosmos DB can be used to support point queries and updates with low latency. Services like Azure Purview can be used to understand your data estate and apply governance. The MDW pattern is about understanding the tradeoffs between the different services to appropriately select the ones that support the business requirements.

As AI is all the rage these days, do any of those Azure services use hot new technology like large language models or generative AI?
Absolutely! A key part of the Modern Data Warehouse pattern is supporting machine learning, and that includes generative AI and new techniques that development teams might be working to create themselves.

Azure’s newest offering, Azure OpenAI Service, is a fully managed service that provides access to the latest state-of-the-art language models from OpenAI. It is designed to help developers and data scientists quickly and easily build intelligent applications that can understand, generate, and respond to human language.

In addition, Azure recently announced the preview of the ND H100 v5 virtual machine series. These are optimized to support the training of large language models and generative AI. These virtual machines boost the performance of large-scale deployments by providing eight NVIDIA H100 Tensor Core GPUs, 4th-generation Intel Xeon processors, and high-speed interconnects with 3.6 TBps of bidirectional bandwidth among the eight local GPUs. You can learn more here.

About the Author


David Ramel is an editor and writer for Converge360.



ChatGPT Is Fun, but the Future Is Fully Autonomous AI for Code at QCon London

MMS Founder
MMS Roland Meertens

Article originally posted on InfoQ. Visit InfoQ

At the recent QCon London conference, Mathew Lodge, CEO of DiffBlue, gave a presentation on the advancements in artificial intelligence (AI) for writing code. Lodge highlighted the differences between large language model and reinforcement learning approaches, emphasizing what each approach can and can't do. The session gave an overview of the current state of AI-powered code generation and its future trajectory.

In his presentation, Lodge delved into the differences between AI-powered code generation tools and unit test writing tools. Code generation tools like GitHub Copilot, TabNine, and ChatGPT primarily focus on completing code snippets or suggesting code based on the context provided. These tools can greatly speed up the development process by reducing the time and effort needed for repetitive tasks. On the other hand, unit test writing tools such as DiffBlue aim to improve the quality and reliability of software by automatically generating test cases for a given piece of code. Both types of tools leverage AI to enhance productivity and code quality but target different aspects of the software development lifecycle.

Lodge explained how code completion tools, particularly those based on transformer models, predict the next word or token in a sequence by analyzing the given text. These transformer models have evolved significantly over time, with GPT-2, one of the first open-source models, being released in February 2019. Since then, the number of parameters in these models has scaled dramatically, from 1.5 billion in GPT-2 to 175 billion in GPT-3.5, released in November 2022.

OpenAI Codex, a model with approximately 5 billion parameters used in GitHub Copilot, was specifically trained on open-source code, allowing it to excel in tasks such as generating boilerplate code from simple comments and calling APIs based on examples it has seen in the past. The one-shot prediction accuracy of these models has reached levels comparable to explicitly trained language models. Unfortunately, information regarding the development of GPT-4 remains undisclosed: neither the training data nor the number of parameters has been published, which makes it a black box.

Lodge also discussed the shortcomings of AI-powered code generation tools, highlighting that these models can be unpredictable and heavily reliant on prompts. As they are essentially statistical models of textual patterns, they may generate code that appears reasonable but is fundamentally flawed. Models can also lose context, or generate incorrect code that deviates from the existing code base, calling functions or APIs that do not exist. Lodge showed an example of code for a perceptron model that had two difficult-to-spot bugs which essentially made the code unusable.

GPT-3.5, for instance, incorporates human reinforcement learning in the loop, where answers are ranked by humans to yield improved results. However, the challenge remains in identifying the subtle mistakes produced by these models, which can lead to unintended consequences, such as the ChatGPT incident involving the German geocoding company OpenCage.

Additionally, Large Language Models (LLMs) do not possess reasoning capabilities and can only predict the next text based on their training data. Consequently, the models' limitations persist regardless of their size, as they will never generate text that has not been encoded during their training. Lodge highlighted that these problems do not go away, no matter how much training data or how many parameters are used during the training of these models.

Lodge then shifted the focus to reinforcement learning and its application in tools like DiffBlue. Reinforcement learning differs from the traditional approach of LLMs by focusing on learning by doing, rather than relying on pre-existing knowledge. In the case of DiffBlue Cover, a feedback loop is employed where the system predicts a test, runs the test, and then evaluates its effectiveness based on coverage, other metrics, and the existing Java code. This process allows the system to iteratively improve and generate tests with higher coverage and better readability, ultimately resulting in a more effective and efficient testing process for developers. Lodge also mentioned that their representation of test coverage allows them to run only the relevant tests when code changes, reducing testing costs by about 50%.
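The predict-run-evaluate loop can be sketched with a toy example in which candidate "tests" are inputs to a function under test and the reward is branch coverage. This illustrates the feedback-loop idea only; it is not DiffBlue Cover's actual algorithm:

```python
import random

def under_test(x: int) -> str:
    """Toy function under test, with three branches to cover."""
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"

def coverage(inputs) -> int:
    """Reward: number of distinct branches the candidate inputs exercise."""
    return len({under_test(x) for x in inputs})

random.seed(0)
best, best_score = [], 0
for _ in range(200):
    candidate = [random.randint(-5, 5) for _ in range(3)]
    score = coverage(candidate)  # "run the test" and evaluate it
    if score > best_score:       # keep candidates that improve coverage
        best, best_score = candidate, score

print(best_score)  # approaches 3 as all branches get covered
```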

To demonstrate the capabilities of DiffBlue Cover, Lodge conducted a live demo featuring a simple Java application designed to find owners. The application had four cases for which tests needed to be created. Running entirely on a local laptop, DiffBlue Cover generated tests within 1.5 minutes. The resulting tests appeared in IntelliJ as a new file, which included mocked tests for scenarios such as single owner return, double owner return, no owner, and an empty array list.

In conclusion, the advancements in AI-powered code generation and reinforcement learning-based testing, as demonstrated by tools like DiffBlue Cover, have the potential to greatly impact the software development and testing landscape. By understanding the strengths and limitations of these approaches, developers and architects can make informed decisions on how to best utilize these technologies to enhance code quality, productivity, and efficiency while reducing the risk of subtle errors and unintended consequences.
 

About the Author



PyTorch 2.0 Compiler Improves Model Training Speed

MMS Founder
MMS Anthony Alford

Article originally posted on InfoQ. Visit InfoQ

The PyTorch Foundation recently released PyTorch version 2.0, a 100% backward compatible update. The main API contribution of the release is a compile function for deep learning models, which speeds up training. Internal benchmarks on 163 open-source AI projects showed that the models ran on average 43% faster during training.

Plans for the 2.0 release were announced at the PyTorch Conference in December 2022. Besides the new compile function, the release also includes performance improvements for Transformer-based models, such as large language models and diffusion models, via a new implementation of scaled dot product attention (SDPA). Training on Apple silicon is accelerated via improved Metal Performance Shaders (MPS), now with 300 operations implemented in MPS. Besides the core release, the domain libraries, including TorchAudio, TorchVision, and TorchText, were updated with new beta features. Overall, the 2.0 release includes over 4,500 commits from 428 developers since the 1.13.1 release. According to the PyTorch Foundation blog,

We are excited to announce the release of PyTorch® 2.0 which we highlighted during the PyTorch Conference on 12/2/22! PyTorch 2.0 offers the same eager-mode development and user experience, while fundamentally changing and supercharging how PyTorch operates at compiler level under the hood with faster performance and support for Dynamic Shapes and Distributed.

In his keynote speech at the PyTorch Conference 2022, PyTorch co-creator Soumith Chintala pointed out that thanks to increases in GPU compute capacity, many existing PyTorch workloads are constrained by memory bandwidth or by PyTorch framework overhead. Previously the PyTorch team had addressed performance problems by writing some of their core components in C++; Chintala described PyTorch as “basically a C++ codebase,” and said that he “hates” contributing to the C++ components.

The new compile feature is based on four underlying components written in Python:

  • TorchDynamo – performs graph acquisition by rewriting Python code representing deep learning models into blocks of computational graphs
  • AOTAutograd – performs “ahead of time” automatic differentiation for the backprop step
  • PrimTorch – canonicalizes the over 2k PyTorch operators down to a fixed set of around 250 primitive operators
  • TorchInductor – generates fast hardware-specific backend code for accelerators

To demonstrate the performance improvements and ease of use of the compile function, the PyTorch team identified 163 open-source deep learning projects to benchmark. These included implementations of a wide variety of tasks including computer vision, natural language processing, and reinforcement learning. The team made no changes to the code besides the one-line call to the compile function. This single change worked in 93% of the projects, and the compiled models ran 43% faster when trained on NVIDIA A100 GPUs.
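That one-line change looks roughly like the following in a typical script (the model here is a stand-in; backend="eager" is used only so the sketch runs without a native-code toolchain, whereas the default backend is TorchInductor):

```python
import torch

# A stand-in model; in the benchmarks this would be an existing project's model
model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU())

# The one-line change: wrap the model with torch.compile (new in PyTorch 2.0)
compiled = torch.compile(model, backend="eager")

x = torch.randn(2, 8)
out = compiled(x)  # use the compiled model exactly as before
print(out.shape)   # torch.Size([2, 4])
```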

In a Hacker News discussion about the release, one user noted:

A big lesson I learned from PyTorch vs other frameworks is that productivity trumps incremental performance improvement. Both Caffe and MXNet marketed themselves for being fast, yet apparently being faster here and here by some percentage simply didn’t matter that much. On the other hand, once we make a system work and make it popular, the community will close the performance gap sooner than competitors expect. Another lesson is probably old but worth repeating: investment and professional polishing [matter] to open source projects.

The PyTorch code and version 2.0 release notes are available on GitHub.

About the Author



The AI Revolution Is Just Getting Started: Leslie Miley Bids Us to Act Now against Its Bias and CO2

MMS Founder
MMS Olimpiu Pop

Article originally posted on InfoQ. Visit InfoQ

In his opening keynote at the QCon London conference, Leslie Miley, technical advisor to the CTO at Microsoft, spoke about AI bias and sustainability: how the march towards transformative technologies, like large-scale AI and even crypto, carries an inherent cost in the increased CO2 emissions that come with deployment at scale.

His presentation started by framing why he chose to give this talk: growing up and now living in Silicon Valley has allowed him to see the impact, positive or negative, that transformative technologies have on communities.

You cannot name a technology that was transformative without being damaging

Referring to articles in the media, he stated that generative AI has a dirty secret: it requires more energy than other cloud services. Even though big technology companies like Google, Meta, and Microsoft are making efforts to ensure that most of their new data centres are as green as possible, the amount of energy consumed is too high. To emphasize the impact that new technological trends' thirst for energy has, he underlined that coal power plants that had not operated in years are now running again.

To ensure that the impact is properly understood, he connected the growing CO2 emissions to global warming and the extreme weather conditions recorded all over the globe, stating that California's recent levels of rainfall and snow have been matched only once in its recorded history.

One of the fascinating things is human beings have this great ability to solve their problems with more complexity. We know it will emit more CO2, but we will make it more efficient. 

He continued to underline that until humanity manages to find a solution, people are affected now; flooding, for instance, has hit fragile communities.

How do you fix somebody who doesn’t have a home anymore?

Next, he brought up the problem in the engineering space, stating that generative AI will need a different infrastructure. According to him, we need to rethink the way we build infrastructure to support the new ChatGPT era, and to design data centres that allow us to benefit from machine learning (ML) in a manner that doesn't harm the environment. Hyperscale data centres might be a solution as they:

  • Move Data Faster
  • Own Energy Sources
  • Are Eco Friendlier

He compared the building of the interstate highway network in the US, and its impact, with the building of new data centres for generative AI. The technology will have multiple benefits, but the impact it will have on local communities should not be ignored. He referenced the work of Dr Timnit Gebru from the Distributed AI Research Institute and that of Dr Joy Buolamwini from the Algorithmic Justice League when it comes to AI bias and how to ensure fairness.

We know that AI is biased. The data we feed it with is biased. What do we say? We’ll fix it later!

He repeatedly encouraged taking action now, especially as we can make decisions that would help everybody, "Not because it is expedient, but because it is the right thing to do". Similar calls to action could be heard in other formal presentations and informal open conversations on security and gender equality. Rebecca Parsons, ThoughtWorks CTO, used the following quote from Weapons of Math Destruction:

Cathy O’Neil: We can make our technology more responsible if we believe we can and we insist that we do

The last part focused on mitigation strategies each of us can use. Using smaller models with enhanced societal context might provide better output than big, resource-consuming models. Knowing Your Data (KYD) and Knowing Your Data Centre (KYDC) will allow you to make better decisions. All big cloud vendors provide dashboards for measuring CO2 footprint.

His closing statement reads:

When ChatGPT occurred I knew it is something crazy big. Something seminal like the advent of the World Wide Web. Technology is meeting people where they were at. We have an obligation to meet it with compassion, and humility, and try to understand the social and cultural impact before we do it. We have the only chance for it. Otherwise, the world will look different than what we want it to be.

About the Author



Why MongoDB (MDB) Might be Well Poised for a Surge – Yahoo Finance

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB (MDB) appears an attractive pick given a noticeable improvement in the company’s earnings outlook. The stock has been a strong performer lately, and the momentum might continue with analysts still raising their earnings estimates for the company.

Analysts’ growing optimism on the earnings prospects of this database platform is driving estimates higher, which should get reflected in its stock price. After all, empirical research shows a strong correlation between trends in earnings estimate revisions and near-term stock price movements. Our stock rating tool — the Zacks Rank — is principally built on this insight.

The five-grade Zacks Rank system, which ranges from a Zacks Rank #1 (Strong Buy) to a Zacks Rank #5 (Strong Sell), has an impressive externally-audited track record of outperformance, with Zacks #1 Ranked stocks generating an average annual return of +25% since 2008.

For MongoDB, strong agreement among the covering analysts in revising earnings estimates upward has resulted in meaningful improvement in consensus estimates for the next quarter and full year.

The chart below shows the evolution of forward 12-month Zacks Consensus EPS estimate:

12 Month EPS

Current-Quarter Estimate Revisions

The earnings estimate of $0.19 per share for the current quarter represents a change of -5% from the number reported a year ago.

Over the last 30 days, five estimates have moved higher for MongoDB while one has gone lower. As a result, the Zacks Consensus Estimate has increased 8.36%.

Current-Year Estimate Revisions

The company is expected to earn $1.03 per share for the full year, which represents a change of +27.16% from the prior-year number.

The revisions trend for the current year also appears quite promising for MongoDB, with eight estimates moving higher over the past month compared to no negative revisions. The consensus estimate has also received a boost over this time frame, increasing 12.93%.

Favorable Zacks Rank

The promising estimate revisions have helped MongoDB earn a Zacks Rank #2 (Buy). The Zacks Rank is a tried-and-tested rating tool that helps investors effectively harness the power of earnings estimate revisions and make the right investment decision. You can see the complete list of today’s Zacks #1 Rank (Strong Buy) stocks here.

Our research shows that stocks with Zacks Rank #1 (Strong Buy) and 2 (Buy) significantly outperform the S&P 500.

Bottom Line

Investors have been betting on MongoDB because of its solid estimate revisions, as evident from the stock’s 5.3% gain over the past four weeks. As its earnings growth prospects might push the stock higher, you may consider adding it to your portfolio right away.

Want the latest recommendations from Zacks Investment Research? Today, you can download 7 Best Stocks for the Next 30 Days. Click to get this free report

MongoDB, Inc. (MDB) : Free Stock Analysis Report

To read this article on Zacks.com click here.

Zacks Investment Research

Article originally posted on mongodb google news. Visit mongodb google news
