Mobile Monitoring Solutions


Article: Effective Test Automation Approaches for Modern CI/CD Pipelines

MMS Founder
MMS Craig Risi

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • Shifting left is popular in domains such as security, but it is also essential for achieving better test automation for CI/CD pipelines
  • By shifting left, you can design for testability upfront and get testing experts involved earlier in your unit tests, leading to better results
  • Not all your tests should be automated for your CI/CD pipelines; instead focus on the tests that return the best value while minimizing your CI/CD runtimes
  • Other tests can then be run on a scheduled basis to avoid cluttering and slowing the main pipeline
  • Become familiar with the principles of good test design for writing more efficient and effective tests

The rise of CI/CD has had a massive impact on the software testing world. Developers require pipelines to provide quick feedback on whether their software update has been successful or not, which has forced many testing teams to revisit their existing test automation approaches and find ways to speed up their delivery without compromising on quality. These two factors often contradict each other in the testing world, as time is often the biggest enemy in a tester’s quest to be as thorough as possible in achieving their desired testing coverage.

So, how do teams deal with this significant change to ensure they can deliver high-quality automated tests while meeting the expectation that the CI pipeline returns feedback quickly? There are many different ways of looking at this, but what is important to understand is that the solutions are less technical and more cultural – the approach to testing needs to shift, rather than the testing frameworks needing big technical enhancements.

Shifting Left

Perhaps the most obvious thing to do is to shift left. The idea of “shifting left” (where testing is moved earlier in the development cycle – primarily at a design and unit testing level) is already a common one in the industry that is pushed by many organizations and is becoming increasingly commonplace. Having a strong focus on unit tests is a good way of testing code quickly and providing fast feedback. After all, unit tests execute in a fraction of the time (as they can run as part of the build and don’t require any further integration with the rest of the system) and can provide good testing coverage when done right.

I’ve seen many testers shy away from the notion of unit testing because it involves writing tests for a very small component of the code and there is a danger that key things will be missed. This is often just a fear due to the lack of visibility in the process or a lack of understanding of unit tests rather than a failure of unit tests themselves. Having a strong base of unit tests works because they can execute quickly as the code builds in the CI pipeline. It makes sense to have as many as possible and to cover as many types of scenarios as possible.

The biggest problem is that many teams don’t always know how to get it right. Firstly, unit testing shouldn’t be treated as a check-box activity, but rather approached with the proper analysis and commitment to test design that testers would ordinarily apply. This means that rather than just leaving unit testing in the hands of the developers, you should get testers involved in the process. Even if a tester is not strong in coding, they can still assist in identifying what parameters to look for in testing and the right places to assert, so that the integrated functionality can be tested properly later.
 
Excluding your testing experts from the unit testing approach means unit tests could miss some key validation areas. This is often why you might hear many testers give unit tests a bad rap. It’s not because unit testing is ineffectual, but simply that the tests often didn’t cover the right scenarios.

A second benefit of involving testers early is adding visibility to the unit testing effort. Teams likely waste a great deal of time (and therefore money) duplicating effort when testers end up re-testing something that was already covered by automated testing. That’s not to say independent validation shouldn’t occur, but it shouldn’t be excessive if scenarios have already been covered. Instead, the tester can focus on providing better exploratory testing as well as focusing their own automation efforts on integration testing those edge cases that might never have otherwise been covered.

It’s all about design and preparation

Doing this effectively, though, requires a fair amount of deliberate effort and design. It’s not just about making an effort to focus more on the unit tests and perhaps bringing in a person with strong test analysis skills to ensure test scenarios are suitably developed. It also requires user stories and requirements to be more specific to allow for appropriate testing. Often user stories end up high-level and only focus on the detail from a user level and not a technical level. How individual functions should behave and interact with their corresponding dependencies needs to be clear to allow for good unit testing to take place.

Much of the criticism that befalls unit testing from the testing community concerns the poor integration coverage it offers. Just because a feature works in isolation doesn’t mean it will work in conjunction with its dependencies. This is often why testers find so many defects early in their testing effort. This doesn’t need to be the case, as more detailed specifications can lead to more accurate mocking, allowing the unit tests to behave realistically and provide better results. There will always be “mocked” functionality that is not accurately known or designed, but with enough early thought, this amount of rework is greatly reduced.

Design is not just about unit tests though. One of the biggest barriers to test automation executing directly in the pipeline is that the team that deals with the larger integrated system only starts a lot of their testing and automation effort once the code has been deployed into a bigger environment. This wastes critical time in the development process, as certain issues will only be discovered later. With enough detail in the design, testers can at least start writing the majority of their automated tests while the developers are coding on their side.

This doesn’t mean that manual verification, exploratory testing, and actually using the software shouldn’t take place. Those are critical parts of any testing process and are important steps to ensuring software behaves as desired. These approaches are also effective at finding faults with the proposed design. However, automating the integration tests allows the process to be streamlined. These tests can then be included in the initial pipelines thereby improving the overall quality of the delivered product by providing quicker feedback to the development team of failures without the testing team even needing to get involved.

So what actually needs to be tested then?

I’ve spoken a lot about specific approaches to design and shifting left to achieve the best testing results. But you still can’t go ahead and automate everything you test, because it’s simply not feasible and adds too much to the execution time of the CI/CD pipelines. So knowing which scenarios need to be appropriately unit or integration tested for automation purposes is crucial, while trying to alleviate unnecessary duplication of the testing effort.

Before I dive into these different tests, it’s worth noting that while the aim is to remove duplication, there is likely to always be a certain level of duplication that will be required across tests to achieve the right level of coverage. You want to try and reduce it as much as possible, but erring on the side of duplication is safer if you can’t figure out a better way to achieve the test coverage you need.  

Areas to be unit tested

When it comes to building your pipeline, your unit tests and scans should typically fall into the CI portion of your pipeline, as they can all be evaluated as the code is being built.

Entry and exit points: All code receives input and then provides an output. Essentially, what you are looking to unit test is everything that a piece of code can receive, and then you must ensure it sends out the correct output. By catching everything that flows through each piece of code in the system, you greatly reduce the number of failures that are likely to occur when they are integrated as a whole.
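As a minimal sketch of this, the pytest-style example below exercises the entry and exit points of a small, hypothetical normalize_discount function (the function and its clamping rule are assumptions made purely for illustration):

```python
# A minimal sketch (pytest): cover what a piece of code can receive and
# verify what it sends out. normalize_discount is a hypothetical function.
def normalize_discount(percent: float) -> float:
    """Clamp a discount percentage to the 0-100 range."""
    return max(0.0, min(100.0, percent))


def test_normalize_discount_passes_through_valid_input():
    assert normalize_discount(25.0) == 25.0


def test_normalize_discount_clamps_negative_input_to_zero():
    assert normalize_discount(-5.0) == 0.0


def test_normalize_discount_clamps_oversized_input_to_one_hundred():
    assert normalize_discount(150.0) == 100.0
```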

Isolated functionality: While most code will operate on an integrated level, there are many functions that will handle all computation internally. These can be unit-tested exclusively and teams should aim to hit 100% unit test coverage on these pieces of code. I have mostly come across isolated functions when working in microservice architectures where authentication or calculator functions have no dependencies. This means that they can be unit tested with no need for additional integration.

Boundary value validations: Code behaves the same when it receives valid or invalid arguments, regardless of whether the input comes from a UI, an integrated API, or directly through the code. There is no need for testers to go through exhaustive scenarios when much of this can be covered in unit tests.
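For example, a single parametrized unit test can pin down the boundary values once, so they don’t need to be re-driven through the UI or an API later. The sketch below assumes a hypothetical is_valid_username rule (3 to 20 characters) purely for illustration:

```python
import pytest


# Hypothetical validation rule under test: usernames must be 3-20 characters.
def is_valid_username(name: str) -> bool:
    return 3 <= len(name) <= 20


# Boundary values on both sides of each limit, covered once at the unit level.
@pytest.mark.parametrize(
    "name, expected",
    [
        ("ab", False),      # just below the lower bound
        ("abc", True),      # exactly the lower bound
        ("a" * 20, True),   # exactly the upper bound
        ("a" * 21, False),  # just above the upper bound
    ],
)
def test_username_length_boundaries(name, expected):
    assert is_valid_username(name) == expected
```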

Clear data permutations: When the data inputs and outputs are clear, it makes that code or component an ideal candidate for a unit test. If you’re dealing with complex data permutations, then it is best to tackle these at an integration level. The reason for this is that complex data is often difficult to mock, slow to process, and will slow down your coding pipeline.

Security and performance: While the majority of load, performance, and security testing happens at an integration level, these can also be tested at a unit level. Each piece of code should be able to handle invalid authentication, redirection, or SQL/code injection attempts, and execute efficiently. Unit tests can be created to validate against these. After all, a system’s security and performance are only as effective as its weakest part, so ensuring there are no weak parts is a good place to start.
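As a small sketch of what a security check can look like at the unit level, the test below verifies that a hypothetical data-access function treats a classic SQL injection payload as plain data rather than executable SQL; the function, schema, and in-memory SQLite database are all assumptions used for illustration:

```python
import sqlite3


# Hypothetical data-access function under test: it must use a parameterized
# query so that hostile input is treated as data, not SQL.
def find_user(conn: sqlite3.Connection, username: str):
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()


def test_find_user_is_not_vulnerable_to_sql_injection():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice')")

    # A classic injection payload should match nothing and leave the data intact.
    rows = find_user(conn, "' OR '1'='1")

    assert rows == []
```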

Areas for integration automation

These are tests that will typically run post-deployment of your code into a bigger environment – though it doesn’t have to be a permanent environment, and something utilizing containers works equally well. I’ve seen many teams still try to test everything in this phase, and this can lead to a very long portion of your pipeline execution, which is not great if you’re looking to deploy into production on a regular basis each day.

So, the key is to only test those areas that your unit tests are not going to cover satisfactorily, while also focusing on functionality and performance in your overall test design. Some design principles that I give later in this article will help with this.

Positive integration scenarios: We still need to automate integration points to ensure they work correctly. However, the trick is to not focus too much on exhaustive error validation, as these are often triggered by specific outputs that can be unit tested. Rather focus on ensuring successful integration takes place.
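A minimal sketch of such a positive integration test is shown below; the /orders endpoint and the SERVICE_BASE_URL environment variable are hypothetical, standing in for whatever service the pipeline has just deployed:

```python
import os

import requests

# Hypothetical base URL of the freshly deployed environment under test.
BASE_URL = os.environ.get("SERVICE_BASE_URL", "http://localhost:8080")


def test_create_order_happy_path():
    payload = {"sku": "ABC-123", "quantity": 2}

    # Exercise the integration point end to end.
    response = requests.post(f"{BASE_URL}/orders", json=payload, timeout=10)

    # Focus on successful integration rather than exhaustive error handling,
    # which is better covered by unit tests.
    assert response.status_code == 201
```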

Test backend over frontend: Where possible, focus your automation effort on backend components rather than frontend components. While the user might be using the front end more often, it is typically not where a lot of the functional complexity lies, and backend testing is a lot faster and therefore better for your test automation execution.

Security: One of the common mistakes is that teams rely on security scans for the majority of their security testing and then don’t automate some other critical penetration tests that are performed on the software. And while some penetration tests can’t be executed in a pipeline effectively, many can and these should be automated and run regularly given their importance, especially when dealing with any functionality that covers access, payment, or data privacy. These are areas that can’t be compromised and should be covered.

Are there automated tests that shouldn’t be included in the CI/CD pipelines?

When it comes to automation, it’s not just about understanding what to automate, but also what not to automate – and even when tests are automated, they shouldn’t always land in your CI/CD pipelines. While the goal is to always shift left as much as possible and avoid these areas, for some architectures it’s not always possible, and there may be some additional level of validation required to satisfy the needed test coverage.

This doesn’t mean that these tests shouldn’t be automated or placed in pipelines, just that they should be separated from your CI/CD processes and executed on a daily basis as part of a scheduled run, not as part of your code delivery.

End-to-end tests with high data requirements: Anything that requires complex data scenarios to test should be reserved for execution in a proper test environment outside of a pipeline. While these tests can be automated, they are often too complex or specific for regular execution in a pipeline, plus they take a long time to execute and validate, making them not ideal for pipelines.

Visual regression: Outside of functional testing, it is often important to perform visual regression testing against any site UI to ensure it looks consistent across a variety of devices, browsers, and resolutions. This is an important aspect of testing that often gets overlooked. However, as it doesn’t deal with actual functional behavior, it is often best executed outside of your core CI/CD pipelines, though it is still a requirement before major releases or UI updates.

Mutation testing: Mutation testing is a fantastic way to check the coverage of your unit testing efforts: it adjusts different decisions in your code and reveals what your unit tests miss. However, the process is quite lengthy and is best done as part of a review process rather than forming part of your pipelines.

Load and stress testing: While it is important to test the performance of different parts of code, you don’t want to put a system under any form of load or stress in a pipeline. To best do this testing, you need a dedicated environment and specific conditions that will stress the limits of your application under test. Not the sort of thing you want to do as part of your pipelines.   

Designing effective tests

So, it’s clear that we need a shift-left approach that relies heavily on unit tests with high coverage, but then also a good range of tests covering all areas to get the quality that is likely needed. It still seems like a lot though and there is always the risk that the pipelines can still take a considerable time to execute, especially at a CD level where the more time-intensive integration tests will be executed post-code deployment.

The way you design your tests will also help make this effective. Automating tests that are unnecessary is a big waste of time, but so are inefficiently written tests. The biggest problem here is that testers often don’t have a full understanding of the efficiency of their test automation, focusing on execution rather than looking for the most processor- and memory-efficient way of doing it.

The secret to making all tests work is simplicity. Automated tests shouldn’t be complicated. Perform an action, and get a response. It is important to stick to that when designing your tests. The following attributes are important to follow in designing your tests to keep them both simple and performant.

1. Naming your tests

You might not think naming tests is important, but it matters when it comes to the maintainability of the tests. While test names might not have anything to do with the test functionality or speed of execution, they do help others know what a test does. So when failures occur in a test or something needs to be fixed, it makes the maintenance process a lot quicker, and that is important when wading through the many thousands of tests your pipeline is likely to have.

Tests are useful for more than just making sure that your code works; they also provide documentation. Just by looking at the suite of unit tests, you should be able to infer the behavior of your code. Additionally, when tests fail, you can see exactly which scenarios did not meet your expectations.

The name of your test should consist of three parts:

  • The name of the method being tested
  • The scenario under which it’s being tested
  • The behavior expected when the scenario is invoked

By using these naming conventions, you ensure that it’s easy to identify what any test or code is supposed to do while also speeding up your ability to debug your code.
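A short sketch of what this convention can look like in practice is below; the method, scenarios, and expected behaviors are hypothetical, and the test bodies are omitted:

```python
# Naming sketch: <method under test>_<scenario>_<expected behaviour>.
def test_withdraw_amount_exceeds_balance_raises_insufficient_funds_error():
    ...  # body omitted


def test_withdraw_valid_amount_reduces_balance():
    ...  # body omitted


def test_withdraw_zero_amount_raises_value_error():
    ...  # body omitted
```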

2. Arranging your tests

Readability is one of the most important aspects of writing a test. While it may be possible to combine some steps and reduce the size of your test, the primary goal is to make the test as readable as possible. A common pattern to writing simple, functional tests is “Arrange, Act, Assert”. As the name implies, it consists of three main actions:

  • Arrange your objects, by creating and setting them up in a way that readies your code for the intended test
  • Act on an object
  • Assert that something is as expected

By clearly separating each of these actions within the test, you highlight:

  • The dependencies required to call your code/test
  • How your code is being called, and
  • What you are trying to assert.

This makes tests easy to write, understand and maintain while also improving their overall performance as they perform simple operations each time.
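The sketch below illustrates the pattern; the ShoppingCart class is a hypothetical stand-in for whatever code is under test:

```python
# Hypothetical class under test, defined inline to keep the sketch self-contained.
class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, sku: str, price: float) -> None:
        self.items.append((sku, price))

    def total(self) -> float:
        return sum(price for _, price in self.items)


def test_total_with_two_items_returns_sum_of_prices():
    # Arrange: create the objects and state the test needs.
    cart = ShoppingCart()
    cart.add("book", 10.0)
    cart.add("pen", 2.5)

    # Act: call the behaviour under test.
    result = cart.total()

    # Assert: verify the single expected outcome.
    assert result == 12.5
```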

3. Write minimally passing tests

Too often the people writing automated tests are trying to utilize complex coding techniques that can cater to multiple different behaviors, but in the testing world, all it does is introduce complexity. Tests that include more information than is required to pass the test have a higher chance of introducing errors and can make the intent of the test less clear. For example, setting extra properties on models or using non-zero values when they are not required, only detracts from what you are trying to prove. When writing tests, you want to focus on the behavior. To do this, the input that you use should be as simple as possible.

4. Avoid logic in tests

When you introduce logic into your test suite, the chance of introducing a bug through human error or false results increases dramatically. The last place that you want to find a bug is within your test suite because you should have a high level of confidence that your tests work. Otherwise, you will not trust them and they do not provide any value.

When writing your tests, avoid manual string concatenation and logical conditions such as if, while, for, or switch, because this will help you avoid unnecessary logic. Similarly, any form of calculation should be avoided – your test should rely on an easily identifiable input and clear output – otherwise, it can easily become flaky based on certain criteria – plus it adds to maintenance as when the code logic changes, the test logic will also need to change.

Another important thing here is to remember that pipeline tests should execute quickly, and logic tends to cost more processing time. Yes, it might seem insignificant at first, but with several hundred tests, this can add up.
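One common way to keep logic out of the tests themselves, sketched below, is to let the framework iterate over the inputs (here with pytest’s parametrize) instead of writing loops or conditionals inside a test; the slugify function is a hypothetical example:

```python
import pytest


# Hypothetical function under test.
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")


# The framework does the iteration, so the test body stays free of loops,
# branches, and string concatenation, and each failing case is reported separately.
@pytest.mark.parametrize(
    "title, expected",
    [
        ("Hello World", "hello-world"),
        ("  Padded Title ", "padded-title"),
        ("already-slugged", "already-slugged"),
    ],
)
def test_slugify_returns_expected_slug(title, expected):
    assert slugify(title) == expected
```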

5. Use mocks and stubs wherever possible

A lot of testers might frown on this, as the thought of using lots of mocks and stubs can be seen as avoiding the true integrated behavior of an application. That concern is valid for end-to-end testing, which you still want to automate, but end-to-end tests are not ideal for pipeline execution. Not only do they slow down pipeline execution, they also create flakiness in your test results when external functions are not operational or are out of sync with your changes.

The best way to ensure that your test results are more reliable – while allowing you to take greater control of your testing effort and improve coverage – is to build mocking into your test framework and rely on stubs to return the complex data patterns that an external function would otherwise provide.
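The sketch below shows the general shape of this, using Python’s unittest.mock to stub out a hypothetical external exchange-rate client so the test stays fast and deterministic:

```python
from unittest.mock import Mock


# Hypothetical function under test: it depends on an external rates client
# that would be slow or flaky to call for real inside a pipeline.
def convert(amount: float, currency: str, rates_client) -> float:
    rate = rates_client.get_rate(currency)
    return round(amount * rate, 2)


def test_convert_uses_stubbed_exchange_rate():
    # Stub the external dependency with a fixed, predictable response.
    rates_client = Mock()
    rates_client.get_rate.return_value = 1.25

    result = convert(100.0, "EUR", rates_client)

    assert result == 125.0
```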

6. Prefer helper methods to Setup and Teardown

In unit testing frameworks, a Setup function is called before each and every unit test within your test suite. Each test will generally have different requirements in order to get the test up and running. Unfortunately, Setup forces you to use the exact same requirements for each test. While some may see this as a useful tool, it generally ends up leading to bloated and hard-to-read tests. If you require a similar object or state for your tests, use a helper method rather than leveraging Setup and Teardown attributes; a short sketch of this approach follows the list below.

This will help by introducing:

  • Less confusion when reading the tests, since all of the code is visible from within each test.
  • Less chance of setting up too much or too little for the given test.
  • Less chance of sharing state between tests which would otherwise create unwanted dependencies between them.
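Here is a small sketch of the helper-method approach; the Account class and build_account helper are hypothetical:

```python
import pytest


# Hypothetical class under test.
class Account:
    def __init__(self, balance: float = 0.0, frozen: bool = False):
        self.balance = balance
        self.frozen = frozen

    def withdraw(self, amount: float) -> None:
        if self.frozen:
            raise RuntimeError("account frozen")
        self.balance -= amount


def build_account(balance: float = 100.0, frozen: bool = False) -> Account:
    """Helper method used instead of a shared Setup/Teardown: every test
    states exactly the state it needs at the call site."""
    return Account(balance=balance, frozen=frozen)


def test_withdraw_valid_amount_reduces_balance():
    account = build_account(balance=50.0)
    account.withdraw(20.0)
    assert account.balance == 30.0


def test_withdraw_on_frozen_account_raises_error():
    account = build_account(frozen=True)
    with pytest.raises(RuntimeError):
        account.withdraw(10.0)
```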

7. Avoid multiple asserts

When introducing multiple assertions into a test case, it is not guaranteed that all of them will be executed. This is because the test will likely fail at an earlier assertion, leaving the remaining assertions unexecuted. Once an assertion fails in a unit test, the subsequent assertions are automatically considered to be failing, even if they are not. The result is that the location of the failure is unclear, which also wastes debugging time.

When writing your tests, try to only include one assert per test. This helps to ensure that it is easy to pinpoint exactly what failed and why. Teams can easily make the mistake of trying to write as few tests as possible that achieve high coverage, but in the end, all it does is make future maintenance a nightmare.
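As a small sketch, the hypothetical example below splits what could have been a single multi-assert test into two focused tests, so a failure immediately points at the behavior that broke:

```python
# Hypothetical function under test.
def parse_full_name(full_name: str) -> dict:
    first, last = full_name.split(" ", 1)
    return {"first": first, "last": last}


# One behaviour and one assert per test.
def test_parse_full_name_extracts_first_name():
    assert parse_full_name("Ada Lovelace")["first"] == "Ada"


def test_parse_full_name_extracts_last_name():
    assert parse_full_name("Ada Lovelace")["last"] == "Lovelace"
```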

This ties into removing test duplication as well. You don’t want to repeat tests throughout the pipeline execution, and making what each test covers more visible helps the team ensure this objective is achieved.

8. Treat your tests like production code

While test code may not be executed in a production setting, it should be treated just the same as any other piece of code. That means it should be updated and maintained on a regular basis. Don’t write tests and assume that everything is done. You will need to put in the work to keep your tests functional and healthy, while also keeping all libraries and dependencies up to date. You don’t want technical debt in your code – don’t have it in your tests either.

9. Make test automation a habit

Okay, so this last one is less of an actual design principle and more of a tip on good test writing. Like with all things coding-related, knowing the theory is not enough; it requires practice to get good and build a habit, so these testing practices will take time to get right and feel natural. The skill of writing a proper test is incredibly undervalued and one that will add a lot of value to the quality of the code, so the extra effort required is certainly worth it.

Conclusion – it’s all about good test design

As you can see, test automation across your full stack can still work within your pipeline and provide you with a high level of regression coverage while not breaking or slowing down your pipeline unnecessarily. What it does require, though, is good test design to work effectively, and so the unit and automated tests will need to be well written to be of most value.

A good DevOps testing strategy requires a solid base of unit tests to provide most of the coverage, with mocking to help drive the rest of the automation effort, leaving only a few end-to-end automated tests to ensure everything works together and giving your team confidence that the pipeline tests will successfully deliver on their quality needs.

About the Author



Global NoSQL Software Market Size and Forecast | Amazon, Couchbase, MongoDB Inc …

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

New Jersey, United States – The Global NoSQL Software Market is comprehensively and accurately detailed in the report, taking into consideration various factors such as competition, regional growth, segmentation, and market size by value and volume. This is an excellent research study specially compiled to provide the latest insights into critical aspects of the Global NoSQL Software market. The report includes different market forecasts related to market size, production, revenue, consumption, CAGR, gross margin, price, and other key factors. It is prepared with the use of industry-best primary and secondary research methodologies and tools. It includes several research studies such as manufacturing cost analysis, absolute dollar opportunity, pricing analysis, company profiling, production and consumption analysis, and market dynamics.

The competitive landscape is a critical aspect every key player needs to be familiar with. The report throws light on the competitive scenario of the Global NoSQL Software market to know the competition at both the domestic and global levels. Market experts have also offered the outline of every leading player of the Global NoSQL Software market, considering the key aspects such as areas of operation, production, and product portfolio. Additionally, companies in the report are studied based on key factors such as company size, market share, market growth, revenue, production volume, and profits.

Get Full PDF Sample Copy of Report: (Including Full TOC, List of Tables & Figures, Chart) @ https://www.verifiedmarketresearch.com/download-sample/?rid=153255

Key Players Mentioned in the Global NoSQL Software Market Research Report:

Amazon, Couchbase, MongoDB Inc., Microsoft, Marklogic, OrientDB, ArangoDB, Redis, CouchDB, DataStax.

Global NoSQL Software Market Segmentation:  

NoSQL Software Market, By Type

• Document Databases
• Key-value Databases
• Wide-column Store
• Graph Databases
• Others

NoSQL Market, By Application

• Social Networking
• Web Applications
• E-Commerce
• Data Analytics
• Data Storage
• Others

The report comes out as an accurate and highly detailed resource for gaining significant insights into the growth of different product and application segments of the Global NoSQL Software market. Each segment covered in the report is exhaustively researched on the basis of market share, growth potential, drivers, and other crucial factors. The segmental analysis provided in the report will help market players to know when and where to invest in the Global NoSQL Software market. Moreover, it will help them to identify key growth pockets of the Global NoSQL Software market.

The geographical analysis of the Global NoSQL Software market provided in the report is just the right tool that competitors can use to discover untapped sales and business expansion opportunities in different regions and countries. Each regional and country-wise Global NoSQL Software market considered for research and analysis has been thoroughly studied based on market share, future growth potential, CAGR, market size, and other important parameters. Not all regional markets are impacted by the same trends. Taking this into consideration, the analysts authoring the report have provided an exhaustive analysis of the specific trends of each regional Global NoSQL Software market.

Inquire for a Discount on this Premium Report @ https://www.verifiedmarketresearch.com/ask-for-discount/?rid=153255

What to Expect in Our Report?

(1) A complete section of the Global NoSQL Software market report is dedicated to market dynamics, which include influence factors, market drivers, challenges, opportunities, and trends.

(2) Another broad section of the research study is reserved for regional analysis of the Global NoSQL Software market where important regions and countries are assessed for their growth potential, consumption, market share, and other vital factors indicating their market growth.

(3) Players can use the competitive analysis provided in the report to build new strategies or fine-tune their existing ones to rise above market challenges and increase their share of the Global NoSQL Software market.

(4) The report also discusses the competitive situation and trends and sheds light on company expansions and mergers and acquisitions taking place in the Global NoSQL Software market. Moreover, it brings to light the market concentration rate and the market shares of the top three and five players.

(5) Readers are provided with findings and conclusion of the research study provided in the Global NoSQL Software Market report.

Key Questions Answered in the Report:

(1) What are the growth opportunities for the new entrants in the Global NoSQL Software industry?

(2) Who are the leading players functioning in the Global NoSQL Software marketplace?

(3) What are the key strategies participants are likely to adopt to increase their share in the Global NoSQL Software industry?

(4) What is the competitive situation in the Global NoSQL Software market?

(5) What are the emerging trends that may influence the Global NoSQL Software market growth?

(6) Which product type segment will exhibit high CAGR in future?

(7) Which application segment will grab a handsome share in the Global NoSQL Software industry?

(8) Which region is lucrative for the manufacturers?

For More Information or Query or Customization Before Buying, Visit @ https://www.verifiedmarketresearch.com/product/nosql-software-market/ 

About Us: Verified Market Research® 

Verified Market Research® is a leading Global Research and Consulting firm that has been providing advanced analytical research solutions, custom consulting and in-depth data analysis for 10+ years to individuals and companies alike that are looking for accurate, reliable and up to date research data and technical consulting. We offer insights into strategic and growth analyses, Data necessary to achieve corporate goals and help make critical revenue decisions. 

Our research studies help our clients make superior data-driven decisions, understand market forecasts, capitalize on future opportunities and optimize efficiency by working as their partner to deliver accurate and valuable information. The industries we cover span a large spectrum including Technology, Chemicals, Manufacturing, Energy, Food and Beverages, Automotive, Robotics, Packaging, Construction, Mining & Gas, etc.

We, at Verified Market Research, assist in understanding holistic market indicating factors and most current and future market trends. Our analysts, with their high expertise in data gathering and governance, utilize industry techniques to collate and examine data at all stages. They are trained to combine modern data collection techniques, superior research methodology, subject expertise and years of collective experience to produce informative and accurate research. 

Having serviced over 5000+ clients, we have provided reliable market research services to more than 100 Global Fortune 500 companies such as Amazon, Dell, IBM, Shell, Exxon Mobil, General Electric, Siemens, Microsoft, Sony and Hitachi. We have co-consulted with some of the world’s leading consulting firms like McKinsey & Company, Boston Consulting Group, Bain and Company for custom research and consulting projects for businesses worldwide. 

Contact us:

Mr. Edwyne Fernandes

Verified Market Research®

US: +1 (650)-781-4080
UK: +44 (753)-715-0008
APAC: +61 (488)-85-9400
US Toll-Free: +1 (800)-782-1768

Email: sales@verifiedmarketresearch.com

Website:- https://www.verifiedmarketresearch.com/



SQL to NoSQL: Architecture Differences and Considerations for Migration – InfoQ

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

When and how to migrate data from SQL to NoSQL are matters of much debate. It can certainly be a daunting task, but when your SQL systems hit architectural limits or your cloud provider expenses skyrocket, it’s probably time to consider a move.

Many IT organizations have followed the principles in this paper and have migrated successfully from RDBMS to the Scylla NoSQL database.

Read the whitepaper to learn:

  • SQL versus NoSQL Overview
  • Tradeoffs between Flexibility, Scale and Cost
  • Architectural Differences Between SQL and NoSQL
  • Considerations for successful SQL to NoSQL Migrations



MongoDB, Inc. (NASDAQ:MDB) Receives Consensus Recommendation of “Moderate Buy …

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB – Get Rating) has earned an average recommendation of “Moderate Buy” from the twenty-three ratings firms that are currently covering the firm, MarketBeat reports. One analyst has rated the stock with a sell rating, two have assigned a hold rating and fifteen have given a buy rating to the company. The average 12-month price objective among brokerages that have issued a report on the stock in the last year is $260.27.

MDB has been the topic of a number of recent analyst reports. Stifel Nicolaus dropped their price objective on MongoDB from $256.00 to $240.00 in a report on Thursday, February 23rd. Citigroup lowered their target price on MongoDB from $300.00 to $295.00 and set a “buy” rating on the stock in a research note on Tuesday, March 7th. The Goldman Sachs Group lowered their target price on MongoDB from $325.00 to $280.00 and set a “buy” rating on the stock in a research note on Thursday, March 9th. Oppenheimer lowered their target price on MongoDB from $320.00 to $270.00 and set an “outperform” rating on the stock in a research note on Thursday, March 9th. Finally, Royal Bank of Canada reiterated an “outperform” rating and issued a $235.00 target price on shares of MongoDB in a research note on Thursday, March 9th.

MongoDB Price Performance

MDB opened at $292.37 on Wednesday. The stock’s 50-day moving average price is $240.96 and its 200 day moving average price is $212.13. MongoDB has a 12-month low of $135.15 and a 12-month high of $390.84. The company has a current ratio of 3.80, a quick ratio of 3.80 and a debt-to-equity ratio of 1.54. The firm has a market cap of $20.48 billion, a price-to-earnings ratio of -58.01 and a beta of 1.06.

MongoDB (NASDAQ:MDB – Get Rating) last posted its earnings results on Wednesday, March 8th. The company reported ($0.98) earnings per share (EPS) for the quarter, topping the consensus estimate of ($1.18) by $0.20. MongoDB had a negative return on equity of 48.38% and a negative net margin of 26.90%. The firm had revenue of $361.31 million for the quarter, compared to analyst estimates of $335.84 million. As a group, sell-side analysts expect that MongoDB will post -4.04 earnings per share for the current year.

Insiders Place Their Bets

In other news, CAO Thomas Bull sold 605 shares of the company’s stock in a transaction that occurred on Monday, April 3rd. The stock was sold at an average price of $228.34, for a total transaction of $138,145.70. Following the completion of the transaction, the chief accounting officer now directly owns 17,706 shares of the company’s stock, valued at $4,042,988.04. The transaction was disclosed in a filing with the Securities & Exchange Commission, which can be accessed through the SEC website. In other MongoDB news, CRO Cedric Pech sold 15,534 shares of the stock in a transaction on Tuesday, May 9th. The stock was sold at an average price of $250.00, for a total transaction of $3,883,500.00. Following the completion of the transaction, the executive now directly owns 37,516 shares of the company’s stock, valued at approximately $9,379,000. The sale was disclosed in a filing with the SEC, which is accessible through the SEC website. Also, CAO Thomas Bull sold 605 shares of the stock in a transaction on Monday, April 3rd. The shares were sold at an average price of $228.34, for a total value of $138,145.70. Following the transaction, the chief accounting officer now directly owns 17,706 shares of the company’s stock, valued at $4,042,988.04. The disclosure for this sale can be found here. Insiders have sold a total of 81,013 shares of company stock worth $18,896,567 over the last quarter. 4.80% of the stock is owned by corporate insiders.

Hedge Funds Weigh In On MongoDB

A number of institutional investors and hedge funds have recently added to or reduced their stakes in the company. Bessemer Group Inc. purchased a new stake in shares of MongoDB in the fourth quarter valued at $29,000. BI Asset Management Fondsmaeglerselskab A S acquired a new stake in shares of MongoDB in the fourth quarter worth $30,000. Global Retirement Partners LLC boosted its position in shares of MongoDB by 346.7% in the first quarter. Global Retirement Partners LLC now owns 134 shares of the company’s stock worth $30,000 after buying an additional 104 shares during the period. Lindbrook Capital LLC boosted its position in shares of MongoDB by 350.0% in the fourth quarter. Lindbrook Capital LLC now owns 171 shares of the company’s stock worth $34,000 after buying an additional 133 shares during the period. Finally, Y.D. More Investments Ltd acquired a new stake in shares of MongoDB in the fourth quarter worth $36,000. 84.86% of the stock is owned by institutional investors.

MongoDB Company Profile


MongoDB, Inc engages in the development and provision of a general-purpose database platform. The firm’s products include MongoDB Enterprise Advanced, MongoDB Atlas and Community Server. It also offers professional services including consulting and training. The company was founded by Eliot Horowitz, Dwight A.

Featured Articles

Analyst Recommendations for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.


Article originally posted on mongodb google news. Visit mongodb google news



OpenAI Launches its Official ChatGPT App for iOS

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

OpenAI made its official ChatGPT app available on the US App Store, providing voice-based input, GPT-4 support for paying users, and faster response times. The company said they will soon start the roll out to additional countries and that an Android version is in the making.

According to OpenAI, the launch of their iOS app responds to the aim of making their technology more readily accessible to users. As a side-effect, OpenAI will likely get to establish some order in the App Store, which has been flooded by many ChatGPT clients in the last few months, with several of them allegedly having 7-figure revenues. Although this does not necessarily mean all of those apps are profitable, as Twitter user Andrey Zagoruyko remarks, it clearly hints at the huge interest that ChatGPT has met among the general public.

While several comments on Hacker News point out that the app does not seem to be much more than a Web view wrapper around the OpenAI website, it still provides a couple of distinctive features that bring an improved user experience, including full chat history and chat synchronization across devices, as well as voice input. That suggests the app is in fact API-based, rather than a WebView wrapper.

Voice-based input leverages OpenAI Whisper, a 1.6 billion parameter AI model that can transcribe and translate speech audio from 97 different languages. Whisper is open source and available on GitHub, including both source code and pre-trained model files. While Whisper can be used through an API, it appears that OpenAI ChatGPT app is integrating it natively for improved responsiveness.

This is confirmed by a “tear-down” analysis carried out at Emerge Tools, provider of a number of analysis tools for mobile apps. The analysis showed that the app binary takes up 41MB, almost half of which is debugging symbols, attesting to the still “experimental” nature of the implementation. The app includes a number of dependencies, such as MixPanel for analytics, DataDog for logging, Sentry for performance monitoring, and others.

As to chat history, OpenAI has provided the ability to disable it in its latest update, with the main benefit of keeping chats private to devices. When chat history is enabled, OpenAI will use it to improve their models. If you disable chat history, though, your chats will only be stored for 30 days.

As mentioned, the app is currently available only to US-based iOS users, but it will reach other countries in the coming weeks, says OpenAI, while the Android version is coming soon.

About the Author



Unlocking Software Engineering Potential for Better Products

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

Becoming an empowered team means solving problems rather than shipping features. Empowering software engineers and involving them early in discovery work can result in better products. If we measure outcomes rather than output, we can also hold teams accountable. Martin Mazur spoke about unlocking engineering potential at NDC Oslo 2023.

Supporting software engineers to empower them means trusting them and getting out of their way, Mazur said. We have, for years, broken down software engineers’ innovation capacity by removing them from the discussion on what to build, he mentioned. Innovation happens when real customer problems meet new technology – nobody knows what tech can do except for the engineers. By involving engineers early in discovery work, we can create products that exceed our customers’ expectations, Mazur said.

In order to cope with empowered work, engineers need to be able to navigate uncertainty, plan their work, have meaningful conversations, and understand how value is created, Mazur said. These are skills that they can learn, but in the context that most engineers have been working, they’ve never had a reason to learn them, he mentioned.

Mazur mentioned that if we look at competence as both width and depth, at some point in your career you really don’t get a huge effect from going deeper. Instead, broadening skills and looking at things such as business models, design principles, and interpersonal skills can raise engineers to new levels, he suggested:

It’s easy to get started by, for example, attending an unusual talk at a conference or picking a book you wouldn’t normally read.

Keeping teams accountable for results can be tricky if you don’t have the right culture and organization, Mazur said. People must feel empowered and in control of their work to accept accountability. He mentioned that the best thing we can do is to measure our team’s success on the outcome, that is, the impact they’ve created for the user, product, or business, not the output they have generated:

If we measure outcomes, we can also hold teams accountable for that outcome. If we measure output, we only know that they’ve worked at a desired pace, not what value that work actually generated.

Mazur suggested that software developers should invest in other types of skills than purely technical skills. These investments have a greater payout for those individuals, their teams, products, and, ultimately, the world, he concluded.

InfoQ interviewed Martin Mazur about how to unlock engineering potential.

InfoQ: What makes solving users’ problems more important than delivering features?

Martin Mazur: It’s all about the value we create with our software. A feature is only valuable to the user, their organization, and ultimately the world if it solves something – i.e., features we build that are never used are a huge waste of human potential.

InfoQ: What do teams need to be able to solve problems?

Mazur: It’s not one single thing, there are several factors that need to be present in order for teams to be able to solve problems. Ultimately, most teams and organizations need a culture change. We need to reach a point where people deeply care about their software’s impact on the end user. This requires organizations that are led with context and not control; teams must be delegated problems, trusted to solve them, and held accountable for the results.

InfoQ: How can teams improve the way that they make decisions?

Mazur: The most important thing around becoming better at making decisions is distinguishing good decisions from good outcomes, and vice versa. A good decision is something that, given all the information at hand, is the correct course of action. That means if you had to redo the decision, you would have made the same call again and again. Good decisions can still lead to bad outcomes.

Once we understand that, we know we have to act on the information we have, not the information we wish we had. To summarize, a decision is like a bet – and just like a bet, it has odds; the correct decision is the one with the best odds.

Often engineers get stuck on all the information they don’t have and end up in analysis paralysis. What happens then is that there is usually no time to wait, and not making a decision is also a decision. We end up with the default option which could be either good or bad – the equivalent of a coin flip.

InfoQ: What’s your advice to teams? And to individual software developers?

Mazur: The best advice for both is always asking two questions about your work.

“What is it for?” and “Who is it for?” and not do a surface-level job answering those questions. Really dig deep and figure out what problem the product solves and for who – this will create a new perspective for your work.

About the Author



MongoDB and Alibaba Cloud expand partnership to empower companies to build highly …

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc and Alibaba Cloud, the digital technology and intelligence backbone of Alibaba Group, are extending their partnership to further integrate MongoDB and Alibaba Cloud services to serve even more customers across industries such as gaming, automotive, and content development globally.

MongoDB provides a popular non-relational database, and through this partnership, customers can easily adopt and consume MongoDB-as-a-service—ApsaraDB for MongoDB—from Alibaba Cloud’s data centers globally.

Customers can use ApsaraDB for MongoDB to quickly and easily build modern applications at enterprise scale, according to the vendors.

“MongoDB’s partnership with Alibaba Cloud is valuable for our customers for several reasons,” said Alan Chhabra, executive vice president for worldwide partnerships at MongoDB. “We joined a U.S.-based software innovator and one of the most strategic cloud providers in the world to bring MongoDB’s flexible and scalable data model to developers in China. The past three years have produced tremendous innovation for our joint clients, and we look forward to another four years of driving even more customer success.”

Alibaba Cloud works closely with MongoDB’s technical teams to rapidly develop and launch cloud services for its customers.

Alibaba Cloud’s ApsaraDB for MongoDB takes advantage of the document data model to provide developers a flexible and highly scalable database to easily build applications and quickly ship new features to meet business demands.

“The three years of cooperation with MongoDB have demonstrated how much customers can benefit when we closely integrate MongoDB’s capabilities with Alibaba Cloud’s cloud-native environment, known as ApsaraDB for MongoDB,” said Dr. Li Feifei, president of database business, Alibaba Cloud Intelligence. “By using the MongoDB database with Alibaba Cloud’s distinctive features, customers can rapidly innovate and scale their business while reducing costs and increasing efficiency on ApsaraDB for MongoDB.”

For more information about this news, visit www.mongodb.com.



Article originally posted on mongodb google news. Visit mongodb google news



MongoDB could ‘shame’ Wall Street with blowout results: Monness, Crespi, Hardt

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

[Image: MongoDB office in Silicon Valley – Michael Vi/iStock Editorial via Getty Images]

Cloud software provider MongoDB (NASDAQ:MDB) is slated to report first-quarter results on Thursday after the close of trading and the quarterly figures could put Wall Street’s estimates to “shame,” according to investment firm Monness, Crespi, Hardt.

Analyst Brian White, who rates

Article originally posted on mongodb google news. Visit mongodb google news



The five-year returns have been massive for MongoDB (NASDAQ:MDB) shareholders …

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Long term investing can be life changing when you buy and hold the truly great businesses. While not every stock performs well, when investors win, they can win big. For example, the MongoDB, Inc. (NASDAQ:MDB) share price is up a whopping 475% in the last half decade, a handsome return for long term holders. If that doesn’t get you thinking about long term investing, we don’t know what will. It’s also good to see the share price up 35% over the last quarter.

Since it’s been a strong week for MongoDB shareholders, let’s have a look at the trend of the longer-term fundamentals.

See our latest analysis for MongoDB

Since MongoDB wasn’t profitable in the last twelve months, it is unlikely we’ll see a strong correlation between its share price and its earnings per share (EPS). Arguably revenue is our next best option. When a company doesn’t make profits, we’d generally expect to see good revenue growth. That’s because fast revenue growth can be easily extrapolated to forecast profits, often of considerable size.

For the last half decade, MongoDB can boast revenue growth at a rate of 37% per year. Even measured against other revenue-focussed companies, that’s a good result. Fortunately, the market has not missed this, and has pushed the share price up by 42% per year in that time. It’s never too late to start following a top notch stock like MongoDB, since some long term winners go on winning for decades. On the face of it, this looks like a good opportunity, although we note sentiment seems very positive already.

You can see how earnings and revenue have changed over time in the image below (click on the chart to see the exact values).

[Chart: MongoDB earnings and revenue growth]

MongoDB is a well known stock, with plenty of analyst coverage, suggesting some visibility into future growth. If you are thinking of buying or selling MongoDB stock, you should check out this free report showing analyst consensus estimates for future profits.

A Different Perspective

It’s good to see that MongoDB has rewarded shareholders with a total shareholder return of 19% in the last twelve months. However, the TSR over five years, coming in at 42% per year, is even more impressive. Potential buyers might understandably feel they’ve missed the opportunity, but it’s always possible business is still firing on all cylinders. It’s always interesting to track share price performance over the longer term. But to understand MongoDB better, we need to consider many other factors. Even so, be aware that MongoDB is showing 3 warning signs in our investment analysis that you should know about…

Of course MongoDB may not be the best stock to buy. So you may wish to see this free collection of growth stocks.

Please note, the market returns quoted in this article reflect the market weighted average returns of stocks that currently trade on American exchanges.

Have feedback on this article? Concerned about the content? Get in touch with us directly. Alternatively, email editorial-team (at) simplywallst.com.

This article by Simply Wall St is general in nature. We provide commentary based on historical data and analyst forecasts only using an unbiased methodology and our articles are not intended to be financial advice. It does not constitute a recommendation to buy or sell any stock, and does not take account of your objectives, or your financial situation. We aim to bring you long-term focused analysis driven by fundamental data. Note that our analysis may not factor in the latest price-sensitive company announcements or qualitative material. Simply Wall St has no position in any stocks mentioned.


Article originally posted on mongodb google news. Visit mongodb google news
