Article: Is Your Test Suite Brittle? Maybe It’s Too DRY

MMS Founder
MMS Kimberly Hendrick

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • Don’t repeat yourself, or “DRY”, is a useful principle to apply to both application code and test code.
  • The misapplication of the DRY technique can make tests hard to understand, maintain, and change.
  • While code duplication may not be so harmful to your tests, allowing duplication of concepts causes the same maintainability problems in test code as in application code.
  • When applying DRY to tests, clearly distinguish between the three steps of a test: arrange, act, and assert.
  • TDD provides many benefits and can promote a shorter feedback loop and better test coverage.

Those of us who write automated tests do so for many reasons and gain several benefits. We gain increased trust in the correctness of the code, confidence that allows us to refactor, and faster feedback from our tests on the design of the application code.

I’m a huge proponent of TDD (Test Driven Development) and believe TDD provides all the benefits stated above, along with an even shorter feedback loop and better test coverage.

One crucial design principle in software development is DRY – Don’t Repeat Yourself. However, as we will see, when DRY is applied to test code, it can cause the test suite to become brittle – difficult to understand, maintain, and change. When the tests cause us maintenance headaches, we may question whether they are worth the time and effort we put into them.

Can this happen because our test suite is “too DRY”? How can we avoid this problem and still benefit from writing tests? In this article, I’ll delve into this topic. I will present some indications that a test suite is brittle, guidelines to follow when reducing test duplication, and better ways to DRY up tests.

Note: I won’t discuss the definitions of different types of tests in this article. Instead, I focus on tests where duplication is common.

These are often considered unit tests but may also occur in tests that don’t fit a strict definition of a “unit test.” For another viewpoint on types of tests, read A Simpler Testing Pyramid: Getting the Most out of Your Tests.

What is DRY?

DRY is an acronym for “Don’t Repeat Yourself,” coined by Andy Hunt and Dave Thomas in The Pragmatic Programmer. They defined it as the principle that “every piece of knowledge must have a single, unambiguous, authoritative representation within a system.”

The advantage of DRY code is that if a concept changes in the application, it requires a change in only one place. This makes a codebase easier to read and maintain and reduces the chances of bugs. Beautiful, clean designs can emerge when domain concepts are represented in a single place in the application.

DRY Application Code

DRY is not always easy to apply. Code that merely looks similar can tempt us to create unnecessary abstractions, leading to more complicated code instead of a cleaner design. A useful criterion to keep in mind is that DRY is concerned with duplication of concepts, not duplication of typing. That distinction can guide its application and help avoid common pitfalls.

For example, we often use literal values in our code. Is the number 60 that appears in several locations an instance of duplication, or does it have different meanings in each case? A helpful evaluation can be to ask: “If the value had to change, would we want it to change everywhere?” 60 will (hopefully) always be the number of seconds in a minute, but 60 somewhere else may represent a speed limit. This integer is not a great candidate to pull into a globally shared variable for the sake of DRY.
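As a small sketch of that distinction (the constant and function names here are invented for illustration), the two 60s would live as separate, well-named constants rather than one shared value:

const val SECONDS_PER_MINUTE = 60       // will always be 60
const val DEFAULT_SPEED_LIMIT_MPH = 60  // happens to be 60 today; could change independently

fun minutesToSeconds(minutes: Int): Int = minutes * SECONDS_PER_MINUTE

fun isSpeeding(speedMph: Int): Boolean = speedMph > DEFAULT_SPEED_LIMIT_MPH

If the speed limit changes, only the second constant moves; the time conversion is untouched.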

As another example, imagine a method that loops over a collection and performs an action. This method might look a lot like another method that loops over the same collection and performs a slightly different action. Should these two methods be extracted to remove the duplication? Perhaps, but not necessarily. One way of looking at it is if a feature change would require them both to change simultaneously, they are most likely closely related and should be combined. But it takes more than looking at the code “shape” to know if it should be DRYed up.
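To make this concrete, here is a hedged sketch (the domain and method names are hypothetical) of two methods that share a shape but may represent different concepts:

data class Customer(val name: String)

// Two look-alike methods: same "shape", possibly different concepts.
fun greetAll(customers: List<Customer>) {
    for (customer in customers) println("Hello, ${customer.name}")
}

fun thankAll(customers: List<Customer>) {
    for (customer in customers) println("Thank you, ${customer.name}")
}

Extracting a shared helper that takes the action as a parameter would remove the visual duplication, but it only pays off if a feature change would force both methods to change together.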

Reasoning in terms of duplication of concepts helps avoid wrong decisions.

DRY Tests

DRY in test code often presents a similar dilemma. While excessive duplication can make tests lengthy and difficult to maintain, misapplying DRY can lead to brittle test suites. Does this suggest that the test code warrants more duplication than the application code?

DRY vs. DAMP/WET

A common solution to brittle tests is to use the DAMP acronym to describe how tests should be written. DAMP stands for “Descriptive and Meaningful Phrases” or “Don’t Abstract Methods Prematurely.” Another acronym (we love a good acronym!) is WET: “Write Everything Twice,” “Write Every Time,” “We Enjoy Typing,” or “Waste Everyone’s Time.”

The literal definition of DAMP has good intentions – descriptive, meaningful phrases and knowing the right time to extract methods are essential when writing software. However, in a more general sense, DAMP and WET are opposites of DRY. The idea can be summarized as follows: Prefer more duplication in tests than you would in application code.

However, the same concerns of readability and maintainability exist in application code as in test code. Duplication of concepts causes the same problems of maintainability in test code as in application code.

Brittle Example

Let’s review some brittle test code written in Kotlin.

The example below shows a common pattern that may present differently depending on the testing language and framework. In RSpec, for example, the long setUp() method might instead appear as a long series of let! statements.

class FilterTest {
   private lateinit var filter: Filter

   private lateinit var book1: Book
   private lateinit var book2: Book
   private lateinit var book3: Book
   private lateinit var book4: Book
   private lateinit var author: Author
   private lateinit var item1: Item
   private lateinit var item2: Item

   @BeforeEach
   fun setUp() {
       book1 = createBook("Test title", "Test subtitle", 
                          "2000-01-01", "2012-02-01")
       book2 = createBook("Not found", "Not found", 
                          "2000-01-15", "2012-03-01")
       book3 = createBook("title 2", "Subtitle 2", null, 
                          "archived", "mst")
       createBookLanguage("EN", book1)
       createBookLanguage("EN", book3)
       author = createAuthor()
       book4 = createBook("Another title 2", "Subtitle 2", 
                          null, "processed", "", "", 
                          listOf("b", "c"), author)
       val user = createUser()
       createProduct(user, null, book4)
       val salesTeam = createSalesTeam()
       createProduct(null, salesTeam, book4)
       val price1 = createPrice(book1)
       val price2 = createPrice(book3)
       item1 = createItem("item")
       createPriceTag(item1, price1)
       item2 = createItem("item2")
       createPriceTag(item2, price2)
       val mstDiscount = createDiscount("mstdiscount")
       val specialDiscount = createDiscount("special")
       createBookDiscount(mstDiscount, book1)
       createBookDiscount(specialDiscount, book2)
       createBookDiscount(mstDiscount, book2)
   }

   @Test
   fun `filter by title`() {
       filter = Filter(searchTerm = "title")
       onlyFindsBooks(filter, book1, book3, book4)
   }

   @Test
   fun `filter by last`() {
       filter = Filter(searchTerm = "title", last = "5 days")
       onlyFindsBooks(filter, book3)
   }

   @Test
   fun `filter by released from and released to`() {
       filter = Filter(releasedFrom = "2000-01-10", 
                       releasedTo = "2000-01-20")
       onlyFindsBooks(filter, book2)
   }

   @Test
   fun `filter by released from without released to`() {
       filter = Filter(releasedFrom = "2000-01-02")
       onlyFindsBooks(filter, book2, book3, book4)
   }

   @Test
   fun `filter by released to without released from`() {
       filter = Filter(releasedTo = "2000-01-01")
       onlyFindsBooks(filter, book1)
   }

   @Test
   fun `filter by language`() {
       filter = Filter(language = "EN")
       onlyFindsBooks(filter, book1, book3)
   }

   @Test
   fun `filter by author ids`() {
       filter = Filter(authorUuids = author.uuid)
       onlyFindsBooks(filter, book4)
   }

   @Test
   fun `filter by state`() {
       filter = Filter(state = "archived")
       onlyFindsBooks(filter, book3)
   }

   @Test
   fun `filter by multiple item_uuids`() {
       filter = Filter(itemUuids = listOf(item1.uuid, item2.uuid))
       onlyFindsBooks(filter, book1, book3)
   }

   @Test
   fun `filtering by discounts with substring`() {
       filter = Filter(anyDiscount = listOf("discount"))
       assertTrue(filter.results().isEmpty())
   }

   @Test
   fun `filtering by discounts with single discount string`() {
       filter = Filter(anyDiscount = listOf("special"))
       onlyFindsBooks(filter, book2)
   }

   @Test
   fun `filtering by discounts with non-existent discount`() {
       filter = Filter(anyDiscount = listOf("foobar"))
       assertTrue(filter.results().isEmpty())
   }

   @Test
   fun `filtering by discounts with multiple of the same discount`() {
       filter = Filter(anyDiscount = 
           listOf("mstdiscount", "mstdiscount", "special"))
       onlyFindsBooks(filter, book1, book2)
   }

   private fun onlyFindsBooks(filter: Filter, vararg foundBooks: Book) {
       val uuids = foundBooks.map { it.uuid }.toSet()
       assertEquals(uuids, filter.results().map { it.uuid }.toSet())
   }
}

When studying code like this, it’s common to first focus on the setup steps, then digest each test and figure out how they relate to the setup (or vice versa). Looking at only the setup in isolation provides no clarity, nor does focusing on each test individually. This is an indication of a brittle test suite. Ideally, each test can be read as its own little universe with all context defined locally.

In the above example, the setUp() method creates all the books and related data for all the tests. As a result, it is unclear which books are required for which tests. In addition, the numerous details make it challenging to discern which ones are relevant and which are required for book creation in general. Notice how many things would break if the required data for creating books were to change.

When focusing on the tests themselves, each test does the minimum to call the application code and assert the results. The specific book instance(s) referenced in the assertion are buried in the setUp() method at the top. It’s unclear what purpose onlyFindsBooks serves in the tests. You might be tempted to add a comment on these tests to remind you of the relevance of each book’s attributes in each test.

The initial developers clearly had good intentions in creating all the objects in one place. If the initial feature only had two or three filters available, creating all the objects at the top might have made the code more concise. As the tests and objects grew, however, they outgrew this setup method. Subsequent filter features led developers to add more fields to the books and expect whichever book suited the test to be returned. Imagine trying to figure out which object was meant to be returned as we began to compose different combinations of the filters together!

To figure out what onlyFindsBooks() does, you’ll need to scroll more to find the hidden assertions. This method has enough logic that it takes a minute to connect the dots between what is passed in from the test and what the assertion is.

Finally, the filter instance declaration is far from the tests.

For example, let’s focus on this test for filtering by language:

@Test
fun `filter by language`() {
   filter = Filter(language = "EN")
   onlyFindsBooks(filter, book1, book3)
}

What makes book1 and book3 match the criteria of language = "EN" that was passed in? Why wouldn’t book2 also come back from this call? To answer those questions, you need to scroll to the setup, load the entire context of all the setup into your mind, and then attempt to spot the similarities and differences between all the books.

Even more challenging is this test:

@Test
fun `filter by last`() {
   filter = Filter(searchTerm = "title", last = "5 days")
   onlyFindsBooks(filter, book3)
}

Where does “5 days” come from? Is it related to a value hidden in the createBook() method for book3?

The author of this code applied the DRY technique to extract duplication but ended up with a test suite that is hard to understand and will break easily.

What to Look For

Many clues in the above code indicate that DRY has been misapplied. Some indications that tests are brittle and need refactoring include:

  • Tests are not their own little universe (see Mystery Guest): Do you find yourself scrolling up and down to understand each test?
  • Relevant details are not highlighted: Are there comments in tests to clarify relevant test details?
  • The intention of the test is unclear: Is there any boilerplate or “noise” required for setup but not directly related to the test?
  • Concepts are duplicated: Does changing application code break many tests?
  • Tests are not independent: Do many tests break when modifying one?

Solutions

In this section, we will present two possible solutions to the problems described above: the Three As principle and the use of object methods.

Three As

Tests may be seen as having three high-level parts. Often, these are referred to as the “Three As”:

  • Arrange – any necessary setup, including the variable the test is focused on
  • Act – the call to the application code (aka SUT, Subject Under Test)
  • Assert – the verification step that includes the expectation or assertion.

These steps are also referred to as Given, When, and Then.

The ideal test has only three lines, one for each of the As. This may not be feasible in reality, but it’s still a worthwhile objective to keep in mind. In fact, tests that match this pattern are easier to read:

// Arrange
val expected = createObject()

// Act
val result = sut.findObject()

// Assert
assertEquals(expected, result)

Object Creation Methods

Strategic use of object creation methods can highlight relevant details and hide irrelevant (but necessary) boilerplate behind meaningful domain names. This strategy is inspired by two others: Builder Pattern and Object Mother. While the example code we reviewed earlier uses methods to build test objects, it lacks some key benefits.

Object creation methods should:

  1. Be named with a domain name that indicates which type of object it creates
  2. Have defaults for all required values
  3. Allow overrides for any values used directly by tests
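As a minimal sketch of such a method, assuming a simplified Book shape (the real createBook() used in the example code is not shown in this article, so the fields and defaults below are assumptions):

import java.util.UUID

// Hypothetical Book shape for illustration only.
data class Book(
    val uuid: String,
    val title: String,
    val subtitle: String,
    val releasedOn: String?,
    val state: String,
)

// Domain-named creation method: defaults for everything required,
// overrides only for the values a test cares about.
fun createBook(
    title: String = "Any title",
    subtitle: String = "Any subtitle",
    releasedOn: String? = "2000-01-01",
    state: String = "processed",
): Book = Book(
    uuid = UUID.randomUUID().toString(),
    title = title,
    subtitle = subtitle,
    releasedOn = releasedOn,
    state = state,
)

A test that only cares about the state can then call createBook(state = "archived") and ignore every other field.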

Let’s change one of the tests from our example code to follow the Three As and use object creation methods:

@Test
fun `filter by language`() {
   val englishBook = createBook()
   createBookLanguage("EN", englishBook)
   val germanBook = createBook()
   createBookLanguage("DE", germanBook)

   val results = Filter(language = "EN").results()

   val expectedUuids = listOf(englishBook).map { it.uuid }
   val actualUuids = results.map { it.uuid }
   assertEquals(expectedUuids, actualUuids)
}

The changes made here are:

  • We modified the createBook() method to hide the boilerplate and allow overriding of the relevant details of the language value (the createBook() definition is not shown).
  • We renamed book variables to indicate their relevant differences.
  • We inlined the filter variable to make the Act step visible. This also allows it to be a constant instead of a variable, thus decreasing mutability.
  • We inlined the onlyFindsBooks() method and renamed temporary variables. This allows the separation of the Act step from the Assert step and clarifies the assertion.

Now, the three steps are much easier to identify. We can easily see why we are creating two books and their differences. It is clear that the Act step is looking only for "EN" and that we expect only the book’s English version to be returned.

At four lines, the Arrange step is longer than ideal, but each line is relevant to this test, and it’s easy to see why. We could combine creating a book and associating the language into a single method. That would make the test code slightly more complex and would tightly couple the creation of books with languages in our test code, so it may cause more confusion than clarity. If, however, “book written in language” is a concept that exists in the domain, this might be the right call.

The Assert step could also be better: it contains enough logic and noise to make a failure hard to understand.

Let’s extract those two areas and see how it looks:

@Test
fun `filter by language`() {
   val englishBook = createBookWrittenIn("EN")
   val germanBook = createBookWrittenIn("DE")

   val results = Filter(language = "EN").results()

   assertBooksEqual(listOf(englishBook), results)
}

private fun createBookWrittenIn(language: String): Book {
   val book = createBook()
   createBookLanguage(language, book)
  
   return book
}

private fun assertBooksEqual(expected: List<Book>, actual: List<Book>) {
   val expectedUuids = expected.map { it.uuid }
   val actualUuids = actual.map { it.uuid }
   assertEquals(expectedUuids, actualUuids)
}

This test requires nothing in the setUp() method, making it easy to understand without scrolling. You can dive into the details of the helper methods (createBookWrittenIn and assertBooksEqual), but the test is readable even without doing so.

As we apply these changes throughout the rest of the test suite, we’ll be forced to consider which books with which attributes are required for each test. The relevant details will stand out as we continue.

We may look at all the tests together and feel uncomfortable that we’re creating so many books! But we’re ok with that duplication because we know that while it looks like a duplication of code, it is not a duplication of concepts. Each test creates books representing different ideas, e.g., a book written in English vs a book released on a certain date.

Benefits

Our setup method will be empty, and each test will be readable in isolation. Changing our application code (e.g., the book constructor) will only require changing the method in one place. Changing the setup or expectation of a single test will not cause all the tests to fail. The extracted helper methods have meaningful names that fit into the Three As pattern.

Guidelines

Here is a summary of the key guidelines that we followed, as well as additional guidelines:

  • Each test matches the Three As pattern: Arrange, Act, Assert. The three-part pattern (setup, action, expectations) should be easily distinguishable when looking at the test.

Arrange

  • Setup code does not include assertions.
  • Each test clearly indicates relevant differences from other tests.
  • Setup methods do not include any relevant differences (they are instead local to each test).
  • Boilerplate “noise” is extracted and easy to reuse.
  • Tests are run and fail independently. Tests are each their own tiny universe with all the context they need.
  • Avoid randomness that causes tests to be non-deterministic. Test failures should be deterministic to avoid flaky tests that fail intermittently.
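Randomness includes hidden dependencies on the current time: relative filters like the “5 days” one in the earlier example will pass or fail depending on when they run. A hedged sketch of one way to keep such tests deterministic (the class and function names here are hypothetical) is to inject a fixed Clock instead of reading the system time directly:

import java.time.Clock
import java.time.Instant
import java.time.ZoneOffset

// Production code asks an injected Clock for "now" instead of calling Instant.now() directly.
class ReleaseSchedule(private val clock: Clock) {
    fun isReleased(releaseDate: Instant): Boolean = !releaseDate.isAfter(Instant.now(clock))
}

// A test can pin "now" to a known instant so the result never depends on when the test runs.
fun fixedClockAt(instant: String): Clock =
    Clock.fixed(Instant.parse(instant), ZoneOffset.UTC)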

Act

  • The SUT (Subject Under Test) and the main thing being tested (target behavior) are easy to identify.

Assert

  • Favor literal (hardcoded) values in assertions instead of variables. An exception is when well-named variables provide additional clarity.
  • Tests don’t have complicated logic or loops. Loops create interdependent tests. Complicated logic is brittle and hard to understand.
  • Assertions don’t repeat the implementation code.
  • Consider fewer assertions per test. Breaking up a test with a large set of assertions into multiple tests with fewer assertions provides more feedback on the failures. Multiple assertions may indicate too many responsibilities in the application code.
  • Prefer assertions that provide more information when they fail. For example, one assertion that the result matches an array provides more information than multiple assertions that count the items in the array and then verify each item individually. Tests stop on the first failure, so feedback from subsequent assertions is lost.
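As a small sketch of that last guideline (reusing the hypothetical Book shape from the earlier sketch), a single collection assertion reports the whole mismatch at once, whereas item-by-item assertions stop at the first failure:

import org.junit.jupiter.api.Assertions.assertEquals

fun assertFindsExactly(expected: List<Book>, actual: List<Book>) {
    // One assertion: a failure message shows both full lists of uuids side by side.
    assertEquals(expected.map { it.uuid }, actual.map { it.uuid })

    // The item-by-item alternative stops at the first failing assertion,
    // so feedback from the remaining items is lost:
    // assertEquals(expected.size, actual.size)
    // expected.zip(actual).forEach { (e, a) -> assertEquals(e.uuid, a.uuid) }
}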

A Note about Design

Sometimes, it is difficult to follow the above guidelines because the tests are trying to tell you something about the application design. Some test smells that provide feedback to the application code design include:

If this:

  • Too much setup could indicate a large surface area being tested; too much is being tested.
  • Wanting to extract a variable (thus coupling tests) because a literal is being tested repeatedly may indicate the application has too many responsibilities.

Then:

  • Consider that the application code has too many responsibilities and apply the Single Responsibility principle.

If this:

  • Comments are necessary to make the test understandable

Then:

  • Rename a variable, method, or test name to be more meaningful
  • Consider application code refactoring to provide more meaningful names or split up responsibilities

Additionally, don’t be afraid to wait until removing duplication feels “right.” Prefer duplication until it’s clearer what the tests are telling you. If an extraction or refactor goes wrong, it may be best to inline code and try again.

A Note about Performance

One more reason developers are driven to extract code duplication is performance concerns. Slow tests are certainly a cause for concern, but the worry about creating duplicate objects is often overinflated, especially when compared to the time spent maintaining brittle tests. Respond to the pain caused by a lot of test setup by redesigning the application code. This results in both better design and lightweight tests.

If you do encounter performance problems with tests, begin by investigating the reasons for the slowness. Consider whether the tests are telling you something about the architecture. You may find a performance solution that doesn’t compromise the test clarity.

Conclusion

DRY is a valuable principle to apply to both application code and test code. When applying DRY to tests, though, clearly distinguish between the three steps of a test: Arrange, Act, and Assert. This will help highlight the differences between each test and keep the boilerplate from making tests noisy. If your tests feel brittle (often break with application code changes) or hard to read, don’t be afraid to inline them and re-extract along more meaningful domain seams.

It is important to remember that good design principles apply to both application and test code. Test code requires the same ease of maintenance and readability as application code, and while code duplication may not be so harmful to your tests, allowing duplication of concepts causes the same maintainability problems in test code as in application code. Hence, the same level of care should be given to the test code.

Presentation: Several Components are Rendering: Client Performance at Slack-Scale

MMS Founder
MMS Jenna Zeigen

Article originally posted on InfoQ. Visit InfoQ

Transcript

Zeigen: My name is Jenna Zeigen. This talk is, several components are rendering. I am a staff software engineer at Slack, on our client performance infrastructure team. The team is only about 2 years old at this point. I was one of the founding members. I’ve been working on performance at Slack full time for a little bit longer than that. Before I was on the client performance infrastructure team, I was on Slack search team, where I worked a lot on the desktop autocomplete experience that you may know and love. It was on that team where I really cut my teeth doing render performance and JavaScript runtime performance. Since that feature does more work than you would expect an autocomplete to do on the frontend, and it has to do it in a very short amount of time. I had a lot of fun doing that and decided that performance was the thing for me.

Performance

What is performance? In short, we want to make the app go fast, reduce latency, have buttery smooth interactions, get rid of jank, no dropped frames. There’s all sorts of goals for performance work. As I like to say, my number one performance rule about how to make performance happen is to do less work. It all comes down to no matter what your strategy, you’re trying to do less work. Then the why, which is really why I wanted to have this slide. It seemed like there needed to be more words on it. The why is so that our users have a great experience. It’s really easy to get bogged down when you’re doing performance work in the numbers. You want that number of milliseconds to be a smaller number of milliseconds. You want the graph to go in the right direction. It’s important to keep in mind that we are doing performance work, because we want our users to have a good experience. In Slack’s case, we want channel switches to be fast. We want typing and input to feel smooth. We don’t want there to be input delay. It should feel instantaneous. Keep that in mind as we go through this talk and try to stay rooted in that idea of why we are doing all of this and why I’m talking about this.

Slack (React App on Desktop)

First, some stuff about Slack. Slack, at its core is a channel-based collaboration tool. There’s a lot of text. There’s also a lot of images, and files, and now video conferencing. The Slack desktop app isn’t native, it’s an Electron app, which means it’s a web application like you would have in a browser being rendered by a special Chromium instance via the Electron framework. This means the Slack desktop app is the same application that you would use in Chrome, or Firefox, or Safari, or your browser of choice. It’s using good-old HTML, CSS, and JavaScript, just like in a browser. This means that we’re also subject to the same performance constraints and challenges as we face when we are coding frontends for browsers.

How Do Browsers Even? (JavaScript is Single Threaded)

Now I’m going to talk a little bit about browsers. What’s going on inside of a browser? One of the jobs of a browser is to convert the code that we send it over the wire into pixels on a page. It does this by creating some trees and turning those trees into different trees. It’s going to take the HTML and it’s going to turn it into the DOM tree, the document object model. It’s also going to do something similar to the CSS. Then, by their powers combined, you get the render tree. Then the browser is going to take the render tree and go through three more phases. First, layout phase. We still need to figure out where all of the elements are going to go on the page and how big they’re supposed to be. Then we need to paint those things. The painting phase, which is representing them as pixels on the screen, and this is going to be a series of layers, which then get sent to the GPU for compositing or smooshing all the layers together. The browser will try its best to do this 60 times per second, provided there’s something that has changed that it needs to animate. We’re trying for 60 frames per second or 16.66 milliseconds, and 16 milliseconds is a magic number in frontend performance. Try and keep that number in mind as we go through these slides.

These 60 frames per second only happen in the most perfect of conditions, for you see, renders are constrained by the speed of your JavaScript. JavaScript is single threaded, running on the browser’s main thread along with all of the repainting, layout, compositing that has to happen. Everything that gets called in JavaScript in the browser is going to get thrown onto the stack. Synchronous calls are going to go right on, and async callbacks like event handlers, input handlers, click handlers, are going to get thrown into a callback queue. Then they get moved to that stack by the event loop once all the synchronous JavaScript is done happening. There’s also that render queue that’s trying to get stuff done, 60 frames per second, but renders can’t happen if there’s anything still on the JavaScript call stack. To put it differently, the browser won’t complete a repaint if there’s any JavaScript still left to be called in the stack.

That means that if your JavaScript takes longer than 16 milliseconds to run, you could potentially end up with dropped frames, or laggy inputs if the browser has something to animate, or if you’re trying to type into an input.

Performance, a UX Perspective

Performance is about user experience. Let’s take it back to that. Google’s done a lot of research as they do on browsers and user experience. They’ve come up with a model of user experience called the RAIL model. This work was also informed by Jakob Nielsen and some of his research on how users perceive delay. According to the RAIL model, you want to, R, respond to users’ actions within 100 milliseconds, or your users are going to start feeling the lag. This means that practically, you need to produce actions within 50 milliseconds to give time for other work. The browser has a lot of stuff to do. It’s a busy girl. You got to give it some breathing room on either side to get your JavaScript done and get all the work done that you ask it to do. On the animation frame side, the A in RAIL is for animation, you need to produce that animation frame in 16 milliseconds, that magic 16 milliseconds, or you could end up dropping frames and blocking the loop and animations could start to feel choppy. This practically means that you need to get all that setup done in 10 milliseconds, since the browsers need about 6 milliseconds to actually render the frame.

Frontend Performance

I would be remiss if I didn’t take this slight detour. A few years ago, I was reading this book called, “The Every Computer Performance Book.” It said that, in my experience, the application is rarely re-rendered, unless the inefficiency is egregious, and the fix is easy and obvious. The alternative presented in this book was to move the code to a faster machine, or split the code over several machines. We’re talking about the client here. We’re talking about people’s laptops. We don’t have that luxury. That’s just simply a nonstarter for frontend. Unlike on the backend, we don’t have control over the computers that our code is running on. We can’t mail our users’ laptops that are up to spec and whatever. People can have anything from most souped up M2, all the way down to a 2-core machine with who knows what other software is running on that computer, competing for resources, especially if it’s a corporate owned laptop. We still got to get our software to perform well, no matter what. That’s part of the thrill of frontend performance.

React and Redux 101

I mentioned earlier that Slack is a React app, so some details about React. React is a popular, well maintained, and easy to use component-based UI framework. It promotes modularity by letting engineers write their markup and JavaScript side by side. It’s used across a wide variety of applications from toy apps, through enterprise software like Slack, since it allows you to iterate and scale quickly. Its popularity also means that it’s well documented and there’s solid developer tooling and a lot of libraries that we can bring in if we need to. Six years ago, when Slack was going through a huge rewrite and rearchitecture, it was like the thing to choose. Some details about React that are going to come in handy to know, components get data as props, or they can store data in component state. As you see here, the avatar component gets person and size as props. You can see in the second code block there, it’s receiving Taylor Swift and 1989 as some props. There’s not an example here of storing component state, but that’s also another way that it can deal with its data. Then, crucial detail, like core bit about React is that changes to props are going to cause components to re-render. When a component says, ok, one of my props is different, it’s going to then re-render so it can redisplay the updated data to you the user.

In a large application, like Slack, this fragmented type of storing data in a component itself, or even just purely passing data down via props, could get unwieldy. A central datastore is quite appealing. We decided to use a state management library called Redux, that’s a popular companion to React and is used to supplement component state. Instead, there’s a central store that components can connect to. Then data is read from Redux via selectors, which aid in computing connected props. A component can connect to Redux. You see that useSelector example there on the code block. We passed it the prop of ID and the component is using that ID prop to then say, Redux, give me that person by ID. That is a connected prop making avatar now a connected component.

Let’s explain this with a diagram. You have Redux in the middle, it’s the central datastore. Then there are a whole bunch of connected components that are reminiscent of Slack. Actions are going to get dispatched to Redux which causes reducers to run, which causes a Redux state to get updated. Dispatches are the result of interacting with the app or receiving information over the wire like an API over the WebSocket. Actions will continue to get dispatched, which, again, updates Redux. Then, when Redux changes, it sends out a notification to all the components that subscribe to it. I like to call this the Redux bat signal. Redux will send out its bat signal to all of the components that are connected to it. Then, everything that’s connected to Redux, every single component is going to then rerun all of its connections. All of the connected components are going to recalculate, see if any of them have updated. This is a caveat, it will only do this if it has verified that state has changed. That’s at least one thing. It will only do this if state has actually changed. Then, central tenet of React, components with changed props will re-render. Again, if a component thinks that its data is now different, it will re-render. Here’s a different view, actions cause reducers to run, which then updates the store. The store then sends out the subscriber notification to the components, which then re-render. Then actions can be sent from components or over the wire via API handlers. This process, this loop, this Redux loop that I like to call it, is going to happen every single time there is a dispatch, every single time Redux gets updated, that whole thing happens.

You might start to see how this could go wrong and start to cause performance issues. After that, we are seeing that Redux loops are just taking way too long to happen. Unsurprisingly, what we’re seeing just like, at rest, like you don’t even have to be doing anything. You could have your hands off the keyboard, and just like maybe the app is receiving notifications and stuff over the WebSocket or via API, just hands off the keyboard, even at p50, we are seeing that the Redux loop is taking 25 milliseconds, which is more than 16. We know that we’re already potentially dropping at least one frame, at least 50% of the time, that’s what p50 means. Then at p99, so 1% of the time, we are taking more than 200 milliseconds. We’re taking, in fact, 210 milliseconds to do all of this work, which means that we’ve blown through, we’ve doubled that threshold in which humans are going to be able to tell that something is going wrong. We’re going to start to drop frames. If you try to type into the input, they’re going to be feeling it.

What did we do? Like any good performance team, we profiled. The classic story here is you profile, you find the worst offending function, the thing that’s taking up the most amount of time. You rinse, repeat until your app is faster. In this case, what we had here was a classic death by a thousand cuts. You might say, there’s those little yellow ones, and that purple one. The yellow ones are garbage collections. The purple one is, we did something that caused the browser to be a little bit mad at us, we’ll just say that it’s a recalculation of styles. Otherwise, it’s just this pile of papercuts. We had to take some novel approaches to figuring out how to take the bulk out of this. Because there wasn’t anything in particular that was taking a long time, it was just a lot of stuff.

How can we just, again, make less work happen during every loop? We took a step back and figured out where performance was breaking down. Ultimately, it comes down to three main categories of things that echo the stages of the loop. One, every change to Redux results in a Redux subscriber notification firing. That’s the core problem with Redux. Then we spend way too long running selectors. There’s a lot of components on the page, they all have a lot of connections. Too much work is happening on every loop, just checking to see if we need to re-render. Then, three, we are spending too long re-rendering components that decide that they need to re-render, often unnecessarily. The first thing is, too many dispatches. For large enterprises, we can be dispatching hundreds of times per second. If you’re in a chatty channel, maybe with a bot in it that talks a lot to a channel, you can be receiving a lot every second. This Redux out of the box, it’s just going to keep saying, dispatch, update all the components, dispatch, update all the components. That just means a lot of updates. Every API call, WebSocket event, any clicks, switching the channel, subscriber notification, switching channels, sending messages, receiving messages, reactjis, everything in Slack: Redux, Redux, Redux. Then this leads to again, every connection runs every time Redux notifies. Practically, we ran some ad hoc logging, that we would never put in production. Practically, it’s 5,000 to 25,000 connected props in every loop. This is just how Redux works. This is why scaling it is starting to get to us. Even in 5,000, if every check takes 0.1 milliseconds, that’s a long task. We’ve blown through that 50 milliseconds. The 50 milliseconds is a long task. At that point, once you get to 50 milliseconds, like their browser performance APIs, if you hook into them, they’re like, maybe you should start making that a shorter task. Yes, just again, way too much work happening.

Then all of this extra work is leading to even more extra work. Because, as I said, we’re having unnecessary re-renders, which is a common problem in React land, but just have a lot of them. This happens because components are receiving or calculating props that fail equality checks, but they are deep-equal or unstable. This can happen, for example, if you calculate a prop via map, filter, reduce, what you get from a selector right out of the Redux store isn’t exactly what you need. You want to filter out everyone who isn’t the current member from this list of members today. If you run a map, as you might know, that returns a new array. Every single time you do it, that is a new array that is going to fail reference equality checks. That means the component thinks something is different and it needs to re-render. Bad for performance. There’s all differing varieties of this type of issue happening. Basically, components think that data is different when it actually isn’t.

Improving Performance by Doing Less Work

How are we making this better? Actually, doing less work. There are two attacks that we’ve been taking here. First, we’re going to target some problem components. There are components that we know are contributing to this pile of papercuts more than others, then, also, we know that these mistakes are happening everywhere, so we also need to take a more broad-spectrum approach to some of these things. First, targeting problem components. There’s one in particular that I’m going to talk about. What do you think is the most performance offending component in Slack? It is on the screen right now. It’s not messages. It’s not the search bar. It is the channel sidebar. We had a hunch about this. If you look at it, it doesn’t look that complicated, but we had a hunch that it might be a naughty component. Through some natural experiments, we discovered that neutralizing the sidebars or removing it from being rendered alleviated the performance problems with people who were having particularly bad performance problems. Kind of a surprise. Naively, at face value, sidebar looks like a simple list of channels. It looks like some icons, and maybe some section headings and some channel name, and then maybe another icon on the right. It was taking 115 milliseconds. This was me, I put my sidebar into bat mode, which was showing all of the conversations. Usually, I have it on unreads only performance tip, have your sidebar in unreads only. To make it bad, I made my sidebar, sidebar a lot longer. We found that there’s a lot of React and Redux work happening. This was a bit of a surprise to me. I knew the sidebar was bad, but I thought it was going to be the calculating what to render that was going to stick out and not the, we’re doing all this React and Redux work. Calculating what to render is taking 15 milliseconds, which is almost 16 milliseconds. Either way, this is not fun for anyone. There was definitely some extra work happening in that first green section, the React and Redux stuff.

Again, lots of selectors. We found through that same ad hoc logging that folks who had 20,000 selector calls, when we got rid of their sidebar, that dropped to 2,000. That is a 90% reduction, quite an improvement. That made us realize there’s some opportunities there. This is mainly because inefficiencies in lists compound quickly. There are 40 connected prop calculations in every sidebar item component, so the canonical like channel name with the icon, and that doesn’t even count all of the child components of that connected channel.

Forty times, if you have 400 things in your sidebar, that’s 16,000. A lot of work to try and dig into. We found that a lot of those selector calls were unnecessary. Surprise, isn’t it like revolutionary that we were doing work that we didn’t need to do? One of my specific pet peeves, which is why it’s on this slide, is instead of checking experiments, so like someone had a feature flag on or didn’t. Instead of doing that once at the list level, we’re doing it in every single connected channel component, and maybe that was a reasonable thing for them to do. Maybe the experiment had something to do with showing the tool tip on a channel under some certain condition or something. We didn’t need to be checking the experiment on every single one.

Then, also, there were some cases where we were calculating some data that was irrelevant to that type of channel. For instance, if you have the pref on to like show if someone’s typing in a DM in your sidebar, we only do that for DMs, it has nothing to do with channels. Yet, we would go and grab that pref, you would see like, does a person have that pref on? Even though it was a public channel, we were never going to use that data. We were just going to drop it on the floor. Surprise, we moved repeated work that we could up a level, we call it once instead of 400 times, and created more specialized components. Then we found some props that were unused. Then, all of this fairly banal work. I’m not standing up here and saying we did some amazing revolutionary stuff. It ended up creating a 30% improvement in sidebar performance, which I thought was pretty rad. We didn’t even touch that 15-millisecond bar on the right. Then, ok, but how did that impact overall Redux subscriber notification time? It made a sizable impact, over 10% across the board over the month that we were doing it. That was pretty neat to see that just focusing on a particular component that we knew was bad was going to have such a noticeable impact. People, even anecdotally, were saying that the performance was feeling better.

What’s Next: List Virtualization

We’re not done. We want to try revirtualizing the sidebar, this technique, which is to only render what’s going to be on the screen with a little bit of buffering to try and allow for smooth scrolling, actually had a tradeoff for scroll speed and performance. There’s some issues that you see if you scroll too quickly. We just were like, virtualization isn’t worth it, we want to focus on scroll speed. When actually now we’re seeing that maybe we took the wrong side of the tradeoff, so we want to try turning on list virtualization. List virtualization will be good for React and Redux performance, because fewer components being rendered means fewer connected props being calculated on every Redux loop, because there’s less components on the page trying to figure out if they need to re-render.

What’s Next: State Shapes and Storage

Another thing that we want to try that really targets that 15-millisecond section that we didn’t really touch with this work is to figure out if we can store our data closer to the shape that we needed for the sidebar. We store data like it’s the backend, which is reasonable. You get a list of channels over the wire, and like, I’ll just put it into Redux in that way. Then I will munge it however I need it for the use case that I in particular have. Engineers also tend to be afraid of storage. I think this is a reasonable fear that people have that like, “No, memory.” If I store this thing with 100 entries in it, we might start to feel it in the memory footprint, when in fact, it’s not the side of the tradeoff that you actually need at that point. We have this question of how can we store data, so it serves our UI better, so we cannot be spending so much time deriving the data that we need on every Redux loop? For example, why are we recalculating what should be shown in every channel section on every loop? Also, we store every single channel that you come across. This might be the fault of autocomplete. I’m not blaming myself or anything. We say, give me all the channels that match this particular query, and then you get a pile of channels back over the wire, and they’re not all channels that you’re in, and we store them anyway. Then to make your channel sidebar, we have to iterate through all of the channels that you have in Redux, while really the only ones we’re ever going to need for your channel sidebar is the ones that you’re actually in. Little fun tidbits like that.

Solutions: Batched Updates, Codemods, and Using Redux Less

As I said, just focusing on problem components isn’t going to solve the whole problem. We have a scaling problem with Redux, and we need to figure out what to do with that. It’s diffused, so it’s everywhere. It’s across components that are at the sidebar. We need some broader spectrum solutions. The first one that seems really intuitive, and in fact, they’re putting this in by default in React 18, is to batch updates. Out of the box, Redux every single time a dispatch happens and a reducer runs is then going to send out that bat signal to all of the components. Instead, we added a little bit of code to batch the updates. We’ve wrapped this call to this half secret React DOM API that flushes the updates.

We wrap this in a request animation frame call, which is our way of saying, browser, right before you’re about to do all that work to figure out where all the pixels have to go on the screen, run this code. It’s like a browser aware way of debouncing the subscriber notification, essentially. I like to joke that this is called the batch signal. Another thing that I think is pretty cool that we’re doing is codemods for performance. Performance problems can be detected and fixed at the AST level. We found that we can find when there are static unstable props. If someone is just creating a static object that they then pass as a prop to a child component, you can replace that out so it’s not getting remade on every single loop. Similarly, we could rewrite prop calculation for the selectors in a way that facilitates memoization. Another example is replacing empty values such as an empty array. An empty array does not reference-equal another empty array, or an empty object for that matter. We can replace them with constants that will always have reference equality.

We’re also trying to use Redux less. You might be wondering when this was going to come up. We are investigating using IndexedDB to store items that we evict from the cache. Less data in Redux means fewer loops as a result of keeping the items in the cache fresh. Every time something gets stale, we need to get its update over the wire, which causes a dispatch, which causes the subscriber notification. Also, cache eviction is fun, but we could also not store stuff in Redux that we’re never going to use again. Finer-grained subscription would be cool, but it’s harder than it sounds. It would be great if we could say, this component only cares about data in the channel store. Let’s get it to subscribe only to the channel store. With Redux, it’s all or nothing.

Why React and Redux, Still?

Why are we still using Redux? This is a question that we’ve been asking a lot over the past year. We’ve started to tinker with proofs of concepts and found like, yes, this toy thing that we made with this thing that does finer-grained subscription, or has a totally different model of subscription than Redux, it’s a lot faster. Scale is our problem to begin with. Why are we still sticking with React and Redux at this point? React is popular, well maintained, and easy to use. It has thus far served us pretty well, letting Slack’s ever-growing team of 200 frontend engineers at this point build features with confidence and velocity. We chose a system that kind of fits our engineering culture. It’s people friendly. It allows for people on the flip side to remain agnostic of the internals of the framework, which for the most part works for your average, everyday feature work. This is breaking down at scale as we push the limits. These problems are systemic and architectural. We’re faced with this question, we either change the architecture, or we put in the time to fix it with mitigation and education at the place where we have it now. Either we put in all this effort into choosing a new framework, or we write our own thing that solves the problems that we have. This too would be a multiyear project for the team to undertake. We’d have to stop everything we’re doing and just switch to another thing if we really wanted to commit to this. There is no silver bullet out there. Changing everything would be a huge lift with 200 frontend engineers and counting. We want our associate engineers to be able to understand things. We want them to be able to find documentation and help themselves. Redux prefers consistency and approachability over performance. That’s the bottom line. Every other architecture makes different tradeoffs that we can’t be sure about, and we can’t be sure how anything else would break down at scale either, once we start to see the emergent properties of whatever architecture we chose.

Fighting a Problem of Scale at Scale

Where are we now? People love performance. No engineers are like, I don’t want my app to be fast. They’ll want the things that they write to, to work fast. We’re engineers, I think it’s in our blood, essentially. Let’s set our engineers up for success by giving them tools and helping them learn more about performance. I’ve really been taking the attack of creating a performance culture through education, tooling, and evangelism. React and Redux abstract away the internals, so you don’t really need to know about that whole loop. Some of you probably know more about how React and Redux work now than some frontend engineers. I believe that understanding the system contextualizes and motivates performance work, which is why I spent a lot of time in my talk explaining those things to you. The first thing that we can do to make people aware of the issues that are happening is to show them when they’re writing code that could be a performance liability. We’ve done this via Lint rules, remember with the codemods. You can find this stuff via AST and via static analysis. We bring this to their VS Code, with some Lint rules that show them when unstable properties are being passed to children, when unstable properties are being computed. When they’re passing functions or values that are breaking memoization, but they don’t have to. Not everything can be done via static analysis, though. We’ve created some warnings that go into people’s development consoles. While you might not be able to tell from the code itself when props are going to be unstable, we know what the values are at runtime, and we can compare the previous to the current value and be like, those were deep-equal, you might want to think about stabilizing them. We can also tell based on what’s in Redux when an experiment, for example, is finished, and we don’t need to be checking it on every single Redux loop. That’s just one less papercut in the pile.

Once they know what problems are in their code, we’ve been telling them about how to fix these things. These tips and tricks are fairly lightweight. None of these things is a particularly challenging thing to understand. Wrap your component in React memo if it’s re-rendering too much. Maybe use that EMPTY_ARRAY constant, instead of creating a new EMPTY_ARRAY on every single loop. Same with these other examples here. We’ve taken those Lint rules and those console warnings, and we’ve beaconed them to the backend, so now we have some burndown charts. While making people aware of the issues in the first step, sometimes you also need a good burndown chart. I’m not going to say that having your graph isn’t motivational. That’s why we all love graphs in performance land. Sometimes, yes, you do need a good chart that is going down to help light a fire under people’s tuchus. Say the graph is going in the right direction, it’s working. I like to joke that performance work is like fighting entropy, more stuff is going to keep popping up, it’s like Whack a Mole. For the most part, we’re heading in the right direction in the Lint rules and in the console warnings. That’s been really great to see.

I found in my career that performance has been built up as this problem for experts. There’s this hero culture that surrounds it. You go into the performance channel and you say, “I shaved 5 milliseconds off of this API call, look at me, look at how great I am.” This isn’t necessarily a good thing. It’s great to celebrate your wins. It’s great that you’re improving performance and reducing latency. We’re doing ourselves a disservice if we keep performance inaccessible, and that we keep this air around it that like the only way that we solve performance issues is by these huge architectural feats. We have to change the whole architecture if we want to fix performance problems. Because, again, engineers, your teammates care about performance, and they want to help and they want to pitch in. Let’s get a lot of people to fix our performance problems. We had a problem before this problem that came out of the scale of our app, we had all of these papercuts. If we get 200 engineers to fix 200 performance problems, that is 200 fewer papercuts in the pile.

Conclusion

As I was putting this story down on the slides, this has been rattling around in my head, it takes a lot of work to do less work. We could have taken the other side of the story and put in the work to rearchitect the whole application, and that would be a lot of work for my team. It would be a lot of work for the other engineers to readapt and change their way of working and change the code to use a “more performant framework.” Or we could trust our coworkers and teach them, and they’re good engineers. They work at your company for a reason. Let’s help them understand the systems that they are working on. Then they might start to see how the systems break down and then they could start to fix the system when it breaks down.




MongoDB, Inc. (NASDAQ:MDB) Shares Acquired by Daiwa Securities Group Inc.

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Daiwa Securities Group Inc. increased its holdings in shares of MongoDB, Inc. (NASDAQ:MDB) by 40.1% in the fourth quarter, according to its most recent 13F filing with the Securities and Exchange Commission. The fund owned 8,019 shares of the company’s stock after purchasing an additional 2,295 shares during the quarter. Daiwa Securities Group Inc.’s holdings in MongoDB were worth $3,279,000 at the end of the most recent reporting period.

Several other institutional investors and hedge funds have also recently bought and sold shares of the stock. Raymond James & Associates raised its stake in shares of MongoDB by 14.2% in the 4th quarter. Raymond James & Associates now owns 60,557 shares of the company’s stock valued at $24,759,000 after acquiring an additional 7,510 shares during the period. Nordea Investment Management AB lifted its stake in MongoDB by 298.2% during the fourth quarter. Nordea Investment Management AB now owns 18,657 shares of the company’s stock worth $7,735,000 after purchasing an additional 13,972 shares in the last quarter. Tokio Marine Asset Management Co. Ltd. boosted its position in shares of MongoDB by 9.7% during the third quarter. Tokio Marine Asset Management Co. Ltd. now owns 1,903 shares of the company’s stock worth $658,000 after buying an additional 168 shares during the period. Assenagon Asset Management S.A. grew its stake in shares of MongoDB by 1,196.1% in the fourth quarter. Assenagon Asset Management S.A. now owns 29,215 shares of the company’s stock valued at $11,945,000 after buying an additional 26,961 shares in the last quarter. Finally, Blueshift Asset Management LLC purchased a new stake in shares of MongoDB in the 3rd quarter valued at $902,000. 89.29% of the stock is currently owned by hedge funds and other institutional investors.

MongoDB Trading Down 2.4%

NASDAQ:MDB opened at $327.47 on Friday. MongoDB, Inc. has a 52-week low of $212.52 and a 52-week high of $509.62. The stock’s 50-day moving average price is $390.81 and its 200-day moving average price is $390.32. The company has a debt-to-equity ratio of 1.07, a quick ratio of 4.40 and a current ratio of 4.40.

MongoDB (NASDAQ:MDB) last posted its quarterly earnings data on Thursday, March 7th. The company reported ($1.03) earnings per share for the quarter, missing the consensus estimate of ($0.71) by ($0.32). The firm had revenue of $458.00 million during the quarter, compared to analyst estimates of $431.99 million. MongoDB had a negative net margin of 10.49% and a negative return on equity of 16.22%. On average, sell-side analysts predict that MongoDB, Inc. will post -2.53 earnings per share for the current fiscal year.

Wall Street Analysts Weigh In

A number of brokerages recently commented on MDB. Stifel Nicolaus restated a “buy” rating and set a $435.00 target price on shares of MongoDB in a report on Thursday, March 14th. Tigress Financial lifted their price objective on shares of MongoDB from $495.00 to $500.00 and gave the company a “buy” rating in a report on Thursday, March 28th. Guggenheim increased their target price on shares of MongoDB from $250.00 to $272.00 and gave the stock a “sell” rating in a report on Monday, March 4th. Needham & Company LLC restated a “buy” rating and issued a $465.00 price target on shares of MongoDB in a research note on Tuesday, April 9th. Finally, UBS Group reiterated a “neutral” rating and issued a $410.00 price objective (down from $475.00) on shares of MongoDB in a research note on Thursday, January 4th. Two equities research analysts have rated the stock with a sell rating, three have issued a hold rating and nineteen have assigned a buy rating to the stock. According to MarketBeat.com, the company has an average rating of “Moderate Buy” and an average price target of $444.93.


Insider Buying and Selling

In other news, CRO Cedric Pech sold 1,430 shares of the company’s stock in a transaction dated Tuesday, April 2nd. The shares were sold at an average price of $348.11, for a total value of $497,797.30. Following the completion of the sale, the executive now directly owns 45,444 shares of the company’s stock, valued at $15,819,510.84. The sale was disclosed in a filing with the Securities & Exchange Commission. Also, Director Dwight A. Merriman sold 1,000 shares of the company’s stock in a transaction on Thursday, February 1st. The shares were sold at an average price of $404.20, for a total transaction of $404,200.00. Following the completion of the transaction, the director now owns 527,896 shares in the company, valued at approximately $213,375,563.20. Insiders sold a total of 92,802 shares of company stock worth $36,356,911 in the last 90 days. Company insiders own 4.80% of the company’s stock.

About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Articles

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)




Article originally posted on mongodb google news. Visit mongodb google news



Daiwa Securities Group Inc. Grows Stock Holdings in MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Daiwa Securities Group Inc. grew its holdings in MongoDB, Inc. (NASDAQ:MDB) by 40.1% during the fourth quarter, according to the company in its most recent filing with the Securities and Exchange Commission. The firm owned 8,019 shares of the company’s stock after acquiring an additional 2,295 shares during the period. Daiwa Securities Group Inc.’s holdings in MongoDB were worth $3,279,000 as of its most recent SEC filing.

Other hedge funds have also made changes to their positions in the company. Vanguard Group Inc. lifted its holdings in MongoDB by 2.1% in the 1st quarter. Vanguard Group Inc. now owns 5,970,224 shares of the company’s stock worth $2,648,332,000 after buying an additional 121,201 shares in the last quarter. Jennison Associates LLC lifted its holdings in MongoDB by 87.8% in the 3rd quarter. Jennison Associates LLC now owns 3,733,964 shares of the company’s stock worth $1,291,429,000 after buying an additional 1,745,231 shares in the last quarter. State Street Corp lifted its holdings in MongoDB by 1.8% in the 1st quarter. State Street Corp now owns 1,386,773 shares of the company’s stock worth $323,280,000 after buying an additional 24,595 shares in the last quarter. 1832 Asset Management L.P. lifted its holdings in MongoDB by 3,283,771.0% in the 4th quarter. 1832 Asset Management L.P. now owns 1,018,000 shares of the company’s stock worth $200,383,000 after buying an additional 1,017,969 shares in the last quarter. Finally, Geode Capital Management LLC lifted its holdings in MongoDB by 3.8% in the 1st quarter. Geode Capital Management LLC now owns 967,289 shares of the company’s stock worth $225,174,000 after buying an additional 35,541 shares in the last quarter. Hedge funds and other institutional investors own 89.29% of the company’s stock.

Insider Transactions at MongoDB

In other news, CRO Cedric Pech sold 1,430 shares of the company’s stock in a transaction that occurred on Tuesday, April 2nd. The shares were sold at an average price of $348.11, for a total value of $497,797.30. Following the completion of the sale, the executive now directly owns 45,444 shares of the company’s stock, valued at approximately $15,819,510.84. The sale was disclosed in a document filed with the SEC. Also, CEO Dev Ittycheria sold 33,000 shares of the stock in a transaction that occurred on Thursday, February 1st. The shares were sold at an average price of $405.77, for a total transaction of $13,390,410.00. Following the sale, the chief executive officer now directly owns 198,166 shares of the company’s stock, valued at $80,409,817.82. In the last three months, insiders sold 92,802 shares of company stock valued at $36,356,911. Company insiders own 4.80% of the company’s stock.

MongoDB Price Performance

Shares of NASDAQ:MDB traded down $8.08 during trading on Friday, reaching $327.47. The company had a trading volume of 1,353,402 shares, compared to its average volume of 1,059,599. The firm’s 50-day moving average is $390.81 and its 200-day moving average is $390.70. The firm has a market capitalization of $23.85 billion, a PE ratio of -132.04 and a beta of 1.19. MongoDB, Inc. has a twelve month low of $212.52 and a twelve month high of $509.62. The company has a debt-to-equity ratio of 1.07, a quick ratio of 4.40 and a current ratio of 4.40.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings results on Thursday, March 7th. The company reported ($1.03) earnings per share for the quarter, missing analysts’ consensus estimates of ($0.71) by ($0.32). The company had revenue of $458.00 million during the quarter, compared to analyst estimates of $431.99 million. MongoDB had a negative return on equity of 16.22% and a negative net margin of 10.49%. On average, equities analysts forecast that MongoDB, Inc. will post -2.53 EPS for the current year.

Analysts Set New Price Targets

A number of brokerages have commented on MDB. Stifel Nicolaus reiterated a “buy” rating and issued a $435.00 price objective on shares of MongoDB in a report on Thursday, March 14th. KeyCorp dropped their target price on shares of MongoDB from $490.00 to $440.00 and set an “overweight” rating for the company in a research report on Thursday. DA Davidson raised shares of MongoDB from a “neutral” rating to a “buy” rating and boosted their target price for the stock from $405.00 to $430.00 in a research report on Friday, March 8th. JMP Securities reissued a “market outperform” rating and issued a $440.00 target price on shares of MongoDB in a research report on Monday, January 22nd. Finally, Needham & Company LLC reissued a “buy” rating and issued a $465.00 target price on shares of MongoDB in a research report on Tuesday, April 9th. Two equities research analysts have rated the stock with a sell rating, three have given a hold rating and nineteen have assigned a buy rating to the company’s stock. According to MarketBeat.com, MongoDB presently has an average rating of “Moderate Buy” and an average target price of $444.93.


MongoDB Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Further Reading

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)


Article originally posted on mongodb google news. Visit mongodb google news



Notes on Cody Rhodes, Natalya, Tessa Blanchard, and more – Gerweck.net

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

– PWInsider reports TNA has begun preliminary talks with Tessa Blanchard about returning to the company.

Cody Rhodes (via Sports Illustrated):

“Roman was nowhere to be found backstage in Gorilla when I got there after winning the WWE Championship.

I look forward to when we actually do have that moment and we get to see each other again.”

– Dave Meltzer, replying to a fan on X today, said that AEW‘s current television deal with Warner Bros. Discovery is set to expire on December 31, 2024.

Natalya posted:

Article originally posted on mongodb google news. Visit mongodb google news



Aerospike raises $109M for its real-time database platform to capitalize on the AI boom

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

NoSQL database Aerospike today announced that it has raised a $109 million Series E round led by Sumeru Equity Partners. Existing investor Alsop Louie Partners also participated in this round.

In 2009, the company started as a key-value store with a focus on the adtech industry; Aerospike has since diversified its offerings quite a bit. Today, its core offering is a NoSQL database that’s optimized for real-time use cases at scale.

In 2022, Aerospike added document support and then followed that up with graph and vector capabilities — two database features that are crucial for building real-time AI and ML applications.

“We were founded primarily as a real-time data platform that can work with data at really high scale, or, as we call it, unlimited scale,” Aerospike CEO Subbu Iyer said. “We’ve been fortunate enough that a lot of our customers have either started their journey at scale with us, or started the journey earlier and grown into the platform. So our premise has held good that real-time data and real-time access to data is going to be important pretty much across every industry. Our founding principles were really to deliver real-time performance with data at any scale, and the lowest [total cost of ownership] on the market.”

In part, Aerospike, which offers its service as a hosted platform and on-premises, is able to deliver on this promise through its hybrid memory architecture, which speeds up data access by combining RAM with fast flash storage in whatever mix suits the workload. Aerospike competitor Redis recently acquired Speedb to offer similar capabilities, also with an eye on helping its customers reduce costs.


Today, the company’s customers include the likes of Airtel, Transunion, Snap and TechCrunch parent company Yahoo.

Right now, though, it’s definitely the AI boom that is driving a lot of interest in Aerospike and the company wants to be in a position to capitalize on that through this new funding round.

Unsurprisingly, that means the company plans to use the new funding to accelerate its innovations around AI, which are mostly focused on its graph and vector capabilities. Iyer told me that Aerospike is specifically looking at combining those two capabilities.

“Going forward, there are some synergistic ways in which graph and vectors can come together,” he said. “A simple use case I use for this, for example, is if you’re looking for a specific document and you have embeddings and stored them in a vector database, you want to use a vector search to get to that specific document. But if you’re looking for a set of similar documents, a vector search can get you to the neighborhood and then a graph can get you a similar corpus of documents because of relationships and stuff.”

That, of course, is also what got investors interested in the company. Aerospike raised its last round in 2019 and according to the company’s CEO, it didn’t need to raise now, but there is a large opportunity for Aerospike to capitalize on now, something Sumeru co-founder and managing director George Kadifa also stressed.

“AI is transforming the economy and presents new opportunities for growth and innovation,” he said. “Aerospike, with its impressive customer base and performance advantage at scale, is uniquely positioned to become a foundational element for the next generation of real-time AI applications.”



Using Neo4J’s graph database for AI in Azure | InfoWorld

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Once you get past the chatbot hype, it’s clear that generative AI is a useful tool, providing a way of navigating applications and services using natural language. By tying our large language models (LLMs) to specific data sources, we can avoid the risks that come with using nothing but training data.

While it is possible to fine-tune an LLM on specific data, that can be expensive and time-consuming, and it can also lock you into a specific time frame. If you want accurate, timely responses, you need to use retrieval-augmented generation (RAG) to work with your data.

RAG: the heart of Microsoft’s Copilots

The neural networks that power LLMs are, at heart, sophisticated vector search engines that extrapolate the paths of semantic vectors in an n-dimensional space, where the higher the dimensionality, the more complex the model. So, if you’re going to use RAG, you need to have a vector representation of your data that can both build prompts and seed the vectors used to generate output from an LLM. That’s why it’s one of the techniques that powers Microsoft’s various Copilots.

I’ve talked about these approaches before, looking at Azure AI Studio’s Prompt Flow, Microsoft’s intelligent agent framework Semantic Kernel, the Power Platform’s Open AI-powered boost in its re-engineered Q and A Maker Copilot Studio, and more. In all those approaches, there’s one key tool you need to bring to your applications: a vector database. This allows you to use the embedding tools used by an LLM to generate text vectors for your content, speeding up search and providing the necessary seeds to drive a RAG workflow. At the same time, RAG and similar approaches ensure that your enterprise data stays in your servers and isn’t exposed to the wider world beyond queries that are protected using role-based access controls.
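As a rough sketch of the retrieval step described above, the snippet below embeds a few documents and a question with the OpenAI embeddings API and ranks the documents by cosine similarity. The model name is an assumption, and the in-memory loop stands in for the vector index a real store would provide.

import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Plain cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Embed the documents and the question with the same model, then return the
// top-k documents to paste into the prompt as grounding context.
async function retrieve(question: string, documents: string[], topK = 3): Promise<string[]> {
  const docRes = await client.embeddings.create({ model: 'text-embedding-3-small', input: documents });
  const qRes = await client.embeddings.create({ model: 'text-embedding-3-small', input: question });
  const qVec = qRes.data[0].embedding;
  return docRes.data
    .map((d, i) => ({ text: documents[i], score: cosine(d.embedding, qVec) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((d) => d.text);
}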

While Microsoft has been adding vector search and vector index capabilities to its own databases, as well as supporting third-party vector stores in Azure, one key database technology has been missing from the RAG story: the graph database, a NoSQL approach that provides an easy route to a vector representation of your data, with the added bonus of encoding relationships in the edges that link the graph nodes that store your data.

Adding graphs to Azure AI with Neo4j

Graph databases like this shouldn’t be confused with the Microsoft Graph. The Microsoft Graph uses a node model for queries, but it doesn’t use that model to infer relationships between nodes. Graph databases are a more complex tool; although they can be queried using GraphQL, they have a more involved query process, using tools such as the Gremlin query engine.

One of the best-known graph databases is Neo4j, which recently announced support for the enterprise version of its cloud-hosted service, Aura, on Azure. Available in the Azure Marketplace, it’s a SaaS version of the familiar on-premises tool, allowing you to get started with data without having to spend time configuring your install. Two versions are available, with different memory options built on reserved capacity so you don’t need to worry about instances not being available when you need them. It’s not cheap, but it does simplify working with large amounts of data, saving a lot of time when working with large-scale data lakes in Fabric.

Building knowledge graphs from your data

One key feature of Neo4j is the concept of the knowledge graph, linking unstructured information in nodes into a structured graph. This way you can quickly see relationships between, say, a product manual and the whole bill of materials that goes into the product. Instead of pointing out a single part that needs to be replaced for a fix, you have a complete dependency graph that shows what it affects and what’s necessary to make the fix.
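A dependency question like that maps naturally onto a Cypher traversal. The sketch below uses the neo4j-driver package with placeholder connection details and a hypothetical Part/DEPENDS_ON schema to list every assembly affected by a replaced part, however many hops away it sits.

import neo4j from 'neo4j-driver';

// Connection details are placeholders.
const driver = neo4j.driver('neo4j://localhost:7687', neo4j.auth.basic('neo4j', 'password'));

async function partsAffectedBy(partId: string): Promise<string[]> {
  const session = driver.session();
  try {
    // Follow DEPENDS_ON relationships from the replaced part outward to every
    // part that depends on it, directly or transitively.
    const result = await session.run(
      `MATCH (p:Part {id: $partId})<-[:DEPENDS_ON*1..]-(affected:Part)
       RETURN DISTINCT affected.name AS name`,
      { partId }
    );
    return result.records.map((record) => record.get('name') as string);
  } finally {
    await session.close();
  }
}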

A tool like Neo4j that can sit on top of a large-scale data lake like Microsoft’s Fabric gives you another useful way to build out the information sources for a RAG application. Here, you can use the graph visualization tool that comes as part of Neo4j to explore the complexities of your lakehouses, generating the underlying links between your data and giving you a more flexible and understandable view of your data.

One important aspect of a knowledge graph is that you don’t need to use it all. You can use the graph relationships to quickly filter out information you don’t need for your application. This reduces complexity and speeds up searches. By ensuring that the resulting vectors and prompts are confined to a strict set of relationships, it reduces the risks of erroneous outputs from your LLM.

There’s even the prospect of using LLMs to help generate those knowledge graphs. The summarization tools identify specific entities within the graph database and then provide the links needed to define relationships. This approach lets you quickly extend existing data models into graphs, making them more useful as part of an AI-powered application. At the same time, you can use the Azure Open AI APIs to add a set of embeddings to your data in order to use vector search to explore your data as part of an agent-style workflow using LangChain or Semantic Kernel.
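A minimal sketch of that idea follows: ask a chat model for subject-relation-object triples and MERGE them into Neo4j. The model name, node labels, and prompt are assumptions, and a production version would constrain the output format instead of trusting JSON.parse on raw model text.

import OpenAI from 'openai';
import neo4j from 'neo4j-driver';

const openai = new OpenAI();
const driver = neo4j.driver('neo4j://localhost:7687', neo4j.auth.basic('neo4j', 'password'));

async function extendGraphFromText(text: string): Promise<void> {
  // Ask the model to pull out entities and the relationships between them.
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{
      role: 'user',
      content:
        'Extract entities and relationships from the text below as a JSON array of ' +
        '{"subject": "...", "relation": "...", "object": "..."} objects.\n\n' + text,
    }],
  });
  // Assumes the model returned clean JSON; a real pipeline would validate this.
  const triples = JSON.parse(completion.choices[0].message.content ?? '[]');

  const session = driver.session();
  try {
    for (const t of triples) {
      // MERGE keeps the graph idempotent: re-running on the same text does not
      // create duplicate nodes or relationships.
      await session.run(
        `MERGE (s:Entity {name: $subject})
         MERGE (o:Entity {name: $object})
         MERGE (s)-[:RELATED {type: $relation}]->(o)`,
        t
      );
    }
  } finally {
    await session.close();
  }
}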

Using graphs in AI: GraphRAG

The real benefit of using a graph database with a large language model comes with a variation on the familiar RAG approach, GraphRAG. Developed by Microsoft Research, GraphRAG uses knowledge graphs to improve grounding in private data, going beyond the capabilities of a standard RAG approach to use the knowledge graph to link related pieces of information and generate complex answers.

One point to understand when working with large amounts of private data using an LLM is the size of the context window. In practice, it’s too computationally expensive to use the number of tokens needed to deliver a lot of data as part of a prompt. You need a RAG approach to get around this limitation, and GraphRAG goes further, letting you deliver a lot more context around your query.

The original GraphRAG research uses a database of news stories, which a traditional RAG fails to parse effectively. However, with a knowledge graph, entities and relationships are relatively simple to extract from the sources, allowing the application to select and summarize news stories that contain the search terms, by providing the LLM with much more context. This is because the graph database structure naturally clusters similar semantic entities, while providing deeper context in the relationships encoded in the edges between those nodes.
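The sketch below gestures at that flow: a Neo4j 5.x vector index lookup finds the entity nodes closest to a question embedding, and a relationship expansion in the same query pulls in the neighbouring context that a plain similarity search would miss. The index name and node properties are hypothetical.

import neo4j from 'neo4j-driver';

const driver = neo4j.driver('neo4j://localhost:7687', neo4j.auth.basic('neo4j', 'password'));

// Return short context strings: each matched entity's summary plus the
// relationships that connect it to its neighbours.
async function graphRagContext(questionEmbedding: number[], k = 5): Promise<string[]> {
  const session = driver.session();
  try {
    const result = await session.run(
      `CALL db.index.vector.queryNodes('entity_embeddings', $k, $embedding)
       YIELD node, score
       MATCH (node)-[r]-(neighbour)
       RETURN node.summary AS summary,
              collect(type(r) + ' -> ' + neighbour.name) AS relations,
              score
       ORDER BY score DESC`,
      { k: neo4j.int(k), embedding: questionEmbedding }
    );
    return result.records.map((record) => {
      const relations = (record.get('relations') as string[]).join('; ');
      return `${record.get('summary')} | ${relations}`;
    });
  } finally {
    await session.close();
  }
}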

Instead of searching for like terms, much like a traditional search engine, GraphRAG allows you to extract information from the entire dataset you’re using, whether transcripts of support calls or all the documents associated with a specific project.

Although the initial research uses automation to build and cluster the knowledge graph, there is the opportunity to use Neo4j to work with massive data lakes in the Microsoft Fabric, providing a way to visualize that data so that data scientists and business analysts can create their own clusters, which can help produce GraphRAG applications that are driven by what matters to your business as much as by the underlying patterns in the data.

Having a graph database like Neo4j in the Azure Marketplace gives you a tool that helps you understand and visualize the relationships in your data in a way that supports both humans and machines. Integrating it with Fabric should help build large-scale, context-aware, LLM-powered applications, letting you get grounded results from your data in a way that standard RAG approaches can miss. It’ll be interesting to see if Microsoft starts implementing GraphRAG in its own Prompt Flow LLM tool.




Behind Tata Steel’s deeptech strategy; How MongoDB is building solutions for India

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Hello,

More Indians featured on the Forbes World’s Billionaires List 2024 than ever before.

Among the 200 Indians on the list this year, 25 featured on it for the first time. The total wealth of Indians on the list reached $954 billion—up from the previous year’s total of $675 billion held by 169 people.

While the wealthiest was (no surprise) Mukesh Ambani, Zerodha co-founder Nikhil Kamath was the youngest billionaire on the list.

Meanwhile, the World Bank has upgraded India’s economic outlook, projecting the country’s economy to grow at 7.5% in 2024 on the back of robust activity in services and industry.

Elsewhere, PhonePe said its users can now make payments in Singapore through UPI. Under the collaboration with Singapore Tourism Board, Indian travellers can transact with over 8,000 merchants in the city-state.

In other news, Zypp Electric reported a 3X growth in its revenues—from Rs 115 crore earned in FY23 to Rs 353 crore made in the fiscal year gone by. The electric fleet service provider also hit operational profitability led by a jump in its fleet size and expanding operations beyond Delhi-NCR. 

ICYMI: Ola Electric’s autonomous scooter can take sharp turns.

Oh, and have you heard of Norman Bel Geddes? He’s partly the reason why our cities are dominated by cars and highways.

His 1939 exhibition called “Futurama” changed the course of global cities. 

In today’s newsletter, we will talk about 

  • Decoding Tata Steel’s deeptech focus
  • How MongoDB is building solutions for India
  • From Air Force officer to tech leader

Here’s your trivia for today: What is the most endangered species in the world?


Interview

Decoding Tata Steel’s deeptech strategy

Over the last five years, Tata Steel has built bonds with the startup community through its engagement programme, Innoventure. One of these challenges is decarbonisation as the energy-intensive steel industry has a large carbon footprint.

Now, Tata Steel is looking towards deeptech startups to iron out processes in the core areas of energy management, integration of AI and ML, robotics, sustainability etc. 

Learning from startups:

  • As per Dr Debashish Bhattacharjee, Vice President, Technology and R&D, Tata Steel, the company’s intent is to take advantage of the creativity, passion, and agility of startups to move quickly on new technologies.
  • “We are engaged with startups working in advanced mobility like Hyperloop technology, AI, ML for large-scale modelling, etc. We also work with deeptech startups that have robots which can go underwater,” he adds.
  • By engaging with startups, Tata Steel adapted its procurement system to suit their needs and introduced nimbleness into its processes, as per Bhattacharjee.

Funding Alert

Startup: SiftHub 

Amount: $5.5M

Round: Seed

Startup: Vodex 

Amount: $2M

Round: Seed

Startup: Arch0

Amount: $1.25M

Round: Pre-seed


Interview

How MongoDB is building solutions for India

New York-based MongoDB holds nearly half of the market share in the NoSQL databases category. For context, the NoSQL market has players such as Amazon DynamoDB and Apache Cassandra, and is expected to reach $86.3 billion by 2032, according to Allied Market Research. 

YourStory spoke with Sachin Chawla, the Vice President for India & South Asia at MongoDB, to understand the company’s plans for India. 

Key takeaways:

  • Founded by Dwight Merriman, Eliot Horowitz and Kevin Ryan in 2007, MongoDB began its India operations in 2013. 
  • MongoDB’s customers in India include Upstox, Darwinbox, Canara HSBC Life Insurance, Magicpin and TATA AIG General Insurance.
  • The company has initiated an academic programme aiming to train 500,000 students in India to help foster local talent. 
Sachin Chawla, VP for India & South Asia, MongoDB

Inspiration 

From Air Force officer to tech leader

Teja Manakame was one of 25 women to join the second batch of women in the Indian Air Force. After a six-year stint, she joined the IT industry. She is now a senior leader at Dell Technologies and spearheads many important initiatives and projects.

Inclusivity:

  • Manakame’s first posting was in Mount Abu, where she was one of two women in a station of 300 personnel, including officers and airmen.
  • In 2005, Manakame joined Dell Technologies as a manager and has since grown through the ranks; she is now Vice President (IT) at the company and was the first woman to be promoted in-house to a Director role in Dell IT in India.
  • At Dell Technologies, Manakame has initiated and spearheads Tech CSR, which works at the intersection of technology and social good.

News & updates

  • Make in India: Tesla Motors will send a team to scout locations in India for a proposed $2-3 billion electric car plant, according to two people with direct knowledge of the electric vehicle company’s plans. The step towards making vehicles in India comes after India last month lowered tariffs on higher-priced imported EVs.
  • Total loss: Intel’s chip-making division accumulated $7 billion in operating losses in 2023. That’s a big increase from the $5.2 billion it lost in 2022, and while it made $18.9 billion in revenue in 2023, that number is down 31% from the $27.49 billion it made the year prior.
  • Job cuts: Amazon Web Services will cut several hundred jobs in its sales, marketing, and global services organisation, and a few hundred jobs on its physical stores technology team, executives in the tech giant’s cloud computing division informed employees.

What is the most endangered species in the world?

Answer: Vaquita Porpoise. The marine animal species has only about 10 individuals left in the wild.


We would love to hear from you! To let us know what you liked and disliked about our newsletter, please mail [email protected]

If you don’t already get this newsletter in your inbox, sign up here. For past editions of the YourStory Buzz, you can check our Daily Capsule page here



Datastax acquires Langflow to make building GenAI applications easier – Techzine Europe

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

DataStax, specializing in generative AI data stacks, has acquired the open-source visual framework Langflow. DataStax hopes the purchase will make it significantly easier for developers of all skill levels to create sophisticated generative AI tools, among other things by streamlining the development of retrieval augmented generation (RAG) applications.

The Python-based Langflow platform has a user-friendly interface, allowing developers to drag and drop components without code. It provides a GUI for LangChain, a suite of products that helps developers build and deploy reliable GenAI apps faster. According to Datastax, developers can use Langflow to build RAG applications more efficiently, reducing deployment time from weeks to hours. This accelerated development process should promise better results with fewer errors.

According to DataStax, the acquisition will mitigate some challenges inherent in AI application development. These challenges include AI’s rapid evolution, lack of established best practices, and reliance on legacy data and code, which hinder developers’ ability to bring their ideas to fruition.

Tip: In AI development, never lose your RAG

Rich ecosystem of custom components

DataStax hopes to address these challenges by integrating LangChain and data framework LlamaIndex, as well as providing flexible deployment options through Astra DB, the company’s NoSQL database. These integrations should streamline the development and deployment of production-ready RAG applications using Python and a rich ecosystem of custom components.

Integrating Langflow with RAGStack, DataStax’s out-of-the-box RAG solution, is also possible. This provides enterprise support for companies deploying RAG at scale. RAG allows new data sources to connect directly to AI models while reducing hallucinations, potentially saving time and effort.

Chet Kapoor, CEO and chairman of DataStax, expressed enthusiasm for the acquisition, highlighting its potential to empower developers and companies to realize their ambitions in generative AI. “Langflow is focused on democratizing and accelerating generative AI development for any developer or company, and in joining DataStax, we’re working together to enable developers to put their wild new generative AI ideas on a fast path to production.”

Rodrigo Nader, Langflow’s CEO, echoed these sentiments, expressing excitement about collaborating with DataStax to further develop and expand the Langflow platform. Despite the acquisition, Langflow will continue to operate as a separate entity, focusing on product innovation and community collaboration.

Also read: DataStax makes AI tuning on enterprise data much faster in Astra DB



As MongoDB Struggles, Bearish Option Trade Makes Sense | Investor’s Business Daily

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB (MDB) broke down to a new low Wednesday and continues to show a deterioration in relative strength.







Traders looking for a bearish option trade could look at a bear call spread on MongoDB stock.

A bear call spread involves selling an out-of-the-money call and buying a further out-of-the-money call.

The strategy can be profitable if the stock trades lower, sideways, or even slightly higher, as long as it stays below the short call strike at expiry.

A May-expiry bear call spread on MongoDB stock using the 400-410 strike prices could be sold for around $1.10 Wednesday.

Max Profit If MDB Closes Below 400

Traders selling the spread would receive $110 in option premium, which is also the maximum possible gain. The maximum loss would be $890.

The spread will achieve the maximum profit if MongoDB stock closes below 400 on May 17. In that case the entire spread would expire worthless, allowing the trader to keep the $110 option premium.

The maximum loss will occur if MDB closes above 410 on May 17. That would see the premium seller lose $890 on the trade.

While some option trades have the risk of unlimited losses, a bear call spread is a risk-defined strategy, and you always know the worst-case scenario in advance.
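The arithmetic behind those numbers is straightforward to check. This small sketch recomputes the payoff of the 400-410 spread at expiry for a $1.10 credit; the strikes and credit come from the trade described above, and the rest is standard option payoff math.

// Payoff at expiry for the 400-410 bear call spread sold for a $1.10 credit.
// One contract covers 100 shares, so the credit received up front is $110.
const SHORT_STRIKE = 400;
const LONG_STRIKE = 410;
const CREDIT = 1.10;
const MULTIPLIER = 100;

function spreadProfitAtExpiry(stockPrice: number): number {
  const shortCallValue = Math.max(stockPrice - SHORT_STRIKE, 0); // owed on the sold call
  const longCallValue = Math.max(stockPrice - LONG_STRIKE, 0);   // recovered from the bought call
  return (CREDIT - shortCallValue + longCallValue) * MULTIPLIER;
}

console.log(spreadProfitAtExpiry(395));    //  110: max profit, both calls expire worthless
console.log(spreadProfitAtExpiry(401.10)); //   ~0: breakeven at short strike plus credit
console.log(spreadProfitAtExpiry(420));    // -890: max loss, spread fully in the money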

A stop loss could be set if MongoDB trades above 370, or if the spread value rises from $1.10 to $2.20.

Trade Equals Shorting Four MDB Shares

Because this is a bearish position, traders who think MongoDB stock could move higher from here should not enter this trade. The position starts with a delta of -4, meaning it is roughly equivalent to being short four shares of MongoDB.

According to the IBD Stock Checkup, MongoDB stock is ranked No. 10 in its industry group. It has a Composite Rating of 76, an EPS Rating of 80 and a Relative Strength Rating of 49.

MongoDB is due to report earnings around May 30, so this trade should not have any earnings risk.

Please remember that options are risky, and investors can lose 100% of their investment.

This article is for education purposes only and not a trade recommendation. Remember to always do your own due diligence and consult your financial advisor before making any investment decisions.

Gavin McMaster has a Masters in Applied Finance and Investment. He specializes in income trading using options, is very conservative in his style and believes patience in waiting for the best setups is the key to successful trading. Follow him on X/Twitter at @OptiontradinIQ

YOU MIGHT ALSO LIKE:

Option Trade On Coinbase Could Return 31% In Less Than 3 Weeks

Bullish Diagonal Spread Looks For Low-Cost Exposure To Google

Palo Alto Stock Today: How To Trade A Long Strangle

Can Apple Rise 17%? This Option Trade Offers Such A Return

Article originally posted on mongodb google news. Visit mongodb google news
