MongoDB, Inc. (MDB) Is a Trending Stock: Facts to Know Before Betting on It

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB (MDB) has recently been on Zacks.com’s list of the most searched stocks. Therefore, you might want to consider some of the key factors that could influence the stock’s performance in the near future.

Over the past month, shares of this database platform have returned +0.4%, compared to the Zacks S&P 500 composite’s -1.6% change. During this period, the Zacks Internet – Software industry, to which MongoDB belongs, has lost 3.2%. The key question now is: what could be the stock’s future direction?

While media releases or rumors about a substantial change in a company’s business prospects usually make its stock ‘trending’ and lead to an immediate price change, some fundamental facts always eventually dominate buy-and-hold decision-making.

Revisions to Earnings Estimates

Rather than focusing on anything else, we at Zacks prioritize evaluating the change in a company’s earnings projection. This is because we believe the fair value for its stock is determined by the present value of its future stream of earnings.

We essentially look at how sell-side analysts covering the stock are revising their earnings estimates to reflect the impact of the latest business trends. And if earnings estimates go up for a company, the fair value for its stock goes up. A higher fair value than the current market price drives investors’ interest in buying the stock, leading to its price moving higher. This is why empirical research shows a strong correlation between trends in earnings estimate revisions and near-term stock price movements.

MongoDB is expected to post earnings of $0.49 per share for the current quarter, representing a year-over-year change of +113%. Over the last 30 days, the Zacks Consensus Estimate remained unchanged.

The consensus earnings estimate of $2.34 for the current fiscal year indicates a year-over-year change of +188.9%. This estimate has remained unchanged over the last 30 days.

For the next fiscal year, the consensus earnings estimate of $2.80 indicates a change of +19.7% from what MongoDB is expected to report for the current fiscal year. Over the past month, the estimate has changed -0.2%.

With a strong, externally audited track record, our proprietary stock rating tool, the Zacks Rank, offers a more conclusive picture of a stock’s near-term price direction, since it effectively harnesses the power of earnings estimate revisions. Due to the size of the recent change in the consensus estimate, along with three other factors related to earnings estimates, MongoDB is rated Zacks Rank #2 (Buy).

The chart below shows the evolution of the company’s forward 12-month consensus EPS estimate:

[Chart: forward 12-month consensus EPS estimate]

Revenue Growth Forecast

While earnings growth is arguably the best indicator of a company’s financial health, it means little if a business cannot grow its revenues. After all, it’s nearly impossible for a company to increase its earnings for an extended period without increasing its revenues. So, it’s important to know a company’s potential revenue growth.

For MongoDB, the consensus sales estimate for the current quarter of $402.75 million indicates a year-over-year change of +20.7%. For the current and next fiscal years, $1.61 billion and $1.95 billion estimates indicate +25.1% and +21.6% changes, respectively.

Last Reported Results and Surprise History

MongoDB reported revenues of $423.79 million in the last reported quarter, representing a year-over-year change of +39.6%. EPS of $0.93 for the same period compares with -$0.23 a year ago.

Compared to the Zacks Consensus Estimate of $389.93 million, the reported revenues represent a surprise of +8.69%. The EPS surprise was +106.67%.

The company beat consensus EPS estimates in each of the trailing four quarters and topped consensus revenue estimates each time over the same period.

Valuation

Without considering a stock’s valuation, no investment decision can be efficient. In predicting a stock’s future price performance, it’s crucial to determine whether its current price correctly reflects the intrinsic value of the underlying business and the company’s growth prospects.

Comparing the current values of a company’s valuation multiples, such as price-to-earnings (P/E), price-to-sales (P/S), and price-to-cash flow (P/CF), with its own historical values helps determine whether its stock is fairly valued, overvalued, or undervalued. Comparing the company to its peers on these parameters gives a good sense of the reasonability of the stock’s price.

As part of the Zacks Style Scores system, the Zacks Value Style Score (which evaluates both traditional and unconventional valuation metrics) organizes stocks into five groups ranging from A to F (A is better than B; B is better than C; and so on), making it helpful in identifying whether a stock is overvalued, rightly valued, or temporarily undervalued.

MongoDB is graded F on this front, indicating that it is trading at a premium to its peers. Click here to see the values of some of the valuation metrics that have driven this grade.

Bottom Line

The facts discussed here and much of the other information on Zacks.com might help determine whether or not it’s worthwhile paying attention to the market buzz about MongoDB. However, its Zacks Rank #2 does suggest that it may outperform the broader market in the near term.


Zacks Investment Research


Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



Mongodb Inc (MDB) Down 1.67% in Premarket Trading – InvestorsObserver

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


Thursday, October 26, 2023 07:58 AM | InvestorsObserver Analysts


Mongodb Inc (MDB) has fallen Thursday morning, with the stock down 1.67% in pre-market trading to $323.84.

MDB’s short-term technical score of 91 indicates that the stock has traded more bullishly over the last month than 91% of stocks on the market. In the Software – Infrastructure industry, which ranks 58 out of 146 industries, MDB ranks higher than 85% of stocks.

Mongodb Inc has fallen 1.19% over the past month, closing at $343.11 on September 28. During this period of time, the stock fell as low as $331.61 and as high as $374.67. MDB has an average analyst recommendation of Strong Buy. The company has an average price target of $432.83.

Overall Score - 60
MDB has an Overall Score of 60.

Mongodb Inc has a Long-Term Technical rank of 72. This means that trading over the last 200 trading days has placed the company in the upper half of stocks, with 28% of the market scoring higher. In the Software – Infrastructure industry, which is number 52 by this metric, MDB ranks better than 52% of stocks.

Important Dates for Investors in MDB:

-Mongodb Inc is set to release earnings on 2023-12-05. Over the last 12 months, the company has reported EPS of -$3.45.

-We do not have a set dividend date for Mongodb Inc at this time.






Sustainable Product Development Using Agile and Value Stream Mapping

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

Sustainable product development can be done by combining agile with concepts from the circular economy in our daily work. Value stream mapping can be extended to incorporate circular economy principles to optimize the flow of materials, information, and energy usage.

Ines Garcia spoke about sustainable product development using agile and value stream mapping at XP 2023.

The aim of sustainable product development is to build products that endure over time, Garcia mentioned. We can support sustainable product development using concepts from the circular economy in our daily work.

Being sustainable is not just about the product itself. It’s also about the people, the time, and the energy involved in the process, and the consumption that the product’s existence requires, Garcia said. If any single part of the process cannot be maintained over time, it’s unsustainable, Garcia argued.

Garcia explained that when agile is truly embedded in the fabric of an organisation, the practices enable businesses to be adaptable, efficient, and sustainable:

By embracing circular economy principles, agile practitioners can reduce their environmental footprint, focus on value impact, and minimize waste. Agile promotes collaboration, innovation, and continuous improvement, all of which align with the goals of the circular economy.

Designing products for re-use involves considering their entire lifecycle, from raw materials to end-of-life disposal. Think of upstream and downstream activities, which include modular design, using durable and recyclable materials, and implementing take-back programs, Garcia said.

According to Garcia, the benefits of designing products for re-use are reduced resource consumption, minimized waste generation, cost savings through product longevity (this also includes software!), and the creation of new revenue streams through product-as-a-service models.

Value stream mapping is a common agile practice in our toolbox. Garcia came up with the concept of extending it to incorporate circular economy principles by mapping the entire value stream, identifying areas of waste and inefficiency, and evaluating the feasibility of circular strategies at each step:

You bring together representatives of each vertical section of your system map and collaborate to outline time spent in between each vertical area. Think of it being in a queue ready to be consumed by the next vertical step. This way you can co-create ways to streamline it.

The common practice of value stream mapping only looks at one variable: time. Garcia enriched it with variables beyond time, including materials and energy, to find the leverage points and streamline the whole value chain.

Organisations can identify opportunities to reduce waste, extend product lifetimes, and recover valuable materials. Value stream mapping helps optimize the flow of materials, information, and energy within a circular value stream, Garcia concluded.
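One way to picture the enriched mapping is to annotate each value-stream step with time, materials, and energy, then aggregate per variable to find leverage points. This is an illustrative sketch with made-up step names and numbers, not an artifact from the talk:

```python
# Illustrative extended value stream map: each step tracks more than time.
# All step names and figures below are invented for demonstration.
steps = [
    {"step": "design", "wait_days": 2, "energy_kwh": 30,  "material_kg": 0},
    {"step": "build",  "wait_days": 5, "energy_kwh": 120, "material_kg": 14},
    {"step": "ship",   "wait_days": 1, "energy_kwh": 60,  "material_kg": 3},
]

# Aggregate each variable across the whole chain so steps can be compared
# on waste, energy use, and queue time, not just elapsed time.
totals = {
    key: sum(s[key] for s in steps)
    for key in ("wait_days", "energy_kwh", "material_kg")
}
```

With totals per variable in hand, the team can see which step dominates each dimension and co-create ways to streamline it.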

InfoQ interviewed Ines Garcia about designing products for re-use and infusing sustainability in retrospectives.

InfoQ: How can we design products for re-use and what benefits can that bring?

Ines Garcia: As an example of moving the concept of the product from material-based to value-based, let’s look at lighting. Putting our circular economy hats on, lighting-as-a-service is a circular business model which, for example, Signify adopted.

The circular economy approach includes:

  • Performance-based payments: customers pay a monthly fee based on lighting system performance, reducing resource consumption and promoting longer product life cycles.
  • Reusability and recyclability: the manufacturer designs lighting fixtures for easy repair, replacement, and eventual reuse/recycling, minimizing waste generation.
  • Energy efficiency: professional management of lighting systems optimizes energy consumption, resulting in significant energy savings and reduced emissions.
  • Reduced capital costs: adopting similar models eliminates the need for upfront capital investment, making efficient solutions more accessible to businesses and organizations.

InfoQ: How can we infuse sustainability and circularity into agile retrospectives?

Garcia: You may have heard of the “Planet as a Stakeholder” retrospective format suggested by Marjolein Pilon, which uses green software engineering to find answers and practical ideas on reducing a Scrum team’s carbon footprint. This format can also be used to support circular development.

You can create your own take on the core of Circular Economy:

  • Keep products and materials in use for as long as possible
  • Reduce waste and pollution and
  • Regenerate natural systems

You may want to design the retrospective to start with a “Planet Check-In” to raise awareness of the environmental impact, then move to review green software engineering principles, then gather individual input on the current state and desired improvements, and bring it to a close by collectively choosing actions that promote circularity and sustainability.

This helps teams reflect on their practices and identify opportunities for positive change. I believe that any concept of sustainability can be downscaled to industries, supply chains, organisations, departments, teams, communities, households, and individuals.

About the Author



MongoDB, Inc. (NASDAQ:MDB) Shares Purchased by Raymond James Financial Services …

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Raymond James Financial Services Advisors Inc. boosted its stake in MongoDB, Inc. (NASDAQ:MDB) by 91.3% during the 2nd quarter, according to its most recent 13F filing with the Securities & Exchange Commission. The firm owned 3,564 shares of the company’s stock after acquiring an additional 1,701 shares during the period. Raymond James Financial Services Advisors Inc.’s holdings in MongoDB were worth $1,465,000 as of its most recent SEC filing.

Other institutional investors have also recently added to or reduced their stakes in the company. 1832 Asset Management L.P. increased its position in MongoDB by 3,283,771.0% during the 4th quarter. 1832 Asset Management L.P. now owns 1,018,000 shares of the company’s stock worth $200,383,000 after purchasing an additional 1,017,969 shares during the period. Price T Rowe Associates Inc. MD boosted its stake in shares of MongoDB by 13.4% during the 1st quarter. Price T Rowe Associates Inc. MD now owns 7,593,996 shares of the company’s stock worth $1,770,313,000 after acquiring an additional 897,911 shares in the last quarter. Norges Bank purchased a new stake in shares of MongoDB during the 4th quarter worth approximately $147,735,000. Champlain Investment Partners LLC purchased a new stake in shares of MongoDB during the 1st quarter worth approximately $89,157,000. Finally, T. Rowe Price Investment Management Inc. boosted its stake in shares of MongoDB by 77.4% during the 4th quarter. T. Rowe Price Investment Management Inc. now owns 568,803 shares of the company’s stock worth $111,964,000 after acquiring an additional 248,133 shares in the last quarter. Institutional investors and hedge funds own 88.89% of the company’s stock.

Analyst Ratings Changes

Several equities research analysts have recently commented on the company. Scotiabank assumed coverage on MongoDB in a research note on Tuesday, October 10th. They set a “sector perform” rating and a $335.00 price target on the stock. Citigroup increased their target price on MongoDB from $430.00 to $455.00 and gave the company a “buy” rating in a report on Monday, August 28th. Royal Bank of Canada reiterated an “outperform” rating and set a $445.00 target price on shares of MongoDB in a report on Friday, September 1st. JMP Securities increased their target price on MongoDB from $425.00 to $440.00 and gave the company a “market outperform” rating in a report on Friday, September 1st. Finally, Canaccord Genuity Group increased their target price on MongoDB from $410.00 to $450.00 and gave the company a “buy” rating in a report on Tuesday, September 5th. One investment analyst has rated the stock with a sell rating, three have given a hold rating and twenty-two have assigned a buy rating to the company. According to data from MarketBeat.com, MongoDB currently has an average rating of “Moderate Buy” and an average target price of $415.46.


MongoDB Stock Down 6.5%

Shares of MongoDB stock opened at $329.33 on Thursday. The business has a 50-day moving average of $358.30 and a 200-day moving average of $342.00. The company has a debt-to-equity ratio of 1.29, a current ratio of 4.48 and a quick ratio of 4.48. The company has a market capitalization of $23.50 billion, a PE ratio of -95.18 and a beta of 1.13. MongoDB, Inc. has a 1-year low of $135.15 and a 1-year high of $439.00.

MongoDB (NASDAQ:MDB) last posted its quarterly earnings results on Thursday, August 31st. The company reported ($0.63) EPS for the quarter, topping analysts’ consensus estimates of ($0.70) by $0.07. The business had revenue of $423.79 million during the quarter, compared to the consensus estimate of $389.93 million. MongoDB had a negative return on equity of 29.69% and a negative net margin of 16.21%. As a group, equities analysts forecast that MongoDB, Inc. will post -$2.17 EPS for the current fiscal year.

Insider Buying and Selling

In other MongoDB news, Director Dwight A. Merriman sold 2,000 shares of MongoDB stock in a transaction that occurred on Tuesday, October 10th. The stock was sold at an average price of $365.00, for a total value of $730,000.00. Following the completion of the transaction, the director now directly owns 1,195,159 shares in the company, valued at $436,233,035. Merriman also sold 6,000 shares of MongoDB stock in a transaction on Friday, August 4th, at an average price of $415.06, for a total value of $2,490,360.00; following that sale, he directly owned 1,207,159 shares, valued at $501,043,414.54. Both sales were disclosed in legal filings with the SEC. Over the last three months, insiders sold 187,984 shares of company stock worth $63,945,297. Corporate insiders own 4.80% of the company’s stock.

MongoDB Company Profile


MongoDB, Inc. provides a general-purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)







Presentation: Scaling Defenses Amidst Evolving Threat Landscape

MMS Founder
MMS Aditi Gupta

Article originally posted on InfoQ. Visit InfoQ

Transcript

Gupta: Around five-and-a-half years back, my husband and I were expecting our first kid. All we could think of in that moment was, how are we going to bring this baby into the world: the day of the birth, and how this baby is going to have such a huge impact on our lives? We knew that raising kids was hard, but we greatly underestimated how hard. Over the last five years, I realized that as my kid grew, there were new challenges every day and we had to constantly adapt our parenting toolkit to raise them into safe and well-functioning adults. Why am I talking about my kids in a technical presentation? It’s because over the last five years of raising my kid, and building different services, what I have observed is that raising kids is a lot like operating a service under evolving threat landscape. The different services, just like kids, operate under constantly changing and unpredictable environment. There are always new threats. With kids, they’re either trying to poke their fingers into electrical sockets, or they’re putting everything in their mouth, running up the stairs. The threats are constantly evolving, and our approach to keep them safe also needs to evolve. The effort of raising a child and operating such services is often underestimated. Both of these operate under a constantly changing and unpredictable environment. The kind of services that I’m referring to here are like DDoS prevention, fraud prevention, things where your threat landscape is constantly evolving. These are the sorts of services that I work on at Netflix.

Background

I am a Staff Security Software Engineer in the customer trust team at Netflix. In our team, we build scalable defense systems, with the goal to stop any malicious activity that can negatively impact or harm customer trust. Some examples of such attacks are when fraudsters try to use stolen credentials to do unauthorized logins or account takeover. I specifically work on leading our anti-DDoS efforts at Netflix. I love to build systems to solve security problems. I’m also a mom of two small boys, and what I have realized in raising these two small humans and building different services is that what works now may not work later. Our security solutions may lose its effectiveness as attacks evolve. With time, as threats evolve, our security system must evolve too. While working on these services at Netflix, we experience this constantly changing nature of attacks quite often.

The Operational Burden

In this graph, the x-axis is the time axis, and the y-axis is the detection rate. When we launch the service, and until that point, that detection rate from that service is zero. Once we launch it, that goes up, and we are happy. We are doing great. Over time that saturates. Then the attackers notice and they evolve. When they evolve, our detection rate goes down. Then we’re like, what’s happening? We modify our defenses and then the rate goes up. Then the attackers respond by changing their techniques. This is a constant rollercoaster. What’s important to note here is that if we don’t evolve, our detection rate is going to go down. What does this mean for us as service owners? The operational burden increases. We constantly find ourselves in a firefighting mode. What do I mean by operations here? Operations stem from two things. Either we are not blocking substantially bad traffic, which leads to an increase in fraud, account takeovers, or a decrease in service availability, for example, in DDoS. Or, on the other end of the spectrum is we start blocking too aggressively, which leads to blocking good user traffic that manifests in the form of increased customer service call. You know that your operational burden is out of control when you find yourself making statements like, “I’m constantly in a firefighting mode. Every time I sit down to code, I get interrupted by a page. There’s too much unplanned operations work this quarter. I don’t know if we are seeing more attacks, or if my service has an issue. I have no time to do feature development. I’m not sure quite sure what happened with this attack.” I believe I’m not the only one here who found themselves in this situation. I’m sure there are more of you out there.

Example: Challenges at Netflix

While working on the DDoS service at Netflix, this is where I found myself, that the operational overhead was going up, and I was constantly in a firefighting mode. Looking at a service, I realized that our challenges were in three main categories. The first one was visibility. At that point, I realized we didn’t really have a good understanding of our DDoS landscape. We knew what was happening in the moment, how the service was performing at that moment. If we look at an entire threat landscape, how many DDoS attacks were we seeing, how were they related? We didn’t have an idea. How a service was responding to these over a long period of time, were we doing better? Were we doing worse? We didn’t know. The second one was investigations. That’s when our service did not do a good job in blocking the DDoS traffic. Why was that? I’m a backend engineer and my data skills are not that sharp. It took me hours to write Hive queries, look at the data. The tables were extremely big, so these were very slow. It took several hours to even figure out what was going on. I ended up writing queries which were wrong, so I had to rewrite them. This was a big time sinkhole. Then, the third one is operations, which is basically about, we didn’t have an idea of what was the false positive or the false negative rate that we are seeing with our service? Do we need to fine-tune our defenses? How do we need to fine-tune it?

What Are the Options?

Faced with all of these, I thought to myself, what are our options now? The first one is, either I figure out how to scale operations, which is, we hire more folks. That’s not always an option given the resourcing budgets and everything. Or we can build tooling to solve this. The second option is, my service is not doing great so let me just build a new one. While that may be the right solution in some cases, what I have observed is that in most of the cases, when you build a new version, it’s like, we end up repeating the same mistakes. It’s very hard to get feature parity, and it becomes this vicious cycle. What I have realized is that systems don’t change, mechanisms change. The core implementation of your service is not what leads it to become ineffective over time. It’s the mechanisms, it’s the rules, the models that you integrate with. This is the key point of this presentation. If we have to figure out a way of, how do we scale these defenses in a threat landscape that is constantly evolving, we need to figure out a way to scale operations. What are my options here? At Netflix, we think beyond the core business logic. We think of it as an entire defense ecosystem. What do I mean by that? A defense ecosystem is a continuously evolving ecosystem. It observes the attack behavior and how we are responding to it. It learns the new attack patterns, and then it responds. It adapts by modifying and deploying new rules and models to these constantly evolving threats.

The Defense Ecosystem – Decisioning Service

How do we build this defense ecosystem? I’m going to talk about the four components of the defense ecosystem. The first component is the decisioning service. This is the service that contains the core business logic that is used to decide whether a request is fraudulent or not. This is what your important flows like, for example, the login flow integrates with. The operational challenges from this decisioning service, they manifest in four forms. Either, we are blocking good users or we are not blocking enough bad traffic, our decisioning is slow, or the availability is low. When we are building our defense service, our decisioning service, we have to design to reduce operational overhead.

When we do this, we build this service with the following key objectives. We want to have a service that has low false positive rate and a high detection rate. When either of these things are not working as intended, we need to design in a way that we can respond fast. We want a service to have low latency so that it can do faster decisioning. We want it to be highly available. Just consider the DDoS defense service, if that service is not highly available, how can it guarantee the availability of your entire infrastructure? These are our five key objectives. It’s easier said than done to have all of these like right, because if you look at it, some of them are inherently tradeoffs. If I want to have something that’s highly accurate, then I need to spend more time looking at the request, running a whole bunch of machine learning models, and that would lead to high latency. What we have realized is that building all of this into just one service is often not the right solution, so we look at it from a layered defense approach, where we break our service down into various components, and we deploy them at various layers in our infrastructure.

Let’s look at an example. Let’s look at login attacks. There are three login attacks that I’ll talk about. The first is credential stuffing, which is the high-volume login traffic where someone gets hold of a stolen credential dump and sprays the infrastructure with millions of requests. If they get a few of them right, then that’s all they want. The unauthorized login is when someone has the username and password of an account that they don’t own and they are using that in an unauthorized manner. The third one, account takeovers, is a step further. That’s when they not only log into your account, but they also kick you out by changing the email and password, so now they basically own the account. When you’re building defenses against attacks like login, you could think of defenses as a sort of funnel. At the top level is the edge. That’s where the requests enter our infrastructure. This is where we deploy a solution that’s low latency. It doesn’t have to be 100% accurate; as long as it blocks 70% of the traffic with a super low false positive rate, we are good. As we go down in our infrastructure, we now have more leniency in terms of latency, but we also have higher requirements in terms of accuracy, because now we have more context. We have opened up the request. We have a whole bunch of different information about this request. Now our decisioning can be much better. What’s important to note here is that when we are looking at this layered defense, we have different contexts at different layers. We have different requirements in terms of latency and accuracy. We want to be able to block as early as possible when a request is coming into the infrastructure, but as we go down the funnel, we have more context for a better decision.
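The funnel idea can be sketched as a two-layer check. This is a minimal illustration with hypothetical names, signals, and thresholds, not Netflix’s actual services: a cheap edge filter blocks obvious bad traffic early, and a slower, context-rich scoring step decides the rest.

```python
# Minimal sketch of a layered defense funnel. All names, signals, and
# thresholds here are illustrative assumptions, not a real system.

KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.9"}  # stand-in for an edge blocklist

def edge_check(request: dict) -> bool:
    """Low-latency edge layer: blocks only obviously bad traffic."""
    return request["ip"] not in KNOWN_BAD_IPS

def deep_check(request: dict) -> bool:
    """Slower downstream layer: more request context, higher accuracy."""
    risk = 0
    if request.get("failed_logins", 0) > 3:  # repeated failures look like stuffing
        risk += 5
    if request.get("new_device", False):     # unfamiliar device adds risk
        risk += 2
    return risk < 5

def allow(request: dict) -> bool:
    # Block as early as possible; only survivors reach the deeper layer.
    return edge_check(request) and deep_check(request)
```

Each layer trades latency against accuracy: the edge check is cheap enough to run on every incoming request, while the deeper check runs only on traffic that survives it.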

The second one is the rules and configurations that we need for our service. Where do we store them? Because this will define how the operations show up later. One option is to keep this in our code. Here’s a hypothetical example. We are trying to figure out if a request is from a bot. The request comes to a login endpoint. We take the IP address. We see how risky this IP address is. If it’s more than a certain limit, which is 5 here, then we say that, ok, block it. The advantage of having this in code is that we can have much more complex logic, but it also means that the response is going to be slower. Every time we need to make any changes, we have to modify the code, do a new build, and deploy it. That takes time. The second option is, have this as a static configuration. This is slightly better in terms of response. For example, if you need to change the minimum risk score from 5 to 7, you just need to go and make a change here and just redeploy the service. Now it’s much easier to keep all the configurations in one place, and you don’t need to make code changes.
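The in-code option described above might look like the following sketch. The endpoint, helper function, and threshold are hypothetical, invented for illustration:

```python
# Hypothetical in-code rule: the threshold is compiled into the service,
# so changing it requires a code change, a new build, and a deploy.

MIN_RISK_SCORE = 5

def get_ip_risk_score(ip: str) -> int:
    # Stand-in for a real IP reputation lookup; returns a canned score here.
    return {"198.51.100.9": 8}.get(ip, 1)

def is_bot(path: str, ip: str) -> bool:
    # Block login requests from IPs whose risk score exceeds the limit.
    if path == "/login":
        return get_ip_risk_score(ip) > MIN_RISK_SCORE
    return False
```

The point of the sketch is the tradeoff: the logic can be arbitrarily complex, but tuning `MIN_RISK_SCORE` from 5 to 7 means a rebuild and redeploy rather than an instant change.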

The third one is using dynamic configuration. This could be a set of key-value pairs. Here, when you make a change, it is instantaneous: the moment you make a change, your service picks it up, so your response is super-fast. At the same time, too many key-value pairs can lead to a config explosion. You’ll start to lose visibility into what configurations you have in place. Are they conflicting? If there are multiple of them, how do you make sure which one takes precedence? Dynamic configuration doesn’t necessarily have to be key-value pairs; you could also think of it as a JSON file or a YAML file, so that you have better visibility into what’s going on. That file can get loaded into your system frequently. In our services at Netflix, we use a combination of all three, based on how new and how mature the service is. If it’s still in the early stages, we start by putting things in code, or as Fast Properties, but as things mature, we try to pull them out into a configuration file.
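A sketch of the JSON-file flavor of dynamic configuration described above (the file name, schema, and reload trigger are illustrative, not Netflix’s actual setup):

```python
# Sketch: rules live in a JSON file that the service reloads periodically,
# so a threshold change takes effect without a code change or redeploy.
import json

CONFIG = {"min_risk_score": 5}  # defaults used until a file is loaded

def reload_config(path: str) -> None:
    # In a real service this would run on a timer or a file-change event.
    with open(path) as f:
        CONFIG.update(json.load(f))

def should_block_login(risk_score: int) -> bool:
    return risk_score > CONFIG["min_risk_score"]
```

Changing `min_risk_score` from 5 to 7 is then an edit to the JSON file rather than a rebuild of the service.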

For these, when we are looking at the operational considerations, there are certain things we need to keep in mind. The first is that we should be able to respond fast. With some options, like dynamic configuration, we can respond much faster, compared to code, where it takes longer. We need our configuration storage to be highly available, because our decisioning service can’t really function without it. We also need to think about visibility, as I mentioned with the key-value pairs: if you have too many of them, you really don’t know what’s going on. Are they conflicting? Which one is actually taking effect? Having visibility into the entirety of your configuration is important. The fourth one is self-serve. That may not be applicable to all fraud services, but for some of them what we have realized is that the service is used not just by one flow, but by multiple different flows, owned by different teams. In terms of operations, if we are the only ones who can make these changes, it’s slow, and it’s a lot of operational overhead on our end. By making our configuration self-serve, we give different teams the ability to make changes for their own flows.

The third one I wanted to talk about was dependencies. To reduce the operational overhead and be able to scale our defenses, it’s important to reduce our dependencies, because the SLA of our service is going to be defined by the SLAs of what we depend on. It might be tempting to use some shiny new technology that’s out there, but we have to stop and ask how it will affect the SLA of our service. We have to reduce fanout. The paved path may not always be the right solution for such services. Let’s look at an example. Say the DDoS service uses Kafka streams to generate its rules. If a DDoS happens and brings down the Kafka service, then your service will not get any new rules, which means it will not be able to stop the DDoS attack, and that renders the service useless. It’s important to look at the dependencies of these services to make sure they don’t go down as the threat landscape evolves and changes.

The Defense Ecosystem – Intelligence Component

The second component is the intelligence component. This is what helps us identify what is bad and what to block. This is the brains of the system: the rules, the models, the feature store, the tables. It’s the secret sauce, so I can’t go too deep into it, and it also varies by service, because we have many different services. The key thing to note here is that intelligence becomes stale with time. Think of antivirus software: if you don’t update the signatures, the software is effectively useless. It’s similar here; you need to update the intelligence or the systems will stop being effective. We do this mostly manually now, by revisiting our rules and configurations every so often. We are also working on developing EDS, which will allow us to semi-automatically look at what’s working and what’s not, and then respond to it, instead of doing it manually all the time.

The Defense Ecosystem – Observability

The third component is the observability component. This is one of my favorites. Earlier this year, we saw a significant increase in the number of DDoS attacks that were causing SPS impact. SPS is streams per second, the key metric we use to measure the availability of Netflix; there is a very interesting Netflix engineering blog post on this as well. Around the same time, Russia invaded Ukraine, and there was also an increase in DDoS activity around the world. The question our leadership was asking us was: is there a correlation between this recent increase in DDoS at Netflix and the Ukraine-Russia conflict? Honestly, we didn’t know the answer, because we did not have long-term visibility into DDoS attack trends. We only looked back a few weeks; beyond that, we didn’t have the data, so we couldn’t answer the question. We decided to go ahead and solve this problem, because it was important to know. That’s when we started investing in an observability system. Let’s dive into what that meant.

First, when we are looking at understanding the threat landscape, there’s micro-visibility: you take an attack and zoom in on it. You see how that attack is behaving and how we are responding to it, and build metrics around that. Let’s look at a DDoS example. The blue line here shows a DDoS attack. There’s baseline traffic, then a huge spike, and after a while the attack goes down. The red line is the blocked traffic. As you see, at first we are really not blocking much. Then, as the DDoS begins, we slowly start ramping up. After a point, we are blocking all of it, and that’s when the DDoS attack stops and goes away. When we were building metrics for this, the first challenge we ran into was: how do we even identify the attack in the data? We are looking at the request level, and we don’t know whether a given request is a DDoS request or a normal request. What we did was develop a spike detection algorithm. We looked at the service traffic, and wherever there was a sharp increase followed by a sharp decrease, we said: this is when a DDoS attack starts, this is when it stops, and this is one DDoS. Once we had that, we could take those values and figure out what was going on with that particular DDoS. We looked at how long the attack lasted, what the volume of the traffic was, and what the spread of the key features we use to block a DDoS was. Then we looked at the response: What was our block rate? How long was our response time? Was there any service impact, and if so, how much? This is the micro-visibility into a DDoS attack.
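A toy version of such a spike detection algorithm, flagging an attack when per-interval request counts jump well above the trailing baseline (the 3x ratio and the traffic numbers are made up for illustration):

```python
# Mark an attack as starting on a sharp increase above baseline and ending
# on the drop back down; each (start, end) index pair is one DDoS.
def detect_spikes(counts, ratio=3.0):
    spikes, start, baseline = [], None, counts[0]
    for i, c in enumerate(counts):
        if start is None:
            if c > ratio * baseline:
                start = i           # sharp increase: attack begins
            else:
                baseline = c        # keep tracking the normal level
        elif c <= ratio * baseline:
            spikes.append((start, i))  # sharp decrease: attack ends
            start = None
    if start is not None:
        spikes.append((start, len(counts)))  # attack still in progress
    return spikes

traffic = [100, 110, 105, 900, 950, 870, 120, 100]
print(detect_spikes(traffic))  # [(3, 6)]
```

With the attack window identified, the per-attack metrics described above (duration, volume, block rate) can be computed over just those indices.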

The second one is macro-visibility. If I zoom out and look at all these different DDoS attacks over a much longer period of time, what am I seeing? As an example, we built out metrics to figure out how many DDoS attacks we see daily. We broke our metrics into two components. The first was high-level metrics: something useful not just to us, but also to leadership, to understand what is going on in our infrastructure in terms of DDoS. These show what the attack trends are, which endpoints are hit most by DDoS, how we are responding, what the cost impact is, and so on. The second kind is operational metrics. These are used by us as service owners and operators to understand how our different services are doing. How well are we responding to these DDoS attacks? Which rules and models are the most effective, and which the least? If we have to look at multiple attacks and dive deep, what metrics do we need? This is what we built in terms of macro-visibility for the DDoS landscape.
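For instance, rolling per-attack records up into daily counts and block rates (field names and numbers here are invented for illustration) might look like:

```python
# Aggregate per-attack records into macro-level daily metrics.
from collections import defaultdict

attacks = [
    {"day": "2023-10-01", "requests": 500_000, "blocked": 450_000},
    {"day": "2023-10-01", "requests": 200_000, "blocked": 150_000},
    {"day": "2023-10-02", "requests": 800_000, "blocked": 792_000},
]

daily = defaultdict(lambda: {"attacks": 0, "requests": 0, "blocked": 0})
for a in attacks:
    d = daily[a["day"]]
    d["attacks"] += 1
    d["requests"] += a["requests"]
    d["blocked"] += a["blocked"]

for day, d in sorted(daily.items()):
    rate = d["blocked"] / d["requests"]
    print(f"{day}: {d['attacks']} attack(s), block rate {rate:.0%}")
```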

This was a lot. When we started off, we basically had nothing, just an idea: we want to improve the visibility of our system, understand things better, and improve our investigations. The question was, where do we even start? We started with the key questions: what are the questions we need answers to? We listed them all out. Next, we asked what data we would need to answer these questions. Do we have this data already, or do we need to build data pipelines? Where we didn’t, we went ahead and built the data pipelines and workflows so that we had all the data needed to answer the key questions. As the final step, we built dashboards, so that this data was easily accessible for us to consume.

The Defense Ecosystem – Response

Let’s move on to the final and fourth component of the defense ecosystem: the response component. There are two kinds of response. First is the proactive response. This is when we are continuously monitoring our system to see how it’s performing. What do the false positives and false negatives look like? Which rules are effective and which are not? We then respond by tuning these rules and models, and updating our defenses. The second is the reactive response. This is when things don’t go as expected: there is an alert, there’s an escalation, and we need to investigate it. One of the things we use is Elasticsearch, but sometimes that doesn’t have all the context. Like I mentioned earlier, when we had to do investigations, I spent hours writing Hive queries to pull data from a table that was huge, so it was super slow. Instead, we built notebooks. In these notebooks, I can specify that this is when the attack happened, this is when it started and ended, and this is the target service it hit: now tell me why we didn’t block enough. The notebook would then run, look at the traffic that we did not block, and break it down by the various reasons why we didn’t block it. Maybe we were still in a preview mode, observing what’s going on but not quite blocking. Or maybe we were just ramping up and didn’t quite get there in the first few minutes. There can be different reasons why we wouldn’t block something, and the notebook now showed that to us. An investigation that used to take us 6 to 8 hours would now take 15 to 30 minutes. That is a huge efficiency saving in terms of developer time.
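The reason-by-reason breakdown such notebooks produce can be sketched like this (the reason labels are illustrative, taken from the examples above):

```python
# Group the requests we failed to block during an attack window by the
# reason we let them through, as the investigation notebooks do.
from collections import Counter

unblocked = [
    {"request_id": 1, "reason": "preview_mode"},     # observing, not blocking
    {"request_id": 2, "reason": "preview_mode"},
    {"request_id": 3, "reason": "ramp_up"},          # defenses still ramping up
    {"request_id": 4, "reason": "no_matching_rule"},
]

breakdown = Counter(r["reason"] for r in unblocked)
for reason, count in breakdown.most_common():
    print(f"{reason}: {count}")
```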
Some of the early wins we saw when we invested in building these observability and response systems were that we now had much better visibility into what was going on in our infrastructure, and our investigation and response were much faster. As a side effect of building the data foundations we needed for the visibility metrics, we also ended up with foundations that help improve our services’ effectiveness and lead to much better signal generation.

Caveats

The first caveat is: when should we build these various components of the defense ecosystem? To go back to the kid analogy: to raise a kid, you need an ecosystem. You need the school, the pediatrician, the enrichment classes. You don’t start thinking about all of that from day zero; you wait until your kid is a bit older and the timing is right, and that’s when you start investing in such things. The same is true for a defense ecosystem. When you’re just building out the service, you invest in the decisioning service and some of the intelligence. As the service matures and evolves, the operational overhead starts to increase, and that’s when you also start building out the observability and response systems to make things much better. The second learning was that it’s important to staff the team right. When we started off with the service, we just had backend engineers working on it, with some support from data analytics. That’s why we focused so heavily on the decisioning service, but not so much on observability and response. When the need for those arose, we also brought on data engineers and data analysts, and improved our operations. Once we did that, we saw a significant improvement in how we operated as a team and in how our different services were doing.

Key Takeaways

Let’s review. If there’s just one thing I would like you to take away from this talk, it’s that it’s important to think beyond the core business logic and invest in the entire defense ecosystem. What I mean by that is: it’s important to have not just the decisioning service, but also the intelligence component, the observability component, and the response component. Yes, think beyond the core business logic. Think about the entire defense ecosystem.

Questions and Answers

Knecht: Your talk is an awesome focus on how you’re still solving this problem at Netflix. Can you talk a little bit about why distributed denial of service or DDoS, is a particularly difficult problem to solve?

Gupta: With DDoS, what happens is we get a huge number of requests in a very short amount of time. There are two things. First, we have to respond very fast, because if we take our time to respond, the damage is done. Every second is important. Second, we have to respond with very limited context. When you have to respond fast, you don’t have the luxury of running a lot of machine learning models, or opening up the request and looking at the payload. You operate with very little information. With very little information, we have to respond fast, and we have to make sure that we don’t have false positives, because when you’re operating with little information, you tend to be very aggressive, and that can lead to blocking good users. We don’t want that. Making sure we respond fast with limited context, without false positives: that’s the main challenge of defending against DDoS attacks.

Knecht: You mentioned false positives and false negatives, I think you have the other side of that too. Can you talk about how you measure false negatives of your system or metrics, or things that you use to measure how well your system is doing in blocking?

Gupta: In one of the previous slides, I showed what a DDoS attack looks like. We have a baseline, which is what we usually see in terms of traffic. When there’s a DDoS, there’s a spike that lasts for a while and then goes away. When we look at our defenses, we also look at metrics of how much traffic we’ve blocked, and that follows a similar trend: there’s a baseline, then during a DDoS we block a lot more traffic, and once the DDoS goes away, we go back to normal. The spike traffic minus the baseline is the net volume of the DDoS, and the extra traffic we block during the spike is what we blocked. The difference between the two is what we did not block: that is the false negative. That’s one metric we use to know whether we are blocking enough bad traffic. We also have alerts on service availability, so if there’s a DDoS and we are not able to respond fast enough, there is service impact and we get paged. Those are the two things we look at.
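That arithmetic can be written down directly (the rates below are invented, not real Netflix numbers):

```python
# False-negative estimate from the answer above: attack volume is the spike
# minus baseline traffic; attack blocks are spike-time blocks minus the
# baseline block rate; whatever is left over is what we missed.
def estimate_false_negatives(spike_rps, baseline_rps,
                             blocked_spike_rps, blocked_baseline_rps):
    attack_volume = spike_rps - baseline_rps
    attack_blocked = blocked_spike_rps - blocked_baseline_rps
    return max(attack_volume - attack_blocked, 0)

# Traffic spikes from 1,000 to 10,000 rps; blocks rise from 200 to 8,700 rps.
print(estimate_false_negatives(10_000, 1_000, 8_700, 200))  # 500 rps missed
```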

Knecht: Protecting a service as popular as Netflix from DDoS certainly involves a lot of ambiguity. You might not hear about it when you accidentally block a user. How do you measure false positive rates in systems where it is difficult to identify the ground truth?

Gupta: For those, we look at two things. First, what is the expected traffic pattern? With Netflix, we know how the traffic is supposed to look. If we deploy a new configuration and look at just the blocks from that configuration, and we see that they also follow the same trend as normal traffic, that most likely means we are blocking good users. That’s one of the signals we use. The second is that if something goes wrong and we are blocking good users, we get contacted by customer service. We work very closely with our customer service folks. If we deploy a configuration and the calls to customer service go up, that’s a strong indicator that something wasn’t right: this is what we deployed, let’s turn it off and investigate.


Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.


How Is The Market Feeling About MongoDB? By Benzinga – Investing.com UK

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

How Is The Market Feeling About MongoDB?
© Reuters. How Is The Market Feeling About MongoDB?

Benzinga – by Benzinga Insights, Benzinga Staff Writer.

MongoDB’s (NYSE:MDB) short percent of float has fallen 9.3% since its last report. The company recently reported that it has 3.58 million shares sold short, which is 5.85% of all regular shares that are available for trading. Based on its trading volume, it would take traders 2.16 days to cover their short positions on average.

Why Short Interest Matters
Short interest is the number of shares that have been sold short but have not yet been covered or closed out. Short selling is when a trader sells shares of a company they do not own, with the hope that the price will fall. Traders make money from short selling if the price of the stock falls and they lose if it rises.

Short interest is important to track because it can act as an indicator of market sentiment towards a particular stock. An increase in short interest can signal that investors have become more bearish, while a decrease in short interest can signal they have become more bullish.

MongoDB Short Interest Graph (3 Months)

As you can see from the chart above the percentage of shares that are sold short for MongoDB has declined since its last report. This does not mean that the stock is going to rise in the near-term but traders should be aware that less shares are being shorted.

Comparing MongoDB’s Short Interest Against Its Peers
Peer comparison is a popular technique amongst analysts and investors for gauging how well a company is performing. A company’s peer is another company that has similar characteristics to it, such as industry, size, age, and financial structure. You can find a company’s peer group by reading its 10-K, proxy filing, or by doing your own similarity analysis.

According to Benzinga Pro, MongoDB’s peer group average for short interest as a percentage of float is 5.41%, which means the company has more short interest than most of its peers.

Did you know that increasing short interest can actually be bullish for a stock? This post by Benzinga Money explains how you can profit from it.

This article was generated by Benzinga’s automated content engine and was reviewed by an editor.

© 2023 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.

Read the original article on Benzinga

Article originally posted on mongodb google news. Visit mongodb google news





Alphabet, Microsoft earnings show AI rewriting cloud businesses – The Stack

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

AI, says Microsoft, is “redefining our role in business applications” with partners adding Copilots as a “process transformation layer” on top of existing systems.

October 25, 2023, 12:18 PM · 3 min read

Article originally posted on mongodb google news. Visit mongodb google news



Microsoft Announces Radius: a New Open Application Platform for the Cloud

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

Microsoft has recently announced Radius, a cloud-native application platform designed to facilitate collaboration between developers and platform engineers who assist them in delivering and managing cloud-native applications. This platform ensures that these applications adhere to corporate standards for cost-efficiency, operational efficiency, and security as the default approach.

Radius is an open-source project from the company’s Azure Incubation team that supports deploying applications across private clouds, Microsoft Azure, and Amazon Web Services. The team previously launched other open-source projects, such as Dapr, KEDA, and Copacetic, available on GitHub via the Cloud Native Computing Foundation (CNCF).

Radius adapts to the specific needs of developers by supporting established technologies such as Kubernetes, pre-existing infrastructure tools like Terraform and Bicep, and integration with current continuous integration and continuous delivery (CI/CD) systems such as GitHub Actions.

Mark Russinovich, chief technology officer of Microsoft Azure, explains:

Radius enables developers to understand their applications, and it knows your application is more than just Kubernetes. Radius helps developers see all the components that comprise their application, and when they add new components, Radius automatically connects those components to their application by taking care of permissions, connection strings, and more.

Overview of Radius (Source: Microsoft blog)

Radius additionally guarantees that the cloud infrastructure supporting applications aligns with cost, operational, and security prerequisites. These prerequisites are documented in recipes, formulated by IT operators, platform engineers, and/or security engineers assisting cloud-native developers. Furthermore, Radius associates an application with its underlying infrastructure, enabling it to provide an application diagram illustrating the interconnections between the application and infrastructure – allowing users to view and intuitively understand what makes up an application.

A respondent on a Hacker News thread on Radius posted a comment:

I suppose this sounds useful if you’ve got a really complicated cloud deployment, but it’s sad to me that people would need software to understand their own application deployments.

At the same time, in a LinkedIn post by Russinovich announcing the launch of Radius, Alex Neihaus, an Azure and AWS Cloud Infrastructure Architect, commented:

I admire the effort. But it reads as another attempt to unify devs and infrastructure, this time in a deployment model (cloud, private) agnostic way.

Dream on.

In addition, Patt Gunn, a systems programmer, commented on the name of the application platform:

This is not a good name for a platform; there’s an internet protocol (for authentication) called radius. You’re going to confuse people and make it hard for people looking for the other one to do good internet searches.

Lastly, more details on Radius are available on the documentation pages.




Node.js 21 Release Updates V8 Engine, WebStreams Module and Test Runner

MMS Founder
MMS Agazi Mekonnen

Article originally posted on InfoQ. Visit InfoQ

The Node.js team has recently released Node.js 21. This release introduces changes such as updates to the V8 JavaScript engine, stable fetch and WebStreams modules, an experimental flag for altering module defaults, a built-in WebSocket client, and enhancements to the test runner. Notably, as an odd-numbered release, Node.js 21 will not receive LTS (long-term support) status.

The fetch and WebStreams modules have been promoted to stable within Node.js 21. This promotion impacts modules like WebStreams, FormData, Headers, Request, Response, and fetch.

Additionally, the release includes an experimental WebSocket client that can be enabled with the --experimental-websocket flag, aligning with browser-compatible standards to simplify JavaScript development for applications that run on both frontend and backend.

Node.js 21 includes an update of the V8 engine, now at version 11.8, which brings improved performance and new language features. Notable features include Array grouping, ArrayBuffer.prototype.transfer, and WebAssembly extended-const expressions. This upgrade helps reduce dependencies on external libraries.

The test runner in Node.js has been enhanced to support glob expressions when specifying the --test parameter. This allows for more efficient and flexible test execution, making it easier to run tests across multiple directories using powerful glob patterns.

A new flag, --experimental-default-type, has been introduced. This flag allows developers to customize the default module system, enabling them to specify whether input files should be interpreted as ES modules or CommonJS.

The Node.js 21 release contains several performance-focused enhancements, including on URL, fetch, streams, node:fs, and HTTP. These refinements are intended to contribute to the overall runtime performance.

The integration of the global navigator object in Node.js 21 enhances web interoperability, allowing developers to access hardware concurrency information through navigator.hardwareConcurrency.

Rafael Gonzaga, a member of the Node.js Technical Steering Committee (TSC), encourages users to explore Node.js 21, stating:

“If you’re interested in getting access to interesting new features early, Node.js 21 is a great way to test and see what’s coming. Our release schedule specifically covers this. If you’re already in active deployment or if you are planning for it, Node.js 20 and 18 LTS are for you.”

Node.js 21 becomes the current release, with this status lasting for six months until April 2024. Node.js 16 (LTS) has reached its end of life (EOL), prompting developers to plan upgrades to Node.js 18 (LTS) or Node.js 20 (LTS) for continued support and compatibility with the latest features.

The full list of changes is available in the Node.js 21 pull request within the associated GitHub project.

